<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Rethinking Trust in Responsible AI</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marina Tropmann-Frick</string-name>
          <email>marina.tropmann-frick@haw-hamburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael Gille</string-name>
          <email>michael.gille@haw-hamburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Susanne Draheim</string-name>
          <email>susanne.draheim@haw-hamburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Philine Pommerencke</string-name>
          <email>philine.pommerencke@haw-hamburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maximilian Kiener</string-name>
          <email>maximilian.kiener@tuhh.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jonas Bozenhard</string-name>
          <email>jonas.bozenhard@tuhh.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Hamburg University of Applied Sciences</institution>
          ,
          <addr-line>Berliner Tor 7, 20099 Hamburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Hamburg University of Technology, Institute for Ethics in Technology</institution>
          ,
          <addr-line>am Schwarzenberg-Campus 3, 21073 Hamburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>Trust is widely recognized as a core principle of Responsible AI, yet its interpretation varies significantly across disciplines. This paper examines how computer science, sociology, philosophy, and law conceptualize trust in AI systems, highlighting both tensions and complementarities. From a computer science perspective, trust is often approached as a set of system-level properties that should be formalized and evaluated with metrics. In contrast, the social sciences and humanities emphasize its relational, normative, and institutional dimensions. We argue that trust cannot be reduced to a single system property or technical measure, as it emerges from socio-technical interactions involving users, developers, legal norms, and social expectations. To support interdisciplinary dialogue, we propose treating trust as a boundary concept that enables cooperation across epistemic communities while acknowledging conceptual differences.</p>
      </abstract>
      <kwd-group>
        <kwd>Responsible AI</kwd>
        <kwd>trust</kwd>
        <kwd>trustworthy AI</kwd>
        <kwd>AI governance</kwd>
        <kwd>boundary concept</kwd>
        <kwd>interdisciplinarity</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Trust is widely invoked as an anchor concept of Responsible AI by regulators, researchers, and
developers alike. Yet across disciplines, trust is conceptualized in strikingly different ways: as a technical
attribute and measurable property, as a normative stance, or as a social relationship, sometimes with
economic connotations. This conceptual equivocality hampers efforts to design, assess, and govern AI
systems responsibly. We contend that the absence of a shared conceptual foundation and a lack of
reflective awareness of differences impedes interdisciplinary collaboration and, ultimately, weakens the
governance of AI systems. This paper aims to highlight diverse epistemic meanings of the term trust.
By doing so, we take an initial step toward clarifying how trust is understood and employed across
Responsible AI discourses. Rather than proposing a single, unified definition of trust, we suggest treating
it as a boundary concept [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which acknowledges the various conceptualizations across disciplines and
thereby supports innovation and mutual learning among researchers from different fields. Our goal is
to lay the groundwork for a more integrated and reflexive approach to trust in AI contexts. We focus on
interdisciplinary perspectives to reconceptualize trust as a boundary concept, i.e. one that can facilitate
dialogue across disciplines without erasing conceptual differences.
      </p>
      <p>Trust is often treated as if it were an integrative concept. We contrast this view by sketching its role
in divergent epistemic frameworks, opening space for interdisciplinary discourse and critique and,
at the same time, making the misalignment between (quantitative) indicators/metrics and the plural
meanings of trust transparent. The meaning of trust depends on which discipline gets to define it, with
considerable intradisciplinary differences. We frame trust as a site of negotiation, emphasizing
process over consensus, and challenging the reductionist use of metrics. Our contribution is guided
by the question of how trust can be understood and operationalized as a boundary concept that enables
communication across disciplinary approaches to AI governance.</p>
      <p>
        To provide a structured basis for interdisciplinary analysis, we use the VERIFAI framework of
Responsible AI as a shared reference point [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. While trust itself may not be reducible to any single property,
several system-level dimensions are commonly associated with trustworthiness and responsibility
in both technical and non-technical contexts. As illustrated in Figure 1, we identify six dimensions
of Responsible AI: Trustworthy system behavior, referring to predictability, reliability, and stability
under uncertainty; ethical and legal alignment, including fairness, accountability, and compliance with
laws and sustainability goals; explainability, ensuring that models and decisions are understandable
to relevant users; privacy preservation, through legal compliance and technical protection against
inference or leakage; security, addressing resilience to adversarial, poisoning, and extraction attacks;
human-centeredness, promoting meaningful human oversight, agency, and alignment with user
expectations.
      </p>
      <p>
        Responsible AI refers in this context to the development, deployment, and governance of AI systems in
a manner that is in line with ethical values, legal norms, and societal expectations. While definitions
vary considerably across different scientific communities, we adopt the following as a provisional framing
of Responsible AI [
        <xref ref-type="bibr" rid="ref2 ref4">2, 4</xref>
        ]: Responsible AI is human-centered and ensures users’ trust through ethical
ways of decision making. The decision-making must be fair, accountable, not biased, with good
intentions, non-discriminating, and consistent with societal laws and norms. Responsible AI ensures that
automated decisions are explainable to users while always preserving users’ privacy through a secure
implementation.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Disciplinary Perspectives on Trust in AI</title>
      <p>
        The concept of trust, embedded in the discourse surrounding Responsible AI, is interpreted and
implemented very differently across disciplines. To unpack this complexity, we examine how trust is
conceptualized and approached within four disciplinary domains central to AI governance: computer
and data science, sociology, philosophy, and law. Although the terminology of trustworthy AI has
been institutionalized in EU policy discourse (see, e.g. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and Recital 1 AI Act), this paper
deliberately employs the broader notion of responsible AI. The reason is twofold: first, trustworthiness risks
suggesting that trust is a measurable property of technology, whereas interdisciplinary scholarship
underscores its relational and context-dependent nature; second, responsibility better captures the
dynamic socio-technical, legal, and ethical practices through which legitimacy and accountability are
constructed. The choice of terminology thus aims to extend the debate beyond compliance-driven
checklists towards a more reflexive, process-oriented understanding. This section outlines the divergent
yet complementary perspectives, highlighting where they converge, where they differ, and how they
might inform a more integrated interdisciplinary understanding of trust in AI. Each of the subsections
is authored by a domain expert, representing their respective disciplinary perspective.
      </p>
      <sec id="sec-2-1">
        <title>2.1. Computer/Data Science</title>
        <p>
          The Computer and Data Science field addresses trust in AI as a set of properties that can be formalized,
measured, evaluated and optimized [
          <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
          ]. This perspective allows for a division of the monolithic
concept of trust into a composite of technically observable and preferably formally specified quantitative
metrics. Metrics are not neutral: the choice of metric, its thresholds, and its evaluation and interpretation depend
on the AI model, the data usage and the analytical purpose. Responsible AI properties encode assumptions
about acceptable trade-offs, are often multi-objective and cannot be optimized simultaneously without
conflict. As such, metric-based evaluation must be integrated with formal verification methods and
domain-specific constraints.
        </p>
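          <p>As a minimal illustration of this point, the sketch below treats a model's Responsible AI evaluation as a set of domain-specific constraints that must be checked jointly. All metric names, measured values and thresholds are hypothetical assumptions for illustration, not values taken from this paper or any standard:</p>

```python
# Hypothetical sketch: metric-based evaluation as joint constraint checking.
# Every metric name, value and threshold below is an illustrative assumption.

def evaluate_responsible_ai(metrics: dict, constraints: dict) -> dict:
    """Check each measured metric against its domain-specific (lo, hi) bounds."""
    return {name: lo <= metrics[name] <= hi
            for name, (lo, hi) in constraints.items()}

# Measurements for one hypothetical model.
metrics = {
    "accuracy": 0.91,                 # predictive utility
    "statistical_parity_diff": 0.08,  # group fairness gap (lower is better)
    "attack_success_rate": 0.12,      # adversarial robustness (lower is better)
}
# The constraints encode domain-specific assumptions about acceptable trade-offs.
constraints = {
    "accuracy": (0.85, 1.0),
    "statistical_parity_diff": (0.0, 0.05),
    "attack_success_rate": (0.0, 0.20),
}
report = evaluate_responsible_ai(metrics, constraints)
# Here the fairness constraint fails even though accuracy and robustness pass,
# illustrating that the objectives conflict and cannot be read off one number.
```

          <p>The point of the sketch is structural: because the objectives are multi-objective and partially conflicting, a model must be assessed against all constraints at once rather than against any single aggregate score.</p>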
        <p>
          We group the metrics according to core dimensions of Responsible AI (Figure 1). Direct formalizations
and technical metrics are not readily available for dimensions such as trustworthiness and
human-centeredness, which remain only partially measurable [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Accordingly, computer science research
tends to focus on the four inner dimensions where quantitative evaluation is more established. Fairness
metrics [
          <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
          ] aim to quantify whether model outputs exhibit statistical parity across groups or
individuals. Group-level metrics typically evaluate the statistical distribution of predictions with respect to
sensitive attributes and ground truth labels. Individual-level fairness, by contrast, relies on consistency
scores or counterfactual analysis, which estimate whether similar individuals receive similar outcomes
under controlled perturbations of non-permissible features. Explainability and transparency [
          <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
          ]
are evaluated by assessing the interpretability of model behavior. Local explanation techniques such as
SHAP and LIME approximate the marginal influence of input features on specific model outputs. Metrics
such as faithfulness and stability assess whether these approximations capture actual model behavior
under perturbation. In complex models such as deep neural networks, explanation quality becomes
sensitive to architecture and gradient behavior, and is often evaluated through post hoc attribution
or diagnostic probing. Privacy and robustness form another group of metrics [
          <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
          ]. Differential
privacy provides a formal privacy notion that bounds the information gain about individuals in the
training data. Robustness, on the other hand, is measured via attack success rates under adversarial
perturbations or via out-of-distribution generalization errors. Both require simulation of worst-case or
bounded adversarial scenarios to quantify system reliability under non-standard input conditions.
Some of these metrics are partially incompatible: improving group fairness may reduce individual
fairness; increasing robustness may reduce accuracy; adding privacy guarantees may impact explainability.
Metric design therefore implicitly encodes value trade-offs and requires interpretation by domain
experts.
        </p>
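          <p>To make the group-level and individual-level fairness notions above concrete, the following sketch computes a statistical parity difference between two groups and a consistency score under small controlled perturbations of the input. The function names, the toy model and the perturbation scheme are our illustrative assumptions, using only NumPy:</p>

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Group-level fairness: absolute gap in positive-prediction rates
    between the two groups encoded as 0/1 in `group`."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def consistency_under_perturbation(predict, x, noise_scale=0.01,
                                   n_trials=20, seed=0):
    """Individual-level proxy: average fraction of predictions that remain
    unchanged when the inputs are slightly perturbed."""
    rng = np.random.default_rng(seed)
    base = predict(x)
    agree = [np.mean(predict(x + rng.normal(0.0, noise_scale, size=x.shape)) == base)
             for _ in range(n_trials)]
    return float(np.mean(agree))

# Toy model: a threshold classifier on a single feature.
predict = lambda x: (x[:, 0] > 0.5).astype(int)
x = np.array([[0.2], [0.9], [0.6], [0.1]])
group = np.array([0, 0, 1, 1])  # a hypothetical sensitive attribute

spd = statistical_parity_difference(predict(x), group)
consistency = consistency_under_perturbation(predict, x)
```

          <p>In this toy setting both groups receive positive predictions at the same rate, so the parity gap is zero, and the decision boundary is far from all points relative to the noise scale, so the consistency score is close to one; real systems rarely satisfy both properties so cleanly, which is exactly the trade-off discussed above.</p>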
        <p>Trust, while often linked to a system behavior, cannot be reduced to a single quantifiable or
formalizable system property. Although technical dimensions such as fairness, robustness and explainability
contribute to the dimension of trustworthiness, they capture only partial aspects of a much broader
concept. Trust is context-dependent and conditioned by user expectations, domain-specific risks and
boundaries, and environmental settings. From the computer science perspective this creates a
fundamental limitation: trust cannot be fully captured by technical metrics; it requires input from multiple
disciplines.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Sociology</title>
        <p>
The lack of a uniform conceptualization of the term trust across different scientific disciplines also
applies to sociology itself [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. Introduced by Georg Simmel (1858–1918), the concept of trust has
since been explored by numerous sociologists [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. To examine all these conceptualizations in detail
would exceed the scope of this paper. Therefore, two theoretical approaches that are widely recognized
as classics in the sociology of trust will be briefly summarized in the following and then applied to
the field of trust in AI. The first approach comes from Niklas Luhmann, who was one of the first to
systematically examine and analyse the concept of trust introduced by Simmel. The second approach
summarized is from Anthony Giddens, who further elaborated Luhmann’s conceptualization of trust.
Following Luhmann’s definition [16], trust can be seen as a mechanism to reduce complexity and
enable people to make decisions and take actions in a complex (social) environment that does not
give the individual enough knowledge to be certain about the future and the consequences of their decision.
As far as the degree of available information is concerned, trust can neither arise out of nothing, nor is
it necessary when the situation is completely clear and certain. It builds upon past experiences and
social interactions, whose continuity suggests that the future, though uncertain, is
predictable in some way. Against this background, Luhmann [16] makes it clear that trust can be seen as
a conscious act of will in which the individual decides not to seek further information to gain total
certainty about the outcome, but to trust that the trustee will behave as expected. With this decision,
the trustor takes a personal risk, as they make themselves vulnerable to a breach of trust by the
trustee.
        </p>
        <p>Considering the complexity of our modern world, in which people do not only interact in a personal
context, but more often are part or representative of a larger social system or institution, Luhmann [16]
differentiates between interpersonal trust and institutional trust. While the former is directed at another
individual who is personally known to the trustor, the latter is directed towards an organization or
abstract institution, such as law, money or science. Anthony Giddens [17] further elaborates Luhmann’s
concept and notes that interpersonal trust and systemic trust can also be related to each other, as
persons can act as representatives of a system. Trust in another individual can be either interpersonal
(meaning that the trust relies on past interactions between trustor and trustee and is directed towards
the person), or systemic (i.e., trust is based on the trustor’s general trust in the organisation or system
to which the trustee belongs and is then transferred to the person). It can also be the other way round:
One might not trust a system (e.g. a hospital), but one particular person belonging to it (e.g. a physician).
Applying the sociological theory of trust to the question of what trust towards AI means, it is necessary
first to determine the perspective. It is too unspecific to talk generally about “trust towards AI” without
specifying the context. For example, one could look at the trust a user has towards an LLM-based
chatbot, applying the concept of interpersonal trust, or one could talk about trust in the organization
that produced that chatbot, which would be systemic trust. Therefore, a multifaceted approach is
imperative, requiring the consideration of a range of factors and context conditions. Several questions
become salient: whether to place trust in the provider or manufacturer of the AI, in the competence
of the developers who designed and trained the system, in the specific AI tool in use, in the
reliability and representativeness of the training data, or in the respective output generated in
response to a specific prompt [18].
This differentiated approach is particularly necessary because, in practice, a general attitude of
over- or under-trust towards AI systems is often observed. Such sweeping generalizations are frequently
influenced by individual prior knowledge, experience or narratives conveyed by the media. Instead,
what is needed is the establishment of “calibrated trust”, which can be defined as a situation-specific
evaluation of the reliability of an AI system within a designated application context, alongside the
determination of the extent to which effort should be invested in critically monitoring the respective
outcomes [18].</p>
        <p>From this perspective, the formation of trust in a technical system, such as AI, is inherently embedded
in socio-technical interactions. This is an ongoing negotiation process between humans and machines,
in which questions of (human) identity and control are redefined through interaction [19, 20]. A
fundamental design objective is thus to devise and operationalize a socio-technical negotiating process
between humans and machines, in addition to learning from data and its documentation. In this
understanding, trust is not a prerequisite, but rather arises in the course of interaction – through the
coupling and co-evolution of human and AI actors [21].</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Philosophy</title>
        <p>In the philosophy of trust, the most widely accepted definition invokes three conditions for trusting:
The trustor relies on the trustee to be (a) competent and (b) willing or motivated to do what we trust
them to do (based on shared moral norms) and is (c) exposed to some level of risk or vulnerability by
doing so [22]. Trust relates in interesting ways, among others, to the concepts of reliance, transparency,
and responsibility.</p>
        <p>First, while trust involves a form of reliance, it is not reducible to it. Trust is more than predicting
that someone will act a certain way; it is a morally loaded stance. As Baier [23] notes, “trusting can be
betrayed, or at least let down, and not just disappointed.” This distinction is crucial: reliance leads to
disappointment when expectations are not met; trust leads to betrayal when shared moral expectations
are violated. Disappointment reflects a failed prediction; betrayal reflects a broken moral commitment.
Thus, whereas reliance is descriptive and predictive, trust is fundamentally normative: rooted in shared
values and obligations.</p>
        <p>Second, trust requires at least some level of transparency to be warranted. The more one knows about
the trustee’s competence and motivations, the better one can assess their trustworthiness and the risks
involved in placing trust. However, trust conceptually still requires some degree of uncertainty too
(e.g. about the competency and will of the trustee and external factors that might prevent the trustee
from fulfilling what one has entrusted to them) and, thus, a leap of faith, as it were. Therefore, full
transparency eliminates the need for trust, reducing it to a matter of risk calculation [22].</p>
        <p>Third, trust entails a distinctive form of responsibility grounded in answerability. When we trust
someone, we do not merely expect outcomes, we expect that they can be called to answer for their
actions in light of shared norms [24, 25]. This means the trusted party is not just expected to act reliably,
but to explain or justify their conduct if questioned. Unlike mere reliance, which does not presume
moral engagement, trust invokes a relational obligation: the trustee is answerable to the trustor, and
moral answerability is an important part of what gives trust its normative depth.</p>
        <p>Overall, then, we argue that, from a philosophical perspective, applying the concept of trust to AI can
be misleading. AI systems are not full-fledged moral agents with a will, and it is therefore inaccurate to
say that an AI is “willing” or “motivated” to do what we trust it to do. When an AI fails to meet our
expectations, we may be disappointed by its malfunctioning, but we would not say that it has betrayed
us or violated moral norms. For this reason, it is more appropriate to apply the categories of reliance
or reliability to AI, rather than the concept of trust. However, we can still apply trust to the people
behind AI, namely, the designers, developers, and deployers of these systems. Accordingly, the notion
of “trustworthy AI” must not conflate two distinct dimensions: first, the technical task of building
reliable, transparent systems that minimize the risk of disappointment: what we might call reliable AI;
and second, the moral and political responsibility of the human agents involved: those who can be held
answerable for failures, biases, or harms.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Law</title>
        <p>
Among recent legislative attempts to address AI, the EU’s AI Act stands out as the regulatory approach
most comprehensively anchored in notions of trust [26, 27]. The AI Act’s legal basis, Article (Art.)
114 of the Treaty on the Functioning of the EU, aims to ensure the smooth functioning of the internal
market. This leads the EU to lean on trust considerations as both a legal and economic condition of
integrating its digital single market [28]. In this context, trust is not merely a matter of consumer
confidence or technical compliance; it functions as a foundational principle for enabling cross-border
exchanges of AI technologies under conditions of perceived legitimacy and shared risk tolerance.
By establishing such a harmonized regulatory framework (Art. 1 AI Act; Recital 1), the EU seeks to
preclude regulatory fragmentation among its member states and attempts to embed trustworthiness as
a structural precondition for market participation, transforming trust from a diffuse socio-political and
economic expectation into a legal construct with intended market-shaping effects.</p>
        <p>The notion of trust is deployed by the European Commission to guide the design and operation of AI
systems, and to promote societal acceptance of these technologies across EU member states [29]. In its
ambition to establish “a legal framework for trustworthy AI”, the European Commission envisages the
development of a human-centric “ecosystem of trust” [30]. In line with this statement, the EU AI Act
elevates trust (and trustworthiness) to the status of a guiding principle and conceptualizes it not merely
as an ethical aspiration but a structural element of legitimacy that underpins the EU’s entire regulatory
strategy for AI. The concept of trust is applied to market proponents and AI products as well as to the
(EU internal) market for AI as a whole.</p>
        <p>
          The term trustworthiness, despite being so central, is only sparsely embedded in the binding provisions
of the Act, appearing explicitly only in Art. 1 (statement of purpose) and Art. 95 (voluntary codes of
conduct). Its formal legal status is, therefore, somewhat diffuse. To determine the trustworthiness of AI
systems, metrics are recommended that are intended to promote trust [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. Internal metrics, while not binding,
enable firms to demonstrate measurable progress and accountability, a possible factor in gaining trust.
The AI Act operates on the premise that trust emerges where risk is either absent or noticeably mitigated
(Art. 1 (1) AI Act; recitals 65, 164). With this risk-based approach to regulating innovative technology (AI
Act, recital 5), the EU therefore determines the absence or mitigation of risks and harm as a requisite for
trustworthiness (AI Act, recitals 25–27), in line with the discussion beyond the AI Act [31]. Nevertheless,
it is questionable whether legality automatically establishes or strengthens trust [26]. The same holds
for the acceptability of risks, which will not always lead to the trustworthiness of AI systems, being at best a
necessary condition, not a sufficient one. On closer inspection, conceptual ambiguity, legal uncertainty
and practical difficulties impede consistent trust governance [32], which is further exacerbated by the
central importance of trust concepts, albeit with different emphases, in, e.g., the General Data Protection
Regulation, the Digital Markets Act and the Digital Services Act, which are outside the scope of this
contribution.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Inter- and Cross-disciplinary Discussion and Outlook</title>
      <p>Trust, while central to Responsible AI discourse, is interpreted and operationalized differently across
disciplines. From a computer science perspective, trust is primarily linked to measurable system
properties such as fairness, robustness, and explainability. Sociology, in contrast, frames trust as an
evolving relationship embedded in context, shaped by values, practices, and institutional arrangements.
The philosophical perspective emphasizes trust as an epistemic stance, based on expectations, intentions
and moral responsibility. Legal perspectives link trust to accountability and institutional frameworks,
especially in the context of the European AI regulation.</p>
      <p>These divergent conceptualizations emphasize different objects and/or perspectives (like technical
systems, human actors or institutions and processes), rely on different levels of assumptions (like formal,
interpretive or normative), and pursue different aims. Such differences in the notion of trust make it
nearly impossible to develop a unified theoretical definition, but combined they offer complementary
insights that are essential for understanding trust in AI. Therefore, we propose conceptualizing trust
as a boundary concept that is sufficiently flexible to establish interdisciplinary collaboration without
a specific definitional consensus. This boundary concept approach is particularly well suited to be
enriched by additional perspectives from, e.g., psychology and economics, which would provide further
insights into individual and systemic dimensions of trust.</p>
      <p>We see potential in the development of hybrid approaches that systematically combine formal,
technical and contextual analyses. Such approaches may involve the co-design of evaluation frameworks,
interdisciplinary review mechanisms and participatory methods incorporating diverse stakeholder
perspectives. This objective can be seen as a driver for calibrating trust, so that user
confidence in an AI system is proportional to its capabilities, limitations, application context and social
environment.</p>
      <p>This analysis is part of an ongoing effort to develop a deeply interdisciplinary approach not only for
trust in AI, but also for related complex concepts such as human-centeredness, accountability, and social
acceptability in socio-technical AI systems. We aim to move beyond discipline-specific definitions and
instead build a shared framework that respects conceptual differences, enabling joint system design
and evaluation.</p>
    </sec>
    <sec id="sec-4">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used GPT-4 and the DeepL translator for grammar and
spelling checking. After using these tools, the authors reviewed and edited the content as needed and take
full responsibility for the publication’s content.</p>
      <p>[16] N. Luhmann, Vertrauen: Ein Mechanismus der Reduktion sozialer Komplexität, UTB, 2000.</p>
      <p>[17] A. Giddens, J. Schulte, Konsequenzen der Moderne, Suhrkamp, 1995.</p>
      <p>[18] U. Schmid, Trustworthy artificial intelligence: comprehensible, transparent and correctable, in: H. Werthner, C. Ghezzi, J. Kramer, J. Nida-Rümelin, B. Nuseibeh, E. Prem, et al. (Eds.), (2024) 151.</p>
      <p>[19] H. C. White, Identity and Control: How Social Formations Emerge, Princeton University Press, 2008.</p>
      <p>[20] R. Häußling, C. Härpfer, M. Schmitt, Soziologie der Künstlichen Intelligenz: Perspektiven der Relationalen Soziologie und Netzwerkforschung, transcript Verlag, 2024.</p>
      <p>[21] C. Härper, Von der Kunst des Lernens: Einige Bemerkungen zur Intentionalität von In- und Output (2024).</p>
      <p>[22] C. McLeod, Trust, in: E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University, 2006.</p>
      <p>[23] A. Baier, Trust and antitrust, Ethics 96 (1986) 231–260.</p>
      <p>[24] M. Kiener, Varieties of answerability, in: The Routledge Handbook of Philosophy of Responsibility, Routledge, 2023, pp. 204–216.</p>
      <p>[25] M. Kiener, Strict moral answerability, Ethics 134 (2024) 360–386.</p>
      <p>[26] A. Tamò-Larrieux, C. Guitton, S. Mayer, C. Lutz, Regulating for trust: Can law establish trust in artificial intelligence?, Regulation &amp; Governance 18 (2024) 780–801.</p>
      <p>[27] B. Lund, Z. Orhan, N. R. Mannuru, R. V. K. Bevara, B. Porter, M. K. Vinaih, P. Bhaskara, Standards, frameworks, and legislation for artificial intelligence (AI) transparency, AI and Ethics (2025) 1–17.</p>
      <p>[28] A. Engel, Licence to regulate: Article 114 TFEU as choice of legal basis in the digital single market, in: New Directions in Digitalisation: Perspectives from EU Competition Law and the Charter of Fundamental Rights, Springer Nature Switzerland, Cham, 2024, pp. 13–28.</p>
      <p>[29] B. Beckert, The European way of doing artificial intelligence: The state of play implementing trustworthy AI, in: 2021 60th FITCE Communication Days Congress for ICT Professionals: Industrial Data–Cloud, Low Latency and Privacy (FITCE), IEEE, 2021, pp. 1–8.</p>
      <p>[30] European Union, Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM/2021/206 final (2021) 1–107.</p>
      <p>[31] J. Newman, A Taxonomy of Trustworthiness for Artificial Intelligence, CLTC, North Charleston, SC, USA 1 (2023).</p>
      <p>[32] M. Kattnig, A. Angerschmid, T. Reichel, R. Kern, Assessing trustworthy AI: Technical and legal perspectives of fairness in AI, Computer Law &amp; Security Review 55 (2024) 106053.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>I.</given-names>
            <surname>Löwy</surname>
          </string-name>
          ,
          <article-title>The strength of loose concepts – boundary concepts, federative experimental strategies and disciplinary growth: The case of immunology</article-title>
          ,
          <source>History of science 30</source>
          (
          <year>1992</year>
          )
          <fpage>371</fpage>
          -
          <lpage>396</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Göllner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tropmann-Frick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Brumen</surname>
          </string-name>
          ,
          <article-title>Towards a definition of a responsible artificial intelligence</article-title>
          ,
          <source>in: Information modelling and knowledge bases XXXV</source>
          , IOS Press,
          <year>2024</year>
          , pp.
          <fpage>40</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Göllner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tropmann-Frick</surname>
          </string-name>
          ,
          <article-title>Bridging the gap between theory and practice: Towards responsible ai evaluation</article-title>
          ,
          <source>in: CHAI@KI</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>68</fpage>
          -
          <lpage>76</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Goellner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tropmann-Frick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Brumen</surname>
          </string-name>
          ,
          <article-title>Responsible artificial intelligence: A structured literature review</article-title>
          ,
          <source>arXiv preprint arXiv:2403.06910</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>AI HLEG</surname>
          </string-name>
          (High-Level Expert Group on Artificial Intelligence),
          <article-title>Ethics guidelines for trustworthy ai</article-title>
          , European Commission (
          <year>2019</year>
          ). Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gittens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yener</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yung</surname>
          </string-name>
          ,
          <article-title>An adversarial perspective on accuracy, robustness, fairness, and privacy: multilateral-tradeoffs in trustworthy ml</article-title>
          ,
          <source>IEEE Access 10</source>
          (
          <year>2022</year>
          )
          <fpage>120850</fpage>
          -
          <lpage>120865</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lu</surname>
          </string-name>
          , et al.,
          <article-title>Dual humanness and trust in conversational ai: A person-centered approach</article-title>
          ,
          <source>Computers in Human Behavior</source>
          <volume>119</volume>
          (
          <year>2021</year>
          )
          <fpage>106727</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Mehrabi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Morstatter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Saxena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lerman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Galstyan</surname>
          </string-name>
          ,
          <article-title>A survey on bias and fairness in machine learning</article-title>
          ,
          <source>ACM computing surveys (CSUR) 54</source>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>35</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Speicher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Heidari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Grgic-Hlaca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. P.</given-names>
            <surname>Gummadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Singla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Weller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. B.</given-names>
            <surname>Zafar</surname>
          </string-name>
          ,
          <article-title>A unified approach to quantifying algorithmic unfairness: Measuring individual &amp; group unfairness via inequality indices</article-title>
          ,
          <source>in: Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery &amp; data mining</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>2239</fpage>
          -
          <lpage>2248</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>R.</given-names>
            <surname>Luss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.-Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dhurandhar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Sattigeri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Shanmugam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-C.</given-names>
            <surname>Tu</surname>
          </string-name>
          ,
          <article-title>Leveraging latent features for local explanations</article-title>
          ,
          <source>in: Proceedings of the 27th ACM SIGKDD conference on knowledge discovery &amp; data mining</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1139</fpage>
          -
          <lpage>1149</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Alvarez Melis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Jaakkola</surname>
          </string-name>
          ,
          <article-title>Towards robust interpretability with self-explaining neural networks</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>31</volume>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Madry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Makelov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tsipras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vladu</surname>
          </string-name>
          ,
          <article-title>Towards deep learning models resistant to adversarial attacks</article-title>
          ,
          <source>arXiv preprint arXiv:1706.06083</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>R.</given-names>
            <surname>Shokri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Stronati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Shmatikov</surname>
          </string-name>
          ,
          <article-title>Membership inference attacks against machine learning models</article-title>
          ,
          <source>in: 2017 IEEE symposium on security and privacy (SP)</source>
          , IEEE,
          <year>2017</year>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Evers</surname>
          </string-name>
          ,
          <article-title>Vertrauen – eine soziologische Betrachtung</article-title>
          ,
          <source>in: Vertrauen und Wandel sozialer Dienstleistungsorganisationen: Eine figurationssoziologische Analyse</source>
          , Springer,
          <year>2017</year>
          , pp.
          <fpage>37</fpage>
          -
          <lpage>55</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>G.</given-names>
            <surname>Möllering</surname>
          </string-name>
          ,
          <article-title>The nature of trust: From georg simmel to a theory of expectation, interpretation and suspension</article-title>
          ,
          <source>Sociology</source>
          <volume>35</volume>
          (
          <year>2001</year>
          )
          <fpage>403</fpage>
          -
          <lpage>420</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>