<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>From transparent to translucent decisions in Human-AI teams</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maëlle François</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elodie Bouzekri</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gilles Coppin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Thierry Coye de Brunélis</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>IMT Atlantique, Lab-STICC, UMR CNRS 6285</institution>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Thales Defense Mission Systems</institution>
          ,
          <addr-line>Valbonne</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Univ Brest, Lab-STICC</institution>
          ,
          <addr-line>CNRS</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>As sensors evolve, they capture increasing amounts of data to support operators of critical systems, such as Sound Navigation and Ranging (SONAR) operators. AI-based classification systems are investigated as new team members to support decision-making based on large amounts of data. Transparency in Human-AI teams enables understanding of the behavior and accuracy of team members by displaying varying amounts of information. Transparency models define different levels and different types of information to convey within the team's interactions. Focusing on another aspect to achieve the same goal of team decision performance, we propose the translucency model, which enables in-depth exploration of the information that leads to the team's decision and of the relationships among that information.</p>
      </abstract>
      <kwd-group>
        <kwd>Translucency</kwd>
        <kwd>Transparency</kwd>
        <kwd>Decision-making</kwd>
        <kwd>Classification</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In critical complex systems, such as those used in the military domain, the need for user assistance in
performing specific tasks has become essential. Advances in sensor quality and increased deployment
have significantly expanded the amount of data collected from the environment, resulting in a larger
volume of information available to the users. Even though data pre-processing serves as an initial
filter, the amount of displayed data remains substantial. Therefore, it is necessary to support users in
processing information effectively. In this paper, we focus on AI-based user assistance for a classification
task. This AI-based assistance refers to a classification system for decision support which aims to
enhance the decision-making process of users by providing classification recommendations that help
them make the most satisfying decision within a specific context [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. More specifically, from a technical
standpoint, we define a classification system as an algorithmic or rule-based system designed to assign
input data to predefined categories or classes based on specific features or patterns.
      </p>
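      <p>As a minimal illustration of this definition, the following Python sketch assigns an input described by named features to one of three predefined classes. The feature names, rules, and thresholds are hypothetical, chosen only to echo the SONAR use case discussed later; they are not drawn from an actual system.</p>
      <preformat>
# Minimal sketch of a rule-based classification system: it maps an input
# described by named features to one of a fixed set of classes.
# Feature names and thresholds are hypothetical illustration values.

def classify(features: dict) -> str:
    """Assign the input to 'friend', 'neutral' or 'suspect'."""
    if features["emits_civil_ais"]:
        return "friend"               # cooperative, self-identifying contact
    if features["speed_knots"] &lt; 5.0:
        return "neutral"              # slow, passive contact
    return "suspect"                  # fast and silent: flag for the operator

print(classify({"speed_knots": 12.0, "emits_civil_ais": False}))  # suspect
</preformat>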
      <p>
        The user may distrust and disuse the system (i.e., underestimating the system’s capacities) or
overtrust and misuse it (i.e., overestimating the system’s capacities) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], both of which impair the effectiveness
of decision-making support. The user may reject a correct recommendation or accept an incorrect
one. To avoid these pitfalls, the system must help the user calibrate an appropriate level of trust. One
factor that supports informed decision-making is ‘transparency.’ Transparency involves conveying
information about the behavior and accuracy of both the individuals and the team formed by the user
and the decision-support system [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. However, we argue that a transparency mechanism that conveys
or withholds certain types and amounts of information to support different levels of transparency is
limited.
      </p>
      <p>We propose to explore a new approach called ‘translucency’ that aims to support individuals’
understanding of information with contextualization elements such as decision-making context, teamwork,
history, system capacities, or accuracy. Transparency aims to present this information as it is, without
the possibility of accessing alternatives or the actual relationships between different pieces of information, while
keeping the information display minimal. In the translucency model, we propose enabling in-depth
exploration of alternatives and their relationships with other information.</p>
      <p>In the remainder of the paper, we present the translucency model, building on the limitations of
existing transparency models. This model is illustrated using a future decision-support classification
system for Sound Navigation and Ranging (SONAR) operations.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Transparent classification systems for decision support</title>
      <p>
        The term ‘transparency’ is a metaphor borrowed from the field of physics, referring to the property of a
material of “being easy to see through” (Cambridge Dictionary) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. While this physical definition of
the concept of transparency is widely accepted in this field, it appears to be more difficult to grasp in the
domain of human-machine interaction, specifically when characterizing an AI-based decision support
system. Indeed, in this field, transparency remains a debated concept, with no clear consensus on its
definition or design. Transparency is a characteristic of the communication between an AI agent and a
human agent. Models of transparency describe how an AI agent should cooperate with a human, and
especially define what information should be conveyed. Additionally, some research has explored how
this information can be represented and delivered through the interface. The following section focuses
on what it means for a system to be transparent through three perspectives.
      </p>
      <p>
        Haresamudram et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] proposed three types of transparency, namely “algorithmic
transparency,” “interaction transparency,” and “social transparency,” which work together to make
AI-based systems transparent. They define “algorithmic transparency” as the ability of machine learning
algorithms to offer information about how they interact with the data they process, and to provide
explanations for the decisions they make, even when operating on data volumes and through mechanisms
that are beyond human capacity to manage or fully understand. Thus, this type of transparency is
implemented in the field of Explainable AI (XAI). More specifically, with regard to XAI, Richard et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
propose that classification systems are deemed transparent when they are understandable (i.e., conveyed
information and concepts are part of users’ knowledge), interpretable (i.e., the user can make sense of the
results and of the underlying process), traceable (i.e., absence of a stochastic process both in the learning
system and in the classifier), and revisable (i.e., the user can provide feedback to the classification
system, which is taken into account by the system to improve the results). Interaction transparency
refers to the mutual exchange of information between the AI-based system and the user. The example
provided by Haresamudram et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] involves embodied systems, such as smartwatches equipped
with sensors that collect physiological data from users and provide them with recommendations. This
form of transparency aligns with the definition of bidirectional transparency proposed by Chen et
al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and Lyons et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. For them, the agent (in our case, the AI-based classification system) must
provide what they refer to as ‘transparency information’, including the agent’s intent, reasoning, future
plans, and uncertainties, as well as the humans’ intents, constraints, and objectives, and their shared tasks and
interactions; the system should also be able to collect data on the user’s cognitive state, including stress
and fatigue levels. Finally, social transparency refers to the ethical obligation regarding data privacy.
      </p>
      <p>
        It should be noted that the terms transparency and explainability are used in many subfields of
artificial intelligence, robotics, and autonomous systems with slightly different meanings. As Patidar et
al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] suggest, we contend that an explainable AI system significantly enhances the transparency of
the overall system.
      </p>
      <p>
        In this paper, we focus on the transparency of the system for user understanding and interpretability.
Therefore, in the following sections, the term ‘transparency’ refers to a system characteristic that allows
users to access data (i.e., perceiving the information on the display or being able to perceive it) that
explains the process supporting its decisions and is represented in a manner easily understandable by
humans [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In contrast, the term explainability denotes the extent to which the information made
transparently available to users can be readily interpreted by them. It requires describing the causality
behind a system’s decision [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        If a consensus on the definition of transparency has yet to be reached in the literature, the same holds
for its design, particularly in terms of how it should be concretely applied to classification systems. It
remains unclear what information should be provided and how it should be presented. Most researchers
have focused on the representation of a specific type of information from the classification systems,
which are explanations regarding XAI. As mentioned earlier, XAI aims to address the “black box”
problem (i.e., “the challenge of understanding and interpreting how complex machine learning models,
particularly deep learning algorithms, make decisions” [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]) in complex AI models by making their
decision-making processes understandable to humans. Various explainability methods, such as post-hoc
explainers, have been developed, offering different approaches for visualizing explanations.
Nevertheless, there is limited empirical evidence on whether these “interpretable models” and explanation
representations are actually understandable and usable by users [
        <xref ref-type="bibr" rid="ref11 ref12 ref13">11, 12, 13</xref>
        ].
      </p>
      <p>
        Most empirical studies [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ] focus on designing static explanations, typically by comparing
user interfaces that present varying ‘levels of explanation.’ These levels refer to the granularity of
the explanations, that is, the level of information detail provided to users to justify or clarify
an AI system’s decision-making process. Granularity can range from high-level (coarse-grained)
explanations, which offer general insights into model behavior, to low-level (fine-grained) explanations,
which provide detailed, instance-specific justifications, such as feature contributions or intermediate
processing steps. These studies compare interfaces that provide varying levels of explanations about
the decisions made by the classification system, focusing on which level of granularity has the greatest
impact on factors such as performance, trust, and workload. It is also worth noting that some studies
indicate that performance declines when transparency is high (i.e., the amount of displayed information
exceeds a certain threshold). However, researchers have yet to provide a clear explanation for this
effect, specifically whether the drop is due to cognitive overload or a lack of training, both of which
may result from poor design representation [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. In this static approach, a fixed level of explanation
granularity is imposed on the user without allowing them to explore further. This approach assumes
that there is a single level that best aligns with the user’s needs. This assumption is unsatisfactory because
it overlooks individual variability in the mental models people use to make decisions. As some researchers
argue, system transparency must account for user preferences and individual differences. Vered et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]
propose a model called the demand-driven transparency model, which aligns with this viewpoint. They
argue that users should be able to choose the level of transparency based on their needs. On their
interface, the level of transparency is either predetermined and imposed on the user or adjustable
through a button click by the user.
      </p>
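      <p>To make the notion of granularity concrete, the sketch below contrasts a coarse-grained and a fine-grained explanation of the same recommendation. The feature names and contribution weights are invented for illustration; they are not taken from the studies cited above.</p>
      <preformat>
# Two levels of explanation granularity for one recommendation.
# All names and numbers are illustrative.

recommendation = {
    "class": "suspect",
    "confidence": 0.82,
    "feature_contributions": {        # instance-specific justification
        "broadband_noise_level": +0.46,
        "blade_rate_harmonics": +0.31,
        "transit_speed": +0.05,
    },
}

def coarse_explanation(rec: dict) -> str:
    # High-level: only the outcome and the overall confidence.
    return f"Classified as {rec['class']} (confidence {rec['confidence']:.0%})."

def fine_explanation(rec: dict) -> str:
    # Low-level: per-feature contributions, sorted by influence.
    parts = sorted(rec["feature_contributions"].items(),
                   key=lambda kv: -abs(kv[1]))
    details = ", ".join(f"{name}: {w:+.2f}" for name, w in parts)
    return coarse_explanation(rec) + " Contributions: " + details

print(coarse_explanation(recommendation))
print(fine_explanation(recommendation))
</preformat>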
      <p>To go beyond these binary representations of information, we propose a new approach for exploring
decisions made by the classification system through a depth-based layout, which we refer to as the
translucent approach.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Translucent classification systems for decision support</title>
      <sec id="sec-3-1">
        <title>3.1. A new approach to represent decisions: Translucency</title>
        <p>
          We mentioned that there is a need to design a classification system qualified as transparent to enhance the
performance of the human-classification system team. Achieving this transparency requires displaying
additional information to the user. However, current design solutions for information representation
still do not fully address or resolve the problem of information overload, where the volume of incoming
data exceeds the operator’s capacity to process it effectively. To make a decision, the user has to deal
with diverse information from various sources, particularly from the classification system behavior
(i.e., a final decision and explanations related to it), information from the other teammates, information
derived from the user’s operation, and information related to the tactical and environmental context.
To mitigate this information overload, it is essential to implement strategies that streamline information
presentation and enhance decision-making processes. Although not directly related to the field of
decision-making, Harrison et al. [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] explored an approach to information display by organizing
information into depth-based layouts. They developed a concept of layered displays which aimed to
“better support both focusing attention on a single interface object (without distraction from other
objects) and dividing or time sharing attention between multiple objects (to preserve context or global
awareness)”. The user interface was composed of what they called ‘semi-transparent’ windows,
menus, dialogue boxes, screens, or other objects where semi-transparency fits into a design space
of “layered” interface objects (see Figure 1) [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. Our concept of translucency aligns with this
semi-transparent approach. It seeks to go beyond the mere juxtaposition of graphical objects by
treating objects as information and incorporating semantic criteria to establish relationships between
them.
        </p>
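        <p>As a minimal sketch of the compositing that underlies such layered, semi-transparent displays (a standard ‘over’ alpha blend, not the specific implementation of Harrison et al. [<xref ref-type="bibr" rid="ref18">18</xref>]), consider:</p>
        <preformat>
# Sketch: "over" alpha compositing of a semi-transparent interface layer
# on top of a background layer, per pixel. Colors are RGB in [0, 1];
# alpha is the foreground layer's opacity. All values are illustrative.

def composite_over(fg, bg, alpha: float):
    """Blend a foreground color over a background color."""
    return tuple(alpha * f + (1.0 - alpha) * b for f, b in zip(fg, bg))

focused_window = (0.9, 0.9, 0.9)  # light grey window content (focus layer)
context_layer = (0.1, 0.3, 0.6)   # blue situation display behind it
print(composite_over(focused_window, context_layer, alpha=0.7))
</preformat>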
        <p>We consider translucency as a new approach to represent information to support decisions. The
concept of translucency in decision representation can take several forms. It is important to distinguish
between representations aimed at assisting users during the decision-making process and those that
communicate a decision after it has been made.</p>
        <p>
          The term ‘translucency’ usually refers to the property of being ‘translucent,’ which is defined as
“allowing some light to pass through” (Cambridge Dictionary) [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. In the field of optics, translucency
is defined as “the property of a specimen by which it transmits light diffusely without allowing a clear
view of objects through it” [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. A translucent material is characterized by the scattering of light within
it, alongside the effects of reflection and refraction. This scattering arises from the presence of particles
that interact with the light, causing it to diffuse in various directions. Due to their inherent properties of
absorption, refraction, and scattering, these particles are capable of diffusing incident light, resulting in
a softened or blurred transmission, which is the main defining feature of translucency [
          <xref ref-type="bibr" rid="ref21">21, 22</xref>
          ]. Diferent
levels of translucency can be distinguished. The level of translucency depends on the particle density
of the material. The higher the material’s density, the lower the level of translucency; conversely, the
lower the density, the higher the level of translucency [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. Depending on the degree of translucency,
visual perception will vary. When looking at an object through a translucent filter with a low degree of
translucency, one can still perceive its colors and contours, but details become difficult to discern. This
makes it hard for the observer to recognize the object, though its intrinsic shape remains identifiable.
As a result, the viewer is likely to search for details. In contrast, viewing an object through a highly
translucent filter causes colors to appear less saturated and contours more blurred. This makes it nearly
impossible for the observer to recognize the object. Consequently, attention is drawn to the sharp part
of the image, which simply serves to guide the viewer toward the most relevant information [<xref ref-type="bibr" rid="ref23">23</xref>].
        </p>
        <p>The blur effect has been applied to information representation in the field of human-machine interfaces
as a depth and selective cue, helping to guide the user’s attention when large amounts of information
are displayed [<xref ref-type="bibr" rid="ref24">24</xref>]. Experimental studies show that, when combined with other depth cues such as
transparency and contrast cues like color [<xref ref-type="bibr" rid="ref25">25</xref>], blur becomes an effective means of directing the user’s
focus.</p>
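        <p>A minimal sketch of blur as a selective cue follows; it is our own illustration rather than the design evaluated in [<xref ref-type="bibr" rid="ref24">24</xref>] or [<xref ref-type="bibr" rid="ref25">25</xref>]. Each display element receives a blur radius that grows as its task relevance decreases, so only the most relevant items remain sharp.</p>
        <preformat>
# Sketch: map each element's task relevance (0..1) to a blur radius, so
# low-relevance items are diffused and attention is drawn to the sharp,
# relevant ones. The maximum radius is an illustrative value.

MAX_BLUR_PX = 8.0

def blur_radius(relevance: float) -> float:
    """Higher relevance means a smaller blur; relevance 1.0 stays sharp."""
    relevance = max(0.0, min(1.0, relevance))
    return MAX_BLUR_PX * (1.0 - relevance)

elements = {"own_decision": 1.0, "teammate_decision": 0.6, "context": 0.2}
for name, rel in elements.items():
    print(f"{name}: blur {blur_radius(rel):.1f} px")
</preformat>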
        <p>We consider a decision-making scenario where a human agent must make a decision and compare it
with those of their teammates, which include both AI agents and human agents within the framework
of Human-Autonomy Teaming. To converge toward a shared decision, the human agent needs access
to sufficient information supporting each option. In their research, Guarino et al. [<xref ref-type="bibr" rid="ref26">26</xref>] define a list of
information that supports the interpretation of a decision, which they refer to as meta-information.
This supporting information provides the human with contextualization, allowing for a better
interpretation of the decisions made by their teammates. They define meta-information as “qualifiers” that
contextualize information and “therefore can critically influence how a decision-maker will process,
understand, and act on that information.” For example, uncertainty, reliability, or characteristics of the
source of the decision are meta-information.</p>
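        <p>As a sketch of how such qualifiers could accompany a decision in software, the following structure bundles a teammate’s decision with its meta-information. The field names loosely follow the examples of uncertainty, reliability, and source given above; they are illustrative assumptions, not a specification.</p>
        <preformat>
# Sketch: a decision bundled with the meta-information ("qualifiers")
# that contextualizes it. Field names and values are illustrative.
from dataclasses import dataclass

@dataclass
class MetaInformation:
    uncertainty: float   # e.g. 1 - confidence of the classifier
    reliability: float   # historical accuracy of the source
    source: str          # who produced the decision

@dataclass
class Decision:
    object_id: str
    proposed_class: str
    meta: MetaInformation

d = Decision("O1", "suspect",
             MetaInformation(uncertainty=0.25, reliability=0.9, source="OP3"))
print(d)
</preformat>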
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Use case: a translucent representation of decisions to assist a SONAR operator in a classification task</title>
        <p>Our research is conducted within a military context. We focus on the classification task performed by
SONAR operators. They must develop a clear understanding of their surrounding environment, which
involves accurately identifying the class of each suspicious noise source. SONAR operators have distinct
and complementary roles, with each operator having a unique perspective of the environment based on
the sensors they use. In order to classify these noise sources, they need to extract and interpret relevant
information through graphical displays (i.e., data extracted from sensors and processed through various
methods) and sound analysis, then share with the other operators this information along with the preliminary
classification derived from combining it. The classification proposed by the operators
is discussed among them and refined if consensus is not achieved. In addition to communication among
teammates, it is now essential to consider another form of communication: the collaboration between
an operator and a classification system designed to assist in the classification task. This human-machine
collaboration aims to enhance the operator’s decision-making process. Therefore, it appears that an
operator must manage a substantial amount of information from various sources and of different
natures. Furthermore, this classification task involves making decisions under time constraints,
with uncertain and incomplete information, and where the risks and consequences of errors can be
significant. In addition to these factors, stress, tiredness, and data overload make it challenging to
perform classification and prioritize tasks effectively and accurately.</p>
        <p>To illustrate our concept of translucency, we consider a simplified classification task in which a
human agent, OP1, must classify an object, O1. OP1 collaborates with two teammates—another human
agent, OP2, and an artificial agent, OP3—to make the final classification decision. OP3 refers to the
classification system. The object, O1, can belong to one of three possible classes: ‘friend’, ‘neutral’
or ’suspect’. The interface used by OP1 displays the decisions made by the teammates regarding O1
through a translucent representation. We propose a preliminary design to illustrate the dissonance
and consonance among these decisions (see Figure 2, Display view). We consider a translucent disk
representing the decisions of OP1, OP2, and OP3, who can agree or disagree about the class of the object.
In this configuration, we decided to represent the class of the object through its color and shape,
following North Atlantic Treaty Organization (NATO) formalism; the source of the decision through the
value of θ (i.e., θ ∈ [π/2; -π/2] for human agent decisions and θ ∈ [-π/2; π/2] for artificial agent decisions); and the decision dissonance
or consonance through the value of r (i.e., the distance between the center of the disk and the decision).</p>
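        <p>The following sketch illustrates this polar encoding. For simplicity, it fixes θ at the midpoint of each half-disk rather than varying it within the interval, and it assumes a normalized dissonance score in [0, 1] that scales r; both choices are our own illustrative assumptions.</p>
        <preformat>
# Sketch of the translucent-disk layout: each teammate's decision is
# placed at polar coordinates (r, theta). theta separates human and
# artificial sources into opposite half-disks; r grows with the
# dissonance between that decision and OP1's own decision.
import math

def decision_position(source_is_human: bool, dissonance: float,
                      disk_radius: float = 1.0):
    """Return (x, y) for a decision marker on the disk."""
    dissonance = max(0.0, min(1.0, dissonance))
    theta = math.pi / 2 if source_is_human else -math.pi / 2
    r = dissonance * disk_radius   # consonant decisions sit near the center
    return (r * math.cos(theta), r * math.sin(theta))

print(decision_position(source_is_human=True, dissonance=0.1))   # OP2, consonant
print(decision_position(source_is_human=False, dissonance=0.8))  # OP3, dissonant
</preformat>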
        <p>In this configuration, we illustrate different cases of disagreement. For instance, the top-right figure
shows the OP1 decision as being consonant with the OP2 decision and therefore represented as closer
together (i.e., with a smaller r value). However, the OP1 decision and the OP3 decision are dissonant and
therefore represented as more distant (i.e., with a larger r value). Here, we display the decisions from
OP2 and OP3 with translucency to indicate to OP1 the extent to which they support OP1’s own decision.</p>
        <p>This preliminary design is a conceptual sketch that does not yet demonstrate the relationship between
the decisions and their contextual information.</p>
        <p>We assume that this model of translucency for information representation could subtly guide the
operator’s attention and decision-making process by presenting information in a translucent way. It will
encourage the operator to engage with the underlying structure when necessary without imposing it
upon them. Therefore, this model leverages the ‘nudge’ effect, potentially enhancing both the efficiency
and effectiveness of decision-making. It signals the presence of an internal structure associated with
the emerging information without overwhelming the operator with the content of this structure while
allowing access if the operator deems it necessary.</p>
        <p>This model of translucency for information representation is still in its early stages and naturally
requires further refinement and conceptual development. We aim to conduct an initial exploratory
experiment to test our preliminary hypothesis, primarily comparing the effects of sharp and blurred
stimuli on users and testing whether making information translucent can intrigue the user and create
a nudge effect. We expect that translucent decision representation will impact factors such as cognitive
overload, performance, or situation awareness.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>
        The growing need for transparency in critical, complex systems to support informed decision-making
has highlighted a significant challenge of information representation. The literature shows that this
issue has garnered the interest of several researchers and is still under investigation. The existing
transparency models and designs in classification systems have generally shown positive effects on trust
and human-system classification performance [
        <xref ref-type="bibr" rid="ref16">16, 27</xref>
        ]. However, these results should be interpreted
with caution, as comparisons are challenging due to inconsistencies in the definition of transparency
and significant variability in protocol design. However, one notable observation is that the proposed
approaches to designing transparency have certain limitations. Transparency often requires displaying
more information, but studies have shown that when too much information is presented, users become
overloaded, leading to a decline in the performance of the human-system team [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Interestingly, no
clear explanations are provided to account for these findings. We believe that the real solution lies not
in fixing a specific level of transparency but rather in allowing users the flexibility to explore and access
the information they need to interpret decisions effectively. The preliminary approach of translucency
we developed is designed to tackle this issue. We have embedded our concept of translucency into the
field of decision support for operators of critical and complex systems, where they must manage vast
amounts of information, interact with other operators, and engage with a classification system to make
informed decisions. The goal is to introduce a new method of presenting information that allows for a
detailed and broad exploration of decision options and how they relate to other information.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used GPT-4 to check grammar and spelling. After using this
tool, the authors reviewed and edited the content as needed and took full responsibility for the publication’s
content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Simon</surname>
          </string-name>
          ,
          <article-title>Models of man: social and rational; mathematical essays on rational human behavior in society setting</article-title>
          , New York: Wiley,
          <year>1957</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] J. D. Lee, K. A. See, Trust in Automation: Designing for Appropriate Reliance, Human Factors 46 (2004) 50–80.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] J. Y. C. Chen, S. G. Lakhmani, K. Stowers, A. R. Selkowitz, J. L. Wright, M. Barnes, Situation awareness-based agent transparency and human-autonomy teaming effectiveness, Theoretical Issues in Ergonomics Science 19 (2018) 259–282. doi:10.1080/1463922X.2017.1315750.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J. B.</given-names>
            <surname>Lyons</surname>
          </string-name>
          ,
          <article-title>Being transparent about transparency: A model for human-robot interaction</article-title>
          ,
          <source>in: 2013 AAAI spring symposium series</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] Cambridge Dictionary, transparency, in: Cambridge University Press and Assessment, Cambridge Dictionary,
          <year>2025</year>
          . URL: https://dictionary.cambridge.org/us/dictionary/english/transparency.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K.</given-names>
            <surname>Haresamudram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Larsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Heintz</surname>
          </string-name>
          ,
          <article-title>Three levels of AI transparency</article-title>
          ,
          <source>Computer</source>
          (
          <year>2023</year>
          )
          <fpage>93</fpage>
          -
          <lpage>100</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Richard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mayag</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Talbot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tsoukias</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Meinard</surname>
          </string-name>
          ,
          <article-title>Transparency of classification systems for clinical decision support,</article-title>
          <year>2020</year>
          , pp.
          <fpage>99</fpage>
          -
          <lpage>113</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] N. Patidar, S. Mishra, R. Jain, D. Prajapati, A. Solanki, R. Suthar, K. Patel, H. Patel, Transparency in AI Decision Making: A Survey of Explainable AI Methods and Applications, Advances of Robotic Technology (2024).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] IEEE Standard for Transparency of Autonomous Systems, IEEE Std 7001-2021, IEEE, 2022.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] H.-F. Cheng, R. Wang, Z. Zhang, F. O’Connell, T. Gray, F. M. Harper, H. Zhu, Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019, pp. 1–12.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] A. Abdul, J. Vermeulen, D. Wang, B. Y. Lim, M. Kankanhalli, Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, Association for Computing Machinery, New York, NY, USA, 2018, pp. 1–18. doi:10.1145/3173574.3174156.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F.</given-names>
            <surname>Doshi-Velez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>A roadmap for a rigorous science of interpretability</article-title>
          ,
          <source>arXiv preprint arXiv:1702.08608</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>E. S.</given-names>
            <surname>Vorm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. D.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Modeling user information needs to enable successful human-machine teams: Designing transparency for autonomous systems</article-title>
          , in: International Conference on Human-Computer Interaction,
          <year>2020</year>
          , pp.
          <fpage>445</fpage>
          -
          <lpage>465</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] T. Helldin, U. Ohlander, G. Falkman, M. Riveiro, Transparency of Automated Combat Classification, in: D. Harris (Ed.), Engineering Psychology and Cognitive Ergonomics, Springer International Publishing, Cham, 2014, pp. 22–33. doi:10.1007/978-3-319-07515-0_3.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>C.</given-names>
            <surname>Westin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Borst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hilburn</surname>
          </string-name>
          ,
          <article-title>Automation transparency and personalized decision support: Air traffic controller interaction with a resolution advisory system</article-title>
          ,
          <source>IFAC-PapersOnLine</source>
          <volume>49</volume>
          (
          <year>2016</year>
          )
          <fpage>201</fpage>
          -
          <lpage>206</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bhaskara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Skinner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Loft</surname>
          </string-name>
          , Agent Transparency:
          <article-title>A Review of Current Theory and Evidence</article-title>
          ,
          <source>IEEE Transactions on Human-Machine Systems</source>
          <volume>50</volume>
          (
          <year>2020</year>
          )
          <fpage>215</fpage>
          -
          <lpage>224</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Vered</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Howe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Sonenberg</surname>
          </string-name>
          , E. Velloso,
          <article-title>Demand-driven transparency for monitoring intelligent agents</article-title>
          ,
          <source>IEEE Transactions on Human-Machine Systems</source>
          <volume>50</volume>
          (
          <year>2020</year>
          )
          <fpage>264</fpage>
          -
          <lpage>275</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>B. L.</given-names>
            <surname>Harrison</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ishii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. J.</given-names>
            <surname>Vicente</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. A.</given-names>
            <surname>Buxton</surname>
          </string-name>
          ,
          <article-title>Transparent layered user interfaces: An evaluation of a display design to enhance focused and divided attention</article-title>
          ,
          <source>in: Proceedings of the SIGCHI conference on Human factors in computing systems</source>
          ,
          <year>1995</year>
          , pp.
          <fpage>317</fpage>
          -
          <lpage>324</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19] Cambridge Dictionary, translucent, in: Cambridge University Press and Assessment, Cambridge Dictionary,
          <year>2025</year>
          . URL: https://dictionary.cambridge.org/us/dictionary/english/translucent.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20] ASTM E284-17, Standard Terminology of Appearance, Standard, ASTM International, 2022. doi:10.1520/E0284-17.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21] B. Xiao, B. Walter, T. Zickler, E. Adelson, K. Bala, Looking against the light: How perception of translucency depends on lighting direction, Journal of Vision 14 (2014). doi:10.1167/14.3.17.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] D. Gigilashvili, J.-B. Thomas, J. Y. Hardeberg, M. Pedersen, Translucency perception: A review, Journal of Vision 21 (2021). doi:10.1167/jov.21.8.4.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] P. Martin, Le flou est-il quantifiable ? : étude du flou-net de profondeur en photographie et en cinéma, Ph.D. thesis, Saint-Etienne, 2001.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] G. Colby, L. Scholl, Transparency and blur as selective cues for complex visual information, in: Image Handling and Reproduction Systems Integration, volume 1460, SPIE, 1991, pp. 114–125.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] R. Kosara, S. Miksch, H. Hauser, J. Schrammel, V. Giller, M. Tscheligi, Useful properties of semantic depth of field for better F+C visualization, in: ACM International Conference Proceeding Series, volume 22, 2002, pp. 205–210.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] S. L. Guarino, J. D. Pfautz, Z. Cox, E. Roth, Modeling human reasoning about meta-information, International Journal of Approximate Reasoning 50 (2009) 437–449.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] F. Rajabiyazdi, G. A. Jamieson, A review of transparency (seeing-into) models, in: 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2020, pp. 302–308.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>