<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards Transparent Recommender Systems via Argumentation Frameworks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Elena Stefancova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Comenius University Bratislava</institution>
          ,
          <country country="SK">Slovakia</country>
        </aff>
      </contrib-group>
      <fpage>58</fpage>
      <lpage>64</lpage>
      <abstract>
        <p>As artificial intelligence becomes more widespread, ensuring its trustworthiness is increasingly important. This work focuses on enhancing transparency in recommender systems by analyzing key fairness dimensions and their trade-offs, aiming to improve user trust. We also propose a novel synthetic data generation method to study changes in internal representations, offering insights into system behavior and decision-making. Ongoing work explores explainability, potentially via argumentation frameworks, to further support transparent and accountable recommendations.</p>
      </abstract>
      <kwd-group>
        <kwd>recommender systems</kwd>
        <kwd>fairness</kwd>
        <kwd>explainability</kwd>
        <kwd>argumentation frameworks</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Recent research has shifted from purely accuracy-oriented recommender systems towards more
trustworthy models that incorporate principles such as fairness, transparency, explainability, and
robustness [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. A trustworthy recommender system must not only deliver relevant results but also be secure,
responsible, and understandable to stakeholders. Transparency and explainability are especially critical,
as they improve user confidence and system accountability [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        The development of trustworthy systems spans multiple stages, including robust and explainable data
representation, fair and transparent recommendation generation, and ethically grounded evaluation
practices. Challenges such as noisy or biased data remain significant, and synthetic data generation has
emerged as a potential solution for controlled experimentation and improving reliability [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>This growing body of work highlights the importance of designing recommendation pipelines that
are not only effective, but also aligned with broader ethical and regulatory frameworks, such as the
EU’s Artificial Intelligence Act [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].</p>
      <sec id="sec-2-1">
        <title>2.1. Fairness</title>
        <p>
          Fairness in recommender systems has emerged as a critical aspect of trustworthy AI, aiming to mitigate
various forms of bias embedded in the data, models, and feedback loops [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. These biases can take the
form of data (e.g., exposure, cold-start, popularity), model (e.g., ranking), or feedback biases [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. While
fairness-aware techniques can reduce discrimination and improve the treatment of underrepresented
groups, they may introduce trade-offs with traditional accuracy metrics [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. In some cases, fairness
efforts also lead to beneficial side effects, such as improved diversity or better coverage of the long
tail [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
        <p>
          The concept of fairness is highly contextual and multi-dimensional [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Numerous definitions have
been proposed, including group fairness, individual fairness, process fairness, and outcome fairness [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
However, most research focuses on a single, static definition of fairness, often failing to reflect the
complexity of real-world deployments with multiple stakeholder groups. Fairness considerations
typically focus more on providers (e.g., item suppliers) than consumers (e.g., users), even though
both perspectives are necessary for holistic assessment [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. While some studies consider personalized
fairness based on users’ preferences or histories [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], these approaches rarely combine multiple fairness
definitions or adapt to dynamic user contexts.
        </p>
        <p>
          Fairness-aware recommendation methods span the full system pipeline, from pre-processing (e.g., data
relabeling or reweighting), in-processing (e.g., fairness constraints during training), to post-processing
(e.g., re-ranking) [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. These methods can be static, aiming for fairness across a population, or dynamic,
tailoring fairness to each user interaction [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. The choice between static and dynamic fairness often
depends on the application context—for example, newsletter recommendations benefit from static
batch approaches, while real-time interfaces require adaptive strategies. Evaluation metrics also vary,
including global fairness, group proportional fairness, mean reciprocal rank fairness, and others, each
capturing different trade-offs and fairness goals [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Despite recent progress, challenges remain due
to the multiplicity of fairness definitions, limited real-world deployments, and insufficient evaluation
across diverse stakeholder needs [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
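        <p>To make the post-processing stage concrete, the following sketch (illustrative only, with invented item scores and a hypothetical lam weight, not a method from the cited literature) greedily re-ranks a candidate list while trading off relevance against a target exposure share for a protected item group:</p>
        <preformat>
```python
def fair_rerank(candidates, scores, group, k, target_share, lam=0.5):
    """Greedy fairness-aware re-ranking (illustrative sketch).

    candidates: item ids; scores: id -> relevance score;
    group: id -> True if the item belongs to the protected group;
    target_share: desired fraction of protected items in the top-k;
    lam: weight of the fairness term relative to relevance.
    """
    selected = []
    pool = set(candidates)
    for _ in range(k):
        if not pool:
            break
        n_prot = sum(group[i] for i in selected)

        def utility(i):
            # Penalize deviation from the target protected-group share.
            share_after = (n_prot + group[i]) / (len(selected) + 1)
            return scores[i] - lam * abs(target_share - share_after)

        best = max(pool, key=utility)
        selected.append(best)
        pool.remove(best)
    return selected


items = ["a", "b", "c", "d"]
relevance = {"a": 0.9, "b": 0.8, "c": 0.4, "d": 0.3}
protected = {"a": False, "b": False, "c": True, "d": True}
# With lam=1.0 the protected long-tail item "c" displaces "b" in the top-2.
print(fair_rerank(items, relevance, protected, k=2, target_share=0.5, lam=1.0))
```
        </preformat>
        <p>Setting lam to zero recovers the purely relevance-ordered list, which makes the fairness-accuracy trade-off directly inspectable.</p>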
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Explainability</title>
        <p>
          Explainability plays a crucial role in fostering user trust, enabling users and developers alike to
assess system behavior, identify biases, and ensure fair and ethical use [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
        </p>
        <p>
          Explainable AI (XAI) encompasses techniques that either make models inherently interpretable
(white-box models) or provide post-hoc explanations for opaque, black-box systems [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. Techniques
such as LIME [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] and SHAP [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] offer local or global insights into decision-making processes. These
approaches serve purposes ranging from model debugging to improving user confidence and meeting
regulatory demands [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Nevertheless, post-hoc explainers often come with trade-offs in fidelity and
may not fully reflect the internal model logic [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
        </p>
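        <p>The local-surrogate idea behind such techniques can be illustrated with a toy sketch (pure NumPy, not the actual LIME API; the black-box function and all parameters are invented): perturb the input around a point of interest, query the black-box model, and fit a proximity-weighted linear model whose coefficients act as local feature attributions:</p>
        <preformat>
```python
import numpy as np

def black_box(X):
    # Opaque model to be explained: a simple nonlinear function.
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

def local_surrogate(x0, n_samples=2000, width=0.1, seed=0):
    """Fit a proximity-weighted linear surrogate around x0 (LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    X = x0 + rng.normal(scale=width, size=(n_samples, x0.size))
    y = black_box(X)
    # Nearby perturbations get larger weights.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * width ** 2))
    Xb = np.hstack([X, np.ones((n_samples, 1))])  # add an intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # attributions; the last entry is the intercept

# Near x0 = (1, 0) the true local gradient of x^2 + 3y is (2, 3).
attr = local_surrogate(np.array([1.0, 0.0]))
print(attr.round(2))
```
        </preformat>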
        <p>
          Within recommender systems, explainability is approached through both interpretable models (e.g.,
matrix factorization, knowledge-based approaches) and post-hoc methods (e.g., attention mechanisms,
rule extraction). A well-established taxonomy includes user-based, item-based, feature-based, and
opinion-based explanations [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. These explanations contribute not only to transparency but also to
system persuasiveness, user satisfaction, and effectiveness [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
        <p>
          Comparative explanations provide users with reasons why one item was recommended over another, supporting more
informed decision-making and improving user comprehension of ranking logic. By framing
recommendations in relative terms, these methods align system outputs with human reasoning and enhance user
interaction [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ].
        </p>
        <p>The continued exploration of explainability frameworks remains essential for building recommender
systems that are not only effective but also responsible and comprehensible.</p>
        <p>
          Argumentation frameworks have increasingly been recognized as a powerful tool for enhancing
explainability in artificial intelligence systems. These frameworks provide a structured way to model and
evaluate conflicting information, allowing AI models to generate human-understandable explanations
by simulating argumentative dialogues or reasoning processes [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. In the context of explainable
AI, argumentation frameworks serve to clarify decision-making processes by explicitly representing
supporting and opposing arguments related to specific predictions or recommendations [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. This
approach not only aids transparency but also facilitates trust by enabling users to engage interactively
with the system’s reasoning, thereby improving interpretability and user acceptance.
        </p>
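        <p>As a minimal, self-contained illustration of this idea (a textbook construction with a made-up three-argument example, not a system component from this work), the grounded extension of a Dung-style abstract argumentation framework, the most skeptical set of collectively acceptable arguments, can be computed by iterating the characteristic function to its least fixed point:</p>
        <preformat>
```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework.

    arguments: set of argument labels; attacks: set of (attacker, target).
    An argument is acceptable w.r.t. a set S if every one of its attackers
    is itself attacked by some member of S; iterating this acceptability
    check from the empty set converges to the least fixed point.
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    S = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in S) for b in attackers[a])
        }
        if defended == S:
            return S
        S = defended

# Toy dialogue: item recommendation "a" is attacked by a fairness
# objection "b", which is in turn countered by a stronger argument "c".
args = {"a", "b", "c"}
atts = {("b", "a"), ("c", "b")}
print(sorted(grounded_extension(args, atts)))
```
        </preformat>
        <p>Because "c" defeats the objection "b", the recommendation "a" is reinstated; presenting such attack-and-defense chains is one way an argumentative explanation can be surfaced to users.</p>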
        <p>
          Moreover, argumentation-based explainability has been applied to various domains, including
recommender systems, where it supports multi-stakeholder perspectives by incorporating diverse viewpoints
and fairness considerations into the explanation generation process [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]. Despite its potential,
challenges remain in scaling argumentation frameworks to complex models and integrating them seamlessly
with existing AI architectures, highlighting ongoing research efforts to balance computational efficiency
with explanatory depth.
        </p>
        <p>Overall, argumentation frameworks represent a promising avenue for advancing explainability by
enabling AI systems to provide nuanced, interactive, and context-aware justifications for their outputs.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Motivation</title>
      <p>My motivation to engage deeply with this research topic stems from a strong alignment between my
personal interests and the strategic focus of my research groups. Over the past nine years, I have
cultivated extensive experience in recommender systems, beginning with my Master’s thesis and
subsequently exploring the ethical dimensions of these systems in a professional context. During this
period, I concentrated on identifying and mitigating the negative effects of recommender systems, such
as filter bubbles and related biases.</p>
      <p>During my doctoral studies at Comenius University in Bratislava, Slovakia, I became a member of a
research group specializing in knowledge representation and explainable artificial intelligence, which
remains my primary academic affiliation. Concurrently, I had the opportunity to spend an academic year
at the University of Colorado Boulder, USA, supported by a Fulbright Award. There, I collaborated with
a research group focused on recommender systems, particularly investigating fairness considerations
from the perspectives of multiple stakeholders. Our work also emphasizes the critical need for model
transparency and effective communication regarding the rationale behind recommendations, especially
when fairness constraints influence the outcomes.</p>
      <p>Currently, my research agenda prioritizes advancing fairness-aware recommendation systems while
progressively shifting focus towards explainability. Having completed the initial phase centered on
fairness, I am dedicated to developing explainability techniques and refining argumentation frameworks,
ultimately aiming to complete my doctoral dissertation.</p>
      <p>Participating in the doctoral consortium would provide invaluable feedback, particularly on aspects
related to explainability and the integration of argumentation frameworks, thereby enriching the rigor
and impact of my research.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Project Proposal</title>
      <p>My project proposal consists of several steps.</p>
      <p>Firstly, fairness integration: employing a hybrid multi-stage recommendation architecture in which an
initial personalized recommendation list is re-ranked to incorporate fairness constraints. This approach
supports multiple, dynamic fairness definitions via allocation and choice mechanisms, enabling modular,
scalable, and partly explainable adjustments. The research investigates trade-offs between fairness and
accuracy and the joint effects of various fairness notions, including provider-side, consumer-side, group,
and individual fairness.</p>
      <p>Secondly, developing a synthetic data generation method inspired by matrix factorization techniques
to simulate realistic user-item interactions and controlled biases. This allows systematic evaluation of
fairness-aware algorithms under diverse conditions and supports experimentation on fairness-accuracy
trade-offs with adjustable sensitive features and bias parameters.</p>
      <p>Lastly, enhancing explainability by leveraging knowledge representation methods such as ontologies,
argumentation frameworks, and comparative explanations. The work focuses on communicating why
certain items are recommended, especially when fairness mechanisms affect rankings. Comparative
explanations provide transparency regarding deviations introduced by fairness-aware re-ranking. User
studies are planned to evaluate explanation effectiveness, assessing metrics such as trust, transparency,
and user satisfaction.</p>
      <p>Employing argumentation frameworks allows the system to articulate not only why certain items
were recommended but also how competing fairness considerations and stakeholder preferences were
balanced. Moreover, argumentation frameworks facilitate the generation of dynamic, context-aware
explanations that can be tailored to different stakeholder groups, thereby improving the
communicative clarity and relevance of explanations. Through this explicit reasoning process, users can better
comprehend the trade-offs and decisions embedded in the recommendation, which fosters acceptance
of the system’s outputs.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Preliminary Results</title>
      <p>
        We explored the research questions through SCRUF-D, a dynamic multi-agent framework for
fairness-aware recommendation [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. Unlike static approaches, SCRUF-D models fairness as a dynamic property,
using multiple fairness agents, each representing a different fairness objective (e.g., group proportionality,
utility). Agents are allocated using mechanisms such as Lottery, Least Fair, or Weighted, and their
preferences are aggregated through social choice methods (e.g., Borda, Copeland, Rescoring).
      </p>
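      <p>The social choice step can be illustrated with a small sketch (a generic weighted Borda count, not SCRUF-D's actual implementation; the example rankings and weights are invented):</p>
      <preformat>
```python
def borda_aggregate(rankings, weights=None):
    """Aggregate ranked candidate lists via a weighted Borda count.

    rankings: list of best-first rankings over the same candidate items;
    weights: optional per-agent weights (e.g., allocation strengths).
    """
    if weights is None:
        weights = [1.0] * len(rankings)
    scores = {}
    for ranking, w in zip(rankings, weights):
        n = len(ranking)
        for pos, item in enumerate(ranking):
            # A Borda score of n-1 for first place down to 0 for last.
            scores[item] = scores.get(item, 0.0) + w * (n - 1 - pos)
    # Sort by total score, breaking ties alphabetically for determinism.
    return sorted(scores, key=lambda i: (-scores[i], i))

recommender = ["x", "y", "z"]     # accuracy-oriented ranking
fairness_agent = ["z", "x", "y"]  # prefers the long-tail item z
print(borda_aggregate([recommender, fairness_agent]))
```
      </preformat>
      <p>In this toy case the fairness agent lifts item "z" above "y" in the aggregate ranking while the top recommendation is preserved; varying the weights corresponds to varying how strongly each fairness agent is allocated.</p>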
      <p>
        Our experiments on real-world (e.g., MovieLens [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], Microlending [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]) and synthetic datasets
show that SCRUF-D can effectively balance multiple, heterogeneous fairness definitions with minimal
accuracy trade-offs. Our research has shown that it supports both group and individual provider-side
fairness [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ].
      </p>
      <p>
        To enable controlled experimentation, we developed LAFS, a synthetic data generation method based
on latent factor simulation [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ]. LAFS simulates user/item features, biases, and sensitive attributes,
making it well-suited for testing fairness interventions in recommendation pipelines.
      </p>
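      <p>The basic latent-factor recipe behind such generators can be sketched as follows (an illustrative toy, not the LAFS implementation; the bias mechanism and all parameters are invented for demonstration):</p>
      <preformat>
```python
import numpy as np

def synth_interactions(n_users=100, n_items=50, k=8, bias=0.5, seed=0):
    """Simulate a binary user-item interaction matrix from latent factors.

    bias controls how strongly items carrying the sensitive attribute
    are suppressed in the observed interactions (0 = unbiased).
    """
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(n_users, k))            # user latent factors
    V = rng.normal(size=(n_items, k))            # item latent factors
    sensitive = np.arange(n_items) % 3 == 0      # deterministic attribute
    scores = U @ V.T
    scores[:, sensitive] -= bias * scores.std()  # controlled exposure bias
    # An interaction is observed where the (biased) score clears a threshold.
    R = (scores > scores.mean() + scores.std()).astype(int)
    return R, sensitive

R, sensitive = synth_interactions()
print(R.shape, round(float(R.mean()), 3))
```
      </preformat>
      <p>Because the ground-truth factors and the injected bias are known, a fairness intervention can be scored against the unbiased preferences, which is what makes such synthetic data useful for controlled experiments.</p>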
      <p>SCRUF-D demonstrates strong flexibility across allocation and ranking mechanisms, enabling nuanced
fairness control across dynamic and multi-objective contexts.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>This work contributes to the development of transparent and fair recommender systems by addressing
multiple aspects of trustworthiness, including fairness-aware re-ranking, dynamic multi-objective
optimization, and synthetic data generation. The SCRUF-D framework enables modular and flexible
integration of fairness objectives, while the LAFS method allows for systematic evaluation through
controlled synthetic datasets.</p>
      <p>While these contributions offer a foundation for fair and accountable recommendations, future work
is shifting focus toward the dimension of explainability. In particular, we are exploring
argumentation-based frameworks as a structured approach for modeling, communicating, and justifying the reasoning
behind recommendations. Argumentation offers a promising pathway for incorporating stakeholder
perspectives, resolving conflicts between competing fairness goals, and providing interactive,
context-aware explanations. Ultimately, we aim to integrate fairness and explainability into a unified framework
that supports transparent, intelligible, and socially responsible recommendation processes.</p>
      <p>In particular, the enhancement of explainability through the incorporation of argumentation
frameworks would significantly benefit from the opportunity to participate in the KR Doctoral Consortium
and to receive expert feedback.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>Funded by the EU NextGenerationEU through the Recovery and Resilience Plan for Slovakia under the
project No. 09I05-03-V02-00064.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used ChatGPT in order to: Grammar and spelling
check. After using these tool(s)/service(s), the author(s) reviewed and edited the content as needed and
take(s) full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          European Commission
          ,
          <article-title>Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)</article-title>
          ,
          <source>Technical Report, European Commission</source>
          ,
          <year>2021</year>
          . URL: https://artificialintelligenceact.eu.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ricci</surname>
          </string-name>
          ,
          <article-title>Trustworthy recommender systems</article-title>
          ,
          <source>ACM Trans. Intell. Syst. Technol</source>
          .
          <volume>15</volume>
          (
          <year>2024</year>
          ). URL: https://doi.org/10.1145/3627826. doi:10.1145/3627826.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Swearingen</surname>
          </string-name>
          ,
          <article-title>The role of transparency in recommender systems</article-title>
          ,
          <source>in: CHI '02 Extended Abstracts on Human Factors in Computing Systems, CHI EA '02</source>
          ,
          Association for Computing Machinery, New York, NY, USA,
          <year>2002</year>
          , pp.
          <fpage>830</fpage>
          -
          <lpage>831</lpage>
          . URL: https://doi.org/10.1145/506443.506619. doi:10.1145/506443.506619.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Slokom</surname>
          </string-name>
          ,
          <article-title>Comparing recommender systems using synthetic data</article-title>
          ,
          <source>in: Proceedings of the 12th ACM Conference on Recommender Systems</source>
          , RecSys '18,
          Association for Computing Machinery, New York, NY, USA,
          <year>2018</year>
          , pp.
          <fpage>548</fpage>
          -
          <lpage>552</lpage>
          . URL: https://doi.org/10.1145/3240323.3240325. doi:10.1145/3240323.3240325.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruggieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Turini</surname>
          </string-name>
          ,
          <article-title>Discrimination-aware data mining</article-title>
          ,
          <source>in: Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, ACM</source>
          ,
          <year>2008</year>
          , pp.
          <fpage>560</fpage>
          -
          <lpage>568</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          , W. Ma, M. Zhang, Y. Liu,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <article-title>A survey on the fairness of recommender systems</article-title>
          ,
          <source>ACM Trans. Inf. Syst</source>
          .
          <volume>41</volume>
          (
          <year>2023</year>
          ). URL: https://doi.org/10.1145/3547333. doi:10.1145/3547333.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>N. Ranjbar</given-names>
            <surname>Kermany</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pizzato</surname>
          </string-name>
          ,
          <article-title>A fairness-aware multi-stakeholder recommender system</article-title>
          ,
          <source>World Wide Web</source>
          <volume>24</volume>
          (
          <year>2021</year>
          )
          <fpage>1995</fpage>
          -
          <lpage>2018</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Buhayh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kathait</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ragothaman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mattei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Burke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Voida</surname>
          </string-name>
          ,
          <article-title>The many faces of fairness: Exploring the institutional logics of multistakeholder microlending recommendation</article-title>
          ,
          <source>in: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1652</fpage>
          -
          <lpage>1663</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Ekstrand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Burke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Diaz</surname>
          </string-name>
          , et al.,
          <source>Fairness in information access systems, Foundations and Trends® in Information Retrieval</source>
          <volume>16</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>177</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>W.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Burke</surname>
          </string-name>
          ,
          <article-title>Personalizing fairness-aware re-ranking</article-title>
          , arXiv preprint arXiv:1809.02921 (
          <year>2018</year>
          ).
          <source>Presented at the 2nd FATRec Workshop held at RecSys</source>
          <year>2018</year>
          , Vancouver, CA.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Patro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Biswas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. P.</given-names>
            <surname>Gummadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <article-title>Fairrec: Two-sided fairness for personalized recommendations in two-sided platforms</article-title>
          ,
          <source>in: Proceedings of The Web Conference</source>
          <year>2020</year>
          ,
          <year>2020</year>
          , pp.
          <fpage>1194</fpage>
          -
          <lpage>1204</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>H.</given-names>
            <surname>Cramer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Holstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Vaughan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Daumé III</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dudík</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wallach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Garcia-Gathright</surname>
          </string-name>
          ,
          <article-title>Challenges of incorporating algorithmic fairness into industry practice</article-title>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ferrario</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Loi</surname>
          </string-name>
          ,
          <article-title>How explainability contributes to trust in AI</article-title>
          , in: 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT '22,
          Association for Computing Machinery, New York, NY, USA,
          <year>2022</year>
          , pp.
          <fpage>1457</fpage>
          -
          <lpage>1466</lpage>
          . URL: https://doi.org/10.1145/3531146.3533202. doi:10.1145/3531146.3533202.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>C.</given-names>
            <surname>Rudin</surname>
          </string-name>
          ,
          <article-title>Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead</article-title>
          ,
          <source>Nature machine intelligence</source>
          <volume>1</volume>
          (
          <year>2019</year>
          )
          <fpage>206</fpage>
          -
          <lpage>215</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Guestrin</surname>
          </string-name>
          ,
<article-title>"Why Should I Trust You?": Explaining the predictions of any classifier</article-title>
          ,
          <source>in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source>
, San Francisco, CA, USA, August 13-17,
<year>2016</year>
          , pp.
          <fpage>1135</fpage>
          -
          <lpage>1144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Lundberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>A unified approach to interpreting model predictions</article-title>
          , in: I. Guyon,
          <string-name>
            <given-names>U. V.</given-names>
            <surname>Luxburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wallach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fergus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vishwanathan</surname>
          </string-name>
          , R. Garnett (Eds.),
          <source>Advances in Neural Information Processing Systems</source>
          , volume
          <volume>30</volume>
          ,
Curran Associates, Inc.,
          <year>2017</year>
. URL: https://proceedings.neurips.cc/paper_files/paper/2017/file/8a20a8621978632d76c43dfd28b67767-Paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>P.</given-names>
            <surname>Gohel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mohanty</surname>
          </string-name>
          ,
<article-title>Explainable AI: current status and future directions</article-title>
          ,
          <source>arXiv preprint arXiv:2107.07045</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>C.</given-names>
            <surname>Musto</surname>
          </string-name>
          , M. de Gemmis, P. Lops, G. Semeraro,
<article-title>Generating post hoc review-based natural language justifications for recommender systems</article-title>
,
<source>User Modeling and User-Adapted Interaction</source>
<volume>31</volume>
(
<year>2021</year>
)
<fpage>629</fpage>
-
<lpage>673</lpage>
. URL: https://doi.org/10.1007/s11257-020-09270-8. doi:10.1007/s11257-020-09270-8.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Explainable recommendation: A survey and new perspectives</article-title>
          ,
          <source>Found. Trends Inf. Retr</source>
          .
          <volume>14</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>101</lpage>
. URL: https://doi.org/10.1561/1500000066. doi:10.1561/1500000066.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>N.</given-names>
            <surname>Tintarev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
<surname>Masthoff</surname>
          </string-name>
          ,
          <article-title>A survey of explanations in recommender systems</article-title>
          , in: Data Engineering Workshop, 2007 IEEE 23rd International Conference on, IEEE,
          <year>2007</year>
          , pp.
          <fpage>801</fpage>
          -
          <lpage>810</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>A.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Comparative explanations of recommendations</article-title>
          ,
          <source>in: Proceedings of the ACM Web Conference</source>
          <year>2022</year>
          ,
          <year>2022</year>
          , pp.
          <fpage>3113</fpage>
          -
          <lpage>3123</lpage>
. doi:10.1145/3485447.3512031.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vassiliades</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bassiliades</surname>
          </string-name>
          , T. Patkos,
<article-title>Argumentation and explainable artificial intelligence: a survey</article-title>
,
          <source>The Knowledge Engineering Review</source>
          <volume>36</volume>
          (
          <year>2021</year>
          )
<fpage>e5</fpage>
. doi:10.1017/S0269888921000011.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kampik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Čyras</surname>
          </string-name>
          ,
          <string-name>
<given-names>J.</given-names>
<surname>Ruiz Alarcón</surname>
          </string-name>
          ,
<article-title>Change in quantitative bipolar argumentation: Sufficient, necessary, and counterfactual explanations</article-title>
,
          <source>International Journal of Approximate Reasoning</source>
          <volume>164</volume>
          (
          <year>2024</year>
          )
<fpage>109066</fpage>
. URL: https://www.sciencedirect.com/science/article/pii/S0888613X23001974. doi:10.1016/j.ijar.2023.109066.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>S.</given-names>
            <surname>Naveed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Donkers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ziegler</surname>
          </string-name>
          ,
          <article-title>Argumentation-based explanations in recommender systems: Conceptual framework and empirical results</article-title>
          ,
          <source>in: Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization</source>
          , UMAP '18,
Association for Computing Machinery, New York, NY, USA,
          <year>2018</year>
, pp.
<fpage>293</fpage>
-
<lpage>298</lpage>
. URL: https://doi.org/10.1145/3213586.3225240. doi:10.1145/3213586.3225240.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>A.</given-names>
            <surname>Aird</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Farastu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Stefancová</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>All</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Voida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mattei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Burke</surname>
          </string-name>
          ,
<article-title>Dynamic fairness-aware recommendation through multi-agent social choice</article-title>
          ,
          <source>ACM Trans. Recomm. Syst</source>
          .
          <volume>3</volume>
          (
          <year>2024</year>
). URL: https://doi.org/10.1145/3690653. doi:10.1145/3690653.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Harper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Konstan</surname>
          </string-name>
          ,
          <article-title>The movielens datasets: History and context</article-title>
          ,
<source>ACM Transactions on Interactive Intelligent Systems (TiiS)</source>
<volume>5</volume>
          (
          <year>2015</year>
          )
          <fpage>19</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>A.</given-names>
            <surname>Aird</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Štefancová</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>All</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Voida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Homola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mattei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Burke</surname>
          </string-name>
          ,
          <article-title>Social choice for heterogeneous fairness in recommendation</article-title>
          ,
          <source>in: Proceedings of the 18th ACM Conference on Recommender Systems</source>
          , RecSys '24,
Association for Computing Machinery, New York, NY, USA,
          <year>2024</year>
, pp.
<fpage>1096</fpage>
-
<lpage>1101</lpage>
. URL: https://doi.org/10.1145/3640457.3691706. doi:10.1145/3640457.3691706.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>E.</given-names>
            <surname>Stefancova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>All</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Paup</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Homola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mattei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Burke</surname>
          </string-name>
          ,
          <article-title>Data generation via latent factor simulation for fairness-aware re-ranking</article-title>
          ,
          <source>Presented at the 2024 FAccTRec Workshop on Responsible Recommendation</source>
          ,
          <year>2024</year>
. URL: https://arxiv.org/abs/2409.14078. arXiv:2409.14078.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>