<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>SIGIR Workshop on eCommerce</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Fairness-aware Recommendation via Neural Additive Models with Contrastive Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yuchen Guo</string-name>
          <email>yuchguo@ebay.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhenqi Zhao</string-name>
          <email>zhzhao2@coupang.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Menghan Wang</string-name>
          <email>menghawang@ebay.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
        </aff>
      </contrib-group>
      <kwd-group>
        <kwd>Fairness-aware Recommendation</kwd>
        <kwd>Neural Additive Model</kwd>
        <kwd>Contrastive Learning</kwd>
      </kwd-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>17</volume>
      <issue>2025</issue>
      <abstract>
        <p>Research on recommendation fairness has surged in recent years. Because recommender systems are typically designed to be customer-centric, customer-side fairness has been studied extensively. However, some less obvious fairness issues hidden on the provider side have not received comparable attention. One open question is whether product competitiveness in recommender systems (e.g., recommendation scores) varies monotonically with protected sensitive attributes on the provider side, such as item prices in E-commerce or ad bids in sponsored search. It is inherently unfair to sellers if the recommendation score of their listed products decreases as they lower their prices. Our investigation reveals that such instances of unfairness are not uncommon in recommender systems. In this paper, we define this phenomenon as an individual monotonic fairness issue and propose a novel, fairness-aware framework to address it. Our approach leverages monotonic neural additive models, which theoretically ensure monotonicity, and incorporates contrastive learning to enhance fairness through augmented samples. Additionally, we introduce specific evaluation metrics to quantify fairness. Extensive experiments on real-world datasets demonstrate that our method significantly improves monotonic fairness while maintaining a high level of personalization compared to state-of-the-art recommendation algorithms. The source code is available at https://github.com/yuchguo1007/MNAM-CL.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Over recent decades, recommender systems have rapidly evolved and become integral to modern web
and mobile applications such as eBay, Netflix, and Spotify. These online platforms act as intermediaries
between content providers (e.g., eBay sellers, Netflix filmmakers, Spotify artists) and customers by
offering recommendation services. Traditional personalized recommendation strives to enhance customer
satisfaction by suggesting the products that best match customers' interests, relying on historical user
interactions. However, such data-driven design inevitably introduces unfairness, on either the customer side
or the provider side. With growing awareness of unfairness in recommender systems, which is broadly
defined as harmful disparity in user experience [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5 ref6">1, 2, 3, 4, 5, 6</xref>
        ], research on recommendation fairness
has surged. Previous studies attempted to optimize two competing goals simultaneously: maximizing
recommendation accuracy and minimizing prediction discrimination across different subgroups.
      </p>
      <p>CEUR Workshop Proceedings (ISSN 1613-0073)</p>
      <p>[Figure 1: Proportion (%)]</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <sec id="sec-2-1">
        <title>2.1. Provider Fairness Discovery</title>
        <p>
          Provider fairness, or supplier fairness, refers to not discriminating against individual providers or groups
on sensitive attributes [
          <xref ref-type="bibr" rid="ref3 ref8 ref9">3, 8, 9</xref>
          ]. On one hand, unfairness in recommender systems is usually caused by
various forms of bias [
          <xref ref-type="bibr" rid="ref10 ref6">10, 6</xref>
          ], which mainly originate in the training data. On the other hand, the
increasingly complex structure of embedding-based deep models has brought large improvements to
data-driven recommender systems, but it also raises the risk of amplifying bias, which ultimately
leads to unfairness on sensitive attributes and harms the interests of minority providers. Recent work
has explored deep models with fairness-aware regularization to achieve fairness [
          <xref ref-type="bibr" rid="ref2 ref5">5, 2</xref>
          ].
Such fairness should be ensured regardless of the training data or the model. Therefore, a two-pronged
approach combining data augmentation and model-structure improvement is a reasonable and effective
solution to this unfairness.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Neural Additive Model</title>
        <p>
          Neural Additive Models (NAMs) restrict the structure of neural networks, yielding a
family of models that are inherently interpretable while suffering little loss in prediction accuracy when
applied to tabular data. Methodologically, NAMs belong to a larger model family called Generalized
Additive Models (GAMs) [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
        <p>
          NAMs learn a linear combination of networks that each attend to a single input feature: each shape
function in the traditional GAM formulation is parametrized by a neural network [
          <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
          ]. These networks are trained
jointly using backpropagation and can learn arbitrarily complex shape functions. With the continuous
development of AI technology, NAMs play an important role in fields where interpretable and
explainable models are required, such as healthcare and finance [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. Interpreting NAMs is easy, as
the impact of a feature on the prediction does not depend on the other features and can be understood by
visualizing its corresponding shape function. NAMs are also more easily extended than existing GAMs
owing to their differentiability and composability.
        </p>
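        <p>To make the additive structure concrete, a NAM forward pass can be sketched as a sum of independent per-feature subnetworks. The sketch below is a hypothetical minimal version in numpy; the subnetwork sizes and nonlinearities are illustrative, not the cited implementation:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_net(x, w1, b1, w2):
    """One small subnetwork: a learned shape function of a single scalar feature."""
    h = np.maximum(0.0, x * w1 + b1)   # hidden ReLU layer, shape (hidden,)
    return float(h @ w2)               # scalar shape-function output

def nam_predict(x, params, bias):
    """NAM output: global bias plus the sum of per-feature shape functions."""
    return bias + sum(feature_net(xj, *p) for xj, p in zip(x, params))

n_features, hidden = 3, 8
params = [(rng.normal(size=hidden), rng.normal(size=hidden),
           rng.normal(size=hidden)) for _ in range(n_features)]
score = nam_predict(np.array([0.2, -1.0, 0.5]), params, bias=0.1)
```

        <p>Because each subnetwork sees only its own feature, a feature's contribution can be read off (and plotted) by evaluating its subnetwork alone, which is what makes such models interpretable.</p>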
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Contrastive Learning</title>
        <p>
          Contrastive learning has emerged as a powerful paradigm in unsupervised and self-supervised
learning [
          <xref ref-type="bibr" rid="ref15 ref16">15, 16, 17</xref>
          ], significantly reducing the performance gap between supervised and
unsupervised learning. At its core, contrastive learning aims to learn similar representations for
semantically similar instances and dissimilar representations for distinct ones. It accomplishes this by
continuously optimizing a contrastive loss over pairs produced by a variety of data augmentation
methods [18]. In particular, by leveraging large amounts of unlabeled data, it opens up new avenues for
model training in scenarios where labeled data is scarce or expensive to obtain. The effectiveness of
this approach has been showcased in numerous applications, such as image recognition, speech
recognition, and natural language processing [19, 20, 21].
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Monotonic Fairness</title>
      <sec id="sec-3-1">
        <title>3.1. Definition</title>
        <p>In this section, we introduce the definition of monotonic fairness and its measurement methodology.</p>
        <p>DEFINITION 1 (MONOTONIC FAIRNESS). Assume we can partition a multi-dimensional
vector x ∈ ℝⁿ into x = (u, p) ∈ ℝⁿ⁻ᵐ × ℝᵐ. A function f(x) for x = (u, p) is monotonic on p if
the following inequality holds<sup>1</sup>:</p>
        <p>f(u, p) &lt; f(u, p′), ∀u, ∀p &lt; p′,   (1)</p>
        <p>where p &lt; p′ denotes the elementwise inequality (i.e., pᵢ ≤ p′ᵢ for all 1 ≤ i ≤ m, where pᵢ denotes
the i-th element of p). The formula above states that f is monotonic on p. For a differentiable function
f, Equation (1) is equivalent to:</p>
        <p>min_{i∈[1,m]} ∂f(u, p)/∂pᵢ ≥ 0.   (2)</p>
        <p>Assuming that u and p refer to the unprotected and protected attributes in recommender systems,
Equation (2) characterizes the individual monotonic fairness of each protected attribute.</p>
        <p><sup>1</sup>Assume that all monotonic constraints are increasing; the monotonically non-increasing case can be considered analogously.</p>
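        <p>Definition 1 can be checked empirically by sweeping the protected vector and comparing scores. The sketch below is illustrative; the toy model and helper name are assumptions, not part of the paper:</p>

```python
import numpy as np

def is_monotonic_on_p(f, u, p_values):
    """Empirical check of Definition 1: scores must strictly increase as the
    protected vector p increases elementwise (p_values sorted ascending)."""
    scores = [f(u, p) for p in p_values]
    return all(scores[i + 1] > scores[i] for i in range(len(scores) - 1))

# toy scorer: monotonically increasing in p by construction
def toy_model(u, p):
    return float(np.dot(u, [0.5, -0.3]) + 2.0 * np.sum(p))

u = np.array([1.0, 2.0])
grid = [np.array([0.1]), np.array([0.2]), np.array([0.5])]
ok = is_monotonic_on_p(toy_model, u, grid)  # → True
```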
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Metric</title>
        <p>According to Definition 1, the metric best matched to monotonic fairness is pairwise ranking
accuracy [22, 23, 24, 25], which measures the probability that the system ranks a pair of items
correctly conditioned on the true outcome. Formally, the metric is defined as:</p>
        <p>PairAcc = P(f(x) &lt; f(x′) | x ∈ D, x′ ∈ D) = P(f(u, p) &lt; f(u, p′) | ∀u, ∀p &lt; p′).   (3)</p>
        <p>Intuitively, given an item from dataset D, this metric is the probability that the model score remains
monotonic with respect to the item itself as the protected feature p varies. Cases that fail the criterion
(i.e., unfair cases) fall into two types:</p>
        <p>Unfairness = { Reverse: P(f(u, p) &gt; f(u, p′) | ∀u, ∀p &lt; p′);  Irrelevant: P(f(u, p) = f(u, p′) | ∀u, ∀p &lt; p′) }.   (4)</p>
        <p>As shown in Equation (4), reverse and irrelevant represent different levels of unfairness, both of which
are targets to be eliminated. For irrelevant cases, the tolerance for the equality is set to 1e-6.</p>
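        <p>Assuming one (original, perturbed) score pair per item and the 1e-6 tolerance stated above, the rates in Equations (3) and (4) could be computed as follows; the function and variable names are hypothetical:</p>

```python
def monotonic_fairness_rates(score_pairs, tol=1e-6):
    """score_pairs: (s, s_prime) per item, where s_prime is the score after
    increasing the protected attribute.
    Returns (PairAcc, reverse rate, irrelevant rate)."""
    n = len(score_pairs)
    irrelevant = sum(1 for s, sp in score_pairs if tol >= abs(sp - s))
    reverse = sum(1 for s, sp in score_pairs
                  if abs(sp - s) > tol and s > sp)
    return (n - reverse - irrelevant) / n, reverse / n, irrelevant / n

rates = monotonic_fairness_rates([(0.2, 0.5), (0.4, 0.1),
                                  (0.3, 0.3), (0.6, 0.9)])
# → (0.5, 0.25, 0.25): two fair pairs, one reverse, one irrelevant
```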
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Proposed Model</title>
      <p>In this section, we describe the proposed framework in detail. The architecture of MNAM-CL, as
illustrated in Figure 2, is based on the classic two-tower model. It addresses reverse unfairness instances
by isolating protected attributes from the population and constructing a certified monotonic neural
additive model. Furthermore, to tackle the more challenging irrelevant instances, where subtle changes
in protected attributes fail to influence final outcomes, we employ an additional approach. Augmented
samples are generated through a self-supervised manner by simulating real changes to protected
attributes, serving as extra data for fairness tasks to assist model training.</p>
      <sec id="sec-4-1">
        <title>4.1. Certified Monotonic Neural Additive Models</title>
        <p>
          Neural additive models (NAMs) [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] consist of a linear combination of neural networks that each attend
to a single input feature, making it possible to learn arbitrarily complex relationships between each
input feature and the output. Drawing inspiration from NAMs, we develop a Monotonic Neural
Additive Model (MNAM), as illustrated in Figure 2. Each protected attribute is assigned an expert
network as its weight, whose input is the unprotected features. MNAM is thus mathematically
formulated as follows:
        </p>
        <p>f(x) = g(u) + ∑ₖ₌₁ᵈ hₖ(u) ⋅ Θₖ(pₖ),   (5)</p>
        <p>where g(⋅) is the score function for the unprotected features, Θₖ(⋅) is the normalization function for
the k-th protected feature, and hₖ(⋅) is the corresponding weight function. The partial derivative of
f(⋅) with respect to pₖ yields the condition of Equation (2):</p>
        <p>min_{k∈[1,d]} ∂f(x)/∂pₖ = min_{k∈[1,d]} hₖ(u) ⋅ Θ′ₖ(pₖ) ≥ 0.   (6)</p>
        <p>Under the condition that hₖ(⋅) &gt; 0 and Θₖ(⋅) is differentiable, this derivation theoretically guarantees
monotonicity between f and pₖ as long as Θ′ₖ(pₖ) ≥ 0. In our work, hₖ(⋅) is a multi-layer fully connected
network with sigmoid as the final activation function, which satisfies the monotonic constraint while
introducing nonlinearity.</p>
        <p>Algorithm 1: Procedures of Data Augmentation (Pairwise).
Input: a training dataset D = {(xᵢ, yᵢ)}. Output: an augmented dataset D′ = {(x′ᵢ, y′ᵢ)}.
For each protected attribute pⱼ: let lⱼ and uⱼ be the lower and upper boundaries of pⱼ.
For each sample (xᵢ, yᵢ) ∈ D: draw p′ from [lⱼ, uⱼ]; set y′ = 1(pᵢⱼ &lt; p′); set x′ᵢ to xᵢ with pᵢⱼ replaced
by p′; generate an augmented sample (x′ᵢ, y′).</p>
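        <p>A minimal numerical sketch of Equation (5): here hₖ ends in a sigmoid (hence strictly positive) and Θₖ is a positive linear rescaling (hence a nonnegative derivative), so monotonicity in the protected attribute holds by construction. The architecture details are illustrative, not the paper's exact networks:</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mnam_score(u, p, g_w, h_ws, theta_scales):
    """Equation (5): f(x) = g(u) + sum_k h_k(u) * Theta_k(p_k).
    h_k ends in a sigmoid, so h_k(u) lies in (0, 1) and is strictly positive;
    Theta_k is a positive linear rescaling, so its derivative is nonnegative."""
    g = float(np.dot(u, g_w))                          # unprotected-feature tower
    hs = [sigmoid(float(np.dot(u, w))) for w in h_ws]  # expert weights h_k(u)
    return g + sum(h * s * pk for h, s, pk in zip(hs, theta_scales, p))

u = np.array([0.3, -0.7])
g_w = np.array([1.0, 0.5])
h_ws = [np.array([0.2, 0.4])]   # one protected attribute
theta_scales = [0.8]            # positive slope for Theta_k

low = mnam_score(u, [1.0], g_w, h_ws, theta_scales)
high = mnam_score(u, [2.0], g_w, h_ws, theta_scales)  # high > low by construction
```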
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Self-supervised Contrastive Learning</title>
        <p>Contrastive learning aims to learn generalizable and transferable representations from unlabeled data
using contrastive pairs [20]. In our study, we focus on protected attributes, augmenting original training
data in a self-supervised manner.</p>
        <sec id="sec-4-2-1">
          <title>4.2.1. Data Augmentation</title>
          <p>
            The augmentation module plays a crucial role in contrastive learning, as indicated by previous
research [
            <xref ref-type="bibr" rid="ref15 ref16">17, 16, 15</xref>
            ]. The hypothesized monotonic relationship between protected attributes
and model scores naturally guides the following data augmentation approach, shown in Algorithm
1. Specifically, we generate additional augmented samples by randomly adjusting the value of protected
attributes within certain constraints while keeping the remaining unprotected features unchanged.
The augmentation simply simulates changes, active or passive, to the protected attributes on the
provider side. Under the monotonicity assumption, it is then straightforward to determine the
model-score relationship between the original and generated samples.
          </p>
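          <p>Algorithm 1's augmentation can be sketched as follows; the feature layout and field names are hypothetical:</p>

```python
import random

def augment_pairwise(dataset, p_index, p_low, p_high, seed=0):
    """Sketch of Algorithm 1: draw a new protected value p' within its
    boundaries, keep every other feature fixed, and label the pair with the
    monotonicity direction (y' = 1 iff p' exceeds the original value)."""
    rng = random.Random(seed)
    augmented = []
    for features, _label in dataset:
        p_new = rng.uniform(p_low, p_high)
        y_new = 1 if p_new > features[p_index] else 0
        f_new = list(features)
        f_new[p_index] = p_new          # only the protected feature changes
        augmented.append((tuple(f_new), y_new))
    return augmented

data = [((0.5, 10.0), 1), ((0.2, 30.0), 0)]   # (features, label); index 1 is price
aug = augment_pairwise(data, p_index=1, p_low=5.0, p_high=50.0)
```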
        </sec>
        <sec id="sec-4-2-2">
          <title>4.2.2. Loss Calculation</title>
          <p>Given the above data augmentation, the choice of loss function matters. As shown in Figure
2, MNAM-CL combines two losses: a primary loss and a fairness loss. For the primary task we
use pointwise cross entropy:</p>
          <p>L_primary = − ∑ᵢ yᵢ log(ŷᵢ) + (1 − yᵢ) log(1 − ŷᵢ),   (7)</p>
          <p>where yᵢ and ŷᵢ are the true label and the sigmoid probability of the i-th sample, respectively. For the
fairness task, we use BPR loss [26] to measure the mutual information between the original samples
and augmented samples. Let Δŷᵢ = ŷ′ᵢ − ŷᵢ; then</p>
          <p>L_fair = − ∑ᵢ y′ᵢ log σ(Δŷᵢ) + (1 − y′ᵢ) log σ(1 − Δŷᵢ),   (8)</p>
          <p>where ŷ′ᵢ is the predicted score of augmented sample x′ᵢ and σ is the sigmoid function. The total loss
is a linear combination of Equation (7) and Equation (8), with an L2-regularization term added to
avoid overfitting:</p>
          <p>L_total = L_primary + λ ⋅ L_fair + L_reg,   (9)</p>
          <p>where λ is a hyperparameter weighting the fairness loss. MNAM-CL follows the conventional
stochastic gradient descent (SGD) training routine.</p>
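          <p>Under this formulation of Equations (7)-(9), the total objective could be computed as below (a numpy sketch; λ = 0.1 follows the setting in Section 5.1.3, and the helper names are assumptions):</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def total_loss(y, y_hat, y_aug, y_hat_aug, weights, lam=0.1, l2=1e-6):
    """Pointwise cross entropy, a BPR-style fairness loss on
    (original, augmented) score pairs, and L2 regularization, combined linearly."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    primary = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    delta = np.asarray(y_hat_aug, float) - y_hat        # score gap per pair
    y_aug = np.asarray(y_aug, float)
    fair = -np.mean(y_aug * np.log(sigmoid(delta))
                    + (1 - y_aug) * np.log(sigmoid(1 - delta)))
    reg = l2 * sum(float(np.sum(w ** 2)) for w in weights)
    return primary + lam * fair + reg

loss = total_loss(y=[1, 0], y_hat=[0.8, 0.3],
                  y_aug=[1, 0], y_hat_aug=[0.9, 0.2],
                  weights=[np.ones(4)])
```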
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experiments</title>
      <p>In this section, we conduct experiments on three public datasets and answer the following Research
Questions (RQs):
• RQ1: How to define sensitive attributes for monotonic fairness?
• RQ2: Is monotonic unfairness prevalent in standard recommendation models?
• RQ3: Can MNAM completely eliminate monotonic unfairness?</p>
      <sec id="sec-5-1">
        <title>5.1. Datasets and Experimental Settings</title>
        <sec id="sec-5-1-0">
          <title>5.1.1. Dataset</title>
          <p>We evaluate fairness and recommendation performance on three datasets: MovieLens, Steam and
Beauty (http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/). Table 1 provides a
summary of the dataset statistics. Note that the original ratings of some datasets are explicit integer
ratings ranging from 1 to 5; we transform them into binary labels with a threshold of 3 to construct a
binary classification model. Additionally, we mark protected attributes for each dataset, selecting one
attribute for both MovieLens (i.e., ratings) and Steam (i.e., price), and multiple attributes for Beauty
(i.e., ratings and price). It is worth mentioning that after splitting the validation set from the raw
dataset at a ratio of 20%, we follow the same procedure as Algorithm 1 to generate a monotonic
fairness validation set.</p>
        </sec>
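        <p>The rating-to-binary transformation described above amounts to a simple threshold rule; whether a rating equal to 3 counts as positive is a convention, and the sketch below assumes strictly-above-threshold is positive:</p>

```python
def binarize_ratings(ratings, threshold=3):
    """Map explicit 1-5 ratings to binary labels; positive iff strictly above threshold."""
    return [1 if r > threshold else 0 for r in ratings]

labels = binarize_ratings([5, 3, 1, 4])  # → [1, 0, 0, 1]
```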
        <sec id="sec-5-1-1">
          <title>5.1.2. Competitors</title>
          <p>
            We compare MNAM-CL with several baselines: (1) Wide&amp;Deep [27], which combines linear
models and deep models to improve memorization and generalization capabilities. (2) NeuMF [28], a
neural network-based collaborative filtering method using binary cross-entropy loss. (3) NAM [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ],
an explainable model that learns a separate subnetwork for each input feature and combines their
outputs through an additive operation. (4) T2 [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ], a widely used Two-Tower model in recommender
systems that employs two separate deep neural networks to learn user and item embeddings independently.
MNAM-CL in this paper belongs to the family of generalized two-tower models.
          </p>
        </sec>
        <sec id="sec-5-1-2">
          <title>5.1.3. Evaluation Protocols</title>
          <p>We use the classical ROC-AUC and NDCG@10 to measure model accuracy. As defined in Section 3.2,
we use the reverse rate and the irrelevant rate to measure monotonic fairness, where smaller values denote
better fairness. For parameter settings, the weight λ of the fairness loss is set to 0.1, while the
L2-regularization coefficient is set to 1e-6.</p>
        </sec>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Scope of Protected Sensitive Attributes (RQ1)</title>
        <p>Unlike inherent human attributes, such as gender, age and race, which segment users into
subgroups, the sensitive attributes related to monotonic fairness are individual, quantifiable
dimensions. Ensuring fairness along these dimensions in recommender systems is critical for platforms,
as neglecting them can lead to a lose-lose scenario for both providers and platforms. For instance, price
is a critical factor influencing user purchases. Consider a seller who lowers the price of their item. If the
platform’s recommender system does not protect the price attribute, the item’s model score might fail to
increase as expected. This misalignment could cost transaction opportunities, a loss for both
the seller and the platform. In summary, any attribute that may disrupt the online platform ecosystem
if it fails to meet the monotonic fairness criteria in Definition 1 falls within the scope of sensitive
attributes discussed in this paper. These include, but are not limited to, item prices for sellers, bids for
advertisers, and movie ratings for filmmakers.</p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Monotonic Fairness Evaluation (RQ2)</title>
        <p>Before presenting our method, it is essential to clarify the current status of monotonic fairness in
representative recommender systems. As previously defined, monotonic unfairness on protected
attributes in recommendation is divided into reverse and irrelevant cases. We evaluate various baseline
models on the same datasets to observe their fairness performance. Table 2 shows that these baseline
models suffer from monotonic unfairness to varying degrees, despite good ranking metrics. These
results emphasize the importance of addressing monotonic unfairness from both the provider and
system perspectives.</p>
      </sec>
      <sec id="sec-5-4">
        <title>5.4. Efectiveness of MNAM (RQ3)</title>
        <p>Theoretically, MNAM alone eliminates all reverse cases, which is verified by the experiments shown
in Table 2. Nonetheless, MNAM still exhibits a certain number of irrelevant cases. According to Equation
(5), hₖ(u) is trained as the weight factor for Θₖ(pₖ), influencing the contribution of protected attributes
to the final score. Irrelevant cases therefore occur when hₖ(u) ⋅ |Θₖ(p′) − Θₖ(pₖ)| ≤ 1e-6. The reason
MNAM cannot reduce irrelevant cases is the absence of supervised constraints. In summary,
MNAM cannot completely eliminate monotonic unfairness, indicating a need for improvement in
reducing irrelevant cases.</p>
        <p>We conduct additional experiments to investigate the effectiveness of each component of MNAM-CL.
Figure 3 presents the unfairness distribution of Beauty@price, illustrating that the base model (T2) produces
unfairness indiscriminately and at a relatively high rate. In comparison, MNAM tends to generate
unfairness only when the score is near the upper bound 1.0 or the change rate is small (p′/pₖ ≈ 1.0).
This demonstrates the alignment between the effect and the design of MNAM. Compared with the other
ablation versions, MNAM-CL achieves improved monotonic fairness, particularly in alleviating irrelevant
cases. The further reduction of unfairness from MNAM to MNAM-CL proves the effectiveness of the data
augmentation and contrastive learning. The remaining unfair cases are mostly caused by diminishing
marginal effects, which is more acceptable from an ethical perspective. Furthermore, we evaluate
monotonic fairness on both single and multiple attributes, where MNAM-CL consistently outperforms
the baselines with stable performance.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion and Future Work</title>
      <p>In this paper, we study individual monotonic fairness in recommender systems and propose MNAM-CL,
a novel framework for reducing such unfairness. MNAM-CL can eliminate all reverse cases, and it
further enhances fairness through data augmentation and contrastive learning tailored to specific
scenarios and attributes. Extensive evaluations verify its effectiveness in modeling monotonic fairness
while maintaining recommendation accuracy. Fairness in process is the premise of fairness in result. In
future work, more complex pairwise monotonic fairness will be further explored.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this paper, the authors used GPT-4o for grammar and spelling checks.
After using this tool, the authors reviewed and edited the content as needed and take full responsibility
for the publication’s content.</p>
      <p>[17] S. Ge, S. Mishra, C.-L. Li, H. Wang, D. Jacobs, Robust contrastive learning using negative
samples with diminished semantics, Advances in Neural Information Processing Systems 34 (2021)
27356–27368.
[18] P. Zhou, Y.-L. Huang, Y. Xie, J. Gao, S. Wang, J. B. Kim, S. Kim, Is contrastive learning necessary? a
study of data augmentation vs contrastive learning in sequential recommendation, in: Proceedings
of the ACM on Web Conference 2024, 2024, pp. 3854–3863.
[19] C. Yang, J. Zou, J. Wu, H. Xu, S. Fan, Supervised contrastive learning for recommendation,</p>
      <p>Knowledge-Based Systems 258 (2022) 109973.
[20] J. Zhang, K. Ma, Rethinking the augmentation module in contrastive learning: Learning
hierarchical augmentation invariance with expanded views, in: Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition, 2022, pp. 16650–16659.
[21] W. Xia, C. Zhang, C. Weng, M. Yu, D. Yu, Self-supervised text-independent speaker verification
using prototypical momentum contrastive learning, arXiv preprint arXiv:2012.07178 (2020).
[22] Y. Wu, J. Cao, G. Xu, Fairness in recommender systems: evaluation approaches and assurance
strategies, ACM Transactions on Knowledge Discovery from Data 18 (2023) 1–37.
[23] A. Beutel, J. Chen, T. Doshi, H. Qian, L. Wei, Y. Wu, L. Heldt, Z. Zhao, L. Hong, E. H. Chi, et al.,
Fairness in recommendation ranking through pairwise comparisons, in: Proceedings of the
25th ACM SIGKDD international conference on knowledge discovery &amp; data mining, 2019, pp.
2212–2220.
[24] A. Fabris, G. Silvello, G. A. Susto, A. J. Biega, Pairwise fairness in ranking as a dissatisfaction
measure, in: Proceedings of the Sixteenth ACM International Conference on Web Search and Data
Mining, 2023, pp. 931–939.
[25] X. Wang, N. Thain, A. Sinha, F. Prost, E. H. Chi, J. Chen, A. Beutel, Practical compositional fairness:
Understanding fairness in multi-component recommender systems, in: Proceedings of the 14th
ACM International Conference on Web Search and Data Mining, 2021, pp. 436–444.
[26] S. Rendle, C. Freudenthaler, Z. Gantner, L. Schmidt-Thieme, Bpr: Bayesian personalized ranking
from implicit feedback, in: UAI 2009, Proceedings of the Twenty-Fifth Conference on Uncertainty
in Artificial Intelligence, Montreal, QC, Canada, June 18-21, 2009, 2009.
[27] H.-T. Cheng, L. Koc, J. Harmsen, T. Shaked, T. Chandra, H. Aradhye, G. Anderson, G. Corrado,
W. Chai, M. Ispir, et al., Wide &amp; deep learning for recommender systems, in: Proceedings of the
1st workshop on deep learning for recommender systems, 2016, pp. 7–10.
[28] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, T.-S. Chua, Neural collaborative filtering, in: WWW, 2017,
pp. 173–182.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Towards robust fairness-aware recommendation</article-title>
          ,
          <source>in: Proceedings of the 17th ACM Conference on Recommender Systems</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>211</fpage>
          -
          <lpage>222</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Gong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>When fairness meets bias: a debiased framework for fairness aware top-n recommendation</article-title>
          ,
          <source>in: Proceedings of the 17th ACM Conference on Recommender Systems</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>200</fpage>
          -
          <lpage>210</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Patro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Biswas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. P.</given-names>
            <surname>Gummadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <article-title>Fairrec: Two-sided fairness for personalized recommendations in two-sided platforms</article-title>
          ,
          <source>in: Proceedings of the web conference 2020</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1194</fpage>
          -
          <lpage>1204</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Improving recommendation fairness via data augmentation</article-title>
          ,
          <source>in: Proceedings of the ACM Web Conference</source>
          <year>2023</year>
          , WWW '23, Association for Computing Machinery, New York, NY, USA,
          <year>2023</year>
          , p.
          <fpage>1012</fpage>
          -
          <lpage>1020</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Mehrotra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>McInerney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Bouchard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lalmas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Diaz</surname>
          </string-name>
          ,
          <article-title>Towards a fair marketplace: Counterfactual evaluation of the trade-of between relevance, fairness &amp; satisfaction in recommendation systems</article-title>
          ,
          <source>in: Proceedings of the 27th acm international conference on information and knowledge management</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>2243</fpage>
          -
          <lpage>2251</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <article-title>Tfrom: A two-sided fairness-aware recommendation model for both customers and providers</article-title>
          ,
          <source>in: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1013</fpage>
          -
          <lpage>1022</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>X.</given-names>
            <surname>Yi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Z.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Heldt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kumthekar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Chi</surname>
          </string-name>
          ,
          <article-title>Sampling-bias-corrected neural modeling for large corpus item recommendations</article-title>
          ,
          <source>in: Proceedings of the 13th ACM Conference on Recommender Systems</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>269</fpage>
          -
          <lpage>277</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Improving recommendation fairness via data augmentation</article-title>
          ,
          <source>arXiv preprint arXiv:2302.06333</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>E.</given-names>
            <surname>Gómez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Boratto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Salamó</surname>
          </string-name>
          ,
          <article-title>Provider fairness across continents in collaborative recommender systems</article-title>
          ,
          <source>Information Processing &amp; Management</source>
          <volume>59</volume>
          (
          <year>2022</year>
          )
          <fpage>102719</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>T.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <article-title>ProFairRec: Provider fairness-aware news recommendation</article-title>
          ,
          <source>in: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1164</fpage>
          -
          <lpage>1173</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>T. J.</given-names>
            <surname>Hastie</surname>
          </string-name>
          ,
          <article-title>Generalized additive models</article-title>
          ,
          <source>in: Statistical Models in S</source>
          , Routledge,
          <year>2017</year>
          , pp.
          <fpage>249</fpage>
          -
          <lpage>307</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Melnick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Frosst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lengerich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Caruana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. E.</given-names>
            <surname>Hinton</surname>
          </string-name>
          ,
          <article-title>Neural additive models: Interpretable machine learning with neural nets</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>34</volume>
          (
          <year>2021</year>
          )
          <fpage>4699</fpage>
          -
          <lpage>4711</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bendersky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Grushetsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mitrichev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Sterling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Ravina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Qian</surname>
          </string-name>
          ,
          <article-title>Interpretable ranking with generalized additive models</article-title>
          ,
          <source>in: Proceedings of the 14th ACM International Conference on Web Search and Data Mining</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>499</fpage>
          -
          <lpage>507</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Barr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Paisley</surname>
          </string-name>
          ,
          <article-title>Gaussian process neural additive models</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>38</volume>
          ,
          <year>2024</year>
          , pp.
          <fpage>16865</fpage>
          -
          <lpage>16872</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>R.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Rethinking the effect of data augmentation in adversarial contrastive learning</article-title>
          ,
          <source>in: The Eleventh International Conference on Learning Representations</source>
          ,
          <year>2023</year>
          . URL: https://openreview.net/forum?id=0qmwFNJyxCL.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Robinson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Chuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jegelka</surname>
          </string-name>
          ,
          <article-title>Contrastive learning with hard negative samples</article-title>
          ,
          <source>in: International Conference on Learning Representations</source>
          ,
          <year>2021</year>
          . URL: https://openreview.net/forum?id=CR1XOQ0UTh-.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>