<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Examining the Impact of Multi-Objective Recommender Systems on Providers' Bias</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Reza Shafiloo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kostas Stefanidis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Tampere University</institution>
          ,
          <country country="FI">Finland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Recommender systems are designed to help customers find personalized content. However, biases in recommender systems can potentially exacerbate over time. Multi-objective recommender system (MORS) algorithms aim to alleviate bias while maintaining the accuracy of recommendation lists. While these algorithms effectively address item-side fairness, provider-side fairness often remains neglected. This study investigates the impact of MORS algorithms, leveraging evolutionary techniques to mitigate popularity bias on the item side, on providers' fairness. Our findings reveal that baseline algorithms can adversely affect providers' fairness. Moreover, it is demonstrated that evolutionary algorithms, specifically those introducing less popular items to the initial population of their algorithms, exhibit superior performance compared to other MORS algorithms in enhancing providers' fairness. This research sheds light on the crucial role MORS algorithms, particularly those employing evolutionary approaches, can play in mitigating bias and promoting fairness for both users and providers in recommender systems.</p>
      </abstract>
      <kwd-group>
        <kwd>Recommender systems</kwd>
        <kwd>Item-side fairness</kwd>
        <kwd>Producer-side fairness</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        These days, with the increasing amount of information on
the web, content providers need systems to personalize
content for end-users. As a result, users can efficiently access
their favorite content, leading to user satisfaction [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
Recommender systems (RS) provide personalized content for
users based on their historical interactions with systems,
such as ratings or clicks on items. Despite being a crucial
and valuable tool for users, RS has been identified as
amplifying various biases. These biases can significantly impact
the outcomes of RS, particularly concerning factors such as
gender, age, race, and other characteristics. One such bias
is popularity bias, where certain items typically receive a
substantial number of ratings, leading to them being
recommended more frequently than others.
      </p>
      <p>
        Fairness-aware recommender systems aim to address
algorithmic bias in various ways, ensuring the system’s
recommendations are unbiased [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Fairness-aware recommender
systems can take into account various attributes to offer
equitable recommendations. The concept involves
evaluating how a recommender system treats or affects
individuals or groups based on the values of specific attributes.
Methods for ensuring fairness in RS can be categorized into
pre-processing, which involves modifying input data [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ];
in-processing, which constrains learning algorithms for fair
recommendations [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]; and post-processing, which modifies
the output of the baseline algorithm [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        In RS, various stakeholders play crucial roles, with two
primary groups being consumers of items and providers of
items [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. However, numerous fairness-aware RS focus on
addressing consumer or producer-sided fairness, often
neglecting comprehensive, all-sided multi-stakeholder fairness.
While numerous studies concentrate on one-sided fairness
in RS, it is essential to explore how addressing fairness for
one group might impact the fairness of other stakeholders.
      </p>
      <p>
        Using Multi-objective Recommender Systems (MORS) as
a post-processing approach offers a potential solution for
achieving fairness in RS outputs [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Some existing MORS
specifically address fairness for the item side. These
approaches aim to maintain the accuracy of RS for consumer
satisfaction while also creating opportunities for
recommending less popular items, thereby mitigating popularity
bias [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ]. While preserving accuracy and enhancing
fairness among items is valuable, it is crucial to investigate
fairness among providers of items.
      </p>
      <p>
        In this study, our objective is to investigate the behavior of
MORS algorithms in mitigating item popularity bias and its
impact on providers’ fairness. While existing research has
shown the trade-off between mitigating popularity bias and
maintaining recommendation accuracy on the item side, it
is crucial to delve deeper into how the objectives of existing
work can influence providers' fairness. No prior research has
been conducted in this area, and our study aims to address
this gap [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Furthermore, we aim to explore which specific objectives
may have a trade-off with providers' fairness to provide a
more comprehensive understanding of the issue. We have
chosen MORS algorithms that benefit from evolutionary
algorithms to solve a multi-objective optimization to achieve
this. While evolutionary algorithms may not be the swiftest,
their superiority in addressing multi-objective problems
arises from their capability to tackle complex and non-linear
optimization problems [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>Our work shows that MORS algorithms perform better
in ensuring providers’ fairness than baseline algorithms.
MORS algorithms enable providers to showcase their items
more effectively than baseline algorithms. Notably, although
there is no significant difference among MORS algorithms in
covering providers' fairness, those algorithms that add less
popular items to the initial population of their evolutionary
algorithms perform better than the other MORS algorithms.</p>
      <p>The remainder of this paper is structured as follows.
Section 2 reviews related work on fairness in recommender systems.
Section 3 describes the algorithms we use in our study and
the measures we utilize to compare the algorithms’
performance. Section 4 presents some results of our framework
on the MovieLens and IMDB datasets. Section 5 concludes
this work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        The fundamental RS aims to forecast ratings for unknown
items among users, employing diverse algorithms for this
task. Approaches like User-based and Item-based
collaborative filtering algorithms, as explored by Adomavicius
et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and Yue et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], entail the identification of
similar users or items to predict item ratings. CF algorithms
can be used in many post-processing algorithms as baseline
algorithms, from neural networks [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] to multi-objective
evolutionary algorithms [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">9, 8, 10</xref>
        ].
      </p>
      <p>
        The search for an optimal balance between
accuracy and bias mitigation has garnered significant attention
in RS. Malekzadeh and Kaedi propose a strategy that
simultaneously personalizes recommended items to maintain
accuracy as discussed in their work [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Similarly, Wang et
al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] address the long-tail problem by employing
multi-objective evolutionary optimization algorithms, focusing
on improving recommendation list accuracy and reducing
the dominance of popular items. Shafiloo et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] present
a framework to alleviate popularity bias in recommender
systems by incorporating users’ dynamic preferences. Cai
et al. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] proposed a framework based on multi-objective
algorithms designed to concurrently optimize accuracy,
diversity, and coverage within recommendation lists.
Utilizing multi-objective algorithms reflects their commitment to
addressing multiple dimensions of recommendation
quality, aiming to enhance the overall user experience. Jain et
al. introduced a novel similarity metric tailored for
baseline algorithms[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Their approach involved modifying
fundamental functions of genetic algorithms, specifically the
crossover operation, to effectively manage the trade-off
between accuracy and diversity of recommended items. Pang
et al. introduced a framework based on genetic algorithms,
where accuracy and coverage serve as objective functions
[
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. This innovative approach is designed to tackle
popularity bias in recommendation lists, emphasizing a dual
focus on improving accuracy and coverage for a more
comprehensive and unbiased recommendation system.
      </p>
      <p>
        Fairness-aware recommender systems try to tackle the
algorithmic bias issue in different ways and ensure that the
recommendations made by the system are unbiased [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
However, many approaches tackle only one-sided fairness
issues and neglect all-sided multi-stakeholder fairness
[
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. In the realm of multi-stakeholder recommender
systems (MS-RS), where numerous users participate in the
recommendation process from multiple perspectives, as noted
by Cornacchia et al. [
        <xref ref-type="bibr" rid="ref19">19</xref>
], there should be studies on how
item-side fairness can affect fairness for other stakeholders.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methods</title>
      <p>In this section, our initial focus is to introduce the
algorithms employed in our study. Subsequently, we will delve
into the evaluation metrics utilized for comparing results.
Our objective is to comprehensively explore the impact of
item bias mitigation on the producers’ side fairness and
understand how it influences the outcomes of the
recommendation systems.</p>
      <sec id="sec-3-1">
        <title>3.1. Baseline algorithms</title>
        <p>We have selected two baseline algorithms, item-based and
user-based collaborative filtering, where no post-processing
has been applied to the output. These algorithms serve as
our baseline models for evaluating bias mitigation
strategies and their impact on the producers’ side in subsequent
analyses.</p>
        <p>For computing the similarity between two users, we have:</p>
        <p>sim(u, v) = \frac{\sum_{i \in I_u \cap I_v} (r_{u,i} - \bar{r}_u)(r_{v,i} - \bar{r}_v)}{\sqrt{\sum_{i \in I_u \cap I_v} (r_{u,i} - \bar{r}_u)^2} \; \sqrt{\sum_{i \in I_u \cap I_v} (r_{v,i} - \bar{r}_v)^2}} \quad (1)</p>
        <p>Equation 1 defines the similarity measure between two
users, u and v, calculated based on the items they have both
rated. Here, I_u represents the subset of items rated by user
u, r_{u,i} denotes the rating given by user u to item i, and \bar{r}_u
is the average rating provided by user u.</p>
        <p>\hat{r}_{u,i} = \bar{r}_u + \frac{\sum_{v \in N(u)} sim(u, v)(r_{v,i} - \bar{r}_v)}{\sum_{v \in N(u)} |sim(u, v)|} \quad (2)</p>
        <p>Equation 2 outlines the predicted rating \hat{r}_{u,i} of user
u for item i. It incorporates the average rating \bar{r}_u and
calculates the predicted rating by considering the similarity
between user u and the other users who have rated the same
item i. The set N(u) represents the group of nearest
users to u who have provided ratings for item i. Item-based
collaborative filtering is analogous, computing similarities
between items rather than users.</p>
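<p>To make Equations 1 and 2 concrete, the following is a minimal sketch of user-based collaborative filtering in Python. The function names and the dictionary-based rating representation are our own illustrative choices, not taken from the original implementations:</p>

```python
import math

def pearson_sim(ratings_u, ratings_v):
    """Similarity of two users (Eq. 1) over the items both have rated."""
    common = set(ratings_u) & set(ratings_v)
    if not common:
        return 0.0
    mean_u = sum(ratings_u.values()) / len(ratings_u)
    mean_v = sum(ratings_v.values()) / len(ratings_v)
    num = sum((ratings_u[i] - mean_u) * (ratings_v[i] - mean_v) for i in common)
    den_u = math.sqrt(sum((ratings_u[i] - mean_u) ** 2 for i in common))
    den_v = math.sqrt(sum((ratings_v[i] - mean_v) ** 2 for i in common))
    if den_u == 0 or den_v == 0:
        return 0.0
    return num / (den_u * den_v)

def predict_rating(user, item, all_ratings, k=5):
    """Predicted rating of `user` for `item` (Eq. 2) from the k nearest neighbours."""
    target = all_ratings[user]
    mean_u = sum(target.values()) / len(target)
    # Candidate neighbours: other users who have rated this item.
    neighbours = [(pearson_sim(target, r), r)
                  for v, r in all_ratings.items() if v != user and item in r]
    neighbours.sort(key=lambda t: abs(t[0]), reverse=True)
    num = den = 0.0
    for sim, r in neighbours[:k]:
        mean_v = sum(r.values()) / len(r)
        num += sim * (r[item] - mean_v)   # weighted deviation from neighbour's mean
        den += abs(sim)
    return mean_u if den == 0 else mean_u + num / den
```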
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Multi-objective algorithms</title>
        <p>In this section, we introduce algorithms that leverage the
outputs of baseline algorithms, implementing reranking
strategies to achieve specific objectives. Each algorithm
is characterized by an objective function to mitigate item
popularity bias.</p>
        <p>
          Malekzadeh and Kaedi [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] employ the simulated annealing algorithm to address the
long-tail problem in recommender systems. Their approach begins
with applying a collaborative filtering algorithm to generate
initial recommendation lists. Subsequently, an evolutionary
algorithm is employed to optimize the combination of items in
these lists, focusing on satisfying three defined objective
functions. These functions encompass considerations for
personalized diversification, accuracy, and increased
participation of long-tail items, aiming to enhance the overall
quality of recommendations. The objective functions are:
1. Diversity: Shannon entropy is used for diversity. The
entropy H_a(u) for attribute a of user u is defined using the formula:
H_a(u) = -\sum_{j=1}^{n_a} p_j \cdot \log p_j \quad (3)
In this Equation, H_a(u) is the entropy for attribute a of user u,
n_a is the number of possible values for attribute a, and p_j
represents the ratio of the number of ratings given by user u to
items whose attribute a has value j, divided by the total number
of the user's ratings. Essentially, this formula calculates the
entropy of the distribution of ratings given by user u across
different values of attribute a.
The attribute-based diversity of a recommendation list is
measured as:
Diversity_a(i_1, \dots, i_L) = \frac{1}{L(L-1)} \sum_{j=1}^{L} \sum_{k=1, k \neq j}^{L} (1 - similarity_a(i_j, i_k)) \quad (4)
In this context, L signifies the number of items within the
recommendation list, and i_1, \dots, i_L represents the items
recommended. The term similarity_a(i_j, i_k) denotes the measure
of similarity between two items i_j and i_k based on the
attribute a.
Equation 3 illustrates the ideal diversity for a specific user,
capturing an optimal scenario. Subsequently, the deviation
between this ideal diversity and the actual diversity computed
from Equation 4 for the recommendation list is measured. The
disparity for each item attribute is quantified as:
Personalized Diversity_a = |H_a(u) - Diversity_a| \quad (5)
In this expression, Diversity_a denotes the diversity of the
recommendation list based on attribute a, while H_a(u) signifies
the entropy of user preferences related to attribute a. This
metric, termed Personalized Diversity, quantifies the difference
between the ideal and the actual diversity in the recommendation
list for a given user, explicitly considering the preferences
associated with a particular attribute.
2. The participation of long-tail items: The long-tail metric is
computed using the formula:
Long Tail = \sum_{i=1}^{L} Popularity(i) \quad (6)
In this Equation, L signifies the size of the recommendation
list, representing the total number of items included in the
recommendation. A lower value obtained from this calculation
indicates a higher likelihood of incorporating less popular
items in the recommendation list, reflecting a greater emphasis
on the inclusion of long-tail items and a preference for
diversity and coverage beyond just popular items.
3. Accuracy: The accuracy metric is evaluated using the
following Equation:
Accuracy = \frac{1}{\sum_{i=1}^{L} PredictedRate(i)} \quad (7)
In this Equation, PredictedRate(i) denotes the predicted rating
assigned to item i. The formula computes the inverse of the sum
of the predicted ratings for all recommended items, offering a
metric to assess the accuracy of the recommendation system. A
lower value in the Accuracy metric suggests a higher overall
accuracy in the predicted ratings for the recommended items.
        </p>
        <p>
          Wang et al. [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] address the long-tail problem by defining two objective
functions. The first function assesses the accuracy of
recommendation lists, while the second aims to reduce the
dominance of popular items. The objectives can be expressed as
follows:
1. Accuracy: The objective function for assessing accuracy,
labeled f_1, is formulated as:
f_1 = \sum_{i=1}^{L} \hat{r}_{u,i} \quad (8)
In this expression, L denotes the length of the recommendation
list. A higher f_1 value signifies increased popularity of the
items within the list.
2. Long-tail recommendation: Items with higher ratings might be
prioritized higher on the ranking list for all users, and
popular items often receive similar ratings, resulting in low
variance. To measure unpopularity in terms of the mean and
variance of item ratings, Tamas et al. proposed a value for an
item i:
O_i = \frac{1}{\mu_i(\sigma_i^2 + 1)} \quad (9)
Here, \mu_i and \sigma_i^2 represent the mean and variance of
ratings for item i across all users. To prevent division by
zero, a value of one is added to the variance. The reciprocal of
this mean-variance combination yields the value O_i, where a
smaller value indicates a more popular item.
Motivated by this concept, an objective function f_2 is
introduced to calculate the unpopularity of the recommendation
result:
f_2 = \sum_{i=1}^{L} \frac{1}{\mu_i(\sigma_i^2 + 1)} \quad (10)
This function quantifies the unpopularity of the recommended
items, with lower values indicating more popular items in the
list.
        </p>
        <p>They employ a genetic algorithm to achieve these
objectives, seeking optimal combinations of items within
recommendation lists that satisfy the defined criteria. This
approach aims to enhance accuracy and mitigate popularity
bias for more balanced and practical recommendations.</p>
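<p>As an illustration of Wang et al.'s two objectives, the following sketch computes f_1 (Equation 8) and f_2 (Equations 9 and 10) for a candidate recommendation list. The helper names and the data layout are hypothetical, not from the authors' code:</p>

```python
def f1_accuracy(list_items, predicted):
    """f_1 (Eq. 8): sum of predicted ratings over the recommendation list."""
    return sum(predicted[i] for i in list_items)

def item_unpopularity(ratings_for_item):
    """O_i (Eq. 9): reciprocal of mean times (variance + 1) of an item's ratings."""
    n = len(ratings_for_item)
    mean = sum(ratings_for_item) / n
    var = sum((r - mean) ** 2 for r in ratings_for_item) / n
    return 1.0 / (mean * (var + 1))  # +1 prevents division by zero

def f2_long_tail(list_items, item_ratings):
    """f_2 (Eq. 10): total unpopularity of the recommended items."""
    return sum(item_unpopularity(item_ratings[i]) for i in list_items)
```

A popular item with a high rating mean and low variance gets a small O_i, so lists dominated by popular items score a low f_2.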
        <p>
          Shafiloo et al. [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] introduced a framework to alleviate
popularity bias in recommender systems by incorporating users’
dynamic preferences. Their approach employs a memetic
algorithm, creating opportunities to include unpopular items
in recommendation lists. They define two objective
functions within their framework, aiming to simultaneously
preserve accuracy and mitigate popularity bias. This innovative
solution focuses on providing more diverse and unbiased
recommendations, catering to the dynamic preferences of
users. The objectives to be achieved are:
1. Accuracy: In their research, they employ accuracy
as expressed in formula 7.
2. Long tail participation: They utilize long tail
participation as described in formula 6.
        </p>
        <p>Additionally, in their research, they modified the memetic
algorithm. Instead of randomly adding items to the initial
population, as is common in other genetic algorithms, they
introduced a higher possibility of including items from the
long tail and a lower possibility of including popular items
in the initial population.</p>
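<p>A biased initialization of this kind could be sketched as follows. The boost factor, the below-median definition of "long tail", and all names are our own illustrative assumptions rather than the authors' exact procedure:</p>

```python
import random

def biased_initial_population(candidates, popularity, pop_size, list_len,
                              long_tail_boost=3.0, seed=0):
    """Sample initial recommendation lists where long-tail (below-median
    popularity) items are `long_tail_boost` times more likely to be drawn."""
    rng = random.Random(seed)
    median = sorted(popularity[c] for c in candidates)[len(candidates) // 2]
    weights = [long_tail_boost if popularity[c] < median else 1.0
               for c in candidates]
    population = []
    for _ in range(pop_size):
        chosen = set()
        while len(chosen) < list_len:          # draw until the list is full,
            chosen.add(rng.choices(candidates, weights=weights)[0])  # no duplicates
        population.append(list(chosen))
    return population
```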
        <p>ILD(u) = \frac{1}{L(L-1)} \sum_{i \neq j} sim(i, j) \quad (14)</p>
        <p>Here, L represents the length of the recommendation
list for user u, and sim(i, j) calculates the similarity
between two items i and j based on a similarity metric
defined in Equation 1. The purpose of ILD(u) is to quantify
the similarity of items within user u's recommendation list.</p>
        <p>The intra-user diversity for all users is then defined as:
ILD_{all users}(U) = \frac{1}{N} \sum_{u \in U} ILD(u) \quad (15)</p>
        <p>Here,  denotes the number of users in the set  . This
Equation provides a measure of intra-user diversity
considering all users in the study.</p>
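<p>Equations 14 and 15 can be sketched directly; the function names and the similarity-callback interface below are illustrative choices:</p>

```python
def intra_user_diversity(rec_list, sim):
    """ILD(u) (Eq. 14): mean pairwise similarity of items in one user's list."""
    L = len(rec_list)
    total = sum(sim(i, j) for i in rec_list for j in rec_list if i != j)
    return total / (L * (L - 1))

def mean_intra_user_diversity(rec_lists, sim):
    """Eq. 15: average ILD over all users' recommendation lists."""
    return sum(intra_user_diversity(r, sim) for r in rec_lists) / len(rec_lists)
```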
        <p>Equation 16 introduces the Normalized Discounted
Cumulative Gain (NDCG) measurement, a widely used metric
for evaluating the quality of recommendations. This
measurement is defined as:
NDCG@k(u) = \frac{DCG@k(u)}{IDCG@k(u)} \quad (16)</p>
        <p>Here, IDCG@k(u) represents the ideal DCG@k(u) for
user u, where the ideal scenario assumes that all relevant
items in the user's recommendation list appear at the top
rank, resulting in the maximum possible DCG@k(u).</p>
        <p>The discounted cumulative gain at position k for user u,
denoted as DCG@k(u), is calculated using the formula given in
Equation 17.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Evaluation</title>
      <p>This section presents the datasets employed for evaluating
the proposed method. Subsequently, we outline the
evaluation criteria and comprehensively represent the comparison
result.</p>
      <sec id="sec-4-1">
        <title>4.1. Dataset</title>
        <p>In our experimental evaluation, we use two real-world datasets,
namely MovieLens and IMDB. The MovieLens dataset is a
commonly employed dataset for evaluating methods
addressing long-tail problems in various studies. Specifically,
we utilize the MovieLens 1M dataset that features 6040
users and 1 million ratings for 3883 items. The IMDB
Dataset is also employed to enhance information about
movie providers, and director information is extracted. In
this study, movie directors are considered providers, and
the dataset includes information on 2208 movie directors.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Evaluation metric</title>
        <p>The study evaluates methods addressing the long-tail
problem using three criteria for comparison. The first criterion
is accuracy, measured through the precision metric defined
as:
Precision = \frac{R_u}{N}
(11)</p>
        <p>
          Here, N represents the total number of items
recommended to the user, and R_u denotes the number of relevant items
suggested to the user. Relevant items are those with ratings
higher than the user’s average ratings, as outlined by Wang
et al. [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
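<p>A minimal sketch of the precision metric in Equation 11, assuming relevance is defined as a rating above the user's average, as described; the dictionary-based interface is an illustrative choice:</p>

```python
def precision(recommended, user_ratings):
    """Precision (Eq. 11): share of recommended items the user rated
    above their own average rating (the relevance criterion of Wang et al.)."""
    mean = sum(user_ratings.values()) / len(user_ratings)
    relevant = [i for i in recommended if user_ratings.get(i, 0) > mean]
    return len(relevant) / len(recommended)
```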
        <p>
          The second criterion, aggregate diversity (AG) (Equation
12), counts the number of distinct items offered to users,
particularly focusing on long-tail items in recommendation
lists [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
        </p>
        <p>AG = |\bigcup_{u \in U} L_R(u)| \quad (12)
Equation 12 introduces the aggregate diversity criterion,
where u represents a specific user from the set of users U,
and L_R(u) is the list of items recommended to user u.
Equation 12 is normalized by the number of items. This
equation is used to measure popularity bias on both the item
and provider sides.</p>
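<p>Equation 12 can be computed directly. This sketch assumes recommendation lists are given as plain lists of item identifiers; applied to provider identifiers instead, the same function yields AG on the provider side:</p>

```python
def aggregate_diversity(rec_lists, n_items):
    """AG (Eq. 12): count of distinct recommended items across all users'
    lists, normalized by the catalogue size."""
    distinct = set()
    for items in rec_lists:
        distinct.update(items)
    return len(distinct) / n_items
```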
        <p>The third criterion is Novelty, calculated as:
Novelty(u) = \frac{1}{\sum_{i \in L_R(u)} Popularity(i)} \quad (13)</p>
        <p>This Equation indicates that the novelty of the
recommendation list decreases as the popularity of items increases,
emphasizing a preference for less popular items. The study
employs these criteria to compare and evaluate the results
of different methods addressing the long-tail problem in
recommender systems.</p>
        <p>
          Equation 14, the intra-user diversity measurement used in our
comparison, was proposed by Zou et al. [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. This measurement, denoted as ILD(u), is defined for a
specific user u.
        </p>
        <p>DCG@k(u) = \sum_{i=1}^{k} \frac{rel(i)}{\log_2(i + 1)} \quad (17)</p>
        <p>In this Equation, rel(i) is an indicator function that
determines whether the item at position i is relevant to user u.
A value of 1 indicates that the item is relevant, while 0
indicates that it is irrelevant.</p>
        <p>NDCG provides a normalized measure of the efectiveness
of a recommendation list by considering both relevance and
the position of items within the list.</p>
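<p>A short sketch of the DCG and NDCG computations in Equations 16 and 17; the list-of-relevances interface is an illustrative choice:</p>

```python
import math

def dcg_at_k(relevances, k):
    """DCG@k (Eq. 17): relevance at each rank, discounted by log2(rank + 1)."""
    return sum(rel / math.log2(i + 1)
               for i, rel in enumerate(relevances[:k], start=1))

def ndcg_at_k(relevances, k):
    """NDCG@k (Eq. 16): DCG normalized by the ideal (sorted) ordering's DCG."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0
```

A list with every relevant item at the top scores exactly 1.0; pushing relevant items down the ranking lowers the score.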
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Results and discussion</title>
        <p>In this section, the study compares and analyzes the results
obtained from various methods using the criteria introduced
in the section above. For comparison, we use a real-life
scenario where the length of recommendation lists in all
algorithms is considered to be 10.</p>
        <p>The results in Table 1 indicate that MORS algorithms
outperform baseline algorithms in addressing the long-tail
problem. These algorithms demonstrate superior
performance in diversifying items in recommendation lists,
effectively increasing the participation of unpopular items.
Notably, the study highlights that MORS algorithms achieve
this diversification without compromising the accuracy of
the recommendation lists. Therefore, the MORS algorithms
are successful in preserving accuracy while simultaneously
enhancing the inclusion of less popular items in the
recommendations, addressing the long-tail problem in
recommender systems.</p>
        <p>The comparison table suggests that while MORS
algorithms effectively mitigate popularity bias in
recommendation lists, there is not a significant difference in the diversity
of providers between baseline algorithms and MORS
algorithms. For example, CF-User has a value of 0.5230 in
AG-providers, while Malekzadeh and Wang show 0.5697 and
0.5711, respectively. Although MORS algorithms, aided by
item-diversifying objectives, offer providers a better chance
to present their items, the disparity in aggregate diversity
is more noticeable on the item side than on the provider
side when comparing MORS algorithms with baseline
algorithms. Moreover, the comparison indicates that Baseline
algorithms with higher accuracy than MORS algorithms
exhibit poor performance in aggregate diversity, suggesting
that recommendation list accuracy can negatively impact
provider-side fairness. Specifically, CF-items achieve an
accuracy of 0.7163, whereas CF-Users attain 0.6622. However,
AG-providers exhibit respective values of 0.5067 and 0.5230.</p>
        <p>Also, Table 1 indicates that among MORS algorithms,
Malekzadeh’s work outperforms Shafiloo and Wang’s work
in terms of the precision metric. However, this
superiority adversely impacts aggregate diversity on both the
provider and item sides. Specifically, Shafiloo’s work
exhibits a precision of 0.7989, with aggregate provider
diversity at 0.6059 and aggregate item diversity at 0.6930. In
contrast, Malekzadeh’s work achieves a precision of 0.8338,
but the aggregate provider diversity decreases to 0.5711, and
the aggregate item diversity is 0.6651.</p>
        <p>Furthermore, in Figure 1, we present the provider
frequency using a bucketing technique. Specifically, in this
figure, providers are assigned to a bucket based on the
number of items belonging to that specific provider that are
represented in all recommendation lists generated by the
algorithm. For instance, a provider is placed in bucket one
if only one item from all items associated with that provider
is present in all recommendation lists.</p>
        <p>This figure shows that baseline algorithms exhibit a
weakness in recommending items from providers who lack
popularity. This is illustrated in the initial buckets, where baseline
algorithms struggle to include more items from less famous
providers. Conversely, the first part of the buckets shows
that MORS algorithms provide a more significant
opportunity for less-known providers to showcase their items in
the recommendation lists, offering more visibility.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>In conclusion, our study highlights the significance of MORS
algorithms in addressing the issue of bias in recommender
systems and promoting fairness for both items and providers.
Our findings reveal that while baseline algorithms can
negatively impact the provider’s fairness, MORS algorithms,
particularly those leveraging evolutionary techniques and
introducing less popular items to the initial population of
their algorithms, can effectively mitigate popularity bias
and enhance the provider’s fairness. This emphasizes the
importance of considering provider-side fairness in the
development of recommender systems, as it is often neglected
in current research.</p>
      <p>Overall, our research contributes to the growing body of
work on fairness and bias in recommender systems and
emphasizes the crucial role of MORS algorithms, particularly
those employing evolutionary approaches, in mitigating
bias and promoting fairness for both items and providers.
Our study provides insights into how existing work
objectives can influence provider fairness. It highlights the need
for future research to delve deeper into this issue to
provide a more comprehensive understanding of the problem.
The effectiveness of MORS algorithms for providers could
be further enhanced if a specific objective function were
dedicated to mitigating provider bias. The absence of such
an objective function might limit the algorithms’ ability to
address biases related to the popularity of providers in the
recommendation process.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Stratigi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Pitoura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Stefanidis</surname>
          </string-name>
          ,
          <article-title>Squirrel: A framework for sequential group recommendations through reinforcement learning</article-title>
          ,
          <source>Information Systems</source>
          <volume>112</volume>
          (
          <year>2023</year>
          )
          102128. URL: https://www.sciencedirect.com/science/article/pii/S0306437922001065. doi:10.1016/j.is.2022.102128.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Pitoura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Stefanidis</surname>
          </string-name>
          , G. Koutrika,
          <article-title>Fairness in rankings and recommendations: an overview</article-title>
          ,
          <source>The VLDB Journal</source>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B.</given-names>
            <surname>Salimi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Rodriguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Howe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Suciu</surname>
          </string-name>
          ,
          <article-title>Interventional fairness: Causal database repair for algorithmic fairness</article-title>
          ,
          <source>in: Proceedings of the 2019 International Conference on Management of Data</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>793</fpage>
          -
          <lpage>810</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Caverlee</surname>
          </string-name>
          ,
          <article-title>Fairness-aware tensor-based recommendation</article-title>
          ,
          <source>in: Proceedings of the 27th ACM international conference on information and knowledge management</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>1153</fpage>
          -
          <lpage>1162</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kamishima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Akaho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Asoh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sakuma</surname>
          </string-name>
          ,
          <article-title>Recommendation independence</article-title>
          ,
          <source>in: Conference on Fairness, Accountability and Transparency, PMLR</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>187</fpage>
          -
          <lpage>201</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Giannopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Papastefanatos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sacharidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Stefanidis</surname>
          </string-name>
          ,
          <article-title>Interactivity, fairness and explanations in recommendations</article-title>
          ,
          <source>in: ACM UMAP</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>157</fpage>
          -
          <lpage>161</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Min</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yongfeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Zhaoquan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yiqun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Shaoping</surname>
          </string-name>
          ,
          <article-title>Fairness-aware group recommendation with Pareto-efficiency</article-title>
          ,
          <source>in: Proceedings of the eleventh ACM conference on recommender systems</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>107</fpage>
          -
          <lpage>115</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Hamedani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kaedi</surname>
          </string-name>
          ,
          <article-title>Recommending the long tail items through personalized diversification</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          <volume>164</volume>
          (
          <year>2019</year>
          )
          <fpage>348</fpage>
          -
          <lpage>357</lpage>
          . doi:10.1016/j.knosys.2018.11.004.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R.</given-names>
            <surname>Shafiloo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kaedi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pourmiri</surname>
          </string-name>
          ,
          <article-title>Considering user dynamic preferences for mitigating negative effects of long tail in recommender systems</article-title>
          ,
          <source>arXiv preprint arXiv:2112.02406</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>Multi-objective optimization for long-tail recommendation</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          <volume>104</volume>
          (
          <year>2016</year>
          )
          <fpage>145</fpage>
          -
          <lpage>155</lpage>
          . doi:10.1016/j.knosys.2016.04.018.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G.</given-names>
            <surname>Adomavicius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kwon</surname>
          </string-name>
          ,
          <article-title>Improving aggregate recommendation diversity using ranking-based techniques</article-title>
          ,
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          <volume>24</volume>
          (
          <year>2011</year>
          )
          <fpage>896</fpage>
          -
          <lpage>911</lpage>
          . doi:10.1109/TKDE.2011.15.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>W.</given-names>
            <surname>Yue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lauria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>An optimally weighted user- and item-based collaborative filtering approach to predicting baseline data for Friedreich's ataxia patients</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>419</volume>
          (
          <year>2021</year>
          )
          <fpage>287</fpage>
          -
          <lpage>294</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>R.</given-names>
            <surname>Borges</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Stefanidis</surname>
          </string-name>
          ,
          <article-title>Feature-blind fairness in collaborative filtering recommender systems</article-title>
          ,
          <source>Knowledge and Information Systems</source>
          <volume>64</volume>
          (
          <year>2022</year>
          )
          <fpage>943</fpage>
          -
          <lpage>962</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>X.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>A hybrid recommendation system with many-objective evolutionary algorithm</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>159</volume>
          (
          <year>2020</year>
          )
          <fpage>113648</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dhar</surname>
          </string-name>
          ,
          <article-title>Multi-objective item evaluation for diverse as well as novel item recommendations</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>139</volume>
          (
          <year>2020</year>
          )
          <fpage>112857</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Using multi-objective optimization to solve the long tail problem in recommender system</article-title>
          ,
          <source>in: Advances in Knowledge Discovery and Data Mining: 23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, April 14-17, 2019, Proceedings, Part III</source>
          , Springer,
          <year>2019</year>
          , pp.
          <fpage>302</fpage>
          -
          <lpage>313</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>C.-T.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hsu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>FairSR: Fairness-aware sequential recommendation through multi-task learning with preference graph embeddings</article-title>
          ,
          <source>ACM Transactions on Intelligent Systems and Technology (TIST)</source>
          <volume>13</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>H.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mitra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Diaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>A multi-objective optimization framework for multi-stakeholder fairness-aware recommendation</article-title>
          ,
          <source>ACM Transactions on Information Systems</source>
          <volume>41</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>G.</given-names>
            <surname>Cornacchia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Donini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Narducci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pomo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ragone</surname>
          </string-name>
          ,
          <article-title>Explanation in multi-stakeholder recommendation for enterprise decision support systems</article-title>
          ,
          <source>in: International Conference on Advanced Information Systems Engineering</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>47</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>F.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <article-title>A two-stage personalized recommendation based on multi-objective teaching-learning-based optimization with decomposition</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>452</volume>
          (
          <year>2021</year>
          )
          <fpage>716</fpage>
          -
          <lpage>727</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>