<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Extended version [1] published in the 58th volume of Journal of Intelligent Information Systems</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Recommender Systems with Federated Learning</article-title>
        <subtitle>Discussion Paper</subtitle>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alberto Carlo Maria Mancino</string-name>
          <email>alberto.mancino@poliba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vito Walter Anelli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tommaso Di Noia</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eugenio Di Sciascio</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Ferrara</string-name>
          <email>antonio.ferrara@poliba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Politecnico di Bari</institution>, <addr-line>Bari</addr-line>, <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <kwd-group>
        <kwd>Federated Learning</kwd>
        <kwd>Collaborative Filtering</kwd>
        <kwd>Pair-wise Learning</kwd>
        <kwd>Matrix Factorization</kwd>
      </kwd-group>
      <abstract>
        <p>In recent years, recommender systems have successfully assisted user decision-making in various user-centered applications. In such scenarios, modern approaches rely on collecting sensitive user preferences. However, data collection has become a critical issue, since users increasingly worry about the privacy risks of sharing their data. This work presents a recommendation approach based on the Federated Learning paradigm, a distributed privacy-preserving approach to recommendation in which users collaborate on the training while still controlling the amount of sensitive data they share. This paper presents FPL, a pair-wise learning-to-rank approach based on Federated Learning. We show that it puts users in control of their data and achieves recommendation performance competitive with centralized state-of-the-art approaches. The public implementation is available at https://split.to/sisinflab-fpl.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Recommender Systems (RSs) have emerged as a solution for mitigating the information-overload
problem by assisting users with personalized recommendations. Generally, these models
are trained in a centralized fashion, where massive proprietary sensitive data are hosted on
a single server. In the last two decades, the RS mainstream research line has focused on
Collaborative Filtering (CF) [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ], Content-based [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ], and hybrid [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] approaches. However,
these models need sufficient data to provide accurate recommendations by exploiting users’
behavior similarities. Proposed by Google, Federated Learning (FL) emerged as a
privacy-by-design solution for machine learning models [
        <xref ref-type="bibr" rid="ref10 ref7 ref8 ref9">7, 8, 9, 10</xref>
        ]. FL addresses the ML-privacy limitations
by horizontally distributing the training while having the clients train the global model on their
local devices without sharing their private data [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Recent works on Federated Learning-based
RSs have exhibited benefits for the users’ privacy [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Federated Pair-wise Learning (FPL) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]
shows how a federated recommender system can exploit the competitive performance of
learning-to-rank while still letting users control their data. Indeed, one of the most significant
advantages of FPL is that the users participating in the federated learning process can
independently decide to what extent they disclose their private sensitive preferences.
      </p>
      <p>
        In this paper, we formally show how, in FPL, the users can control their data. We investigate
the risks related to the transmission of the gradients and how we address this drawback by
putting users in control of their data. Moreover, we show that adopting FPL can lead to
competitive performance in accuracy and provide users with a trustworthy model [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
    </sec>
    <sec id="sec-1a">
      <title>2. Foundations of Federated Pair-wise Learning</title>
      <p>FPL: Federated Pair-wise Learning for Recommendation. Let 𝒰 and ℐ be the sets of users
and items, respectively, and let X ∈ ℝ^(|𝒰|×|ℐ|) be the user-item matrix containing, for each
pair (u, i), an implicit feedback value of 1 or 0. Inspired by the state-of-the-art MF [13, 14, 15, 16], each user u and item
i are represented by the embedding vectors p_u and q_i, respectively. The dot product between p_u
and q_i can explain any observed user-item interaction x_ui, so that any non-observed interaction
can be estimated as x̂_ui = b_i + p_u q_iᵀ, where b_i is an item bias term. FPL builds a global model Θ^S = ⟨Q, b⟩
on a server S, which is aware of the whole catalog ℐ, and a private local model Θ^u = ⟨p_u⟩ on
each client of the federation. In our federated setting, each user u holds her own feedback
dataset x_u ∈ ℝ^|ℐ|, which, compared with a centralized recommender system, corresponds to
the u-th row of the matrix X, so only user u knows her own set of consumed items.</p>
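      <p>As a minimal illustration of the scoring function above (hypothetical names and toy values, not the reference implementation), a non-observed interaction is estimated as the item bias plus the dot product of the user and item embeddings:</p>

```python
def predict(p_u, q_i, b_i):
    """Estimated preference x̂_ui = b_i + p_u · q_iᵀ."""
    return b_i + sum(pf * qf for pf, qf in zip(p_u, q_i))

# Toy embeddings (illustrative values only):
p_u = [0.1, -0.2, 0.3]   # user factors, kept private on the client (Θ^u)
q_i = [0.4, 0.0, 0.5]    # item factors, part of the global model (Θ^S)
b_i = 0.05               # item bias, part of the global model (Θ^S)
x_hat = predict(p_u, q_i, b_i)   # 0.05 + (0.04 + 0.0 + 0.15) = 0.24
```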
      <p>In FPL, the model is trained by rounds of communication composed of a four-step protocol
(Distribution → Computation → Transmission → Aggregation), described in the following.
1. Distribution. S delivers the model Θ^S to a subset of selected users 𝒰⁻ ⊆ 𝒰.
2. Computation. Each user u generates T triples (u, i, j) from her local dataset and, for each
of them, performs BPR stochastic optimization to compute the updates for the local p_u
vector of Θ^u, and for q_i, b_i, q_j, and b_j of the received Θ^S, following:
Δθ = (e^(−x̂_uij) / (1 + e^(−x̂_uij))) · ∂x̂_uij/∂θ − λθ,
with ∂x̂_uij/∂θ = (q_i − q_j) if θ = p_u; p_u if θ = q_i; −p_u if θ = q_j; 1 if θ = b_i; −1 if θ = b_j,  (1)
where x̂_uij = x̂_ui − x̂_uj and λ is a regularization parameter.
3. Transmission. Each client u ∈ 𝒰⁻ sends back to the server S the updates computed for the item
embeddings and item biases. We should focus on how BPR computes the training output.
Since, for a triple (u, i, j), the server could be able to distinguish the
consumed item i from the non-consumed one j (for instance, by analyzing the positive
and the negative signs of Δq_i and Δq_j), we argue that sending all the updates computed
by u may raise a privacy issue. FPL proposes a solution to overcome this vulnerability
by sending the sole update (Δq_j, Δb_j) of each training triple (u, i, j). In this way, the
user u shares only indistinguishably negative or missing values, which are assumed to
be non-sensitive. Furthermore, FPL allows users to establish the number of consumed
items to share with the central server S by introducing the parameter π: the
probability of a user sending the update relative to a positive item (Δq_i, Δb_i) in
addition to (Δq_j, Δb_j).
4. Global aggregation. All the received updates are aggregated by S into Q and b to build the
new model Θ^S ← Θ^S + α ∑_{u ∈ 𝒰⁻} ΔΘ^S_u, with α being the learning rate.</p>
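      <p>One Computation step and the server-side Aggregation can be sketched as follows. This is a minimal illustration with hypothetical names and toy dimensions, not the reference implementation; the gradient follows the BPR update of Eq. (1), with the factor e^(−x̂_uij)/(1 + e^(−x̂_uij)) computed explicitly:</p>

```python
import math

def bpr_updates(p_u, q_i, q_j, b_i, b_j, lam=0.01):
    """Updates for one triple (u, i, j), as in Eq. (1)."""
    # x̂_uij = x̂_ui − x̂_uj = p_u (q_i − q_j)ᵀ + b_i − b_j
    x_uij = (b_i - b_j) + sum(p * (qi - qj) for p, qi, qj in zip(p_u, q_i, q_j))
    c = math.exp(-x_uij) / (1.0 + math.exp(-x_uij))       # in (0, 1)
    d_pu = [c * (qi - qj) - lam * p for p, qi, qj in zip(p_u, q_i, q_j)]  # stays on the client
    d_qi = [c * p - lam * qi for p, qi in zip(p_u, q_i)]  # update for the consumed item i
    d_qj = [-c * p - lam * qj for p, qj in zip(p_u, q_j)] # update for the non-consumed item j
    d_bi = c - lam * b_i
    d_bj = -c - lam * b_j
    return d_pu, (d_qi, d_bi), (d_qj, d_bj)

def aggregate(Q, b, received, alpha=0.05):
    """Global aggregation: add alpha times each received item update to Q and b."""
    for item, (d_q, d_b) in received:
        Q[item] = [q + alpha * g for q, g in zip(Q[item], d_q)]
        b[item] += alpha * d_b
    return Q, b
```

<p>Note that, with λ = 0, the updates for i and j have identical magnitude and opposite sign, which is the observation at the basis of the privacy analysis.</p>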
      <p>Privacy Analysis of FPL. FPL has not been conceived as a privacy-preserving framework but
as a tool to control the trade-off between potentially exposed sensitive data and
recommendation quality. While federated learning hides, by design, users’ raw data from the server, some
malicious actors might still try to learn sensitive information. For this reason, federated learning
alone does not provide privacy guarantees to users. In the context of FPL, the aim
is to protect the existence of user-item transactions. While attempts at active reconstruction
of the user profile are not considered here and are out of the scope of this work, we focus
on the presence of an honest-but-curious server. Regarding Eq. 1, consider a pair of positive
and negative items i and j and the gradients received at the t-th round of communication. The
notation of Δq_i and Δq_j can be extended by focusing on a single latent factor f:
Δq_{i,f} = p_{u,f}^(t−1) · σ(−p_u^(t−1) (q_i^(t−1) − q_j^(t−1))ᵀ),  (2)
Δq_{j,f} = −p_{u,f}^(t−1) · σ(−p_u^(t−1) (q_i^(t−1) − q_j^(t−1))ᵀ),  (3)
where σ(⋅) returns values in the range (0, 1). These equations show that the magnitudes of Δq_{i,f} and
Δq_{j,f} (which must be sent to the server) are identical, while their signs are opposite. Moreover,
the sign of the update depends both on the existence/absence of a transaction for i and on
sign(p_{u,f}^(t−1)). Therefore, the sign of a gradient does not directly reveal the presence or absence of
an item in the user’s training set. However, the pairs of positive and negative gradients disclose
user preference patterns. In a round of communication, all the updates for the consumed items
share the same sign, and all the updates for the non-consumed items share the opposite
sign, depending on sign(p_{u,f}^(t−1)). If the server S is honest-but-curious (i.e., it
may try to inspect the updates to obtain some user information), as soon as it obtains enough
information to identify one or more consumed/non-consumed items, the entire user
dataset is exposed. To avoid this problem, FPL puts users in control of their data. If the
users adopt the privacy-oriented masking procedure during the Transmission phase, they can
decide the fraction of updates for positive items to send. In the case of exposure of the user
transactions, only a fraction is given up. As a consequence, FPL has to work in a data-scarcity
scenario, where the fraction of used data is defined by the parameter π (which can be fixed by the
system designer or actively decided by the users). In the experimental section, we empirically
show how FPL is resilient to missing data in the federated scenario.</p>
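      <p>The masking procedure applied during the Transmission phase can be sketched as follows (a minimal illustration with hypothetical names; π is the probability of also sharing the update relative to the positive item):</p>

```python
import random

def masked_payload(triple_updates, pi, rng=random.Random(42)):
    """Keep every negative-item update (Δq_j, Δb_j); share the positive-item
    update (Δq_i, Δb_i) only with probability pi."""
    payload = []
    for (i, upd_i), (j, upd_j) in triple_updates:
        payload.append((j, upd_j))        # non-consumed item: assumed non-sensitive
        if rng.random() < pi:
            payload.append((i, upd_i))    # consumed item: disclosed only with probability pi
    return payload
```

<p>With π = 0 the client discloses no positive-item updates at all, while with π = 1 it behaves like a standard federated BPR client that sends every computed update.</p>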
    </sec>
    <sec id="sec-2">
      <title>3. Experimental Results</title>
      <p>In the following, we report the accuracy performance of FPL. It has been evaluated on the
Foursquare dataset [17] in the Point-of-Interest domain, since it contains data usually perceived
as sensitive. To mimic a federation of devices in a single country, we have extracted check-ins
for three countries, namely Brazil, Canada, and Italy. Moreover, we have split each dataset by
adopting a realistic temporal hold-out 80-20 splitting on a per-user basis [18, 19]. FPL has been
evaluated with different values of π in [0.0, 1.0] with step 0.1, in order to assess the impact of
users sharing more (high π) or fewer (low π) positive feedbacks. Four configurations have
been considered regarding variations in computation and communication. In sFPL and pFPL,
the model is updated for each round of communication involving one client, or all the clients,
respectively. In these configurations, the clients’ local training involves only one triple (u, i, j)
from their local dataset. In contrast, in the corresponding sFPL+ and pFPL+ configurations, the
number of selected triples is proportional to the number of each user’s positive feedbacks. The
best π obtained for each of the proposed FPL variations across the three countries (Brazil, Canada,
and Italy) are: sFPL = (0.5, 0.1, 0.4), sFPL+ = (0.9, 0.4, 0.2), pFPL = (0.8, 0.1, 1), pFPL+ = (0.8, 0.3, 0.1). FPL
has been compared against six centralized models (Random, Most Popular, BPR-MF [20],
User-kNN and Item-kNN [21], VAE [22]) and a federated recommendation approach based on
MF (FCF [23]). The accuracy results are reported in Table 1, comparing the
four configurations of FPL and the state-of-the-art baselines. We notice that VAE and User-kNN
outperform the other models, while Item-kNN and BPR-MF show similar results. Regarding
FPL, when comparing sFPL and sFPL+ with their parallelized configurations (pFPL and pFPL+),
we observe that the increased parallelism does not affect the performance significantly. On
the other hand, increasing the local computation (sFPL+ and pFPL+) boosts the performance by up
to 24%. The results show that FPL behaves better than BPR-MF in precision and recall. These
performances are surprising considering that FPL exploits less feedback per round, since the
feedback is reduced by π. It is also notable that FPL outperforms FCF and preserves privacy to a greater
extent, since sharing gradients of all rated items in FCF can result in a data leak [24].</p>
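      <p>The per-user temporal hold-out splitting can be sketched as follows (a minimal illustration with hypothetical names; it assumes each interaction carries a timestamp):</p>

```python
def temporal_holdout(user_events, ratio=0.8):
    """Per-user temporal split: the oldest `ratio` of each user's interactions
    go to training, the most recent ones to test."""
    train, test = {}, {}
    for user, events in user_events.items():
        ordered = sorted(events, key=lambda e: e[1])  # sort by timestamp
        cut = int(len(ordered) * ratio)
        train[user], test[user] = ordered[:cut], ordered[cut:]
    return train, test

# Toy check-in log: (venue, timestamp) pairs for a single user.
events = {"u1": [("venue_a", 3), ("venue_b", 1), ("venue_c", 2),
                 ("venue_d", 5), ("venue_e", 4)]}
train, test = temporal_holdout(events, ratio=0.8)
# The four oldest check-ins train the model; the most recent one is held out.
```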
      <p>We have seen how the proposed system can generate recommendations of a quality
comparable to that of the centralized pair-wise learning approach. Moreover, increased local
computation brings a considerable improvement in recommendation accuracy, whereas training
parallelism does not significantly affect the results. Finally, when local computation is
combined with parallelism, the results show a further improvement.</p>
      <p>[12] (cont.) International Working Conference, HCSE 2020, Eindhoven, The Netherlands, November
30 - December 2, 2020, Proceedings, volume 12481 of Lecture Notes in Computer Science,
Springer, 2020, pp. 181-189. URL: https://doi.org/10.1007/978-3-030-64266-2_11. doi:10.1007/978-3-030-64266-2_11.
[13] Y. Koren, R. M. Bell, C. Volinsky, Matrix factorization techniques for recommender systems,
IEEE Computer 42 (2009) 30-37.
[14] V. W. Anelli, T. D. Noia, E. D. Sciascio, A. Ferrara, A. C. M. Mancino, Sparse feature
factorization for recommender systems with knowledge graphs, in: RecSys '21: Fifteenth ACM
Conference on Recommender Systems, Amsterdam, The Netherlands, 27 September 2021 - 1 October 2021,
ACM, 2021, pp. 154-165. URL: https://doi.org/10.1145/3460231.3474243. doi:10.1145/3460231.3474243.
[15] V. W. Anelli, T. D. Noia, E. D. Sciascio, A. Ragone, J. Trotta, Semantic interpretation
of top-n recommendations, IEEE Trans. Knowl. Data Eng. 34 (2022) 2416-2428. URL:
https://doi.org/10.1109/TKDE.2020.3010215. doi:10.1109/TKDE.2020.3010215.
[16] V. W. Anelli, T. D. Noia, P. Lops, E. D. Sciascio, Feature factorization for top-n
recommendation: From item rating to features relevance, in: RecSysKTL, volume 1887 of CEUR
Workshop Proceedings, CEUR-WS.org, 2017, pp. 16-21.
[17] D. Yang, D. Zhang, B. Qu, Participatory cultural mapping based on collective behavior
data in location-based social networks, ACM TIST 7 (2016) 30:1-30:23.
[18] A. Gunawardana, G. Shani, Evaluating recommender systems, in: Recommender Systems
Handbook, Springer, 2015, pp. 265-308.
[19] V. W. Anelli, T. D. Noia, E. D. Sciascio, A. Ragone, J. Trotta, Local popularity and time
in top-n recommendation, in: European Conf. on Information Retrieval, volume 11437,
Springer, 2019, pp. 861-868.
[20] S. Rendle, C. Freudenthaler, Z. Gantner, L. Schmidt-Thieme, BPR: Bayesian personalized
ranking from implicit feedback, in: Proc. of the 25th Conf. on Uncertainty in Artificial
Intelligence, 2009, pp. 452-461.
[21] Y. Koren, Factor in the neighbors: Scalable and accurate collaborative filtering, ACM
Transactions on Knowledge Discovery from Data (TKDD) 4 (2010) 1-24.
[22] D. Liang, R. G. Krishnan, M. D. Hoffman, T. Jebara, Variational autoencoders for
collaborative filtering, in: Proc. of 2018 WWW Conf., 2018, pp. 689-698.
[23] M. Ammad-ud-din, E. Ivannikova, S. A. Khan, W. Oyomno, Q. Fu, K. E. Tan, A. Flanagan,
Federated collaborative filtering for privacy-preserving personalized recommendation
system, CoRR abs/1901.09888 (2019). arXiv:1901.09888.
[24] D. Chai, L. Wang, K. Chen, Q. Yang, Secure federated matrix factorization, CoRR
abs/1906.05108 (2019). URL: http://arxiv.org/abs/1906.05108. arXiv:1906.05108.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>V. W.</given-names>
            <surname>Anelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Deldjoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. D.</given-names>
            <surname>Noia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ferrara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Narducci</surname>
          </string-name>
          ,
          <article-title>User-controlled federated matrix factorization for recommender systems</article-title>
          ,
          <source>J. Intell. Inf. Syst</source>
          .
          <volume>58</volume>
          (
          <year>2022</year>
          )
          <fpage>287</fpage>
          -
          <lpage>309</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>McFee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Barrington</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. R. G.</given-names>
            <surname>Lanckriet</surname>
          </string-name>
          ,
          <article-title>Learning content similarity for music recommendation</article-title>
          ,
          <source>IEEE Trans. Audio, Speech &amp; Language Processing</source>
          <volume>20</volume>
          (
          <year>2012</year>
          )
          <fpage>2207</fpage>
          -
          <lpage>2218</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Shalaby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Korayem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>AlJadda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <article-title>Solving cold-start problem in large-scale recommendation engines: A deep learning approach</article-title>
          ,
          <source>in: 2016 IEEE Int. Conf. on Big Data, BigData 2016</source>
          , Washington DC, USA, December 5-8,
          <year>2016</year>
          , IEEE Computer Society,
          <year>2016</year>
          , pp.
          <fpage>1901</fpage>
          -
          <lpage>1910</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V.</given-names>
            <surname>Bellini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Biancofiore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. D.</given-names>
            <surname>Noia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. D.</given-names>
            <surname>Sciascio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Narducci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pomo</surname>
          </string-name>
          ,
          <article-title>Guapp: A conversational agent for job recommendation for the italian public administration</article-title>
          ,
          <source>in: 2020 IEEE Conference on Evolving and Adaptive Intelligent Systems, EAIS 2020</source>
          , Bari, Italy, May 27-29,
          <year>2020</year>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          . URL: https://doi.org/10.1109/EAIS48028.2020.9122756. doi:10.1109/EAIS48028.2020.9122756.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>V. W.</given-names>
            <surname>Anelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bellogín</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ferrara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Malitesta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Merra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pomo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Donini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. D.</given-names>
            <surname>Noia</surname>
          </string-name>
          ,
          <article-title>V-elliot: Design, evaluate and tune visual recommender systems</article-title>
          , in: H. J. C. Pampín, M. A. Larson, M. C. Willemsen, J. A. Konstan, J. J. McAuley, J. Garcia-Gathright, B. Huurnink, E. Oldridge (Eds.),
          <source>RecSys '21: Fifteenth ACM Conference on Recommender Systems, Amsterdam, The Netherlands, 27 September 2021 - 1 October</source>
          <year>2021</year>
          , ACM,
          <year>2021</year>
          , pp.
          <fpage>768</fpage>
          -
          <lpage>771</lpage>
          . URL: https://doi.org/10.1145/3460231.3478881. doi:10.1145/3460231.3478881.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>V. W.</given-names>
            <surname>Anelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. D.</given-names>
            <surname>Noia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. D.</given-names>
            <surname>Sciascio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ragone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Trotta</surname>
          </string-name>
          ,
          <article-title>How to make latent factors interpretable by feeding factorization machines with knowledge graphs</article-title>
          ,
          <source>in: ISWC (1)</source>
          , volume
          <volume>11778</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2019</year>
          , pp.
          <fpage>38</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Konecný</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>McMahan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ramage</surname>
          </string-name>
          ,
          <article-title>Federated optimization: Distributed optimization beyond the datacenter</article-title>
          ,
          <source>CoRR abs/1511.03575</source> (<year>2015</year>). arXiv:1511.03575.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>V. W.</given-names>
            <surname>Anelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Deldjoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. D.</given-names>
            <surname>Noia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ferrara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Narducci</surname>
          </string-name>
          ,
          <article-title>How to put users in control of their data in federated top-n recommendation with learning to rank</article-title>
          ,
          <source>in: SAC '21: The 36th ACM/SIGAPP Symposium on Applied Computing</source>
          , Virtual Event, Republic of Korea, March 22-26,
          <year>2021</year>
          , ACM,
          <year>2021</year>
          , pp.
          <fpage>1359</fpage>
          -
          <lpage>1362</lpage>
          . URL: https://doi.org/10.1145/3412841.3442010. doi:10.1145/3412841.3442010.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Konecný</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. B.</given-names>
            <surname>McMahan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ramage</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Richtárik</surname>
          </string-name>
          ,
          <article-title>Federated optimization: Distributed machine learning for on-device intelligence</article-title>
          ,
          <source>CoRR abs/1610.02527</source> (<year>2016</year>). arXiv:1610.02527.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>B.</given-names>
            <surname>McMahan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Moore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ramage</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hampson</surname>
          </string-name>
          ,
          <string-name>
            <surname>B. A. y Arcas</surname>
          </string-name>
          ,
          <article-title>Communication-efficient learning of deep networks from decentralized data</article-title>
          ,
          <source>in: Proc. of 20th Int. Conf. on Artificial Intelligence and Stat.</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1273</fpage>
          -
          <lpage>1282</lpage>
          . URL: http://proceedings.mlr.press/v54/mcmahan17a.html.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>V. W.</given-names>
            <surname>Anelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Deldjoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. D.</given-names>
            <surname>Noia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ferrara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Narducci</surname>
          </string-name>
          ,
          <article-title>Federank: User controlled feedback with federated recommender systems</article-title>
          ,
          <source>in: ECIR (1)</source>
          , volume
          <volume>12656</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2021</year>
          , pp.
          <fpage>32</fpage>
          -
          <lpage>47</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>C.</given-names>
            <surname>Ardito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. D.</given-names>
            <surname>Noia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. D.</given-names>
            <surname>Sciascio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lofú</surname>
          </string-name>
          , G. Mallardi,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pomo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Vitulano</surname>
          </string-name>
          ,
          <article-title>Towards a trustworthy patient home-care thanks to an edge-node infrastructure</article-title>
          , in: R. Bernhaupt, C. Ardito, S. Sauer (Eds.),
          <source>Human-Centered Software Engineering - 8th IFIP WG 13.2</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>