<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A SmartNIC-Based Secure Aggregation Scheme for Federated Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Shengyin Zang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jiawei Fei</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xiaodong Ren</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yan Wang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhuang Cao</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jiagui Wu</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>College of Artificial Intelligence, Southwest University</institution>
          ,
          <addr-line>Chongqing</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>College of Computer, National University of Defense Technology</institution>
          ,
          <addr-line>Changsha</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Defense Innovation Institute</institution>
          ,
          <addr-line>Beijing</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>School of Physical Science and Technology, Southwest University</institution>
          ,
          <addr-line>Chongqing</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <fpage>81</fpage>
      <lpage>89</lpage>
      <abstract>
        <p>Federated learning is a widely used distributed machine learning technique in which participants collaborate to train neural network models without disclosing their private training datasets. Technologies such as homomorphic encryption have been proposed to avoid private data leakage, but they suffer from two problems: performance degradation caused by excessive computational and communication overhead, and insecurity due to data exposure on the parameter server. To address these problems, we propose an efficient, privacy-preserving federated learning solution that improves performance and security by offloading the aggregation procedure into the hardware data plane, such as FPGA-based SmartNICs, and further combines it with differential privacy techniques. Our method offers higher security because the data plane is hard to access. Moreover, extensive experiments show that, compared to a system employing additively homomorphic encryption, our scheme reduces the communication cost by around 59.5% and offers around 2.5× speedup at the aggregation stage while significantly decreasing the participants' computational overhead.</p>
      </abstract>
      <kwd-group>
        <kwd>Federated Learning</kwd>
        <kwd>Differential Privacy</kwd>
        <kwd>SmartNIC</kwd>
        <kwd>Secure Aggregation</kwd>
        <kwd>Privacy-Preserving</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Designing a scheme that meets these challenges remains an urgent problem. In this paper, we present an efficient, privacy-preserving federated learning scheme based on hardware security, by offloading the gradient aggregation operation to an FPGA-based SmartNIC [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and combining it with differential privacy techniques [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] as an alternative to
traditional software protections.
      </p>
      <p>Our contributions are summarized as follows:</p>
      <p>We propose a SmartNIC-based gradient aggregation algorithm, which improves the security and
performance of federated learning by offloading the gradient aggregation operation onto the SmartNIC.</p>
      <p>We implement an efficient aggregation structure on SmartNIC, which enables our scheme to
complete privacy-preserving computations while ensuring computational efficiency.</p>
      <p>We evaluate the performance benefits of the SmartNIC-based gradient aggregation algorithm.
Extensive experiments show that our scheme has lower communication and computational overhead
than schemes using additively homomorphic encryption.</p>
    </sec>
    <sec id="sec-2">
      <title>2. System model and threat model</title>
    </sec>
    <sec id="sec-3">
      <title>2.1. System model</title>
      <p>As shown in Figure 1, the SmartNIC-based federated learning system consists of two main
components: the users and the SmartNIC-based parameter server in the cloud. All users agree on
an identical initial model and common training objectives, and no participant directly shares its
private data during training.</p>
      <p>On the users’ side, they first download the global network model and the initial parameters from
the cloud server, then perform model training based on the local dataset to obtain local gradients,
encrypt and upload those gradients to the server, and finally decrypt the global gradients returned by the
server to update local model parameters.</p>
      <p>
        On the SmartNIC-based parameter server, the primary task of the SmartNIC is to achieve efficient and
secure aggregation of gradients by exploiting its hardware-isolated execution environment and high
computing performance. The SmartNIC decrypts the local gradients uploaded by each participant,
aggregates them to obtain the global gradients, and adds Gaussian noise perturbation to thwart differential
attacks [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. The global gradients are then encrypted and broadcast to all users. Through continuous
iterations, once the loss function reaches its minimum value, the optimal neural network model is
finally constructed.
      </p>
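      <p>A minimal software sketch of one such round follows; the XOR stream cipher is a toy stand-in for the per-user ciphers, and averaging as the aggregation rule is an illustrative assumption:</p>

```python
import random
import struct

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR stream cipher -- a hypothetical stand-in for the per-user
    # DES/3DES ciphers. XOR is its own inverse, so this both encrypts
    # and decrypts.
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

def client_upload(local_grads, key):
    # Serialize the local gradients as 32-bit floats and encrypt them
    # before uploading to the parameter server's SmartNIC.
    return xor_crypt(struct.pack(f"{len(local_grads)}f", *local_grads), key)

def smartnic_round(uploads, keys, sigma):
    # Decrypt each user's gradients, average them, perturb the result
    # with Gaussian noise, and re-encrypt under each user's key.
    total = None
    for blob, key in zip(uploads, keys):
        g = struct.unpack(f"{len(blob) // 4}f", xor_crypt(blob, key))
        total = list(g) if total is None else [a + b for a, b in zip(total, g)]
    noisy = [t / len(uploads) + random.gauss(0.0, sigma) for t in total]
    packed = struct.pack(f"{len(noisy)}f", *noisy)
    return [xor_crypt(packed, k) for k in keys]
```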
    </sec>
    <sec id="sec-4">
      <title>2.2. Threat model</title>
      <p>
        Our proposed scheme aims to safeguard users' private information throughout the training
phase. The cloud server is assumed to be honest-but-curious: it adheres to the protocol when
executing gradient aggregation, but it is curious about the users' raw data and may attempt to
bypass security measures to access that data directly. Additionally, malicious participants may
try to determine whether a particular user is involved in the training process by analyzing the
shared global gradients, i.e., by performing membership inference attacks
[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
    </sec>
    <sec id="sec-5">
      <title>Proposed scheme</title>
      <p>
        In this section, we propose a SmartNIC-based federated learning gradients aggregation scheme that
works as an alternative to traditional software protections. The fundamental idea is to provide a
trusted execution environment [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] for isolated execution of procedures and private data processing by
offloading the gradient aggregation operation to the FPGA-based SmartNIC. Since it is difficult for
the server to access data held inside the SmartNIC, sensitive data can be processed privately. In
addition, perturbation noise satisfying the Gaussian mechanism is added to the aggregated global
gradients to protect user data against differential attacks and to mitigate model overfitting,
although this may reduce model accuracy or increase the number of iterations required for convergence.
      </p>
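      <p>As a sketch, the noise scale can be calibrated with the classical Gaussian-mechanism bound; the sensitivity, ε, and δ values below are illustrative assumptions, not parameters used in our evaluation:</p>

```python
import math
import random

def gaussian_sigma(sensitivity, epsilon, delta):
    # Classical Gaussian-mechanism calibration (valid for 0 < epsilon < 1):
    # sigma >= sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon.
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

def perturb(global_grads, sensitivity, epsilon, delta):
    # Add i.i.d. Gaussian noise to each aggregated gradient component.
    sigma = gaussian_sigma(sensitivity, epsilon, delta)
    return [g + random.gauss(0.0, sigma) for g in global_grads]
```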
    </sec>
    <sec id="sec-6">
      <title>3.1. Overview of secure aggregation scheme</title>
      <p>Initialization. The cloud server broadcasts the global network model, its initial parameters ω0,
and the learning rate η to all users participating in the training. At the same time, each
participant is assigned a different encryption method.</p>
      <p>Local Training. Based on the network model and initial parameters distributed by the server, each
participant performs mini-batch stochastic gradient descent [16] on its local dataset and
calculates the local gradients. These gradients are then encrypted and sent to the server's SmartNIC for
aggregation.</p>
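      <p>For illustration, the local gradient computation for one mini-batch can be sketched as follows, using a linear least-squares model as a hypothetical stand-in for the neural network:</p>

```python
def local_gradients(w, batch):
    # One mini-batch gradient of the mean squared error for a linear
    # model y ~ w.x (a hypothetical stand-in for the neural network).
    # batch is a list of (feature-vector, label) pairs.
    grad = [0.0] * len(w)
    for x, y in batch:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        for j, xj in enumerate(x):
            grad[j] += 2.0 * err * xj / len(batch)
    return grad
```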
      <p>Secure Aggregation. The SmartNIC separately decrypts the local gradients from each participant.
After receiving all users' data, it performs the aggregation operation. The aggregated global
gradients are perturbed with noise satisfying a Gaussian distribution for privacy-preserving
purposes. Finally, the SmartNIC encrypts the global gradients and broadcasts them to all users.</p>
      <p>Global Update. After receiving the global gradients returned by the SmartNIC, each user decrypts them
and updates the model parameters ω according to the global gradients and the learning rate η.</p>
      <p>The system repeats the above steps until the loss function reaches its minimum value; the
final neural network model is constructed through continuous iteration between the SmartNIC and the
users. Throughout training, the server only manages the SmartNIC and cannot access the
users' private data on it, thereby protecting user privacy.</p>
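      <p>The iterate-until-convergence loop above can be sketched as follows, where grad_fn and loss_fn are hypothetical stand-ins for one full secure-aggregation round and the training objective:</p>

```python
def train(w0, grad_fn, loss_fn, eta=0.1, tol=1e-6, max_rounds=1000):
    # Repeat the update w <- w - eta * g until the loss is (approximately)
    # minimized. grad_fn(w) returns the aggregated global gradient for one
    # round; loss_fn(w) is the training objective (both hypothetical helpers).
    w = list(w0)
    for _ in range(max_rounds):
        if loss_fn(w) < tol:
            break
        w = [wi - eta * gi for wi, gi in zip(w, grad_fn(w))]
    return w
```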
    </sec>
    <sec id="sec-7">
      <title>3.2. Aggregation architecture on SmartNIC</title>
    </sec>
    <sec id="sec-8">
      <title>Implementation</title>
      <p>In this section, we discuss the hardware structure of the components on the SmartNIC.</p>
      <p>Decrypt Engine. We have implemented the DES and 3DES encryption and decryption algorithms in a
pipelined manner on the Decrypt Engine. After receiving data, the engine decrypts it based on the
user ID and sequence number. The Encrypt Engine uses a similar structure for encryption.</p>
      <p>Storage Engine. The structure of the Storage Engine is shown in Figure 4. Each user is assigned a
separate memory space in the Storage Engine's DDR. When data enters the Storage Engine, the
user ID and sequence number determine where the data is stored and are used to check for packet loss.
Once none of the address spaces is empty, data is read from the DDR, synchronized by FIFOs, and
delivered to the Aggregate Engine.</p>
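      <p>The buffering and synchronization logic can be modeled in software as follows (a hypothetical model of the engine's behavior, not the Verilog implementation):</p>

```python
from collections import deque

class StorageEngine:
    # Software model of the Storage Engine: one buffer per user, addressed
    # by user ID. Sequence numbers detect packet loss, and data is released
    # to the aggregator only once no user's buffer is empty.
    def __init__(self, n_users):
        self.fifos = [deque() for _ in range(n_users)]
        self.expected = [0] * n_users

    def write(self, user_id, seq, word):
        # Check the sequence number before buffering the data word.
        if seq != self.expected[user_id]:
            raise ValueError("packet loss detected for user %d" % user_id)
        self.expected[user_id] += 1
        self.fifos[user_id].append(word)

    def read_synchronized(self):
        # Deliver one word per user, or None until every buffer is non-empty.
        if all(self.fifos):
            return [f.popleft() for f in self.fifos]
        return None
```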
      <p>Aggregate Engine. The structure of the Aggregate Engine is shown in Figure 5. We implement a
scenario in which up to 64 sets of gradient data can be aggregated simultaneously, using a
six-stage pipeline structure to improve processing performance. 32 Aggregate Blocks (ABs) are used
in the first stage, each aggregating two gradients; 16 ABs are used in the second stage, and the
number of ABs is halved in each subsequent stage until the final global gradients are output and
submitted to the Perturb Engine. With more than 64 users, intermediate results are temporarily
stored in the DDR pending further aggregation; with fewer than 64 users, the unused ABs' inputs
in the first pipeline stage are set to zero.</p>
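      <p>The halving pipeline amounts to a pairwise reduction tree, sketched here in software for scalar gradients (the real engine operates on packed 64-bit words):</p>

```python
def tree_aggregate(gradients, width=64):
    # Pairwise reduction tree mirroring the six-stage AB pipeline:
    # 64 inputs -> 32 -> 16 -> 8 -> 4 -> 2 -> 1. Missing inputs are
    # zero-padded, as when fewer than 64 users participate.
    assert len(gradients) <= width
    level = list(gradients) + [0.0] * (width - len(gradients))
    while len(level) > 1:
        # Each "AB" adds one adjacent pair per stage.
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
    return level[0]
```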
      <p>The structure of the Aggregate Block is shown in Figure 6. Since the Encrypt Engine implements
only the DES and 3DES algorithms, whose ciphertexts are 64 bits, we assume that the input of the
AB is also 64 bits. Gradient data are represented as 32-bit fixed-point numbers, so each 32-bit-aligned
input word carries two gradients at a time. The AB splits the input and sends the two halves to two
adders; the adders' results are concatenated back into 64 bits and stored temporarily in a register.</p>
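      <p>A single Aggregate Block can be modeled as follows, with unsigned masked addition standing in for the 32-bit fixed-point adders (two's-complement addition modulo 2^32 behaves identically):</p>

```python
MASK32 = 0xFFFFFFFF

def aggregate_block(word_a, word_b):
    # Model of one Aggregate Block: split each 64-bit input into two
    # 32-bit fixed-point lanes, add the lanes independently, and
    # concatenate the two 32-bit sums back into one 64-bit word.
    hi = ((word_a >> 32) + (word_b >> 32)) & MASK32
    lo = ((word_a & MASK32) + (word_b & MASK32)) & MASK32
    return (hi << 32) | lo
```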
    </sec>
    <sec id="sec-9">
      <title>Experiment</title>
      <p>
        In this section, we assess our proposed scheme in terms of communication overhead,
computational overhead, and hardware resource consumption by comparing it with the scheme
PPDL [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], in which additively homomorphic encryption is adopted as the privacy-preserving
approach. We implemented the SmartNIC-based secure aggregation structure on the Xilinx Zynq
UltraScale+ ZCU111 evaluation platform in Verilog, using Vivado 2018.3 for logic
synthesis and implementation. With no timing-path violations, the final clock frequency reaches 425
MHz and the data throughput reaches approximately 27 Gbps. Table 1 reports the resource
utilization of the SmartNIC-based aggregation scheme. PPDL's benchmarks run on the
TensorFlow 1.1.0 library over CUDA 8.0 with a Tesla K40m GPU and a Xeon E5-2660 v3 CPU @ 2.60 GHz
server, assuming each user uses only one thread. Since we completely offload the gradient
aggregation process in federated learning from the cloud server to the SmartNIC, and the cloud server
only configures the SmartNIC, our scheme's resource utilization on the cloud server can be
considered approximately zero compared with that of PPDL. Next, we compare the two
schemes' communication and computational overhead.
      </p>
      <table-wrap id="tbl1">
        <label>Table 1</label>
        <caption>
          <p>The area utilization report of our scheme</p>
        </caption>
        <table>
          <thead>
            <tr><th>Resource</th><th>Utilization</th><th>Available</th><th>Utilization %</th></tr>
          </thead>
          <tbody>
            <tr><td>LUT</td><td>40962</td><td>425280</td><td>9.63</td></tr>
            <tr><td>LUTRAM</td><td>663</td><td>213600</td><td>0.31</td></tr>
            <tr><td>FF</td><td>30366</td><td>850560</td><td>3.57</td></tr>
            <tr><td>BRAM</td><td>256</td><td>1080</td><td>23.70</td></tr>
            <tr><td>DSP</td><td>2</td><td>4272</td><td>0.05</td></tr>
            <tr><td>BUFG</td><td>1</td><td>696</td><td>0.14</td></tr>
          </tbody>
        </table>
      </table-wrap>
    </sec>
    <sec id="sec-10">
      <title>5.1. Communication overhead</title>
      <p>Assuming that each participant uses only one thread for computation, we first compare the
per-participant communication overhead of our scheme with that of PPDL. The relationship between the
communication overhead and the number of gradients is shown in Figure 7. The figure clearly
indicates that the communication overhead of PPDL is more than twice that of our scheme, mainly
because of the rapid growth in ciphertext volume brought on by homomorphic encryption.</p>
    </sec>
    <sec id="sec-11">
      <title>5.2. Computational overhead</title>
      <p>We compare the computational cost of our approach with that of PPDL during the encryption,
aggregation, and decryption stages. Since each participant in our scheme may use a different
encryption method, we pick the 3DES algorithm, which has the highest computational cost, to compare
against PPDL's homomorphic encryption. As shown in Figure 8, our encryption overhead is
considerably lower than PPDL's as the number of gradients increases, owing to the high
computational complexity of homomorphic encryption. Figure 9 shows the difference in
computational cost between our scheme and PPDL in the aggregation stage. Note that the
aggregation stage in our scheme includes four sub-stages: decryption, aggregation, noise addition, and
encryption. Benefiting from the high performance of the SmartNIC, our computational overhead in the
aggregation phase is less than half that of PPDL. Similarly, as shown in Figure 10, our decryption
overhead is also much smaller than PPDL's as the number of gradients increases. Our scheme is
therefore better suited to training large-scale deep neural network models.</p>
    </sec>
    <sec id="sec-12">
      <title>Related work</title>
      <p>
        Some related work on improving the performance and security of federated learning systems
using homomorphic encryption has been published recently. For example, Aono et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]
implemented gradient aggregation on the cloud server using an additively homomorphic encryption
scheme with low computational load while preserving the system's high accuracy. Even so, this
scheme still incurs substantial computational overhead as the neural network grows and the number
of training samples involved in modeling increases. The scheme is also vulnerable to differential
attacks, and honest participants' data privacy can be threatened by analysis of the shared model
[17]. For this reason, Hao et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] proposed an efficient federated deep learning scheme by integrating a lightweight symmetric
additively homomorphic encryption [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] with differential privacy [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. This scheme is secure for
honest-but-curious server settings, even if the cloud server colludes with multiple users. Unfortunately,
all participants in this scheme use a single encryption key, making the system vulnerable to security
threats even though the aggregation can be carried out smoothly. Therefore, how to design a scheme
to meet the above challenges remains an urgent problem.
      </p>
    </sec>
    <sec id="sec-13">
      <title>Conclusion</title>
      <p>This paper proposes an efficient and privacy-preserving federated learning system based on
hardware security by offloading the gradients aggregation operation onto SmartNIC as an alternative to
homomorphic cryptography. Extensive experiments demonstrate that our scheme has lower
communication and computational overhead than schemes using additively homomorphic encryption.
Meanwhile, our scheme is secure against an honest-but-curious cloud server and, by using a
different encryption method for each participant, offers stronger security than single-key
homomorphic encryption. In addition, we add Gaussian noise satisfying differential privacy to the
aggregated global gradients to defend against differential attacks in federated learning. In future
work, we will train real-world deep neural networks to evaluate the impact of our scheme on model
accuracy compared to traditional centralized machine learning, and we will investigate using
SmartNIC clusters for secure aggregation to support larger-scale federated learning applications.</p>
    </sec>
    <sec id="sec-14">
      <title>Acknowledgment</title>
      <p>This work is supported by the National Natural Science Foundation of China (61875168);
Chongqing Science Funds for Distinguished Young Scientists (cstc2021jcyj-jqX0027); the Innovation
Research 2035 Pilot Plan of Southwest University (SWU-XDPY22012); and the Innovation Support
Program for Overseas Students in Chongqing (cx2021008).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>T.</given-names>
            <surname>Tiffany</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Josh</surname>
          </string-name>
          , M. Daniele, “
          <article-title>Asynchronous collaborative learning across data silos</article-title>
          ,
          <source>” arXiv preprint arXiv:2203.12637</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Franz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Olga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kay</surname>
          </string-name>
          , G. Florian,
          <string-name>
            <given-names>J.</given-names>
            <surname>Igor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Franz</surname>
          </string-name>
          et al., “
          <article-title>Embracing opportunities of livestock big data integration with privacy constraints,”</article-title>
          <source>in Proceedings of the 9th international conference on the internet of things</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>H. B. McMahan</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Moore</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Ramage</surname>
            ,
            <given-names>B. A.</given-names>
          </string-name>
          <string-name>
            <surname>Arcas</surname>
          </string-name>
          , “
          <article-title>Federated learning of deep networks using model averaging</article-title>
          ,
          <source>” arXiv preprint arXiv:1602.05629</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cheng</surname>
          </string-name>
          , Y. Liu,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Yang</surname>
          </string-name>
          , “
          <article-title>Federated learning for privacy-preserving AI,” Communications of the ACM</article-title>
          , vol.
          <volume>63</volume>
          , no.
          <issue>12</issue>
          , pp.
          <fpage>33</fpage>
          -
          <lpage>36</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ligeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhijian</surname>
          </string-name>
          , H. Song, “
          <article-title>Deep Leakage from Gradients</article-title>
          ,”
          <source>arXiv preprint arXiv:1906.08935</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>X.</given-names>
            <surname>Xiaoyun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Jingzheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Mutian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tianyue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. Weiheng</given-names>
            <surname>Li</surname>
          </string-name>
          et al.,
          <source>“Information Leakage by Model Weights on Federated Learning,” in Proceedings of the 2020 workshop on privacy-preserving machine learning in practice</source>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>36</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Naehrig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lauter</surname>
          </string-name>
          , V. Vaikuntanathan, “
          <article-title>Can homomorphic encryption be practical?,”</article-title>
          <source>in Proceedings of the 3rd ACM workshop on cloud computing security workshop</source>
          , pp.
          <fpage>113</fpage>
          -
          <lpage>124</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Aono</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hayashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          , S. Moriai, “
          <article-title>Privacy-preserving deep learning via additively homomorphic encryption,” IEEE transactions on information forensics and security</article-title>
          , vol.
          <volume>13</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>1333</fpage>
          -
          <lpage>1345</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Liu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Yang</surname>
          </string-name>
          , “
          <article-title>Towards efficient and privacy-preserving federated deep learning</article-title>
          ,” in 2019 IEEE international conference on communications, pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Jun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhenfu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Xiaolei</surname>
          </string-name>
          , and L. Xiaodong, “
          <article-title>Ppdm: A privacy-preserving protocol for cloud-assisted e-healthcare systems,”</article-title>
          <source>IEEE Journal of Selected Topics in Signal Processing</source>
          , vol.
          <volume>9</volume>
          , no.
          <issue>7</issue>
          , pp.
          <fpage>1332</fpage>
          -
          <lpage>1344</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ming</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tianyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Henry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Arvind</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Simon</surname>
          </string-name>
          , G. Karan, “
          <article-title>Offloading distributed applications onto smartNICs using iPipe,” in Proceedings of the ACM special interest group on data communication</article-title>
          .
          <source>Association for Computing Machinery</source>
          , pp.
          <fpage>318</fpage>
          -
          <lpage>333</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Abadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chu</surname>
          </string-name>
          , I. Goodfellow, H. B.
          <string-name>
            <surname>McMahan</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <string-name>
            <surname>Mironov</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Talwar</surname>
          </string-name>
          et al., “
          <article-title>Deep learning with differential privacy,” in Proceedings of the 2016 ACM SIGSAC conference on computer and communications security</article-title>
          .
          <source>ACM</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>308</fpage>
          -
          <lpage>318</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chuanxin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yi</surname>
          </string-name>
          , W. Degang, “
          <article-title>Federated Learning with Gaussian Differential Privacy,”</article-title>
          <source>in Proceedings of the 2020 2nd international conference on robotics, intelligent control and artificial intelligence</source>
          , pp.
          <fpage>296</fpage>
          -
          <lpage>301</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R.</given-names>
            <surname>Shokri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Stronati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Song</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Shmatikov</surname>
          </string-name>
          , “
          <source>Membership Inference Attacks Against Machine Learning Models,” 2017 IEEE symposium on security and privacy (SP)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mondal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>More</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. H.</given-names>
            <surname>Rooparaghunath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gupta</surname>
          </string-name>
          , “
          <source>Flatee: Federated Learning Across Trusted Execution Environments,” arXiv preprint arXiv:2111.06867</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] R. Shokri and V. Shmatikov, “Privacy-preserving deep learning,” in Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, ACM, 2015, pp. 1310-1321.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] R. Geyer, T. Klein, M. Nabi, “Differentially private federated learning: a client level perspective,” arXiv preprint arXiv:1712.07557, 2017.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>