<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Spiking Neural Networks for Near-Sensor Processing: An Open-Hardware Experience</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Luca Martis</string-name>
          <email>luca.martis@unica.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gianluca Leone</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luigi Raffo</string-name>
          <email>raffo@unica.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paolo Meloni</string-name>
          <email>paolo.meloni@unica.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Electrical and Electronic Engineering, University of Cagliari</institution>
          ,
          <addr-line>09123, Cagliari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Moving data processing closer to its source, known as edge computing, offers significant advantages, including reduced latency, bandwidth savings, and enhanced reliability. However, implementing complex algorithms on edge devices is challenging due to power and computational limitations. In this context, Spiking Neural Networks (SNNs) emerge as a promising solution due to their energy efficiency. This work presents a preliminary version of an accelerator for Spiking Neural Networks designed to be integrated into a System-on-Chip (SoC). The accelerator was developed using the open-source Process Design Kit (PDK) SKY130A and the open-source Electronic Design Automation (EDA) tool OpenLane. Preliminary results highlight the potential of SNNs for edge applications, as well as the advantages of using these new open-source tools, which reduce cost barriers and facilitate transparent information exchange.</p>
      </abstract>
      <kwd-group>
        <kwd>Edge computing</kwd>
        <kwd>low power</kwd>
        <kwd>spiking neural network</kwd>
        <kwd>real-time</kwd>
        <kwd>ASIC</kwd>
        <kwd>EDA</kwd>
        <kwd>open-source</kwd>
        <kwd>OpenLane flow</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In recent years, the deployment of artificial intelligence (AI) algorithms at the edge has gained significant
traction, driven by the exponential growth of Cyber-Physical Systems (CPS) and Internet of Things
(IoT) devices. Moving data processing as close as possible to the source offers significant benefits, such
as the ability to perform real-time processing, reduce latency, and improve overall system efficiency.
However, implementing these algorithms on edge devices is challenging due to constraints related to
energy consumption and computational power.</p>
      <p>Spiking Neural Networks (SNNs) have emerged as a promising AI algorithm for edge applications due
to their energy efficiency. Unlike traditional neural networks, SNNs mimic the brain’s spiking behavior,
leading to lower power consumption and improved performance in event-driven scenarios. This makes
SNNs particularly suitable for edge computing, where energy resources are constrained and efficient
processing is crucial.</p>
      <p>These networks are best executed on neuromorphic processors, which can fully exploit the
sparsity of events and offer remarkable efficiency. Neuromorphic hardware is designed to support
the unique operational characteristics of SNNs, such as asynchronous processing and event-driven
computation, resulting in significant energy savings and faster response times. However, the adoption
of neuromorphic processors is still limited due to high costs and integration challenges with existing
edge devices.</p>
      <p>In light of these considerations, the objective of this work is to develop a low-power accelerator for
SNNs to be integrated into a System on Chip (SoC) for edge applications.</p>
      <p>Leveraging current trends, we have utilized an open-source Process Design Kit (PDK) and open-source
Electronic Design Automation (EDA) tools to implement the system. This approach not only aligns
with the growing emphasis on open-source solutions but also enhances the flexibility, accessibility, and
cost-effectiveness of our design process.</p>
      <p>The main findings of this study can be summarized as follows:
• we present an SNN accelerator integrable into a SoC, such as the Efabless Caravel (https://caravel-harness.readthedocs.io/en/latest/);
• we present an initial version of the accelerator layout, created using exclusively open-source tools;
• we demonstrate that the SNN accelerator is a viable solution for edge applications, achieving
an energy consumption of 3.7 µJ per inference at a frequency of 89 MHz.</p>
      <p>The remainder of the paper is organized as follows: Section 2 provides a brief overview of related works,
Section 3 introduces spiking neural networks, the architecture of the accelerator, and
the RTL-GDSII flow. Finally, Section 4 discusses the obtained results, while Section 5 outlines future
work, and Section 6 presents the conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>
        Currently, several specialized neuromorphic processors are highly effective in executing SNNs. These
advanced processors, as detailed in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], are capable of efficiently handling large-scale
networks while maintaining high energy efficiency. However, these devices are not suitable for
implementation on edge devices, as they require power levels ranging from hundreds of milliwatts to watts,
which such devices cannot sustain.
      </p>
      <p>
        More relevant to this work are the studies [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], which focus on implementing FPGA-based
accelerators for lightweight SNNs, making them better suited for edge applications. Particularly relevant to
this study is the work described in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], where a previous version of the accelerator used in this study
was implemented on an Artix-7 FPGA.
      </p>
      <p>To the best of our knowledge, there are no works that describe accelerators for SNNs developed entirely
using open-source tools. However, several studies have explored the potential of these emerging tools
and demonstrated their capabilities in various applications.</p>
      <p>
        The studies by [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] are particularly relevant to our work, as they use OpenLANE [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and the
Sky130A PDK to implement an accelerator integrated into an SoC. The research in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] involved designing
a Vector Accelerator Unit for integration with the Caravel SoC, aimed at enhancing the performance of
the existing RISC-V processor on the board. The results showed significant performance improvements,
with parallel processing speeds ranging from 0.19 to over 100 times faster for (12 × 12) Scalar Products.
In [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], an RRAM-based In-Memory Computing (IMC) co-processor is introduced, which enables
energy-efficient mapping of Multiply and Accumulate (MAC) operations and offers reconfigurable logic mapping.
The work presented in [9] describes the design of a 3-stage pipelined RISC-V RV32I processor using the
OpenLANE tool in conjunction with the open-source Sky130A PDK. This study compared the results
obtained using open-source EDA tools with those from commercial tools, demonstrating that the timing,
area, and power consumption metrics from the OpenLANE flow were competitive and efficient relative
to commercial flows.
      </p>
      <p>Finally, the study in [10] represents the most complex project, utilizing OpenROAD [11] and Yosys
[12] along with the open-source PDK IHP-SG13 to develop Basilisk, the first fully open-source,
Linux-capable RISC-V SoC. Basilisk features a 64-bit RISC-V core, a fully digital HyperRAM DRAM
controller, and a comprehensive set of I/O peripherals, including USB 1.1 and VGA. This work highlights
the advantage offered by open-source tools of allowing the provided scripts to be customized: by modifying the
Yosys and OpenROAD scripts, the authors achieved better results than with the non-customized scripts,
including higher system clock frequencies and reduced area occupancy.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methods</title>
      <sec id="sec-3-1">
        <title>3.1. Spiking Neural Network</title>
        <p>Spiking neural networks are defined as the third generation of neural networks. What distinguishes
them from conventional neural networks is their use of a more realistic neuron model. Neurons
communicate with each other through binary signals known as spikes and maintain an internal state
called membrane potential.</p>
        <p>The neuron model used in this work is the Leaky Integrate and Fire (LIF) neuron, which is described by
the following equation:</p>
        <p>V[t] = β · V[t − 1] + Σᵢ wᵢ · sᵢ[t] − o[t − 1] · Vth
o[t] = 1 if V[t] &gt; Vth, 0 otherwise
(1)
where V represents the neuron’s membrane potential, β is the membrane potential decay rate, w
denotes the synaptic weights, s indicates the input spikes, o is the output spike, and Vth represents the
neuron’s threshold. When the potential V[t] exceeds the threshold value Vth, a spike o[t] is generated and a
reset mechanism is activated, subtracting the threshold value from the potential.</p>
        <p>The communication through spikes makes these networks particularly interesting from an energy
standpoint for several reasons. Spiking neural networks operate in an event-driven manner, where
signals remain inactive for extended periods and only become active when spikes occur. This
phenomenon, known as sparsity, allows the system to process information only when necessary, thus
avoiding unnecessary operations and conserving energy during inactive periods. Additionally, since
the inputs to these neurons are binary (0 or 1), the computational operations primarily involve simple
accumulation rather than complex multiplications. This reduces the overall algorithmic complexity
and leads to lower energy consumption compared to conventional neural networks, which rely heavily
on multiplication operations. These characteristics make spiking neural networks highly efficient and
well-suited for edge applications where energy efficiency is critical.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. System Architecture</title>
        <p>
          The proposed system is an accelerator for Spiking Neural Networks intended for integration into a
System-on-Chip (SoC). This version of the accelerator is a modified adaptation of the one described in
[
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], which was initially designed for FPGA implementation.
        </p>
        <p>As the original design leveraged FPGA-specific components such as configurable blocks of memory (BRAMs)
and hard-wired Digital Signal Processing (DSP) blocks, several modifications were required to adapt the
architecture to the PDK used. Modules previously implemented with DSPs were replaced by combinational logic,
while blocks originally implemented with BRAMs were substituted with fourteen 4 Kbyte single-port
SRAM memories (https://github.com/efabless/EF_SRAM_1024x32) and flip-flops.</p>
        <p>In particular, the weight memories were implemented using eight memories, while the spike memories
were realized using flip-flops, as this approach proved more area-efficient due to the mismatch with
the available memory blocks. In the FPGA, the BRAMs were dual-port, allowing simultaneous read
and write operations. Therefore, the FIFOs for membrane potentials were implemented to support
concurrent read and write access. However, since the available memories are now single-port, it was
necessary to adapt the architecture by using two separate memories and a state machine. During each
inference, one memory is used for reading and the other for writing, swapping roles in the subsequent
inference. Finally, two memories were used to store the decay factors. Weights are stored in 8-bit integer
format, membrane potentials in 32-bit format to avoid saturation, and decay factors in 14-bit format.
The accelerator receives input spikes in groups of four, while the membrane voltages of the neurons in
the final layer of the network are used as output.</p>
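        <p>The single-port workaround can be pictured with the following Python model (a behavioral sketch with assumed names, not the RTL): two potential memories alternate between read and write roles on successive inferences:</p>

```python
class PingPongStore:
    """Behavioral model of the two single-port potential memories:
    during one inference, one memory serves all reads while the other
    collects all writes; the roles swap at the end of each inference."""

    def __init__(self, n_neurons):
        self.mem = [[0] * n_neurons, [0] * n_neurons]
        self.rd = 0                    # index of the memory currently read

    def read(self, addr):
        return self.mem[self.rd][addr]

    def write(self, addr, value):
        self.mem[1 - self.rd][addr] = value

    def end_inference(self):
        self.rd = 1 - self.rd          # swap read/write roles
```

        <p>In hardware, the swap is driven by a small state machine instead of an explicit method call, but the access pattern is the same.</p>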
        <p>The accelerator executes a network with four dense layers, using two layer hardware modules on which
two layers are mapped each. Each module contains a memory that stores the synaptic weights, a FIFO
where the neurons’ decay factors are saved, and a FIFO where the neurons’ membrane potentials are
saved. Each layer also includes two stacks that dynamically address the active input synapses and two
memories that store the generated spikes. Additionally, every layer contains a module that calculates</p>
        <sec id="sec-3-2-1">
          <title>2https://github.com/efabless/EF_SRAM_1024x32</title>
          <p>the synaptic current and a neuron module for updating the LIF neuron state.</p>
          <p>The hardware layer module operates as follows: the module receives input spikes, grouped in sets of
four, and stores them in the spike memory 0, keeping track of the addresses of the active groups in
the spike stack 0. The addresses saved in stack 0 are used to access the weight memory, where the
weights are also stored in groups of four and are summed based on the spike condition of the associated
synapses stored in spike memory 0. The LIF modules update the potential of the neurons and generate
the spikes for the next layer. The generated spikes are again grouped in sets of four and stored in
spike memory 1, and the active addresses are saved in stack 1. This process is then repeated, and the
generated outputs become the input for the next layer module.</p>
          <p>SNN accelerator
Layer module 0</p>
          <p>Layer module 1
Input</p>
          <p>Potentials
Memory
Decays
Memory</p>
          <p>Spike
Memory 1</p>
          <p>Spike
Memory 0
Stack 0
Stack 1</p>
          <p>LIF
4-way
Synaptic adder</p>
          <p>Weights
Memory</p>
          <p>Potentials
Memory
Decays
Memory</p>
          <p>Spike
Memory 1</p>
          <p>Spike
Memory 0
Stack 0
Stack 1</p>
          <p>LIF
4-way
Synaptic adder</p>
          <p>Weights
Memory</p>
          <p>Output</p>
        </sec>
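        <p>The event-driven traversal described above can be sketched in Python (the data layout and names are assumptions for illustration): only the 4-spike groups recorded in the stack are visited, and weights are accumulated only for synapses whose spike bit is set:</p>

```python
def accumulate_currents(stack, spike_groups, weight_mem, n_neurons):
    """Sum synaptic weights for active inputs only.
    stack        : addresses of 4-input groups containing at least one spike
    spike_groups : group address -> list of 4 spike bits
    weight_mem   : (group address, neuron index) -> list of 4 weights"""
    currents = [0] * n_neurons
    for g in stack:                         # visit only the active groups
        bits = spike_groups[g]
        for n in range(n_neurons):
            w = weight_mem[(g, n)]
            # accumulate only where the associated synapse carried a spike
            currents[n] += sum(wi for wi, s in zip(w, bits) if s)
    return currents
```

        <p>Skipping the inactive groups is what lets the accelerator benefit directly from spike sparsity: fewer active synapses mean fewer cycles per inference.</p>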
      </sec>
      <sec id="sec-3-3">
        <title>3.3. RTL-GDSII flow</title>
        <p>
          To develop the chip layout of the proposed system, the open-source OpenLANE flow [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] was used.
This tool is an automated RTL-to-GDSII flow designed for implementing both single macros, which can then
be integrated into SoCs, and complete chips. The tool is maintained by Efabless, and its
main advantage is the ability to integrate the produced designs into Caravel, a system that provides essential
functions such as IO, power management, and configuration.
        </p>
        <p>The flow relies entirely on open-source tools: Yosys [12] for synthesis, OpenSTA (https://github.com/The-OpenROAD-Project/OpenSTA) for static timing
analysis, and OpenROAD [11] for floorplanning, placement, CTS, and routing. KLayout (https://www.klayout.de/) generates the
final GDSII layout file, while Magic (https://github.com/RTimothyEdwards/magic) and Netgen (https://github.com/RTimothyEdwards/netgen) are used
for Design Rule Checks (DRC) and Layout Versus Schematic (LVS) verification.</p>
        <p>The tool is compatible with both A and B variants of the open-source SKY130 PDK, the C variant of
the open-source GF180MCU PDK, and includes documentation for adding support for other PDKs,
including proprietary ones. For this work, version A of the open-source SKY130 PDK was selected to
enable the integration of the system onto Caravel.</p>
        <p>The flow is completely automated and requires only the RTL files describing the system and a
configuration file, in either JSON or TCL format, which outlines the settings to follow.</p>
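        <p>As an illustration, a minimal JSON configuration for such a flow might look like the following (the values shown are hypothetical placeholders, not the settings used for this design):</p>

```json
{
  "DESIGN_NAME": "snn_accelerator",
  "VERILOG_FILES": "dir::src/*.v",
  "CLOCK_PORT": "clk",
  "CLOCK_PERIOD": 11.2,
  "PDK": "sky130A"
}
```

        <p>Here `CLOCK_PERIOD` is in nanoseconds; the remaining floorplanning and placement variables take documented defaults unless overridden.</p>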
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>Table 1 presents the main statistics of the chip. Figure 2 shows the layout visualized with KLayout
on the left and the placement density visualized through the OpenROAD graphical interface
on the right. The core area of the chip was chosen to meet the constraints imposed by the
Caravel Harness and occupies 46% of the available area, leaving 5.7 mm² of space for additional modules.
The system can operate at a maximum frequency of 89 MHz, dissipating a power of 54.3 mW at this
frequency, as calculated using the OpenSTA tool.</p>
      <p>Since the hardware can perform 4 synaptic additions per clock cycle, one inference can be executed
every T, expressed as</p>
      <p>T = (Nsynapse / 4) · Tclk (2)
where Nsynapse is the number of active synapses and Tclk is the clock period.</p>
      <p>In the worst-case scenario, where all the synapses are active, T is equal to 70 µs and the energy dissipated
per inference is equal to 3.7 µJ.</p>
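      <p>As a sanity check on these figures, Eq. (2) and the reported power reproduce the reported energy (the worst-case synapse count below is back-solved from the 70 µs figure, not taken from the paper):</p>

```python
f_clk = 89e6                       # maximum clock frequency [Hz]
t_clk = 1.0 / f_clk                # clock period, about 11.2 ns
n_syn = 24920                      # assumed worst-case active synapses (back-solved)
t_inf = (n_syn / 4) * t_clk        # Eq. (2): four synaptic additions per cycle
p_avg = 54.3e-3                    # power at 89 MHz [W]
e_inf = p_avg * t_inf              # energy per inference [J]
```

      <p>The product comes out near 3.8 µJ, consistent with the reported 3.7 µJ given the rounding of the published power and timing figures.</p>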
    </sec>
    <sec id="sec-5">
      <title>5. Future works</title>
      <p>A fundamental aspect of using spiking neural networks is the process of converting inputs from real
values to spike trains. Conversely, the process of converting the output spikes from the network back
into real values is equally important. In the proposed accelerator, it is assumed that the encoding phase
has already been completed, with spikes being received directly as input, while the decoding phase is
not necessary as the output is the membrane potential, which is already a real value.</p>
      <p>In the future, the goal is to add spike encoding and decoding modules to make the system as complete
as possible. Various conversion algorithms have been proposed in the literature for this task,
including Ben’s Spiker [13], step-forward [14], or pulse-width-modulation-based [15] encoding, each better
suited to particular use cases. To accommodate different applications, the plan is to integrate an
embedded FPGA (eFPGA) into the accelerator. This will allow for the mapping of various encoding and
decoding algorithms, depending on the application, leveraging the flexibility of FPGA reconfigurability
while maintaining the low power consumption of an ASIC. This integration will be facilitated by new
open-source tools, including OpenFPGA [16] and FABulous [17], which enable the generation of RTL
describing the FPGA and provide the necessary tools to generate the bitstream for configuring it.
Once the eFPGA is integrated into the system, the final goal will be to perform the tapeout of the chip.</p>
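      <p>As an example of such a conversion, a common reading of the step-forward scheme [14] can be sketched as follows (a simplified illustration, not the module planned for the accelerator):</p>

```python
def step_forward_encode(signal, threshold):
    """Encode a real-valued signal into +1/-1/0 spikes: a spike is emitted
    whenever the signal moves more than `threshold` away from a baseline,
    and the baseline then tracks the signal in steps of `threshold`."""
    base = signal[0]
    spikes = []
    for x in signal[1:]:
        if x > base + threshold:
            spikes.append(1)       # positive spike: signal rose past the band
            base += threshold
        elif x < base - threshold:
            spikes.append(-1)      # negative spike: signal fell below the band
            base -= threshold
        else:
            spikes.append(0)       # within the band: no event
    return spikes
```

      <p>The choice of threshold trades reconstruction accuracy against spike rate, which in turn sets the accelerator’s per-inference workload.</p>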
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this work, we presented an accelerator for SNNs designed for integration into a SoC. An initial version
of the accelerator layout was provided, created entirely using open-source tools and compatible with
integration into the Caravel SoC. Additionally, the energy efficiency of the accelerator was demonstrated,
with an energy consumption of 3.7 µJ per inference at a frequency of 89 MHz.</p>
      <p>As a preliminary step, these results highlight the potential for further improvements in both energy
efficiency and area. The customization possibilities offered by open-source tools present a promising
path for future optimization, suggesting that our approach could evolve into more competitive and
scalable solutions in the field of hardware accelerators for SNNs.</p>
      <p>[9] S. Hesham, M. Shalan, M. W. El-Kharashi, M. Dessouky, Digital ASIC implementation of RISC-V: OpenLane and commercial approaches in comparison, in: 2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), 2021, pp. 498–502. doi:10.1109/MWSCAS47672.2021.9531753.</p>
      <p>[10] P. Scheffler, P. Sauter, T. Benz, F. K. Gürkaynak, L. Benini, Basilisk: An end-to-end open-source Linux-capable RISC-V SoC in 130nm CMOS, 2024. URL: https://arxiv.org/abs/2406.15107. arXiv:2406.15107.</p>
      <p>[11] T. Ajayi, D. Blaauw, et al., OpenROAD: Toward a self-driving, open-source digital layout implementation tool chain, Proceedings of Government Microcircuit Applications and Critical Technology Conference (2019). URL: https://par.nsf.gov/biblio/10171024.</p>
      <p>[12] D. Shah, E. Hung, C. Wolf, S. Bazanski, D. Gisselquist, M. Milanovic, Yosys+nextpnr: an open source framework from Verilog to bitstream for commercial FPGAs, in: 2019 IEEE 27th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), IEEE, 2019, pp. 1–4.</p>
      <p>[13] B. Schrauwen, J. Van Campenhout, BSA, a fast and accurate spike train encoding scheme, in: Proceedings of the International Joint Conference on Neural Networks, 2003, volume 4, pp. 2825–2830. doi:10.1109/IJCNN.2003.1224019.</p>
      <p>[14] K. Wang, X. Hao, J. Wang, B. Deng, Comparison and selection of spike encoding algorithms for SNN on FPGA, IEEE Transactions on Biomedical Circuits and Systems 17 (2023) 129–141. doi:10.1109/TBCAS.2023.3238165.</p>
      <p>[15] A. Arriandiaga, E. Portillo, J. I. Espinosa-Ramos, N. K. Kasabov, Pulsewidth modulation-based algorithm for spike phase encoding and decoding of time-dependent analog data, IEEE Transactions on Neural Networks and Learning Systems 31 (2020) 3920–3931. doi:10.1109/TNNLS.2019.2947380.</p>
      <p>[16] X. Tang, E. Giacomin, B. Chauviere, A. Alacchi, P.-E. Gaillardon, OpenFPGA: An open-source framework for agile prototyping customizable FPGAs, IEEE Micro 40 (2020) 41–48. doi:10.1109/MM.2020.2995854.</p>
      <p>[17] D. Koch, N. Dao, B. Healy, J. Yu, A. Attwood, FABulous: An embedded FPGA framework, in: The 2021 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA '21, Association for Computing Machinery, New York, NY, USA, 2021, pp. 45–56. doi:10.1145/3431920.3439302.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Michmizos</surname>
          </string-name>
          ,
          <article-title>Spiking neural network on neuromorphic hardware for energy-efficient unidimensional slam</article-title>
          ,
          <year>2019</year>
          , pp.
          <fpage>4176</fpage>
          -
          <lpage>4181</lpage>
          . doi:10.1109/IROS40897.2019.8967864.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M. V.</given-names>
            <surname>DeBole</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Taba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Amir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Akopyan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Andreopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. P.</given-names>
            <surname>Risk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kusnitz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. O.</given-names>
            <surname>Otero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. K.</given-names>
            <surname>Nayak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Appuswamy</surname>
          </string-name>
          , et al.,
          <article-title>Truenorth: Accelerating from zero to 64 million neurons in 10 years</article-title>
          ,
          <source>Computer</source>
          <volume>52</volume>
          (
          <year>2019</year>
          )
          <fpage>20</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Behrenbeck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Tayeb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bhiri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Richter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Rhodes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kasabov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. I.</given-names>
            <surname>Espinosa-Ramos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Furber</surname>
          </string-name>
          , G. Cheng, J. Conradt,
          <article-title>Classification and regression of spatio-temporal signals using neucube and its realization on spinnaker neuromorphic hardware</article-title>
          ,
          <source>Journal of Neural Engineering</source>
          <volume>16</volume>
          (
          <year>2019</year>
          )
          026014. URL: https://dx.doi.org/10.1088/1741-2552/aafabc. doi:10.1088/1741-2552/aafabc.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Leone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Martis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Raffo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Meloni</surname>
          </string-name>
          ,
          <article-title>Spiking neural networks for integrated reach-to-grasp decoding on fpgas</article-title>
          ,
          <source>in: 2023 IEEE Biomedical Circuits and Systems Conference (BioCAS)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . doi:10.1109/BioCAS58349.2023.10389037.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Carpegna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Savino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. Di</given-names>
            <surname>Carlo</surname>
          </string-name>
          ,
          <article-title>Spiker: an fpga-optimized hardware accelerator for spiking neural networks</article-title>
          ,
          <source>in: 2022 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>14</fpage>
          -
          <lpage>19</lpage>
          . doi:10.1109/ISVLSI54635.2022.00016.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E. I.</given-names>
            <surname>Baungarten-Leon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ortega-Cisneros</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Jaramillo-Toral</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. J.</given-names>
            <surname>Rodriguez-Navarrete</surname>
          </string-name>
          , L. Pizano-Escalante,
          <string-name>
            <given-names>J. J. R.</given-names>
            <surname>Panduro</surname>
          </string-name>
          ,
          <article-title>Vector accelerator unit for caravel</article-title>
          ,
          <source>IEEE Embedded Systems Letters</source>
          <volume>16</volume>
          (
          <year>2024</year>
          )
          <fpage>73</fpage>
          -
          <lpage>76</lpage>
          . doi:10.1109/LES.2023.3267341.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>V.</given-names>
            <surname>Parmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Moorthii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pandey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Suri</surname>
          </string-name>
          , Open-rimc:
          <article-title>Open-source rram-based imc co-processor with reconfigurable logic mapping</article-title>
          ,
          <source>in: 2023 IEEE 23rd International Conference on Nanotechnology (NANO)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>519</fpage>
          -
          <lpage>523</lpage>
          . doi:10.1109/NANO58406.2023.10231247.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Shalan</surname>
          </string-name>
          , T. Edwards,
          <article-title>Building openlane: A 130nm openroad-based tapeout-proven flow: Invited paper</article-title>
          , in: 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD),
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>