<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1080/00018732.2022</article-id>
      <title-group>
        <article-title>The next generation of SNNs, energy effectiveness and memory optimisation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Max Talanov</string-name>
          <email>max.talanov@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute for Artificial Intelligence R&amp;D of Serbia</institution>
          ,
          <addr-line>Fruškogorska 1, Novi Sad</addr-line>
          ,
          <country country="RS">Serbia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>38</volume>
      <fpage>11016</fpage>
      <lpage>11023</lpage>
      <abstract>
        <p>Modern artificial intelligence (AI) systems, predominantly based on traditional artificial neural networks (ANNs), face fundamental limitations in energy efficiency, adaptability, and biological plausibility. These challenges stem from high computational costs, rigid learning paradigms, and the inability to perform continuous adaptation in real-time environments. In this work, we analyse these constraints, emphasising the high energy consumption of ANNs and their lack of online learning capabilities, which hinder flexibility and scalability in autonomous applications. To address these issues, we propose a neuro-inspired approach that integrates memristive spiking neural networks (SNNs) with biologically relevant mechanisms such as sleep-phase memory consolidation. Memristive hardware enables energy-efficient in-memory computation, while SNNs facilitate event-driven processing and synaptic plasticity, reducing power consumption and enhancing learning efficiency. Additionally, sleep-inspired consolidation processes, particularly those modelled after hippocampal replay, offer a mechanism for optimising memory retention and adaptation over time. By leveraging these principles, we outline a path toward next-generation AI architectures that are both energy-efficient and dynamically adaptable, crucial for applications in autonomous robotics and edge AI systems. Our findings suggest that spiking neuromorphic solutions, when combined with biologically inspired learning mechanisms, could pave the way for a more optimal, self-sustaining AI paradigm that is less reliant on energy-intensive training and retraining cycles.</p>
      </abstract>
      <kwd-group>
        <kwd>AI</kwd>
        <kwd>Artificial Neural Networks</kwd>
        <kwd>Energy Consumption</kwd>
        <kwd>Spiking Neural Networks</kwd>
        <kwd>Architecture</kwd>
        <kwd>Hippocampus</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Artificial Neural Networks (ANNs) have driven remarkable advances in machine learning, enabling
breakthroughs in image recognition, natural language processing, and game-playing agents. However,
these successes come with significant limitations that hinder the efficiency, adaptability, and biological
realism of modern ANNs. Researchers have increasingly noted challenges such as the enormous energy
demands of training large-scale models, the reliance on biologically implausible learning algorithms,
and the inability of most networks to learn continuously in changing environments [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. These issues
have practical consequences: from hefty carbon footprints and economic costs to brittle AI systems
that cannot adapt on the fly or retain old knowledge [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ].
      </p>
      <p>
        This article provides an in-depth analysis of the major limitations of contemporary ANNs, structured
around six core issues: (1) high energy consumption, (2) inefficiency and biological implausibility of
the backpropagation algorithm, (3) lack of online learning for real-time adaptability, (4) absence of
neuromodulatory mechanisms for context-sensitive learning, (5) frozen weights that prevent further
adaptation after deployment, and (6) limited parallelism relative to biological brains. We contrast these shortcomings
with principles from biological neural systems and discuss real-world consequences [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. High Energy Consumption in Large-Scale ANNs</title>
      <p>Modern networks with millions or billions of parameters require significant computational resources
for training and inference. Therefore, one of the most pressing limitations of modern ANNs is their
high energy consumption, especially in large-scale deep-learning models. For example, training a
single large natural language processing model can consume hundreds of megawatt-hours of electricity
[
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ], leading to economic and environmental consequences. For instance, OpenAI’s GPT-3 (175 billion
parameters) reportedly required an estimated $4.6 million worth of computing and 355 GPU-years of
training time [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Training and running large models are often only feasible for organisations with
massive computing infrastructure, raising concerns about the financial costs and accessibility of AI
research. High electricity consumption is also often linked to carbon-intensive energy sources, especially
if data centres do not use renewable energy. For example, training a large Transformer-based model
with neural architecture search resulted in approximately 626,000 pounds of CO<sub>2</sub> emissions [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], which
can indirectly impact global climate change and greenhouse gas concentrations.
      </p>
      <p>
        Currently, several strategies are being explored to curb the exponential rise in energy demand for AI
training: improving hardware [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], or using sparsely activated neural networks, where only a portion
of the parameters are used for any given input [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. An alternative approach to solving high energy
consumption problems could be the use of a new neuromorphic architecture that does not require
constant updates of all synaptic weights but operates locally and selectively. This approach is similar
to the way the nervous system works, which demonstrates exceptional energy efficiency in nature: the
80–100 billion neurons in the human brain perform trillions of operations per second on roughly 20
watts of power [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Performing similar operations on conventional computer hardware would require
energy comparable to the output of a small power plant [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
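      <p>To make the appeal of sparse activation concrete, the following sketch compares the multiply-accumulate count of a dense layer against a mixture-of-experts-style layer in which only the top-k experts fire per input. The layer sizes and expert counts are illustrative assumptions, not figures from the cited works:</p>
      <preformat>
```python
# Back-of-envelope comparison of dense vs. sparsely activated inference cost.
# All numbers are illustrative assumptions, not measurements.

def dense_macs(d_in: int, d_out: int) -> int:
    """Multiply-accumulate operations for one dense layer pass."""
    return d_in * d_out

def sparse_macs(d_in: int, d_out: int, n_experts: int, k: int) -> int:
    """MACs when the layer is split into n_experts experts and only
    the top-k experts are evaluated per input (mixture-of-experts style)."""
    expert_out = d_out // n_experts
    return k * d_in * expert_out

dense = dense_macs(4096, 4096)
sparse = sparse_macs(4096, 4096, n_experts=64, k=2)
print(f"dense:  {dense:,} MACs")
print(f"sparse: {sparse:,} MACs  ({dense / sparse:.0f}x fewer)")
```
      </preformat>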
    </sec>
    <sec id="sec-3">
      <title>3. Inefficiency and Biological Implausibility of Backpropagation</title>
      <p>
        The backpropagation algorithm has demonstrated high effectiveness as an engineering tool; however, it
faces several critical limitations, including computational inefficiency, sensitivity to changes in data,
catastrophic forgetting, and long training times. The algorithm requires global synchrony and weight
symmetry, which leads to the weight transport problem, reducing the flexibility and efficiency of the
algorithm [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Additionally, backpropagation requires significant computational resources to store
and update gradients, which makes scaling the algorithm to large networks challenging, especially
when training with long data sequences or deep models. As credit assignment paths grow longer, the
algorithm encounters the well-known vanishing and exploding gradient problems [
        <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
        ].
      </p>
      <p>
        In biological neural networks, we do not face these issues: information exchange and learning occur
locally, at the synaptic level, without the need for global synchrony and weight symmetry. This
has led to the development of several bio-inspired algorithms, such as Hebbian learning [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], target
propagation [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], which attempt to distribute credit assignment in a more biologically plausible manner,
reducing computational load while improving speed and efficiency. Although these methods are
still under development, they highlight that, despite their successes, backpropagation is not a universal
learning principle. It works well on digital computers but faces scalability issues and cannot be used on
hardware that is not a traditional von Neumann computer.
      </p>
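      <p>The contrast between backpropagation and local learning can be illustrated with a minimal Hebbian update, in which each weight changes using only the activity of the two neurons it connects; the sizes and learning rate below are arbitrary assumptions:</p>
      <preformat>
```python
import numpy as np

# Minimal sketch of a purely local learning rule, in contrast to
# backpropagation: each weight is updated from the activity of the two
# neurons it connects, with no global error signal or weight transport.

def hebbian_step(w: np.ndarray, pre: np.ndarray, post: np.ndarray,
                 eta: float = 0.01) -> np.ndarray:
    """w[i, j] grows when presynaptic unit j and postsynaptic unit i co-fire."""
    return w + eta * np.outer(post, pre)

rng = np.random.default_rng(0)
w = np.zeros((3, 5))
pre = rng.random(5)    # presynaptic firing rates
post = rng.random(3)   # postsynaptic firing rates
w = hebbian_step(w, pre, post)
print(w.shape)
```
      </preformat>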
    </sec>
    <sec id="sec-4">
      <title>4. Lack of Online Learning and Real-Time Adaptability</title>
      <p>
        Another significant limitation of most ANN implementations is the lack of online learning capabilities.
In practice, the vast majority of deep neural networks are trained in an offline, batch mode: the model
is optimised on a static training dataset (often with multiple passes over the data in epochs), and once
training is complete, the weights are fixed during deployment [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. When there is a need to update the
model, retraining or fine-tuning it on newly accumulated data is a time-consuming, computationally
intensive process that is typically done offline rather than instantly. As a result, there is a delay in
adaptability: the model may remain outdated between retraining cycles, leading to errors or suboptimal
decisions during that time.
      </p>
      <p>While this approach is effective for static tasks, it severely limits adaptability in dynamic environments
where data distributions can shift. For example, a vision model trained on one set of lighting conditions
may fail when the lighting changes [19], a language model may not understand new slang or terminology
that emerged after its training data was collected, and a recommendation system may fail to catch a
sudden shift in user behaviour until it is retrained much later. These shortcomings highlight the need
for the implementation of online learning mechanisms or techniques for streaming data adaptation in
ANNs to enable real-time learning [20, 21].</p>
      <p>However, introducing online learning, i.e., updating weights with each new data sample or small
batch, is practically incompatible with the classical approach to ANNs. Using backpropagation, in this
case, may exacerbate issues such as catastrophic forgetting and lead to instability if the data stream has
not been carefully normalised or if learning rates are too high. Scalability issues also arise: performing
continuous gradient descent on streaming data means that the computational cost of training is never
“finished,” which is especially prohibitive for large models.</p>
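      <p>A toy experiment illustrates this fragility: a single linear unit trained online on one task and then on a conflicting one overwrites its earlier solution, a minimal instance of catastrophic forgetting (learning rate and data are assumed for illustration):</p>
      <preformat>
```python
import numpy as np

# Sketch of why naive per-sample updates forget: one linear unit trained
# online on task A (y = 2x), then on task B (y = -2x). After task B the
# unit's weight has been overwritten, so its error on task A is large again.

def sgd_online(w, stream, lr=0.1):
    for x, y in stream:
        w -= lr * (w * x - y) * x      # per-sample gradient of squared error
    return w

xs = np.linspace(-1, 1, 200)
task_a = [(x, 2 * x) for x in xs]
task_b = [(x, -2 * x) for x in xs]

w = sgd_online(0.0, task_a)
err_a_before = abs(w - 2.0)     # near zero after training on task A
w = sgd_online(w, task_b)
err_a_after = abs(w - 2.0)      # large again: task A has been forgotten
print(err_a_before, err_a_after)
```
      </preformat>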
    </sec>
    <sec id="sec-5">
      <title>5. Limited Parallelism and Architectural Differences Compared to the Biological Nervous System</title>
      <p>In a typical ANN implementation, synaptic weights are stored in off-chip or off-core memory, and
at each layer or operation, those weights must be fetched, applied to the data, and then the results
written back. This constant shuttling of data between the memory and the processor is inefficient
and becomes a dominant cost for large models [22]. As an IBM research report explains, “the limiting
factor isn’t that the processor is too slow, but that moving data back and forth between memory
and computing takes too long and uses too much energy”, and this is a fundamental limitation of
conventional architectures [22]. By contrast, the brain co-locates memory and processing: synapses
(which store connection strengths) are part of the neuron’s structure that also performs computation
(integration of inputs). Thus, information processing in brains doesn’t suffer from a large global memory
bandwidth bottleneck – neurons only communicate with other neurons they are connected to, and
there is no single bus shuttling all data around. This distributed, in-memory computing nature of the
brain is a major factor in its efficiency [22]. Neuromorphic engineering efforts are trying to emulate
this by designing chips where memory (weights) is physically embedded alongside computation units,
minimising data movement.</p>
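      <p>A rough calculation using commonly cited per-operation energy figures (Horowitz, ISSCC 2014; treated here as assumptions) shows why data movement, not arithmetic, dominates:</p>
      <preformat>
```python
# Back-of-envelope: energy of moving weights from off-chip DRAM vs. the
# arithmetic itself. Per-operation energies are ballpark literature figures
# and should be treated as assumptions, not measurements.

PJ_DRAM_READ_32B = 640.0   # read one 32-bit word from off-chip DRAM, pJ
PJ_MAC_FP32 = 4.6          # one 32-bit float multiply-accumulate, pJ

def layer_energy_pj(n_weights: int) -> tuple[float, float]:
    """Energy to fetch every weight once vs. to use each in one MAC."""
    move = n_weights * PJ_DRAM_READ_32B
    compute = n_weights * PJ_MAC_FP32
    return move, compute

move, compute = layer_energy_pj(4096 * 4096)
print(f"data movement / compute energy ratio: {move / compute:.0f}x")
```
      </preformat>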
    </sec>
    <sec id="sec-7">
      <title>6. Memristive Approaches and Solutions</title>
      <p>
        Conventional computing architectures suffer from high energy consumption due to the separation of
memory and processing units (the von Neumann bottleneck). In contrast, the human brain performs
computations directly within memory (at synapses) and achieves incredible energy efficiency: using
only about 20 W, it outperforms computers by several orders of magnitude [23, 24]. One possible
solution is the use of memristors, electronic components that change resistance depending on the
current passing through them. Memristors are an electronic analogue of synapses: they can operate
in spiking mode, and according to some estimates, the dynamics of memristors can approach the
biological dynamics of neurons [25]. This allows for energy efficiency similar to mammalian nervous
systems, with ultra-low power consumption (on the order of femtojoules per event) [24]. For example,
memristive synapses operating in a stable low-resistance mode have demonstrated energy consumption
up to ∼1,000,000 times lower in neural network tasks compared to GPU-based systems [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
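      <p>The qualitative behaviour of such a memristive synapse, a bounded conductance nudged up or down by voltage pulses, can be sketched as follows; the bounds and step size are illustrative assumptions, not a fitted device model:</p>
      <preformat>
```python
# Minimal sketch of a memristive synapse: conductance moves up or down with
# each voltage pulse and saturates at device limits. Parameter values are
# illustrative assumptions only.

G_MIN, G_MAX = 1e-6, 1e-4     # conductance bounds, siemens

def apply_pulse(g: float, v: float, dg: float = 5e-6) -> float:
    """Positive pulses potentiate (raise conductance), negative depress."""
    g = g + dg if v > 0 else g - dg
    return min(max(g, G_MIN), G_MAX)

g = G_MIN
for _ in range(5):
    g = apply_pulse(g, +1.0)   # five potentiating pulses
print(f"conductance after potentiation: {g:.1e} S")
```
      </preformat>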
      <sec id="sec-7-1">
        <title>6.1. On-chip Plasticity and Architectural Optimisations</title>
        <p>A key advantage of neuromorphic memristive hardware is its massive parallelism and high connectivity,
comparable to biological neural circuits. In the brain, each neuron connects to ∼10,000 others via
synapses [24], and neuromorphic memristor arrays can achieve similarly dense fan-in/fan-out.
High-density integration (down to nanometre scales) further enables network scales to reach millions of
devices [23]. The other promising direction is building neuromorphic systems of a brain-region scale;
for example, memristor-based chips with &gt;10<sup>5</sup> synapses on-chip have been demonstrated, and large
neuromorphic platforms now reach 10<sup>8</sup>–10<sup>9</sup> spiking neurons by tiling many cores [24]. Although
memristive hardware is still catching up to the 10<sup>6</sup>–10<sup>9</sup> neurons and 10<sup>10</sup> synapses of
mammalian regions like the hippocampus, its 3D integration and nanoscale connectivity could provide
substantial parallelism. By copying the main structures of the brain’s architecture of distributed, concurrent
processing, memristive neuromorphic systems can scale up neural networks while maintaining real-time
performance.</p>
        <p>
          Each memristor’s resistance can be modulated by local voltage and spike timing, naturally
implementing STDP in hardware [26, 27]. In a 2019 demonstration, hybrid CMOS/memristor spiking networks with
memristive synapses achieved fast one-shot learning of streaming data, highlighting the potential for
continuous, lifelong learning in hardware [27]. We consider online learning as crucial for autonomous
agents and robotics: a memristive SNN can adjust to new environments or changing goals during
operation via local synaptic updates (e.g. STDP or reward-modulated STDP) rather than requiring
full re-training as in traditional ANNs. Such local learning in memristor arrays has been validated in
simulations and prototypes, showing improved performance as the network self-refines with experience
[
          <xref ref-type="bibr" rid="ref10">28, 10</xref>
          ].
        </p>
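        <p>The pair-based STDP rule underlying such local updates can be written compactly; the amplitudes and time constant below are conventional illustrative values, not parameters of the cited devices:</p>
        <preformat>
```python
import math

# Pair-based STDP sketch: the weight change depends only on the relative
# timing of one pre- and one post-synaptic spike, so it can be computed
# locally at the synapse. Time constants and amplitudes are assumptions.

def stdp_dw(t_pre: float, t_post: float,
            a_plus: float = 0.01, a_minus: float = 0.012,
            tau: float = 20.0) -> float:
    """Potentiate when pre fires before post; depress otherwise (ms units)."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)      # causal pairing: LTP
    return -a_minus * math.exp(dt / tau)         # anti-causal pairing: LTD

print(stdp_dw(10.0, 15.0))   # pre before post: positive change (LTP)
print(stdp_dw(15.0, 10.0))   # post before pre: negative change (LTD)
```
        </preformat>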
        <p>
          To maximise memristive neuromorphic performance, researchers are optimising both memristive
device materials and network architectures. On the device side, engineering memristors with STDP
learning and multiple intermediate states (as opposed to abrupt digital switching) improves their ability
to represent synaptic weights smoothly and reliably [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Recent designs integrate memristor arrays
for synapses with spiking neuron circuits and even memristive axon-dendrite trees, reproducing the
compartmentalised processing of biological neurons [24]. Self-organising and self-learning memristive
nanowire networks could be used as a complex memristive substrate. The nanowire networks provide
a densely interconnected web of synapse-like memristive junctions, implementing hardware
reservoir-computing principles [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. These device and architectural optimisations aim to approach the scale and
resilience of brain microcircuits while preserving low-energy operation.
        </p>
      </sec>
      <sec id="sec-7-2">
        <title>6.2. Neuromorphic Robots with Memristive SNNs</title>
        <p>From the autonomous robotics perspective, the integration of memristive networks with SNNs paves the
way for robots with brain-like control, online adaptability, and flexibility. Recent work demonstrated
that a memristor-based “memristive nervous system” in a humanoid robot yields high energy efficiency
and bionic sensory processing akin to biological nervous systems [25, 29]. Memristive sensory neurons
can transduce analogue signals from sensors directly into electrical spikes [24], processing information
on-site with minimal computing cost. The possible result is neuromorphic robotics capable
of complex perception-SNN processing-action loops under strict power budgets that traditional robot
controllers cannot match [25]. Memristive SNN controllers have demonstrated real-time learning and
adaptation, showing promise for agile, autonomous robots with nervous-system-level efficiency [30].</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>7. Memory Optimisation in Neuromorphic Systems</title>
      <p>In this work, we propose a novel memory optimisation through a consolidation process (Fig. 1). Firstly,
the forward replay enhances the route as experienced, whereas reverse replay (often occurring at the
goal location) runs the sequence backwards [31]. These replay events are time-compressed (lasting
∼ 100 ms) and coincide with the hippocampus’s sharp-wave ripple oscillations (∼ 150–200 Hz). Such
bidirectional and flexible reactivation of memories is believed to strengthen the neural representation
of space, supporting long-term memory storage and informed planning.</p>
      <sec id="sec-8-1">
        <title>7.1. Memory Consolidation Stages (Bio-Inspired Model)</title>
        <p>Wakeful Encoding of Routes. During active exploration, dedicated neurons in the hippocampal
formation encode the animal’s trajectory (Fig. 1). Place cells (hippocampal pyramidal neurons) fire at
specific locations in an environment, effectively mapping out “place fields” along a route [31]. At the
same time, head-direction cells (found in connected regions like the entorhinal cortex and subiculum)
fire when the animal faces a particular direction. Together, these cells create a cognitive map of the
route during the wake phase. In a neuromorphic system, a similar mechanism can be implemented with
spiking memristive neurons that respond to location and orientation features, providing an internal
representation of paths a robot traverses. Each segment of a route would activate a unique combination
of “place-cell” and “head-direction” spiking units, encoding the spatial trajectory in memory.</p>
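        <p>Such an encoding can be sketched with Gaussian-tuned “place cells” along a one-dimensional route; the field centres and widths are illustrative assumptions:</p>
        <preformat>
```python
import numpy as np

# Toy "place cell" encoding of a 1-D route: each cell has a Gaussian tuning
# curve centred on its place field, and a position is encoded by whichever
# cell responds most strongly. Field centres and widths are assumptions.

centres = np.linspace(0.0, 6.0, 7)     # one field per route segment 0..6

def place_activity(pos: float, sigma: float = 0.5) -> np.ndarray:
    """Firing rates of all place cells at a given position on the route."""
    return np.exp(-((pos - centres) ** 2) / (2 * sigma ** 2))

route = [0.0, 1.2, 2.8, 4.1, 6.0]
code = [int(np.argmax(place_activity(p))) for p in route]
print(code)   # index of the most active place cell at each step
```
        </preformat>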
        <p>Sleep Memory Optimisation. The hardware memristive implementation could use randomly
organised nanoscale networks, which naturally replicate aspects of neural morphology: they form dense
“axon-dendritic” meshes with synapse-like junctions that can store analogue weights [32, 33, 30].
Mimicking hippocampal processes – using place-cell-like encodings, neuromodulatory reward signals, and
high-frequency reactivation (replay) – provides a framework for online learning with offline
consolidation [34, 35].</p>
        <p>During sleep bidirectional replay, memory optimisation is implemented through a gradient
descent-like neuronal approach. This process involves the high-frequency reactivation of previously encoded
sequences, particularly during sharp wave-ripple (SWR) events (∼200 Hz). During replay, neuronal
activity propagates from the final step (e.g., step 6) back through the sequence (5 → 4 → 3 → 2 → 1 → 0),
reinforcing key synaptic connections. This stochastic propagation occurs through a dynamically
organised synaptic landscape, allowing for the spontaneous exploration of alternative pathways. A
crucial aspect of this process is the selective strengthening of synapses that contribute to a more
eficient representation of the experience. For example, if an indirect synapse between neuron 1 and
neuron 5 (1 → 5) is activated but was not part of the original forward encoding, its reinforcement
suggests a reconfiguration of the memory trace. This reorganisation efectively optimises the stored
representation, potentially leading to more eficient recall or problem-solving upon future retrieval.
Bidirectional replay thus serves as a neural computational mechanism that refines stored experiences,
not just by consolidating past sequences but also by iteratively optimising connectivity to facilitate
improved future performance. The concept of simultaneous bidirectional replay suggests that when
a route is being replayed, it does not necessarily follow a unidirectional trajectory but can occur in
both forward and backwards directions at the same time. This is particularly evident in cases where
nodes at opposite ends of a path, such as 0 and 5, exhibit high-frequency oscillations simultaneously
(Fig. 1). This coactivation pattern implies that neural circuits involved in memory processing or
path optimisation might leverage bidirectional information flow to enhance learning eficiency. By
simultaneously replaying sequences in both directions, the system can more rapidly converge on an
optimal network connectivity structure. This reduces the number of iterative adjustments needed,
thereby accelerating the process of discovering the most eficient connections within the network. The
attached diagram, which represents a structured path from a starting point (0) to an endpoint (6) via
intermediate nodes, visually supports this notion by depicting the potential bidirectional flow of activity
across the network.</p>
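        <p>The core of this consolidation step, replaying a stored route from its end and potentiating each traversed connection, can be sketched as follows (the weight matrix, gain, and route are illustrative assumptions):</p>
        <preformat>
```python
import numpy as np

# Sketch of reverse replay consolidation: a stored route 0 -> ... -> 6 is
# reactivated from its end, and every synapse along the sequence is
# potentiated. Weight matrix, gain, and route are illustrative assumptions.

def reverse_replay(w: np.ndarray, seq: list, gain: float = 0.2) -> np.ndarray:
    """Replay the stored sequence from its end, potentiating each
    connection seq[i] -> seq[i+1] along the route."""
    w = w.copy()
    for i in range(len(seq) - 1, 0, -1):
        w[seq[i - 1], seq[i]] += gain
    return w

route = [0, 1, 2, 3, 4, 5, 6]
w = reverse_replay(np.zeros((7, 7)), route)
print(int(np.count_nonzero(w)))   # number of strengthened links
```
        </preformat>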
        <p>An artificial system with “sleeping” memory optimisation is highly relevant to neuromorphic robots
and AI: during active phases, it learns from the environment (forming new memories), and during rest
phases, it autonomously replays and strengthens those memories. By consolidating important
information and pruning or downscaling the rest, the system optimises its memory usage and improves recall
reliability. This brain-inspired two-stage learning (wake/sleep) could prevent catastrophic forgetting
in neuromorphic chips, enabling long-term memory storage even with continuous learning. Looking
ahead, the convergence of memristive hardware (for dense, low-power memory) with biologically
grounded memory algorithms may yield neuromorphic processors capable of human-like memory
consolidation – eficiently encoding experiences and consolidating them into lasting knowledge. Such
systems would represent a significant step toward brain-inspired cognition in practical autonomous
machines.</p>
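        <p>The wake/sleep alternation described above can be sketched as a two-phase loop in which new experiences are encoded during “wake” and all stored patterns are re-imprinted during “sleep”; the Hopfield-style outer-product storage below is an illustrative stand-in for the memristive substrate:</p>
        <preformat>
```python
import numpy as np

# Two-phase wake/sleep loop sketch: during "wake" the network learns the
# newest pattern; during "sleep" all stored patterns are replayed and
# re-imprinted, so older memories are refreshed instead of overwritten.
# Hopfield-style outer-product storage is an illustrative choice here.

def imprint(w: np.ndarray, pattern: np.ndarray, eta: float = 0.1) -> np.ndarray:
    """Hebbian outer-product storage of one pattern."""
    return w + eta * np.outer(pattern, pattern)

rng = np.random.default_rng(1)
patterns = [rng.choice([-1.0, 1.0], size=16) for _ in range(3)]

w = np.zeros((16, 16))
memory = []
for p in patterns:
    w = imprint(w, p)                 # wake: encode the newest experience
    memory.append(p)
    for old in memory:                # sleep: replay everything stored so far
        w = imprint(w, old, eta=0.02)

print(w.shape, bool(np.allclose(w, w.T)))
```
        </preformat>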
      </sec>
    </sec>
    <sec id="sec-9">
      <title>8. Conclusion</title>
      <p>The next generation of Spiking Neural Networks (SNNs) holds great promise for bridging the gap
between artificial intelligence and biological intelligence. Throughout this study, we have highlighted the
limitations of traditional Artificial Neural Networks (ANNs), including their high energy consumption,
reliance on biologically implausible learning algorithms, lack of real-time adaptability, and vulnerability
to catastrophic forgetting. By contrast, SNNs, particularly when integrated with neuromorphic hardware
and memristive technologies, offer a more efficient, scalable, and biologically inspired alternative.</p>
      <p>The adoption of memristive architectures allows for in-memory computing, reducing energy costs
and overcoming the von Neumann bottleneck. Furthermore, mechanisms such as synaptic plasticity,
neuromodulatory influences, and sleep-inspired memory consolidation strategies provide a pathway
for lifelong learning and adaptability in neuromorphic systems. The incorporation of bidirectional
hippocampal replay in artificial systems also paves the way for more robust memory optimisation,
enabling eficient long-term retention and dynamic reconfiguration of learned experiences.</p>
      <p>Future research should focus on refining these neuromorphic principles by integrating advanced
materials, optimising architectural designs, and further exploring biologically inspired mechanisms for
continual learning. By doing so, we can move closer to developing AI systems that match the efficiency,
adaptability, and cognitive resilience of biological brains, ushering in a new era of energy-efficient,
intelligent computation.</p>
    </sec>
    <sec id="sec-10">
      <title>9. Acknowledgments</title>
      <p>This study was partially funded by the FAIRGROUND project: Artificial and Bio-Inspired Networked
Intelligence for Constrained Autonomous Devices Fairground Bando A Cascata A Valere Sul Piano
Nazionale Ripresa E Resilienza (PNRR) Missione 4, Istruzione E Ricerca - Componente 2, Dalla Ricerca
All’Impresa - Linea Di Investimento 1.3, Finanziato Dall’Unione Europea NextGenerationEU, Progetto
Future Artificial Intelligence FAIR PE0000013 CUP (Master): J53C22003010006 CUP: J43C24000230007.
The author(s) acknowledge(s) the support of the APC central fund of the University of Messina.</p>
    </sec>
    <sec id="sec-11">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used GPT-4o for grammar and spelling
checking. After using these tool(s)/service(s), the author(s) reviewed and edited the content as needed and
take(s) full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G. I.</given-names>
            <surname>Parisi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kemker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Part</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kanan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wermter</surname>
          </string-name>
          ,
          <article-title>Continual Lifelong Learning with Neural Networks: A Review</article-title>
          ,
          <source>Neural Networks</source>
          <volume>113</volume>
          (
          <year>2019</year>
          )
          <fpage>54</fpage>
          -
          <lpage>71</lpage>
          . arXiv:1802.07569 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A. E.</given-names>
            <surname>Takesian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. K.</given-names>
            <surname>Hensch</surname>
          </string-name>
          ,
          <article-title>Balancing plasticity/stability across brain development</article-title>
          ,
          <source>Progress in Brain Research</source>
          <volume>207</volume>
          (
          <year>2013</year>
          )
          <fpage>3</fpage>
          -
          <lpage>34</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Supermicro</surname>
          </string-name>
          ,
          <article-title>Did you know training a single ai model can emit as much carbon as five cars in their lifetimes?</article-title>
          ,
          <year>2024</year>
          . URL: https://www.supermicro.com.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Cybernews</surname>
          </string-name>
          ,
          <article-title>The cost of training ai models is rising exponentially</article-title>
          ,
          <year>2025</year>
          . URL: https://www.cybernews.com.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Ldd: High-precision training of deep spiking neural network transformers guided by an artificial neural network</article-title>
          ,
          <source>Biomimetics</source>
          <volume>9</volume>
          (
          <year>2024</year>
          )
          <fpage>413</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Whittington</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bogacz</surname>
          </string-name>
          ,
          <article-title>An approximation of the error backpropagation algorithm in a predictive coding network with local hebbian synaptic plasticity</article-title>
          ,
          <source>Neural Computation</source>
          <volume>29</volume>
          (
          <year>2017</year>
          )
          <fpage>1229</fpage>
          -
          <lpage>1262</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fedorova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Jovišić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Vallverdù</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Battistoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jovičić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Medojević</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Toschev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Alshanskaia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Talanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Erokhin</surname>
          </string-name>
          ,
          <article-title>Advancing neural networks: Innovations and impacts on energy consumption</article-title>
          ,
          <source>Advanced Electronic Materials</source>
          <volume>10</volume>
          (
          <year>2024</year>
          )
          <fpage>2400258</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Aquino-Brítez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>García-Sánchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ortiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Aquino-Brítez</surname>
          </string-name>
          ,
          <article-title>Towards an energy consumption index for deep learning models: A comparative analysis of architectures, GPUs, and measurement tools</article-title>
          ,
          <source>Sensors</source>
          <volume>25</volume>
          (
          <year>2025</year>
          )
          <fpage>846</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>E.</given-names>
            <surname>Strubell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ganesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>McCallum</surname>
          </string-name>
          ,
          <article-title>Energy and policy considerations for modern deep learning research</article-title>
          ,
          <source>in: Proceedings of the AAAI conference on artificial intelligence</source>
          , volume
          <volume>34</volume>
          ,
          <year>2020</year>
          , pp.
          <fpage>13693</fpage>
          -
          <lpage>13696</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mikhaylov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pimashkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Pigareva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gerasimova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Gryaznov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shchanikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zuev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Talanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Lavrov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Demin</surname>
          </string-name>
          , et al.,
          <article-title>Neurohybrid memristive CMOS-integrated systems for biosensors and neuroprosthetics</article-title>
          ,
          <source>Frontiers in Neuroscience</source>
          <volume>14</volume>
          (
          <year>2020</year>
          )
          <fpage>358</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bizopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Koutsouris</surname>
          </string-name>
          ,
          <article-title>Sparsely activated networks</article-title>
          ,
          <source>IEEE Transactions on Neural Networks and Learning Systems</source>
          <volume>32</volume>
          (
          <year>2020</year>
          )
          <fpage>1304</fpage>
          -
          <lpage>1313</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <collab>Human Brain Project</collab>
          ,
          <article-title>Learning brain: make AI more energy-efficient</article-title>
          ,
          <year>2023</year>
          . URL: https://www.humanbrainproject.eu/en/follow-hbp/news/2023/09/04/learning-brain-make-ai-more-energy-efficient/, accessed: 2025-03-23.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <article-title>Reconsidering the energy efficiency of spiking neural networks</article-title>
          ,
          <source>arXiv preprint arXiv:2409.08290</source>
          (
          <year>2024</year>
          ). URL: https://arxiv.org/abs/2409.08290.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>D.</given-names>
            <surname>Balduzzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Vanchinathan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Buhmann</surname>
          </string-name>
          ,
          <article-title>Kickback cuts backprop's red-tape: Biologically plausible credit assignment in neural networks</article-title>
          ,
          <source>in: Proceedings of the AAAI conference on artificial intelligence</source>
          , volume
          <volume>29</volume>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tao</surname>
          </string-name>
          ,
          <article-title>Sheared backpropagation for fine-tuning foundation models</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>5883</fpage>
          -
          <lpage>5892</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>W.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Poggio</surname>
          </string-name>
          ,
          <article-title>Biologically-plausible learning algorithms can scale to large datasets</article-title>
          , arXiv preprint arXiv:1811.03567 (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Cooper</surname>
          </string-name>
          ,
          <article-title>Donald O. Hebb's synapse and learning rule: a history and commentary</article-title>
          ,
          <source>Neuroscience &amp; Biobehavioral Reviews</source>
          <volume>28</volume>
          (
          <year>2005</year>
          )
          <fpage>851</fpage>
          -
          <lpage>874</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Shibuya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Sato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kawakami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Inoue</surname>
          </string-name>
          ,
          <article-title>Efficient target propagation by deriving</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>