<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Application of Izhikevich-Based Spiking Neural Networks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dmitriy Klyushin</string-name>
          <email>dmytroklyushin@knu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksandr Maistrenko</string-name>
          <email>o.maistrenko@knu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ICST-2025: Information Control Systems &amp; Technologies</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>Volodymyrska St, 60, Kyiv, 01033</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Spiking Neural Networks offer a biologically plausible alternative to conventional artificial neural networks by leveraging temporal coding and event-driven computation. Among spiking neuron models, the Izhikevich model achieves a unique balance between computational efficiency and biological realism, enabling diverse neuronal firing patterns with minimal complexity. This study presents a comparative evaluation of five Izhikevich neuron types (Regular Spiking, Intrinsically Bursting, Chattering, Fast Spiking, and Low-Threshold Spiking) within a hybrid convolutional-spiking architecture for biomedical image classification. Using a dataset of buccal epithelium nuclei exhibiting normal and anomalous fractal morphologies, we assess each neuron type in terms of F1 score, precision, recall, spike efficiency, and convergence behavior, employing surrogate gradient learning for supervised training. The findings highlight the importance of neuron model selection in SNN-based diagnostic systems and demonstrate the potential of Izhikevich-based architectures for interpretable, biologically inspired medical AI.</p>
      </abstract>
      <kwd-group>
        <kwd>Spiking neural networks</kwd>
        <kwd>Izhikevich neuron</kwd>
        <kwd>machine learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        In recent years, Spiking Neural Networks (SNNs) have emerged as a promising paradigm in
computational neuroscience and neuromorphic engineering. Unlike conventional artificial neural
networks (ANNs), which process information in a continuous and static manner, SNNs rely on
discrete-time spikes and temporal coding, mimicking the sparse and asynchronous communication
patterns observed in biological neural systems [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. This temporal nature not only brings SNNs closer
to biological plausibility but also opens up opportunities for low-power, event-driven processing,
especially relevant for real-time and edge applications. Within the landscape of SNN models, the
Izhikevich neuron model stands out for its unique combination of biological richness and
computational efficiency. Capable of replicating diverse firing behaviors observed in cortical neurons
using just a pair of differential equations and a reset condition, it occupies a middle ground between
simple models like the Leaky Integrate-and-Fire (LIF) neuron and more complex conductance-based
models such as Hodgkin-Huxley [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ].
      </p>
      <p>
        While the original Izhikevich model was primarily proposed as a tool for simulating large-scale
cortical circuits, it has since been adapted and extended in various ways to better serve machine
learning tasks and neuromorphic applications [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. Different parameterizations of the Izhikevich
model, tailored to specific neuron types such as regular spiking, intrinsically bursting, fast spiking,
and chattering neurons, offer distinct computational properties that can influence learning
dynamics and performance when integrated into network architectures [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. However, there is a
lack of comprehensive empirical analysis comparing these variants in practical machine learning
contexts. Most existing studies either focus on one variant or apply the model to synthetic datasets,
limiting our understanding of its real-world applicability [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ].
      </p>
      <p>In this paper, we aim to bridge this gap by conducting a systematic evaluation of different
Izhikevich model variants in the context of a real-world biomedical classification task. Specifically,
we apply these models to the classification of interphase nuclei in buccal epithelium cells. This
dataset, characterized by morphological variability and subtle class distinctions, presents a suitable
challenge for assessing the discriminative capacity of biologically inspired models. We explore how
different dynamical behaviors of the Izhikevich neuron influence feature encoding, network activity,
and overall classification performance.</p>
      <p>To this end, we implement several SNN architectures, each built using a different variant of the
Izhikevich model. We compare their performance in terms of accuracy, spike efficiency, and
convergence behavior. The networks are trained using supervised learning with surrogate gradients.
Furthermore, we analyze the internal dynamics of these networks to identify which spiking patterns
and neuron types are most suitable for capturing the complexity of cellular morphology in
biomedical imaging. Through this investigation, we aim to not only benchmark the Izhikevich
variants but also provide insights into their practical use in biologically motivated AI systems dealing
with complex, real-life data.
</p>
    </sec>
    <sec id="sec-2b">
      <title>2. The Izhikevich neuron model and neuron type comparison</title>
      <p>
        The Izhikevich neuron model represents a powerful and versatile formalism for simulating
biologically plausible neural activity using relatively simple mathematics [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. It combines the
computational efficiency of integrate-and-fire models with the biological richness of
conductance-based models, such as Hodgkin-Huxley, while being orders of magnitude less computationally
expensive [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. The model is defined by two coupled first-order differential equations that govern
the evolution of the membrane potential v and the membrane recovery variable u, together with an
after-spike reset mechanism. The dynamics are governed by the following equations:
      </p>
      <p>v′ = 0.04v² + 5v + 140 − u + I (1)</p>
      <p>u′ = a(bv − u) (2)</p>
      <p>with a reset condition applied when the membrane potential exceeds a predefined threshold (usually
30 mV): if v ≥ 30 mV, then v ← c and u ← u + d.</p>
      <p>
        Here, v represents the membrane potential of the neuron, while u models the recovery variable,
responsible for activation of K+ channels and inactivation of Na+ channels. The four parameters a,
b, c, and d determine the dynamical regime of the neuron [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Parameter a affects the time scale of
the recovery variable, b controls the sensitivity of u to changes in v, c sets the reset value of the
membrane potential after a spike, and d determines how much the recovery variable is increased
during the spike reset [13]. By adjusting these parameters, the Izhikevich model can emulate a wide
variety of spiking and bursting patterns observed in real biological neurons (see Figure 1).
      </p>
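      <p>To make these dynamics concrete, the model can be integrated with a minimal forward-Euler sketch. The input current, integration step, and initial conditions below are illustrative assumptions, not the settings used in this study:</p>
      <preformat>
```python
# Minimal forward-Euler simulation of a single Izhikevich neuron (illustrative
# sketch). Parameters a, b, c, d follow the regular-spiking setting; I is an
# assumed constant input current (arbitrary units) and dt the step in ms.

def simulate_izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, dt=0.5, t_max=1000.0):
    """Return spike times (ms) of one neuron driven by constant current I."""
    v, u = c, b * c          # start at the reset potential
    spikes = []
    for step in range(int(t_max / dt)):
        # The two model equations: dv/dt = 0.04 v^2 + 5 v + 140 - u + I
        #                          du/dt = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * (a * (b * v - u))
        if v >= 30.0:        # after-spike reset: v is set to c, u increased by d
            spikes.append(step * dt)
            v = c
            u += d
    return spikes

spike_times = simulate_izhikevich()
print(f"{len(spike_times)} spikes in 1 s")
```
      </preformat>
      <p>With the regular-spiking parameters the inter-spike intervals grow over the run, reproducing the frequency adaptation described above.</p>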
      <p>In this study, we focus on several well-known parameterizations of the Izhikevich model, each
corresponding to a different biologically observed neuron type. These include regular spiking (RS),
intrinsically bursting (IB), chattering (CH), fast spiking (FS), and low-threshold spiking (LTS)
neurons. Each of these configurations has been shown to exhibit distinct temporal dynamics,
response to stimuli, and adaptive behavior, all of which can significantly influence how information
is processed and encoded in an SNN [14, 15]. The motivation for including this range of neuron types
in our comparison is to investigate how their differing electrophysiological characteristics translate
to performance differences in practical machine learning tasks, specifically the classification of
morphological features in biomedical images.</p>
      <p>The Regular Spiking (RS) model, with parameters a = 0.02, b = 0.2, c = −65, and d = 8, represents
the typical firing behavior of cortical pyramidal neurons (see Figure 2). These neurons fire a train of
spikes with gradually increasing inter-spike intervals when stimulated with a constant current,
modeling the frequency adaptation observed in real biological systems. Intrinsically Bursting (IB)
neurons (a = 0.02, b = 0.2, c = −55, d = 4) produce rhythmic bursts of spikes followed by quiescent
periods, a pattern observed in several cortical and hippocampal cells. This bursting behavior is
thought to play a key role in attention, memory encoding, and signal amplification [16, 17].</p>
      <p>The Chattering (CH) variant (a = 0.02, b = 0.2, c = −50, d = 2) fires at extremely high frequencies
in bursts, similar to neurons found in layers 2/3 of the neocortex (see Figure 2). Their dense and rapid
activity makes them suitable for encoding rapidly changing signals, although they may be
computationally more demanding due to their high spike counts. In contrast, the Fast Spiking (FS)
neuron (a = 0.1, b = 0.2, c = −65, d = 2) is designed to model inhibitory interneurons, which are
capable of sustaining high-frequency firing with little to no adaptation. These neurons are crucial
for precise timing and synchronization in biological networks and are often implicated in feedback
and feedforward inhibition mechanisms.</p>
      <p>Finally, the Low-Threshold Spiking (LTS) model (a = 0.02, b = 0.25, c = −65, d = 2) captures the
behavior of thalamic relay neurons and other types that are activated by relatively weak stimuli but
exhibit delayed firing [18]. This delay can contribute to more complex temporal coding schemes and
may be useful in contexts where temporal integration and gating are required [19].</p>
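      <p>As a hedged illustration, the five parameterizations listed above can be driven by the same constant input so their firing regimes become directly comparable. The current value, step size, and window length are assumptions for demonstration, not the settings of this study:</p>
      <preformat>
```python
# Illustrative sketch: the five Izhikevich parameterizations from the text,
# driven by one shared constant current so their firing rates can be compared.

NEURON_TYPES = {
    "RS":  dict(a=0.02, b=0.20, c=-65.0, d=8.0),  # regular spiking
    "IB":  dict(a=0.02, b=0.20, c=-55.0, d=4.0),  # intrinsically bursting
    "CH":  dict(a=0.02, b=0.20, c=-50.0, d=2.0),  # chattering
    "FS":  dict(a=0.10, b=0.20, c=-65.0, d=2.0),  # fast spiking
    "LTS": dict(a=0.02, b=0.25, c=-65.0, d=2.0),  # low-threshold spiking
}

def spike_count(a, b, c, d, I=10.0, dt=0.5, t_max=500.0):
    """Count spikes of one Izhikevich neuron over t_max ms (forward Euler)."""
    v, u, count = c, b * c, 0
    for _ in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * (a * (b * v - u))
        if v >= 30.0:       # reset: v back to c, u incremented by d
            count += 1
            v = c
            u += d
    return count

for name, params in NEURON_TYPES.items():
    print(name, spike_count(**params))
```
      </preformat>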
      <p>To assess the computational implications of these different models, we constructed different
variants of a spiking neural network architecture, each using one of the aforementioned Izhikevich
neuron types in its hidden layers. We trained these networks on a real-life biomedical dataset
involving classification of interphase nuclei of buccal epithelium cells, a domain relevant for
applications in genotoxicology, cytopathology, and early disease screening. The dataset consists of
images representing different nuclear morphologies, including normal, micronucleated, and other
irregular types [20, 21]. The task requires the model to distinguish between subtle structural
differences, often under conditions of natural variability and noise, making it well-suited for testing
the representational power of SNNs.</p>
      <p>The choice of neuron model has direct consequences on how the network encodes features from
input data. For instance, bursting neurons may provide stronger responses to salient features,
effectively amplifying key information, whereas regular or fast-spiking neurons may offer more
precise and stable encoding. These functional differences impact not only classification accuracy but
also energy efficiency (due to different spike rates), convergence speed during training, and
robustness to input perturbations [22, 23]. To quantify these effects, we evaluated each model across
a comprehensive set of metrics: classification accuracy, spike count per inference, training loss
convergence, inference latency, and response to added noise or image distortions.</p>
      <p>Moreover, we analyzed the internal behavior of the networks using neuron-level statistics, such
as inter-spike intervals, average firing rates, and synchrony, to gain deeper insight into how each
neuron type contributes to overall network function. Our results reveal that certain Izhikevich
variants perform significantly better than others for this particular biomedical task, and that the
interplay between neuron dynamics and network architecture is critical for optimizing SNN
performance.</p>
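      <p>The neuron-level statistics mentioned above can be computed directly from recorded spike times; the helper functions below are a sketch (the example spike train is hypothetical):</p>
      <preformat>
```python
# Illustrative helpers for two of the neuron-level statistics discussed in the
# text: inter-spike intervals (ISIs) and average firing rate, computed from a
# list of spike times in milliseconds.

def inter_spike_intervals(spike_times_ms):
    """ISIs (ms) between consecutive spikes of one neuron."""
    return [t2 - t1 for t1, t2 in zip(spike_times_ms, spike_times_ms[1:])]

def mean_firing_rate(spike_times_ms, duration_ms):
    """Average firing rate in Hz over the simulation window."""
    return 1000.0 * len(spike_times_ms) / duration_ms

spikes = [12.0, 30.0, 55.0, 85.0, 120.0]   # hypothetical spike train
print(inter_spike_intervals(spikes))        # [18.0, 25.0, 30.0, 35.0]
print(mean_firing_rate(spikes, 1000.0))     # 5.0 (Hz)
```
      </preformat>
      <p>Increasing ISIs, as in this example, are the signature of the frequency adaptation exhibited by RS neurons.</p>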
    </sec>
    <sec id="sec-3">
      <title>3. Experimental fractal analysis of buccal kernels</title>
      <p>The primary objective is to explore the influence of different Izhikevich neuron types on
classification performance when applied to real-life biomedical data characterized by subtle
structural variability, specifically nuclei exhibiting fractal properties. This analysis is crucial for
assessing the practical applicability of biologically inspired computation in complex visual domains
and for guiding the selection of spiking dynamics for sensitive medical tasks.</p>
      <p>In the 1960s, some of the earliest reports on malignant transformations emerged, focusing on the
study of X-chromatin content in somatic cells. These studies highlighted its instability and its
association with various functional alterations in the body as well as general somatic cell pathology.
Significant changes in the X-chromatin content of buccal epithelium and peripheral blood
neutrophils were observed in individuals with tumors. Additionally, it was demonstrated that
fluctuations in the number of cells containing X chromosomes are linked to defects in the functional
state of the heterozygous X chromosome.</p>
      <p>Research highlighting alterations in buccal epithelial cells in tumor patients has attracted
significant attention. In the 1960s, H. Nieburgs and colleagues [24] reported a distinct redistribution
of chromatin mass in somatic cells in 77% of cancer patients, labeling these changes as tumorigenic.
These alterations included an enlargement of epithelial cell nuclei and an increase in the size of
"restricted" chromatin regions, which were surrounded by lighter areas. Similar transformations
were observed in the cells of organs such as the liver and kidneys.</p>
      <p>Among breast cancer patients, there was a noted increase in DNA content and the size of
interphase nuclei in buccal epithelium. However, some researchers did not find a significant
difference in DNA content between patients and healthy men, as demonstrated in a study measuring
the DNA content of buccal epithelial cells in men with bronchial epithelioma using cellular
spectrophotometry.</p>
      <p>In study [25], researchers examined three distinct groups: a control group of 29 individuals, 68
patients diagnosed with stage II breast cancer, and 33 patients with fibroadenomatosis. All diagnoses
were confirmed through histological analysis. The dataset used for morphological assessment
included 20,256 images of interphase nuclei derived from buccal epithelium samples, with 6,752
nuclei imaged in three variations: unfiltered, filtered with yellow, and filtered with purple.</p>
      <p>Samples were collected as epithelial cell smears from the middle layer of the spinous stratum in
the oral mucosa, averaging 52 cells per smear. DNA-fuchsin concentration within the nuclei was
calculated by multiplying the optical density by the area of the nucleus. Initially, chromatin
distribution data were transformed into 128×128-pixel matrices for analysis [25]. The resulting data
formed a three-channel (RGB) input containing fractal kernel
sizes, although the number of elements varied significantly between samples. To normalize the data
prior to training the neural network, quantiles were calculated based on the sample with the smallest
number of elements (see Figure 3).</p>
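      <p>A sketch of the normalization step described above follows: samples of unequal size are reduced to a common length by taking evenly spaced quantiles, with the target length set by the smallest sample. The exact procedure used in the study may differ; this nearest-rank scheme is an assumption:</p>
      <preformat>
```python
# Hedged sketch: reduce each sample (a list of numbers) to n quantiles, where
# n is the size of the smallest sample, so all samples become comparable.

def quantile_normalize(samples):
    """Return one length-n quantile vector per input sample."""
    n = min(len(s) for s in samples)
    out = []
    for s in samples:
        srt = sorted(s)
        # q-th of n evenly spaced quantiles, via nearest rank on the sorted sample
        idx = [round(q * (len(srt) - 1) / (n - 1)) for q in range(n)]
        out.append([srt[i] for i in idx])
    return out

samples = [[3, 1, 2], [10, 40, 20, 30, 50], [7, 5, 6, 8]]
print(quantile_normalize(samples))
```
      </preformat>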
      <p>Further input sets were created by transforming the original RGB images. The experimental
dataset included the full RGB input, as well as individual red (R), green (G), and blue (B) channels.
A mean-channel representation was formed by averaging the three RGB components. Only the green
channel (G) was retained for the final results, as it provided the greatest information gain.</p>
      <p>A hybrid neural architecture combining conventional deep learning components for spatial
feature extraction with a spiking decision layer for classification was implemented (see Figure 4).
The model is divided into three main stages: a convolutional front-end, a spike encoding layer, and
a spiking classifier composed of Izhikevich neurons.</p>
      <p>The convolutional front-end is responsible for learning and extracting high-level representations
of the input data. The network begins with two convolutional blocks. The first block consists of a 1D
convolutional layer with 32 filters of size 20×1, followed by batch normalization, ReLU activation,
and max pooling. The second block mirrors the first but with 64 filters. A dropout layer with a
dropout rate of 0.25 is applied after each block to prevent overfitting. The output of the second max
pooling operation is flattened and passed through a fully connected dense layer with 256 units and
ReLU activation.</p>
      <p>The spike encoding layer converts the dense feature vector into a temporal spike train. We used
rate-based encoding, where the magnitude of each feature determines the firing probability of a
corresponding input neuron over a fixed simulation window. Specifically, the 256 features are
normalized and encoded over a 100 ms simulation interval. Each scalar feature value is converted
into a Poisson-distributed spike train, where higher feature intensities yield higher spike rates. This
layer serves as the bridge between the analog convolutional backbone and the discrete-time spiking
decision layer.</p>
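      <p>The rate-based encoding described above can be sketched as follows. The 1 ms bin width and the direct mapping of a normalized feature value to per-bin firing probability are assumptions for illustration:</p>
      <preformat>
```python
import random

# Sketch of rate-based spike encoding: each normalized feature value sets the
# per-bin firing probability of one input neuron over a 100 ms window, giving
# a Bernoulli approximation of a Poisson spike train.

def poisson_encode(features, window_ms=100, seed=0):
    """features: values in [0, 1]. Returns one 0/1 spike train per feature;
    higher feature values yield proportionally more spikes."""
    rng = random.Random(seed)
    trains = []
    for f in features:
        p = min(max(f, 0.0), 1.0)   # clamp to a valid probability
        trains.append([1 if p > rng.random() else 0 for _ in range(window_ms)])
    return trains

features = [0.05, 0.5, 0.95]            # hypothetical normalized features
trains = poisson_encode(features)
print([sum(t) for t in trains])          # spike counts rise with intensity
```
      </preformat>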
      <p>The spiking classifier consists of a fully connected layer composed of 20 Izhikevich neurons, each
receiving inputs from all 256 spike trains. Each neuron type (RS, IB, CH, FS, LTS) is tested in isolation
in its own model instance, meaning only one neuron type is used at a time in the final layer. The
membrane potential and recovery variables of each spiking neuron evolve according to the
Izhikevich equations, and neuron outputs are binary spikes triggered whenever the membrane
potential exceeds the threshold of 30 mV. The spike trains produced by the classifier layer are
integrated over time, and the final decision is made based on which output neuron group
accumulates the most spikes. For binary classification, neurons are partitioned into two groups of 10
units each, one group representing the "positive" class and the other the "negative" class. The class
corresponding to the group with the higher total spike count at the end of the simulation is selected
as the prediction.</p>
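      <p>The spike-count voting rule described above reduces to a simple comparison; the per-neuron counts in this sketch are hypothetical:</p>
      <preformat>
```python
# Sketch of the decision rule from the text: 20 output neurons are split into
# two groups of 10, and the class whose group accumulates more spikes over the
# simulation window wins.

def classify_by_spike_count(spike_counts, group_size=10):
    """spike_counts: per-neuron spike totals for the output layer.
    Returns 1 if the "positive" group wins, else 0."""
    positive = sum(spike_counts[:group_size])
    negative = sum(spike_counts[group_size:])
    return 1 if positive > negative else 0

counts = [3, 5, 2, 4, 6, 1, 0, 2, 3, 4,   # positive group (hypothetical)
          1, 0, 2, 1, 0, 3, 1, 0, 2, 1]   # negative group (hypothetical)
print(classify_by_spike_count(counts))
```
      </preformat>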
      <p>This architecture was trained using surrogate gradient learning to backpropagate through the
spiking dynamics.</p>
      <p>Training spiking neural networks using gradient-based methods presents a fundamental
challenge: the spiking mechanism, typically modeled as a non-differentiable thresholding function
(e.g., the Heaviside step function), prevents the direct application of backpropagation. This
non-differentiability obstructs the computation of gradients, which are essential for optimizing weights
in supervised learning tasks.</p>
      <p>To address this, surrogate gradient learning (SGL) has emerged as a powerful technique. The key
idea is to replace the true gradient of the spiking function with a continuous, differentiable surrogate
during the backward pass. During the forward pass, the original spiking non-linearity is retained to
preserve biologically faithful behavior. Common surrogate functions include the sigmoid, fast
sigmoid, piecewise linear, and arctangent functions, each offering a smooth approximation that
facilitates gradient flow.</p>
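      <p>One such forward/backward pair is sketched below using the fast-sigmoid surrogate named above; the slope constant k is an assumed hyperparameter, and this is not necessarily the surrogate used in this study:</p>
      <preformat>
```python
# Illustrative surrogate-gradient pair: the forward pass keeps the hard
# Heaviside spike function, while the backward pass substitutes the smooth
# derivative of the fast sigmoid x / (1 + k|x|).

def heaviside(x):
    """Forward pass: hard threshold (1 if the neuron spikes, else 0)."""
    return 1.0 if x >= 0.0 else 0.0

def fast_sigmoid_grad(x, k=10.0):
    """Backward pass: surrogate derivative, used in place of the Heaviside's
    zero-almost-everywhere gradient."""
    return 1.0 / (1.0 + k * abs(x)) ** 2

# The surrogate peaks at the threshold and decays away from it, letting
# gradients flow through neurons that were close to firing.
for x in (-1.0, -0.1, 0.0, 0.1, 1.0):
    print(x, heaviside(x), round(fast_sigmoid_grad(x), 4))
```
      </preformat>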
      <p>The use of surrogate gradients was crucial in enabling the comparative evaluation of various
Izhikevich neuron types. Without such a method, direct training of a differentiable Izhikevich SNN
would require reinforcement learning or local learning rules like STDP, which are less efficient or
harder to scale.</p>
      <p>Recent research has validated the effectiveness of surrogate gradients across diverse tasks,
including image classification, speech recognition, and event-based vision. In our experiments, we
found that combining surrogate gradient learning with the biologically expressive Izhikevich model
not only allowed training convergence, but also highlighted how different neuron types responded
to gradient updates.</p>
      <p>Neuron classes with richer dynamics (e.g., bursting neurons) showed greater representational
plasticity, suggesting a link between neuronal phenotype and learning efficiency under
surrogate-based optimization.</p>
      <p>In summary, surrogate gradient learning offers a practical and biologically plausible solution to
the gradient flow problem in SNNs. Its integration with Izhikevich neurons enables the training of
complex, dynamic spiking architectures while retaining the computational expressivity and realism
that make these models attractive for neuromorphic and biomedical applications.</p>
      <p>The normalized cross-entropy loss computed on the spike counts at the output layer was utilized,
and training was conducted with the Adam optimizer (learning rate 0.001) over 50 epochs. Each
experiment was repeated five times with different random seeds to obtain stable metrics.</p>
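      <p>A minimal sketch of this loss follows, assuming "normalized" means a softmax over the per-class spike counts; the study's exact normalization may differ, and the example counts are hypothetical:</p>
      <preformat>
```python
import math

# Sketch of a cross-entropy loss on output spike counts: counts are turned
# into a probability distribution (numerically stable softmax) and scored
# against the true class index.

def spike_count_cross_entropy(counts, target):
    """counts: spikes accumulated per class group; target: true class index."""
    exps = [math.exp(c - max(counts)) for c in counts]   # stable softmax
    probs = [e / sum(exps) for e in exps]
    return -math.log(probs[target])

good = spike_count_cross_entropy([30.0, 11.0], target=0)  # correct class dominates
bad = spike_count_cross_entropy([11.0, 30.0], target=0)   # wrong class dominates
print(round(good, 6), round(bad, 6))
```
      </preformat>
      <p>The loss is near zero when the correct group fires most and grows when the wrong group dominates, which is the signal backpropagated through the surrogate gradients.</p>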
      <p>To measure classification performance, we computed the F1 score, precision, and recall,
metrics particularly suitable for imbalanced datasets, where one of the classes may be underrepresented.
These metrics reflect different aspects of classifier behavior: precision emphasizes the ability to avoid
false positives, recall indicates the ability to detect all positive instances, and F1 provides a balance
between them.</p>
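      <p>For reference, the three metrics follow directly from the confusion-matrix counts; the counts in this sketch are hypothetical:</p>
      <preformat>
```python
# Precision, recall, and F1 computed from true positives (tp), false
# positives (fp), and false negatives (fn).

def precision_recall_f1(tp, fp, fn):
    """Return (precision, recall, F1) for one class of a binary classifier."""
    precision = tp / (tp + fp)        # share of positive predictions that are right
    recall = tp / (tp + fn)           # share of actual positives that are found
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=10)
print(round(p, 3), round(r, 3), round(f1, 3))   # 0.8 0.889 0.842
```
      </preformat>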
      <p>The results of the experiments are presented in the table below.</p>
      <p>From these results, we observe that Intrinsically Bursting (IB) neurons consistently outperform
the other neuron types across all three performance metrics. The burst-firing behavior of IB neurons
plays a crucial role in enhancing the network's ability to capture essential features in the encoded
spike trains. This is particularly important when analyzing datasets with subtle and intricate
fractal-like structures, such as those observed in anomalous nuclei. The burst firing of IB neurons may help
highlight key patterns, effectively amplifying important features that would otherwise be difficult to
detect. Furthermore, this type of firing behavior could serve as a kind of attention mechanism,
prolonging neuron activation in response to persistent stimuli. This prolonged response likely aids
in classifying difficult or ambiguous cases, improving the separation between classes and facilitating
more accurate classification results.</p>
      <p>Chattering (CH) neurons, while exhibiting a different firing pattern, also demonstrated strong
performance, especially in terms of precision. This suggests that CH neurons are particularly
effective at minimizing false positives, accurately detecting true positive instances with minimal
error. Their high-frequency bursting activity acts as a redundancy mechanism, essentially providing
a safeguard against noise or ambiguity in the input data. This redundancy may help the network
become more confident in its predictions, even when faced with uncertain or noisy data, further
improving the overall performance of the network. The combination of their precision and
robustness makes CH neurons a strong contender in scenarios where reliability and accuracy are
paramount.</p>
      <p>Regular Spiking (RS) neurons, while not exhibiting the specialized burst-firing behavior of IB or
CH neurons, offered a balanced trade-off between precision and recall. This made them a reliable
choice for baseline classification tasks. RS neurons are characterized by their ability to adapt their
firing rate over time, which helps them to regulate their activity and avoid overactivation. This
adaptability allows RS neurons to function effectively in a range of scenarios, maintaining stable
performance even when presented with complex input data. While they may not excel in any one
specific area, their versatility ensures that they can handle a variety of classification tasks with
consistent results, making them suitable for a broad range of applications.</p>
      <p>In contrast, Fast Spiking (FS) and Low-Threshold Spiking (LTS) neurons showed relatively poor
performance when compared to the other neuron types. FS neurons are known for their rapid,
non-adaptive firing, which, while allowing for quick responses, can lead to premature saturation in spike
count. This saturation reduces their ability to capture the nuanced temporal features necessary for
accurate classification, particularly in complex or anomalous data. As a result, FS neurons often fail
to maintain the level of detail required for precise classification, making them less effective in certain
applications.</p>
      <p>LTS neurons, on the other hand, exhibit a delayed but low-threshold response, which can be
beneficial in some contexts but detrimental in others. The lower recall observed with LTS neurons
indicates that they often fail to identify anomalous cases, potentially missing crucial instances of
interest. Their conservative activation behavior, while effective at avoiding false positives, leads to
underperformance when sensitivity to rare or anomalous events is required. This makes LTS neurons
less suitable for tasks that demand high recall or the detection of subtle, infrequent patterns.</p>
      <p>Additionally, the internal activity of the networks was analyzed by examining the spike rasters
and interspike interval distributions across different neuron types. Networks incorporating IB and
CH neurons exhibited dense and highly synchronized spiking patterns, particularly in response to
pathological nuclei. This dense spiking reflects their heightened sensitivity to unusual or complex
patterns in the data, contributing to their superior performance in tasks involving anomalous or
fractal-like features. In contrast, RS and FS neurons displayed more sparse and regular spike patterns,
suggesting a less responsive or adaptable network behavior. LTS neurons, however, often failed to
exhibit meaningful activity, especially when faced with challenging examples. This lack of activation
further underscores their limitations, particularly in tasks requiring high sensitivity and
responsiveness.</p>
      <p>Overall, this experiment shows that biologically inspired classification using Izhikevich neurons
can be both effective and interpretable in the context of biomedical image analysis. The results
confirm that neuron type selection has a significant impact not only on accuracy but on how the
network reacts to ambiguous or weakly expressed patterns. These findings provide concrete
guidance for selecting appropriate neuron dynamics in the design of SNN-based diagnostic tools and
pave the way for further applications in early disease detection, cytology, and digital pathology.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>This study has explored the application of Izhikevich-based spiking neural networks (SNNs) to the
domain of biomedical image classification, with a focus on distinguishing morphological patterns in
interphase nuclei of the buccal epithelium. The unique aspect of this work lies in its comparison of
various Izhikevich neuron types within a unified hybrid architecture that combines convolutional
feature extraction with biologically inspired spike-based classification.</p>
      <p>Through the adapted implementation of a convolutional-spiking model based on the framework
proposed in study [26], we evaluated five distinct Izhikevich neuron classes, Regular Spiking (RS),
Intrinsically Bursting (IB), Chattering (CH), Fast Spiking (FS), and Low-Threshold Spiking (LTS), on
a binary classification task. The data consisted of images of buccal cell nuclei exhibiting normal or
anomalous fractal features, a complex and noisy dataset representative of real-world medical
diagnostics.</p>
      <p>Our results demonstrate that the type of spiking neuron used in the final classification layer
significantly influences model performance. Intrinsically Bursting neurons consistently
outperformed other types in terms of F1 score, precision, and recall. Their burst-generating dynamics
appear particularly well-suited for amplifying weak or spatially diffuse signals common in
biomedical imagery. Chattering neurons also performed well, leveraging their high-frequency output
to encode subtle variations. In contrast, Fast Spiking and Low-Threshold Spiking neurons were less
effective, often showing reduced recall, which may limit their usefulness in sensitive medical
classification tasks.</p>
      <p>The findings suggest that incorporating biologically plausible neuron models into machine
learning pipelines can offer both performance and interpretability benefits. The spiking layer not
only emulates natural computation more closely than traditional dense classifiers but also allows for
temporal dynamics that could be further exploited in future time-series or video-based biomedical
tasks.</p>
      <p>Moreover, this research highlights the importance of neuron model selection when designing
SNN-based systems. While much of the literature treats neuron dynamics as a low-level
implementation detail, our results indicate that the computational phenotype of spiking neurons
plays a critical role in system-level behavior, especially when subtle distinctions in visual input must
be captured and amplified for accurate decision-making.</p>
      <p>In future work, this methodology could be extended to multi-class classification of other
cytological structures, integration with neuromorphic hardware for low-power medical edge devices,
and exploration of learning rules beyond surrogate gradient descent. Furthermore, coupling spiking
networks with explainable AI techniques may offer new opportunities for interpreting the
physiological relevance of neuron spiking patterns in the context of cellular morphology and
pathology.</p>
      <p>In summary, this study provides empirical evidence that Izhikevich-based SNNs, when carefully
configured, are viable tools for biomedical image classification. By aligning computational
architecture with the dynamics of biological neurons, such systems hold promise for more adaptive,
robust, and interpretable solutions in medical AI.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>The authors would like to express their sincere gratitude to K. Golubeva and N. Borodai for generously
providing the data used for training the models in this study.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[13] E. M. Izhikevich, Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting, MIT Press, 2007. doi:10.7551/mitpress/2526.001.0001.</p>
      <p>[14] E. M. Izhikevich, G. M. Edelman, Large-scale model of mammalian thalamocortical systems, Proc. Natl. Acad. Sci. U.S.A. 105 9 (2008) 3593–3598. doi:10.1073/pnas.0712231105.</p>
      <p>[15] S. R. Kheradpisheh, M. Ganjtabesh, T. Masquelier, Bio-inspired unsupervised learning of visual features through spike timing dependent plasticity, PLoS Comput. Biol. 12 8 (2016). doi:10.48550/arXiv.1504.03871.</p>
      <p>[16] F. Chollet, Xception: Deep learning with depthwise separable convolutions (2017). doi:10.48550/arXiv.1610.02357.</p>
      <p>[17] G. Huang, Z. Liu, L. Van der Maaten, K. Weinberger, Densely connected convolutional networks (2018). doi:10.48550/arXiv.1608.06993.</p>
      <p>[18] J. H. Wijekoon, P. Dudek, Compact silicon neuron circuit with spiking and bursting behaviour, Neural Networks 21 2–3 (2008) 524–534. doi:10.1016/j.neunet.2007.12.037.</p>
      <p>[19] A. van Schaik, C. Jin, A. McEwan, T. Hamilton, A log-domain implementation of the Izhikevich neuron model, Proc. IEEE ISCAS (2010) 4253–4256. doi:10.1109/ISCAS.2010.5537564.</p>
      <p>[20] V. Rangan, A. Ghosh, V. Aparin, G. Cauwenberghs, A subthreshold analog VLSI implementation of the Izhikevich simple neuron model, in: Proc. IEEE EMBC, 2010, pp. 725–728. doi:10.1109/IEMBS.2010.5627392.</p>
      <p>[21] M. Gill, Even simpler real-time model of neuron, BioNanoScience 10 4 (2020) 416–419. doi:10.1007/s12668-020-00721-5.</p>
      <p>[22] N. Brunel, V. Hakim, M. J. E. Richardson, Single neuron dynamics and computation, Curr. Opin. Neurobiol. 25 (2014) 161–168. doi:10.1016/j.conb.2014.01.005.</p>
      <p>[23] W. Gerstner, W. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity, Cambridge Univ. Press, 2002. doi:10.1017/CBO9780511815706.</p>
      <p>[24] H. Nieburgs, Recent progress in the interpretation of malignancy associated changes (MAC), Acta Cytologica 12 (1968) 445–453. doi:10.1155/238921.</p>
      <p>[25] D. Klyushin, K. Golubeva, N. Boroday, D. Shervarly, Breast cancer diagnosis using machine learning and fractal analysis of malignancy-associated changes in buccal epithelium, in: Artificial Intelligence, Machine Learning, and Data Science Technologies: Future Impact and Well-Being for Society 5.0 (2021) 1–18. doi:10.1201/9781003153405-1.</p>
      <p>[26] A. Luna-Álvarez, D. Mújica-Vargas, M. Mejía-Lavalle, Convolutional model with classification through Izhikevich neuron, Research in Computing Science 148 10 (2019) 65–80. doi:10.13053/rcs-148-10-6.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Augustin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ladenbauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Baumann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Obermayer</surname>
          </string-name>
          ,
          <article-title>Low‑dimensional spike rate models derived from networks of adaptive integrate‑and‑fire neurons (</article-title>
          <year>2017</year>
          ). doi:
          <volume>10</volume>
          .1371/journal.pcbi.
          <volume>1005545</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>V.</given-names>
            <surname>Brette</surname>
          </string-name>
          , W. Gerstner,
          <article-title>Adaptive exponential integrate‑and‑fire model as an effective description of neuronal activity</article-title>
          ,
          <source>J. Neurophysiol. 94 5</source>
          (
          <year>2005</year>
          )
          <fpage>3637</fpage>
          -
          <lpage>3642</lpage>
          . doi:
          <volume>10</volume>
          .1152/jn.00686.
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pospischil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Piwkowska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Bal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Destexhe</surname>
          </string-name>
          ,
          <article-title>Comparison of different neuron models to conductance‑based PSTHs obtained in cortical pyramidal cells using dynamic‑clamp in vitro</article-title>
          ,
          <source>Biol. Cybern. 105 2</source>
          (
          <year>2011</year>
          )
          <fpage>167</fpage>
          -
          <lpage>180</lpage>
          . doi:
          <volume>10</volume>
          .1007/s00422-011-0458-2.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Richert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nageswaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Dutt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Krichmar</surname>
          </string-name>
          ,
          <article-title>An efficient simulation environment for modeling large‑scale cortical processing</article-title>
          ,
          <source>Front. Neuroinform. 5</source>
          <volume>19</volume>
          (
          <year>2011</year>
          ). doi:
          <volume>10</volume>
          .3389/fninf.
          <year>2011</year>
          .
          <volume>00019</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.</given-names>
            <surname>Shang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Daly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>McGrath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Barker</surname>
          </string-name>
          ,
          <article-title>Neural network based classification of cell images via estimation of fractal dimensions</article-title>
          ,
          <source>Biomed. Signal Process. Control 5</source>
          ,
          <issue>4</issue>
          (
          <year>2010</year>
          )
          <fpage>304</fpage>
          313. doi:
          <volume>10</volume>
          .1007/978-1-
          <fpage>4471</fpage>
          -0513-8_
          <fpage>15</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Andabili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nazari</surname>
          </string-name>
          , T. Moosazadeh,
          <article-title>Chaotic dynamics analysis and digital hardware design of the Izhikevich neuron model</article-title>
          ,
          <source>Scientific Reports 15</source>
          <volume>1</volume>
          (
          <issue>2025</issue>
          )
          <article-title>1 19</article-title>
          . doi:
          <volume>10</volume>
          .1038/s41598- 025-01876-5
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Moshruba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Poursiami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Parsa</surname>
          </string-name>
          ,
          <article-title>Izhikevich-Inspired Temporal Dynamics for Enhancing Privacy, Efficiency, and Transferability in Spiking Neural Networks (</article-title>
          <year>2025</year>
          ).
          <source>doi:10.48550/arXiv.2505.04034</source>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Enríquez‑Gaytán</surname>
          </string-name>
          ,
          <article-title>Spiking neural network approaches PCA with metaheuristics</article-title>
          ,
          <source>Neural Comput. Appl. 30</source>
          <volume>6</volume>
          (
          <year>2018</year>
          )
          <fpage>1869</fpage>
          -
          <lpage>1880</lpage>
          . doi:
          <volume>10</volume>
          .1049/el.
          <year>2020</year>
          .
          <volume>0283</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Taherkhani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Belatreche</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cosma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. P.</given-names>
            <surname>Maguire</surname>
          </string-name>
          ,
          <article-title>A review of learning in biologically plausible spiking neural networks</article-title>
          ,
          <source>Neural Networks</source>
          <volume>123</volume>
          (
          <year>2020</year>
          )
          <fpage>259</fpage>
          282. doi:
          <volume>10</volume>
          .1016/j.neunet.
          <year>2019</year>
          .
          <volume>09</volume>
          .036.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Dang</surname>
          </string-name>
          ,
          <article-title>Supervised learning in spiking neural networks: a review of algorithms and evaluations</article-title>
          ,
          <source>Neural Networks</source>
          <volume>123</volume>
          (
          <year>2020</year>
          )
          <fpage>199</fpage>
          216. doi:
          <volume>10</volume>
          .1016/j.neunet.
          <year>2020</year>
          .
          <volume>02</volume>
          .011.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Izhikevich</surname>
          </string-name>
          ,
          <article-title>Simple model of spiking neurons</article-title>
          ,
          <source>in: IEEE Trans. Neural Netw</source>
          ,
          <volume>14</volume>
          ,
          <issue>6</issue>
          ,
          <year>2003</year>
          , pp.
          <fpage>1569</fpage>
          <lpage>1572</lpage>
          . doi:
          <volume>10</volume>
          .1109/TNN.
          <year>2003</year>
          .
          <volume>820440</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Izhikevich</surname>
          </string-name>
          ,
          <article-title>Which model to use for cortical spiking neurons? in:</article-title>
          <source>IEEE Trans. Neural Netw. 15 5</source>
          (
          <year>2004</year>
          )
          <fpage>1063</fpage>
          1070. doi:
          <volume>10</volume>
          .1109/TNN.
          <year>2004</year>
          .
          <volume>832719</volume>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>