<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
<article-title>QMetric: Benchmarking Quantum Neural Networks Across Circuits, Features, and Training Dimensions</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Silvie Illésová</string-name>
          <email>illesova.silvie.scholar@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tomasz Rybotycki</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Martin Beseda</string-name>
          <email>martin.beseda@univaq.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center of Excellence in Artificial Intelligence, AGH University</institution>
          ,
          <addr-line>Aleje Mickiewicza 30, 30-059 Cracow</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Information Engineering</institution>
          ,
          <addr-line>Computer Science and Mathematics</addr-line>
          ,
          <institution>University of L'Aquila</institution>
          ,
          <addr-line>via Vetoio, I-67010</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>IT4Innovations National Supercomputing Center</institution>
          ,
          <addr-line>Studentská 6231/1B, 708 00 Ostrava</addr-line>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences</institution>
          ,
          <addr-line>ul. Bartycka 18, 00-716 Warsaw</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>Systems Research Institute, Polish Academy of Sciences</institution>
          ,
          <addr-line>ul. Newelska 6, 01-447 Warszawa</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>As hybrid quantum-classical models gain traction in machine learning, there is a growing need for tools that assess their effectiveness beyond raw accuracy. We present QMetric, a suite of interpretable metrics for evaluating quantum circuit expressibility, feature representations, and training dynamics. QMetric quantifies key aspects, including circuit fidelity, entanglement entropy, barren plateau risk, and training stability. The package integrates with Qiskit and PyTorch, and is demonstrated via a case study on binary MNIST classification comparing classical and quantum-enhanced models. Code, plots, and a reproducible environment are available on GitLab.</p>
      </abstract>
      <kwd-group>
        <kwd>Quantum Computing</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Hybrid Quantum-Classical Model</kwd>
        <kwd>Quantum Machine Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Hybrid quantum-classical neural networks (QNNs) [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ] are playing a central role in the development
of algorithms for near-term quantum devices. By embedding parameterized quantum circuits within
classical training loops, these architectures aim to leverage quantum resources such as entanglement and
superposition while maintaining trainability through well-established classical optimizers. This hybrid
structure has enabled a broad spectrum of quantum machine learning (QML) [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ] models to flourish
across domains including classification [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ], generative modeling [8, 9], optimization [10, 11, 12],
benchmarking [13, 14], medicine [15, 16, 17, 18], and quantum chemistry [19, 20, 21].
      </p>
      <p>Importantly, many canonical variational algorithms—originally developed for quantum
simulation—can be reframed as learning architectures.</p>
      <p>The Variational Quantum Eigensolver (VQE)
[22, 23, 24, 25] exemplifies this duality. Therein, a parameterized quantum ansatz is trained to
minimize an energy objective, similarly to a neural network minimizing a loss function. Over time, VQE
has evolved into a family of learning-based formulations, including State-Averaged Orbital-Optimized
VQE [26, 27], ADAPT-VQE [28], and Subspace-Search VQE [29], each introducing novel strategies for
parameterization, target state selection, and optimization flow.</p>
      <p>Beyond simulation, hybrid QNNs are widely applied in supervised learning, typically in classification
or regression tasks[30]. Here, quantum circuits are used to encode classical data (via feature maps
[31, 32]), process it through a variational ansatz, and output probabilities or decision boundaries.
Such architectures are used in Quantum Neural Networks [33], Quantum Support Vector Machines [34],
and more recent paradigms like Quantum Kitchen Sinks [35] and Quantum Feature Spaces [36]. In
unsupervised learning, models such as Quantum Circuit Born Machines [37] and Quantum Autoencoders
[38] extend the reach of QML into generative and latent-variable modeling.</p>
      <p>Despite the growing variety and sophistication of QML models, there remains a lack of principled,
interpretable, and reproducible tools for evaluating their behavior. Traditional ML diagnostics—accuracy,
F1-score, or validation loss—do not capture key quantum characteristics such as circuit expressibility,
entanglement structure, barren plateaus, or the sensitivity of quantum feature maps. Without such
metrics, model design becomes largely heuristic, and comparisons between quantum and classical
architectures are often inconclusive or misleading.</p>
      <p>To bridge this gap, we introduce QMetric, a modular and extensible Python framework for
evaluating hybrid quantum-classical models. QMetric computes interpretable scalar metrics across three
complementary dimensions: (i) the structure and expressiveness of quantum circuits; (ii) the geometry
and compression of quantum feature spaces; and (iii) the stability, efficiency, and gradient flow during
training. These tools allow researchers to diagnose bottlenecks, compare architectures, and validate
empirical claims beyond raw accuracy.</p>
      <p>Our package integrates with Qiskit and PyTorch, which we demonstrate through a binary
classification example on MNIST digits. Therein, we compare a classical neural network with a hybrid
QNN. All code, plots, and environment files are publicly available for reproducibility and further
experimentation.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Software Specifications</title>
      <p>All experiments were conducted using the qmetric-env Conda environment
(https://gitlab.com/illesova.silvie.scholar/qmetric/-/blob/main/environment.yml), configured for hybrid
quantum-classical machine learning. The system exploits GPU-accelerated libraries, supports version V1 of
the Qiskit primitives, and integrates PyTorch and scikit-learn for classical model components and
preprocessing.</p>
      <p>The environment is based on Python 3.10.13 with key libraries and versions listed in Table 1. Qiskit
(https://www.ibm.com/quantum/qiskit) version 1.4.3 was used in conjunction with Qiskit Aer 0.17.0 and
Qiskit Machine Learning 0.8.2. PyTorch (https://pytorch.org/) version 2.7.0 and the CUDA 12.9 toolchain
were used for classical and hybrid model execution. Principal component analysis and classical baseline
training relied on scikit-learn (https://scikit-learn.org/stable/) version 1.6.1.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Metrics Categories</title>
      <p>QMetric organizes its metrics into three complementary categories—quantum circuit behavior, quantum
feature space, and training dynamics—that together provide a comprehensive profile of a hybrid model’s
expressiveness, learnability, and robustness. These categories are summarized in Tab. 2.</p>
      <sec id="sec-3-1">
        <title>3.1. Quantum Circuit Metrics</title>
        <p>As the computational core of hybrid models, quantum circuits influence representational capacity and
noise resilience. QMetric evaluates circuit quality through metrics such as Quantum Circuit Expressibility,
which measures the diversity of quantum states produced under random parameters, and Quantum
Circuit Fidelity, estimating robustness to noise via state overlap.</p>
        <p>To characterize circuit structure, the Quantum Locality Ratio captures the balance between local and
entangling gates. Entanglement-based metrics include Effective Entanglement Entropy and Quantum
Mutual Information, which quantify intra-circuit quantum correlations. These metrics are useful when
tuning ansätze for VQE, QAOA, or classification tasks, where poor ansatz expressibility or excessive
entanglement can hinder learning.</p>
        <sec id="sec-3-1-1">
          <title>3.1.1. Quantum Circuit Expressibility</title>
          <p>Quantum Circuit Expressibility (QCE) [39] quantifies a circuit’s ability to generate a diverse set of
quantum states across the Hilbert space. It measures how closely the distribution of states produced
by the parameterized circuit approximates the uniform (Haar) distribution [40]. High expressibility
corresponds to broader state coverage in Hilbert space and is conceptually linked to the Fubini-Study
distance [41]. It can be quantitatively related to the Kullback–Leibler divergence [42] between the
circuit’s output distribution and the Haar distribution. A high QCE implies that the circuit can reach a wide
variety of states, which is crucial for representing complex functions in quantum machine learning and
variational algorithms.</p>
          <p>Formally, QCE is defined via the pairwise fidelities of randomly generated state vectors,</p>
          <p>QCE = 1 - \frac{2}{N(N-1)} \sum_{i&lt;j} |\langle \psi_i | \psi_j \rangle|^2, (1)</p>
          <p>where N is the number of randomly sampled parameter sets used to generate the corresponding quantum states, and \{\psi_i\}_{i=1}^{N} are the quantum states obtained by sampling parameters from the specified ranges and applying them to the circuit. This expression captures the average overlap between states; lower overlap corresponds to greater expressibility. The QCE score lies in the range [0, 1], with values closer to 1 indicating higher expressiveness.</p>
          <p>In practice, QCE helps identify whether a variational circuit is too shallow (low expressibility) or
overly complex (potentially prone to barren plateaus). A well-designed circuit should maintain a
high QCE while preserving trainability and manageable entanglement. QMetric implements QCE by
sampling multiple parameter sets, evaluating state-vector overlaps, and averaging across all pairwise
fidelities, making it an efficient diagnostic for early-stage ansatz evaluation.</p>
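<p>As a concrete illustration, the computation behind eq. (1) can be sketched in plain NumPy for a toy single-qubit RY circuit. This is an illustrative stand-in, not QMetric’s actual API; the function and variable names here are hypothetical.</p>

```python
import numpy as np

def qce(states):
    """Quantum Circuit Expressibility: one minus the mean pairwise
    overlap fidelity over all unordered pairs of sampled states."""
    n = len(states)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += abs(np.vdot(states[i], states[j])) ** 2
            pairs += 1
    return 1.0 - total / pairs

# Toy "circuit": a single RY(theta) rotation applied to the zero state,
# sampled at uniformly random angles.
rng = np.random.default_rng(0)
thetas = rng.uniform(0.0, 2.0 * np.pi, size=50)
states = [np.array([np.cos(t / 2.0), np.sin(t / 2.0)]) for t in thetas]
score = qce(states)  # in [0, 1]; higher means broader state coverage
```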
        </sec>
        <sec id="sec-3-1-2">
          <title>3.1.2. Quantum Circuit Fidelity</title>
          <p>Quantum Circuit Fidelity (QCF) [43] quantifies the robustness of a quantum circuit to noise by measuring
how closely the output of the noisy circuit resembles that of the ideal (noise-free) version. Fidelity
serves as a key metric for assessing noise resilience in near-term quantum devices, where decoherence,
gate errors, and readout noise can significantly degrade quantum state quality.</p>
          <p>Mathematically, QCF is defined as the fidelity between two quantum states,</p>
          <p>F(\rho, \sigma) = \left( \mathrm{Tr} \sqrt{\sqrt{\rho}\, \sigma \sqrt{\rho}} \right)^2, (2)</p>
          <p>where \rho is the density matrix of the ideal output state and \sigma represents the output state under a specified noise model. In the special case of pure states (as in most simulation scenarios), the fidelity simplifies to the squared absolute value of the inner product between the ideal and noisy state vectors.</p>
          <p>In QMetric, QCF is computed by simulating both the ideal and noisy execution of a circuit using
Qiskit’s statevector simulator and a user-defined noise model. The resulting fidelity score ranges
from 0 to 1, with higher values indicating closer agreement with the ideal state. QCF is especially useful
when benchmarking circuits across different hardware targets or when optimizing ansatz designs for
noisy intermediate-scale quantum (NISQ) devices [44].</p>
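<p>For the pure-state special case mentioned above, the fidelity reduces to a single inner product. A minimal NumPy sketch (an illustrative stand-in, not QMetric’s actual API) follows.</p>

```python
import numpy as np

def qcf_pure(ideal, noisy):
    """Quantum Circuit Fidelity for pure states: squared absolute
    value of the inner product of the two state vectors."""
    return float(abs(np.vdot(ideal, noisy)) ** 2)

# Ideal plus state vs. a slightly over-rotated noisy version of it.
ideal = np.array([1.0, 1.0]) / np.sqrt(2.0)
theta = np.pi / 2.0 + 0.05           # small coherent over-rotation error
noisy = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
fidelity = qcf_pure(ideal, noisy)    # close to, but below, 1.0
```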
        </sec>
        <sec id="sec-3-1-3">
          <title>3.1.3. Quantum Locality Ratio</title>
          <p>Quantum Locality Ratio (QLR) [45] quantifies the proportion of single-qubit operations relative to the
total number of gates in a quantum circuit. This metric captures the locality of interactions, offering
insight into how much a circuit relies on entangling operations. A high QLR implies that the circuit
uses mostly local, single-qubit gates, whereas a low value suggests strong reliance on multi-qubit
entanglement.</p>
          <p>Formally, QLR is defined as</p>
          <p>QLR = \frac{N_{\mathrm{1q}}}{N_{\mathrm{total}}}, (3)</p>
          <p>where N_{\mathrm{1q}} denotes the number of gates acting on a single qubit and N_{\mathrm{total}} is the total number of gates in the circuit.</p>
          <p>In QMetric, this ratio is computed by iterating over the circuit’s gate operations and counting how
many act on a single qubit. QLR helps to assess the tradeoff between locality and entanglement,
returning the ratio of single-qubit gates to total gates as a fast and interpretable structural descriptor. QLR is
particularly useful during ansatz design, where excessive entanglement can introduce barren plateaus
or hardware noise sensitivity.</p>
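<p>The counting step is simple enough to sketch directly; here the circuit is represented as a list of per-gate qubit counts (with Qiskit, such counts could be derived by iterating over a circuit’s instructions). This is an illustrative sketch, not QMetric’s actual API.</p>

```python
def quantum_locality_ratio(gate_qubit_counts):
    """QLR: fraction of gates acting on exactly one qubit.
    Input lists, per gate, the number of qubits it touches."""
    single = sum(1 for k in gate_qubit_counts if k == 1)
    return single / len(gate_qubit_counts)

# Example: H, RY, RZ (single-qubit) plus two CX gates (two-qubit).
qlr = quantum_locality_ratio([1, 1, 1, 2, 2])  # 3 of 5 gates are local
```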
        </sec>
        <sec id="sec-3-1-4">
          <title>3.1.4. Effective Entanglement Entropy</title>
          <p>Effective Entanglement Entropy (EEE) [46] evaluates the degree of quantum entanglement between a
subsystem of qubits and the rest of the circuit. It is based on the von Neumann entropy of the reduced
density matrix of the selected subsystem, capturing how mixed its state becomes due to entanglement
with its complement.</p>
          <p>The metric is defined as</p>
          <p>S(\rho_A) = -\mathrm{Tr}(\rho_A \log \rho_A), (4)</p>
          <p>where \rho_A is the reduced density matrix obtained by tracing out all qubits outside the chosen subsystem.</p>
          <p>QMetric computes EEE by generating a state vector from the circuit, selecting a target subset of qubits,
performing a partial trace, and evaluating the entropy. This metric is useful in tasks like entanglement
scaling analysis, where understanding subsystem correlations is essential for tuning circuit depth and
topology.</p>
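<p>For a bipartite pure state, the partial trace and entropy can be computed in a few lines of NumPy; the sketch below (an illustrative stand-in, not QMetric’s actual API) recovers the expected one bit of entropy for a Bell state.</p>

```python
import numpy as np

def entanglement_entropy(state, dim_a):
    """Von Neumann entropy (in bits) of subsystem A for a bipartite
    pure state, via the reshaped state-vector matrix."""
    m = np.asarray(state).reshape(dim_a, -1)  # dim_a x dim_b matrix
    rho_a = m @ m.conj().T                    # reduced density matrix of A
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]              # drop numerical zeros
    return float(-(evals * np.log2(evals)).sum())

# Maximally entangled two-qubit Bell state: entropy of one qubit is 1 bit.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
ent = entanglement_entropy(bell, 2)
```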
        </sec>
        <sec id="sec-3-1-5">
          <title>3.1.5. Quantum Mutual Information</title>
          <p>Quantum Mutual Information (QMI) [47] measures the total correlations—both classical and
quantum—between two disjoint subsets of qubits in a quantum circuit. It extends the concept of mutual
information to the quantum domain, revealing how strongly two regions of a circuit are statistically
linked.</p>
          <p>The metric is computed as</p>
          <p>I(A : B) = S(\rho_A) + S(\rho_B) - S(\rho_{AB}), (5)</p>
          <p>where \rho_A, \rho_B, and \rho_{AB} are the reduced density matrices of subsystems A, B, and their union, respectively.</p>
          <p>In QMetric, QMI is computed by preparing a full state vector, currently via analytical simulation,
computing partial traces for each subsystem and their union, and evaluating the entropies. This metric
is instrumental for analyzing modular architectures, verifying disentanglement, or diagnosing undesired
correlations in VQE, QAOA, or classification-oriented quantum circuits [48].</p>
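<p>For the two-qubit case, eq. (5) can be sketched with explicit partial traces in NumPy (an illustrative stand-in, not QMetric’s actual API): a Bell state carries two bits of total correlation, a product state none.</p>

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy (in bits) of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

def mutual_information_2q(state):
    """I(A:B) = S(A) + S(B) - S(AB) for a two-qubit pure state."""
    rho = np.outer(state, np.conj(state))  # full density matrix
    r = rho.reshape(2, 2, 2, 2)            # axes: (a, b, a', b')
    rho_a = np.einsum('ijkj->ik', r)       # trace out qubit B
    rho_b = np.einsum('ijil->jl', r)       # trace out qubit A
    return vn_entropy(rho_a) + vn_entropy(rho_b) - vn_entropy(rho)

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)  # maximally entangled
prod = np.array([1.0, 0.0, 0.0, 0.0])                 # uncorrelated product state
i_bell = mutual_information_2q(bell)   # 2 bits of total correlation
i_prod = mutual_information_2q(prod)   # no correlation
```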
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Quantum Feature Space Metrics</title>
        <p>When encoding classical data into Hilbert space, the geometry of the resulting feature space directly
affects model performance. QMetric provides the Feature Map Compression Ratio (FMCR), assessing
how efficiently classical data are compressed via PCA, and the Effective Dimension of the Quantum Feature
Space (EDQFS), which reflects variance spread in the quantum feature space.</p>
        <p>The Quantum Layer Activation Diversity (QLAD) and Quantum Output Sensitivity (QOS) evaluate
output variability and robustness to perturbations. Low QLAD and high QOS signal collapsed or brittle
encodings. These metrics are critical in Parametrized Quantum Circuit (PQC)-based classifiers, quantum
kernel methods, and other models relying on quantum feature geometry.</p>
        <sec id="sec-3-2-1">
          <title>3.2.1. Feature Map Compression Ratio</title>
          <p>Feature Map Compression Ratio (FMCR) [49] quantifies how efficiently a quantum feature map
compresses the input data. It compares the original classical dimensionality with the number of principal
components needed to capture most of the variance in the quantum-transformed space. A high FMCR
indicates strong compression, meaning fewer effective dimensions are required to retain the majority
of the encoded information.</p>
          <p>Formally, FMCR is defined as</p>
          <p>FMCR = \frac{d_{\mathrm{in}}}{d_{\mathrm{eff}}},</p>
          <p>where d_{\mathrm{in}} is the dimensionality of the classical input and d_{\mathrm{eff}} is the number of principal components explaining 95% of the variance in the quantum feature space.</p>
          <p>QMetric implements FMCR by applying PCA to the quantum-transformed dataset, calculating the
cumulative explained variance, and identifying the number of components required to exceed the 95%
threshold. This metric is especially relevant when assessing whether a feature map leads to redundancy
or useful abstraction.</p>
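<p>The PCA step above can be sketched via the SVD in plain NumPy (an illustrative stand-in, not QMetric’s actual API; the "quantum features" here are a synthetic redundant embedding).</p>

```python
import numpy as np

def fmcr(x_classical, x_quantum, threshold=0.95):
    """Feature Map Compression Ratio: classical input dimensionality
    divided by the number of principal components explaining
    `threshold` of the variance in the quantum feature space."""
    centered = x_quantum - x_quantum.mean(axis=0)
    svals = np.linalg.svd(centered, compute_uv=False)  # PCA via SVD
    var = svals ** 2
    explained = np.cumsum(var) / var.sum()
    d_eff = int(np.searchsorted(explained, threshold) + 1)
    return x_classical.shape[1] / d_eff

rng = np.random.default_rng(1)
x_in = rng.normal(size=(200, 8))  # 8-dimensional classical inputs
# Stand-in "quantum features": a redundant embedding of rank 2.
x_q = np.hstack([x_in[:, :1], x_in[:, :1] * 2.0, x_in[:, 1:2]])
ratio = fmcr(x_in, x_q)  # 8 inputs compressed into 2 effective components
```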
        </sec>
        <sec id="sec-3-2-2">
          <title>3.2.2. Effective Dimension of Quantum Feature Space</title>
          <p>Effective Dimension of Quantum Feature Space (EDQFS) [50] measures how uniformly information is
distributed in the quantum feature space. It is based on the PCA eigenvalue spectrum and captures the
intrinsic dimensionality of the embedded data. A high EDQFS suggests a flat eigenvalue distribution
and a more balanced use of the available Hilbert space dimensions.</p>
          <p>The effective dimension is calculated as</p>
          <p>D_{\mathrm{eff}} = \frac{\left( \sum_i \lambda_i \right)^2}{\sum_i \lambda_i^2}, (6)</p>
          <p>where \lambda_i are the PCA eigenvalues of the dataset encoded by the chosen feature map. The summation runs over all principal components, i.e., i = 1, \dots, r, where r = \min(n, d) is the rank of the dataset with n samples and d features.</p>
          <p>QMetric computes EDQFS by performing PCA on the quantum features and evaluating the above
formula. This metric complements FMCR by indicating how efficiently the encoded dimensions are
utilized, helping to diagnose over- or underspread feature distributions [51].</p>
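<p>Eq. (6) is a participation ratio over the eigenvalue spectrum; a minimal sketch (illustrative, not QMetric’s actual API) makes the two regimes concrete: a flat spectrum uses every direction, a peaked one effectively a single direction.</p>

```python
import numpy as np

def effective_dimension(eigenvalues):
    """EDQFS: participation-ratio style effective dimension,
    (sum of eigenvalues) squared over the sum of squared eigenvalues."""
    lam = np.asarray(eigenvalues, dtype=float)
    return float(lam.sum() ** 2 / (lam ** 2).sum())

flat = effective_dimension([1.0, 1.0, 1.0, 1.0])     # flat spectrum: 4.0
peaked = effective_dimension([10.0, 0.1, 0.1, 0.1])  # dominated by one direction
```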
        </sec>
        <sec id="sec-3-2-3">
          <title>3.2.3. Quantum Layer Activation Diversity</title>
          <p>Quantum Layer Activation Diversity (QLAD) [52] evaluates the diversity of measurement outcomes
across samples in the quantum feature space. It is based on the variance of the probability distributions
obtained from quantum measurements, reflecting how varied the output activations are for different
inputs.</p>
          <p>The metric is defined as</p>
          <p>QLAD = \frac{1}{N} \sum_{i=1}^{N} \mathrm{Var}(p_i), (7)</p>
          <p>where p_i is the measurement probability distribution for the i-th sample and N is the number of samples.</p>
          <p>In QMetric, QLAD is computed by estimating the variance across each sample’s probability vector
and averaging the results. Low QLAD may signal that the circuit is collapsing inputs into narrow output
distributions, which could hinder expressivity and generalization capabilities of the model.</p>
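<p>A minimal sketch of the formula above (illustrative, not QMetric’s actual API): the variance is taken within each sample’s probability vector and then averaged, so a perfectly uniform outcome distribution contributes zero.</p>

```python
import numpy as np

def qlad(prob_distributions):
    """QLAD: mean variance across per-sample measurement
    probability vectors."""
    p = np.asarray(prob_distributions, dtype=float)
    return float(np.mean([np.var(row) for row in p]))

# Sharply peaked outcome distributions vs. perfectly uniform ones.
peaked = qlad([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
uniform = qlad([[0.25, 0.25, 0.25, 0.25]] * 2)  # zero variance per sample
```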
        </sec>
        <sec id="sec-3-2-4">
          <title>3.2.4. Quantum Output Sensitivity</title>
          <p>Quantum Output Sensitivity (QOS) [53] measures how sensitive a quantum model’s output is to small
perturbations in the input. It captures robustness and smoothness of the mapping from classical data to
quantum measurements. A low QOS implies a stable, noise-tolerant model, while a high value may
indicate fragility or sharp decision boundaries.</p>
          <p>The metric is calculated as</p>
          <p>QOS = \mathbb{E}_{\varepsilon} \left[ \frac{\| f(x + \varepsilon) - f(x) \|^2}{\| \varepsilon \|^2} \right], (8)</p>
          <p>where f(x) and f(x + \varepsilon) are the quantum model outputs for the original and perturbed inputs, respectively, and \mathbb{E}_{\varepsilon} denotes the empirical average over a batch of perturbation vectors \varepsilon, typically sampled from a zero-mean isotropic Gaussian distribution.</p>
          <p>In QMetric, QOS is evaluated by generating perturbed versions of inputs, computing the model output
differences, and normalizing by the squared perturbation norms. This metric is useful for analyzing
encoding smoothness, adversarial stability, and overall model resilience.</p>
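<p>The perturbation loop in eq. (8) can be sketched for any callable model (an illustrative stand-in, not QMetric’s actual API); for a smooth map, the score approaches the squared local slope.</p>

```python
import numpy as np

def qos(model, x, n_perturbations=64, sigma=1e-3, seed=0):
    """QOS: mean ratio of squared output change to squared
    perturbation norm, over Gaussian input perturbations."""
    rng = np.random.default_rng(seed)
    base = model(x)
    ratios = []
    for _ in range(n_perturbations):
        eps = rng.normal(scale=sigma, size=x.shape)
        delta = np.sum((model(x + eps) - base) ** 2)
        ratios.append(delta / np.sum(eps ** 2))
    return float(np.mean(ratios))

# A smooth toy "model" has sensitivity near its squared local slope.
smooth = qos(lambda v: np.tanh(v), np.zeros(3))        # slope 1 at the origin
scaled = qos(lambda v: 5.0 * np.tanh(v), np.zeros(3))  # 5x steeper mapping
```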
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Training Dynamics</title>
        <p>QMetric also tracks training behavior using the Training Stability Index (TSI), which compares variability
in training and validation losses, and the Training Efficiency Index (TEI), which measures the epochs needed
to reach a target accuracy relative to model size.</p>
        <p>Quantum-specific diagnostics include the Quantum Gradient Norm (QGN) and the Barren Plateau Indicator
(BPI), both of which expose vanishing gradients linked to deep or poorly initialized circuits. To compare
hybrid and classical models, QMetric implements relative metrics, such as RQLSI and r-QTEI, to quantify
differences in training efficiency and stability under aligned conditions. Together, these metrics can
support targeted diagnosis of underperformance, guide ansatz design, and enable meaningful evaluation
across model types.</p>
        <sec id="sec-3-3-1">
          <title>3.3.1. Training Stability Index</title>
          <p>Training Stability Index (TSI) [54] quantifies the variability in training and validation losses near
convergence. It measures how consistently the model performs in the final training phase by comparing
the standard deviations of losses over the last 10% of epochs. This percentage will be user-configurable
in future versions of QMetric.</p>
          <p>The metric is defined as</p>
          <p>TSI = \frac{\sigma_{\mathrm{val}}}{\sigma_{\mathrm{train}}}, (10)</p>
          <p>where \sigma_{\mathrm{train}} and \sigma_{\mathrm{val}} denote the standard deviations of training and validation losses, respectively. Here, "losses" refer to the recorded values of the loss function (e.g., cross-entropy or MSE) over training and validation batches during training.</p>
          <p>A low TSI indicates stable and consistent generalization, while a high value may reveal overfitting or
noisy training dynamics. In QMetric, TSI is computed by evaluating standard deviations over the tail
segment of the loss curves, i.e., over the final 10% of recorded loss values.</p>
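<p>The tail computation can be sketched in a few lines of NumPy (an illustrative stand-in, not QMetric’s actual API; the loss curves below are synthetic).</p>

```python
import numpy as np

def tsi(train_losses, val_losses, tail=0.1):
    """TSI: std of validation losses over std of training losses,
    evaluated on the final `tail` fraction of epochs."""
    k = max(1, int(len(train_losses) * tail))
    return float(np.std(val_losses[-k:]) / np.std(train_losses[-k:]))

epochs = np.arange(50)
train = np.exp(-0.1 * epochs) + 0.01 * np.sin(epochs)  # settles smoothly
val = np.exp(-0.1 * epochs) + 0.05 * np.sin(epochs)    # wobblier tail
stability = tsi(train, val)  # above 1: validation noisier than training
```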
        </sec>
        <sec id="sec-3-3-2">
          <title>3.3.2. Training Efficiency Index</title>
          <p>Training Efficiency Index (TEI) [55] measures how quickly a model reaches a high level of performance
relative to its size. It is defined as the ratio between the epoch at which validation accuracy first reaches
a threshold \tau = 90% and the number of trainable parameters. While the value 90% is usually used, this
threshold will be user-configurable in future versions of QMetric.</p>
          <p>Formally,</p>
          <p>TEI = \frac{\mathrm{epoch}_{\mathrm{acc} \geq \tau}}{N_{\mathrm{params}}}, (11)</p>
          <p>where \mathrm{epoch}_{\mathrm{acc} \geq \tau} is the earliest epoch in which validation accuracy reaches the desired threshold \tau and N_{\mathrm{params}} is the total parameter count.</p>
          <p>Lower TEI values indicate faster convergence per parameter, making this metric a useful tool for
evaluating training efficiency across differently sized architectures.</p>
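<p>A minimal sketch of the TEI computation (illustrative, not QMetric’s actual API; the accuracy curve is synthetic):</p>

```python
def tei(val_accuracies, n_params, threshold=0.90):
    """TEI: earliest (1-based) epoch reaching `threshold` validation
    accuracy, divided by the trainable-parameter count."""
    for epoch, acc in enumerate(val_accuracies, start=1):
        if acc >= threshold:
            return epoch / n_params
    return float('inf')  # threshold never reached

accs = [0.61, 0.74, 0.86, 0.92, 0.95]
small_model = tei(accs, n_params=50)    # hits 90% at epoch 4
large_model = tei(accs, n_params=5000)  # same curve, many more parameters
```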
        </sec>
        <sec id="sec-3-3-3">
          <title>3.3.3. Quantum Gradient Norm</title>
          <p>Quantum Gradient Norm (QGN) [56] measures the magnitude of gradients associated with quantum
circuit parameters. It reflects the overall strength of parameter updates and can signal the presence of
vanishing or exploding gradients.</p>
          <p>The metric is defined as</p>
          <p>QGN = \| \nabla_{\theta} \mathcal{L} \|_2, (12)</p>
          <p>where \theta denotes the quantum parameters of the hybrid quantum-classical model and \mathcal{L} is the training loss, i.e., the value of the loss function on the training dataset. In QMetric, gradients are extracted from the backpropagation step and concatenated for the L2 norm calculation.</p>
          <p>Low QGN may indicate a barren plateau or excessively deep circuits [57].</p>
        </sec>
        <sec id="sec-3-3-4">
          <title>3.3.4. Barren Plateau Indicator</title>
          <p>Barren Plateau Indicator (BPI) [58] estimates whether a model suffers from barren plateaus by evaluating
the average squared magnitude of quantum gradients. This captures the extent of vanishing gradients
during optimization.</p>
          <p>It is defined as</p>
          <p>BPI = \mathbb{E} \left[ \| \nabla_{\theta} \mathcal{L} \|^2 \right], (13)</p>
          <p>where \mathbb{E} denotes the empirical average, as in eq. (8), and the remaining symbols are as in eq. (12).</p>
          <p>Values near zero suggest that gradients are vanishing, which can hinder effective training. In QMetric,
BPI is computed over the flattened list of quantum gradients and averaged into a final value, making it
an efficient early diagnostic tool during model tuning [59].</p>
        </sec>
        <sec id="sec-3-3-5">
          <title>3.3.5. Relative Quantum Layer Stability Index</title>
          <p>Relative Quantum Layer Stability Index (RQLSI) [60] compares the training stability of hybrid
quantum-classical models to that of purely classical ones using the TSI metric. It helps quantify whether
introducing quantum layers improves or worsens loss stability.</p>
          <p>It is defined as</p>
          <p>RQLSI = \frac{\mathrm{TSI}_{\mathrm{hybrid}}}{\mathrm{TSI}_{\mathrm{classical}}}, (14)</p>
          <p>where \mathrm{TSI}_{\mathrm{hybrid}} and \mathrm{TSI}_{\mathrm{classical}} are the training stability indices for the hybrid and classical models, respectively.</p>
          <p>A value less than 1 suggests that the quantum-enhanced model is more stable during training. This
metric can support empirical comparison between model types under matched conditions.</p>
        </sec>
        <sec id="sec-3-3-6">
          <title>3.3.6. Relative Quantum Training Efficiency Index</title>
          <p>Relative Quantum Training Efficiency Index (r-QTEI) [61] evaluates whether a hybrid model trains
more efficiently than a classical counterpart by comparing their respective TEI scores.</p>
          <p>It is defined as</p>
          <p>r\text{-QTEI} = \frac{\mathrm{TEI}_{\mathrm{hybrid}}}{\mathrm{TEI}_{\mathrm{classical}}}, (15)</p>
          <p>where \mathrm{TEI}_{\mathrm{hybrid}} and \mathrm{TEI}_{\mathrm{classical}} are the training efficiency indices.</p>
          <p>A value below 1 means the hybrid model reaches target performance faster relative to its parameter
size. This metric can support head-to-head benchmarking of model variants in practical scenarios.</p>
        </sec>
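<p>The gradient-based diagnostics above can be sketched in plain NumPy; the relative indices then follow as simple ratios of the per-model TSI and TEI scores. This is an illustrative sketch, not QMetric’s actual API, and the function names are hypothetical.</p>

```python
import numpy as np

def qgn(grads):
    """Quantum Gradient Norm: L2 norm of the concatenated
    quantum-parameter gradients."""
    flat = np.concatenate([np.ravel(g) for g in grads])
    return float(np.linalg.norm(flat))

def bpi(grads):
    """Barren Plateau Indicator: mean squared magnitude of the
    flattened quantum gradients."""
    flat = np.concatenate([np.ravel(g) for g in grads])
    return float(np.mean(flat ** 2))

healthy = [np.array([0.3, -0.2]), np.array([0.1])]     # informative gradients
plateau = [np.array([1e-8, -2e-8]), np.array([3e-8])]  # vanishing gradients

norm = qgn(healthy)
indicator = bpi(plateau)  # near zero: barren-plateau warning
# Relative indices are plain ratios, e.g. rqlsi = tsi_hybrid / tsi_classical.
```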
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Case Study: Hybrid vs Classical on MNIST</title>
      <p>To illustrate the diagnostic capabilities of QMetric, we evaluate a hybrid quantum-classical neural
network against a classical baseline using a binary classification task on the MNIST dataset [62]. This
case study provides a practical scenario where quantum neural networks are tested under realistic
constraints. We describe the model architectures, data pipeline, training configuration, and
metric-driven analysis.</p>
      <sec id="sec-4-1">
        <title>4.1. Hybrid Model and its Training Parameters</title>
        <p>The hybrid model connects a parameterized quantum circuit with a classical output layer to perform
binary classification. The quantum component is implemented using Qiskit’s EstimatorQNN and is
connected to PyTorch via the TorchConnector, allowing seamless integration with PyTorch’s autograd
system.</p>
        <p>The quantum circuit is composed of a feature map and an ansatz. The feature map is a ZZFeatureMap
with one repetition that encodes classical inputs into quantum states. The ansatz is a RealAmplitudes
circuit with three repetitions that introduces trainable parameters and entanglement. The feature map
and ansatz are composed into a single parameterized circuit, which is then used to define
the quantum neural network. The quantum component outputs a single expectation value, which is
passed through a trainable classical linear layer followed by a sigmoid activation. The full model maps
an input vector x to the output \sigma(w \cdot \mathrm{QNN}(x) + b), where w and b are trainable classical parameters.</p>
        <p>To match the number of qubits in the circuit, the MNIST images are projected into a lower-dimensional
space using principal component analysis. The original 784-dimensional vectors are reduced to three
components. This projection ensures compatibility with a three-qubit quantum circuit while preserving
as much variance as possible.</p>
        <p>The dataset is constructed by filtering the MNIST training set to include only samples corresponding
to digits 0 and 1. From this filtered subset, the first 500 examples are selected to simulate a small-data
regime. The images are flattened into vectors, normalized, and then transformed using Principal
Component Analysis (PCA) to produce a dataset suitable for quantum encoding.</p>
        <p>The hybrid model is trained for 30 epochs using the Adam optimizer with a learning rate of 0.01.
Binary cross-entropy is used as the loss function. Training and validation losses are tracked at each
epoch along with validation accuracy. Additionally, gradients with respect to the quantum parameters
are collected to enable computation of metrics such as the quantum gradient norm and barren plateau
indicator.</p>
        <p>After training, the quantum outputs are evaluated using QMetric. Metrics such as the feature map
compression ratio, effective dimension of the quantum feature space, layer activation diversity, and
output sensitivity are computed from the post-quantum activations. Quantum circuit diagnostics such
as expressibility, locality ratio, entanglement entropy, mutual information, and noise robustness are
also evaluated. This model provides a complete use case for applying QMetric during model selection,
architectural tuning, and training analysis.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Classical Baseline</title>
        <p>The classical baseline model is a fully connected neural network designed to match the input
dimensionality and output behavior of the hybrid model. It takes as input the same three-dimensional data
produced by PCA and outputs a binary classification probability using a sigmoid activation.</p>
        <p>The architecture consists of an input layer with three neurons, a hidden layer with ten neurons using
the ReLU activation function, and an output layer with one neuron followed by a sigmoid activation.
The network approximates a function f(x) = σ(W2 ⋅ ReLU(W1 x + b1) + b2), where x is the PCA-reduced
input vector and W1, W2, b1, and b2 are trainable parameters.</p>
        <p>The model is trained using the same subset of the MNIST dataset as the hybrid model. The inputs
are 500 grayscale images corresponding to digits 0 and 1, flattened and reduced to three principal
components. The preprocessing pipeline is identical, ensuring a fair comparison in terms of input
dimensionality and task complexity.</p>
        <p>The training procedure mirrors that of the hybrid model. The optimizer is Adam with a learning
rate of 0.01, the loss function is binary cross-entropy, and the number of training epochs is set to 30.
At each epoch, training loss, validation loss, and validation accuracy are recorded to allow for direct
comparison of convergence dynamics, generalization performance, and learning stability.</p>
        <p>This classical model serves as a baseline for interpreting the added value or limitations of quantum
components under identical data, dimensionality, and optimization conditions. It enables a controlled
analysis of the effects of quantum layers on expressivity, robustness, and trainability using QMetric’s
evaluation framework.</p>
        <p>[Figure: model architectures. Hybrid: Classical Input (3D vector) → Quantum Feature Map
(ZZFeatureMap) → Parameterized Circuit (RealAmplitudes) → Expectation Value → Fully Connected
Layer → Sigmoid Activation → Prediction (0 or 1). Classical: Classical Input (3D vector) → Hidden
Layer (10 neurons, ReLU) → Fully Connected Layer (1 neuron) → Sigmoid Activation → Prediction
(0 or 1).]</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Quantum Circuit Metrics</title>
        <p>Table 3 summarizes the metrics that characterize the structure, expressibility, and robustness of the
quantum circuit used in the hybrid model.</p>
        <p>The quantum circuit demonstrates high expressibility (QCE = 0.939), suggesting that it explores
a diverse set of quantum states across the Hilbert space. The fidelity score (QCF = 1.000) confirms
robustness to noise under simulation with a basic noise model. The locality ratio of 0.6364 indicates
a well-balanced design between local and entangling operations. Entanglement is both substantial
and well-structured, as shown by high values of EEE = 0.8345 and QMI = 1.6691, supporting rich
correlations necessary for quantum information processing.</p>
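        <p>For a pure state, the mutual information across a bipartition equals twice the entanglement entropy, which is consistent with the reported QMI = 1.6691 ≈ 2 × EEE. A self-contained NumPy sketch of the entropy computation (not QMetric’s implementation, which operates on the actual circuit state):</p>

```python
import numpy as np

def entanglement_entropy(state, n_qubits, cut):
    """Von Neumann entropy (base 2) of the reduced state of the first
    `cut` qubits -- the quantity reported as EEE above."""
    psi = state.reshape(2**cut, 2**(n_qubits - cut))  # bipartition
    # Schmidt coefficients via SVD give the reduced-state spectrum
    s = np.linalg.svd(psi, compute_uv=False)
    p = s**2
    p = p[p > 1e-12]                                  # drop zero modes
    return float(-(p * np.log2(p)).sum())

# Example: the 2-qubit Bell state (|00> + |11>)/sqrt(2) is maximally
# entangled across the 1|1 cut, so its entropy is 1 bit.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(round(entanglement_entropy(bell, 2, 1), 6))  # 1.0
```

        <p>Applied to the three-qubit circuit state, the same computation across a 1|2 cut yields values like the reported EEE = 0.8345, bounded above by 1 bit for a single traced-out qubit.</p>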
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Feature Space Metrics</title>
        <p>The geometry and structure of the quantum feature space are assessed through the metrics in Table 4,
which evaluate compression, variance distribution, activation diversity, and sensitivity to perturbations.</p>
        <p>The feature map achieves perfect compression (FMCR = 3.0), indicating that all input variance
is concentrated in one effective principal component. However, the effective dimension (EDQFS =
1.0) confirms that the quantum feature space lacks spread. Activation diversity is absent (QLAD =
0.000), signaling possible circuit over-regularization or symmetry that collapses measurement outputs.
Meanwhile, the high sensitivity (QOS = 9.644) indicates the model reacts sharply to small perturbations,
suggesting brittle or sharp decision boundaries.</p>
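        <p>One plausible reading of the effective dimension (QMetric’s exact formula may differ) is the participation ratio of the covariance spectrum of the post-quantum activations, which collapses to 1.0 exactly when all variance sits in a single direction, as observed here:</p>

```python
import numpy as np

def effective_dimension(activations):
    """Participation ratio of the covariance spectrum: 1.0 when all
    variance lies along one direction, d when spread evenly over d.
    (A stand-in for EDQFS, assumed for illustration.)"""
    lam = np.linalg.eigvalsh(np.cov(activations.T))
    lam = np.clip(lam, 0.0, None)         # discard negative round-off
    return float(lam.sum()**2 / (lam**2).sum())

rng = np.random.default_rng(1)
# Degenerate feature space: every sample varies along one direction only,
# mimicking the collapsed activations reported in Table 4.
z = rng.normal(size=(200, 1)) * np.array([[1.0, 0.5, -0.2]])
print(round(effective_dimension(z), 3))  # 1.0
```

        <p>Under this reading, FMCR = 3.0 is simply the input dimension (3) divided by the single effective component, so the two metrics tell the same story of a collapsed feature space.</p>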
      </sec>
      <sec id="sec-4-5">
        <title>4.5. Training Dynamics</title>
        <p>Training dynamics of the hybrid and classical models are evaluated in Table 5. These metrics reflect
convergence behavior, parameter efficiency, gradient stability, and vanishing gradient issues.</p>
        <p>The hybrid model exhibits lower validation loss variability in late training (TSI = 0.0025), indicating
consistent behavior, whereas the classical model converges quickly but shows slightly more fluctuation
(TSI = 0.0144).</p>
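        <p>A simple way to realize such a stability index (an assumed definition for illustration; QMetric may normalize differently) is the standard deviation of validation loss over the final epochs:</p>

```python
import numpy as np

def training_stability_index(val_losses, tail=10):
    """Variability of validation loss over the last `tail` epochs;
    lower values mean steadier late-stage training."""
    return float(np.asarray(val_losses[-tail:]).std())

steady = [0.30 - 0.001 * i for i in range(30)]      # smooth convergence
noisy = [0.30 + 0.05 * (-1)**i for i in range(30)]  # oscillating loss
print(training_stability_index(steady) < training_stability_index(noisy))
```

        <p>On these synthetic curves the smoothly converging run scores a far lower index than the oscillating one, mirroring the hybrid-versus-classical comparison above.</p>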
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Outlook</title>
      <p>QMetric provides a structured approach for diagnosing hybrid quantum-classical models beyond
conventional performance metrics. It highlights key aspects such as training behavior, encoding
robustness, and circuit design quality. Future developments will include migration to Qiskit’s
Estimator V2<sup>8</sup>, support for additional platforms such as PennyLane, and expanded metric
coverage for multi-class tasks and generative models.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Availability</title>
      <p>All source code, examples, metric definitions, and plotting utilities are available at
https://gitlab.com/illesova.silvie.scholar/qmetric. The repository includes a Conda
environment file to reproduce the case study.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Acknowledgments</title>
      <p>MB was supported by Italian Government (Ministero dell’Università e della Ricerca, PRIN 2022
PNRR)—cod. P2022SELA7: “RECHARGE: monitoRing, tEsting, and CHaracterization of performAnce
Regressions”—D.D. n. 1205 del 28/7/2023. TR gratefully acknowledges the funding support by the program
“Excellence initiative—research university” for the AGH University in Kraków, as well as the ARTIQ
project: UMO-2021/01/2/ST6/00004 and ARTIQ/0004/2021.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used OpenAI ChatGPT (GPT-4) in order to: assist with
text formatting, clarify structure, and edit grammar. After using these tools, the authors reviewed and
edited the content as needed and take full responsibility for the publication’s content.</p>
      <p><sup>8</sup> https://quantum.cloud.ibm.com/docs/en/api/qiskit-ibm-runtime/estimator-v2</p>
      <p>[8] C. A. Riofrio, O. Mitevski, C. Jones, F. Krellner, A. Vuckovic, J. Doetsch, J. Klepsch, T. Ehmer,
A. Luckow, A characterization of quantum generative models, ACM Transactions on Quantum
Computing 5 (2024) 1–34.
[9] B. Coyle, M. Henderson, J. C. J. Le, N. Kumar, M. Paini, E. Kashefi, Quantum versus classical
generative modelling in finance, Quantum Science and Technology 6 (2021) 024013.
[10] R. Divya, J. D. Peter, Quantum machine learning: A comprehensive review on optimization of
machine learning algorithms, in: 2021 Fourth International Conference on Microelectronics,
Signals &amp; Systems (ICMSS), IEEE, 2021, pp. 1–6.
[11] N. A. AL Ajmi, M. Shoaib, Optimization strategies in quantum machine learning: A performance
analysis, Applied Sciences 15 (2025) 4493.
[12] V. Novák, I. Zelinka, V. Snášel, Optimization strategies for variational quantum algorithms in
noisy landscapes, arXiv preprint arXiv:2506.01715 (2025).
[13] A. Bílek, J. Hlisnikovskỳ, T. Bezděk, R. Kukulski, P. Lewandowska, Experimental study of
multiple-shot unitary channels discrimination using the IBM Q computers, arXiv preprint
arXiv:2505.17731 (2025).
[14] P. Lewandowska, M. Beseda, Benchmarking gate-based quantum devices via certification of qubit
von Neumann measurements, arXiv preprint arXiv:2506.03514 (2025).
[15] V. Novák, I. Zelinka, L. Přibylová, L. Martínek, V. Benčurik, Predicting post-surgical
complications with quantum neural networks: A clinical study on anastomotic leak, arXiv preprint
arXiv:2506.01708 (2025).
[16] L. Wei, H. Liu, J. Xu, L. Shi, Z. Shan, B. Zhao, Y. Gao, Quantum machine learning in medical image
analysis: A survey, Neurocomputing 525 (2023) 42–53.
[17] S. Rani, P. K. Pareek, J. Kaur, M. Chauhan, P. Bhambri, Quantum machine learning in healthcare:
Developments and challenges, in: 2023 IEEE International Conference on Integrated Circuits and
Communication Systems (ICICACS), IEEE, 2023, pp. 1–7.
[18] J. Thiyagalingam, M. Shankar, G. Fox, T. Hey, Scientific machine learning benchmarks, Nature
Reviews Physics 4 (2022) 413–420.
[19] M. Sajjan, J. Li, R. Selvarajan, S. H. Sureshbabu, S. S. Kale, R. Gupta, V. Singh, S. Kais, Quantum
machine learning for chemistry and physics, Chemical Society Reviews 51 (2022) 6475–6573.
[20] B. Huang, N. O. Symonds, O. A. von Lilienfeld, Quantum machine learning in chemistry and
materials, Handbook of Materials Modeling: Methods: Theory and Modeling (2020) 1883–1909.
[21] N. Bauer, K. Yeter-Aydeniz, G. Siopsis, Efficient quantum chemistry calculations on noisy quantum
hardware, arXiv preprint arXiv:2503.02778 (2025).
[22] D. A. Fedorov, B. Peng, N. Govind, Y. Alexeev, VQE method: a short survey and recent developments,
Materials Theory 6 (2022) 2.
[23] S. Illésová, V. Novák, T. Bezděk, M. Beseda, C. Possel, Numerical optimization strategies for the
variational Hamiltonian ansatz in noisy quantum environments, arXiv preprint arXiv:2505.22398
(2025).
[24] C. Ciaramelletti, M. Beseda, M. Consiglio, L. Lepori, T. J. G. Apollaro, S. Paganelli, Detecting
quasidegenerate ground states in topological models via the variational quantum eigensolver,
Phys. Rev. A 111 (2025) 022437. URL: https://link.aps.org/doi/10.1103/PhysRevA.111.022437.
doi:10.1103/PhysRevA.111.022437.
[25] C. Zhang, L. Jiang, F. Chen, Qracle: A graph-neural-network-based parameter initializer for
variational quantum eigensolvers, arXiv preprint arXiv:2505.01236 (2025).
[26] M. Beseda, S. Illésová, S. Yalouz, B. Senjean, State-averaged orbital-optimized VQE: A quantum
algorithm for the democratic description of ground and excited electronic states, Journal of Open
Source Software 9 (2024) 6036.
[27] S. Illésová, M. Beseda, S. Yalouz, B. Lasorne, B. Senjean, Transformation-free generation of a
quasidiabatic representation from the state-average orbital-optimized variational quantum
eigensolver, Journal of Chemical Theory and Computation (2025).
[28] H. L. Tang, V. Shkolnikov, G. S. Barron, H. R. Grimsley, N. J. Mayhall, E. Barnes, S. E. Economou,
qubit-ADAPT-VQE: An adaptive algorithm for constructing hardware-efficient ansätze on a quantum
processor, PRX Quantum 2 (2021) 020310.
[29] K. M. Nakanishi, K. Mitarai, K. Fujii, Subspace-search variational quantum eigensolver for excited
states, Physical Review Research 1 (2019) 033062.
[30] M. K. Gupta, M. Beseda, P. Gawron, How quantum computing-friendly multispectral data can be?,
in: IGARSS 2022-2022 IEEE International Geoscience and Remote Sensing Symposium, IEEE, 2022,
pp. 4153–4156.
[31] Y. Suzuki, H. Yano, Q. Gao, S. Uno, T. Tanaka, M. Akiyama, N. Yamamoto, Analysis and synthesis
of feature map for kernel-based quantum classifier, Quantum Machine Intelligence 2 (2020) 9.
[32] H. Kwon, H. Lee, J. Bae, Feature map for quantum data in classification, in: 2024 International
Conference on Quantum Communications, Networking, and Computing (QCNC), IEEE, 2024, pp.
41–48.
[33] S. Gupta, R. Zia, Quantum neural networks, Journal of Computer and System Sciences 63 (2001)
355–383.
[34] P. Rebentrost, M. Mohseni, S. Lloyd, Quantum support vector machine for big data classification,
Physical Review Letters 113 (2014) 130503.
[35] C. Wilson, J. Otterbach, N. Tezak, R. Smith, A. Polloreno, P. J. Karalekas, S. Heidel, M. S. Alam,
G. Crooks, M. da Silva, Quantum kitchen sinks: An algorithm for machine learning on near-term
quantum computers, arXiv preprint arXiv:1806.08321 (2018).
[36] V. Havlíček, A. D. Córcoles, K. Temme, A. W. Harrow, A. Kandala, J. M. Chow, J. M. Gambetta,
Supervised learning with quantum-enhanced feature spaces, Nature 567 (2019) 209–212.
[37] J.-G. Liu, L. Wang, Differentiable learning of quantum circuit Born machines, Physical Review A
98 (2018) 062324.
[38] D. F. Locher, L. Cardarelli, M. Müller, Quantum error correction with quantum autoencoders,
Quantum 7 (2023) 942.
[39] F. Zhang, J. Li, Z. He, H. Situ, Transformer for parameterized quantum circuits expressibility
prediction, arXiv e-prints (2024) arXiv–2405.
[40] D. Ter Haar, Foundations of statistical mechanics, Reviews of Modern Physics 27 (1955) 289.
[41] R.-Q. Yang, Gravity duals of quantum distances, Journal of High Energy Physics 2021 (2021) 1–12.
[42] S. Kullback, Kullback-Leibler divergence, Tech. Rep. (1951).
[43] P. E. Mendonça, R. d. J. Napolitano, M. A. Marchiolli, C. J. Foster, Y.-C. Liang, Alternative fidelity
measure between quantum states, Physical Review A—Atomic, Molecular, and Optical Physics 78
(2008) 052330.
[44] A. Vadali, R. Kshirsagar, P. Shyamsundar, G. N. Perdue, Quantum circuit fidelity estimation using
machine learning, Quantum Machine Intelligence 6 (2024) 1.
[45] B. Bertini, P. Kos, T. Prosen, Operator entanglement in local quantum circuits I: Chaotic
dual-unitary circuits, SciPost Physics 8 (2020) 067.
[46] P. Boes, J. Eisert, R. Gallego, M. P. Müller, H. Wilming, Von Neumann entropy from unitarity,
Physical Review Letters 122 (2019) 210402.
[47] B. Schumacher, M. D. Westmoreland, Quantum mutual information and the one-time pad, Physical
Review A—Atomic, Molecular, and Optical Physics 74 (2006) 042305.
[48] G. De Tomasi, S. Bera, J. H. Bardarson, F. Pollmann, Quantum mutual information as a probe for
many-body localization, Physical Review Letters 118 (2017) 016804.
[49] A. Maćkiewicz, W. Ratajczak, Principal components analysis (PCA), Computers &amp; Geosciences 19
(1993) 303–342.
[50] J. Crossley, A. Nerode, Effective dimension, Journal of Algebra 41 (1976) 398–412.
[51] M. Schuld, N. Killoran, Quantum machine learning in feature Hilbert spaces, Physical Review
Letters 122 (2019) 040504.
[52] B. Xie, Y. Liang, L. Song, Diverse neural network learns true target functions, in: Artificial
Intelligence and Statistics, PMLR, 2017, pp. 1216–1224.
[53] D. Liu, L. Y. Wu, B. Li, F. Boussaid, M. Bennamoun, X. Xie, C. Liang, Jacobian norm with selective
input gradient regularization for interpretable adversarial defense, Pattern Recognition 145 (2024)
109902.
[54] N. Chandramoorthy, A. Loukas, K. Gatmiry, S. Jegelka, On the generalization of learning algorithms
that do not converge, Advances in Neural Information Processing Systems 35 (2022) 34241–34257.
[55] R. Livni, S. Shalev-Shwartz, O. Shamir, On the computational efficiency of training neural networks,
Advances in Neural Information Processing Systems 27 (2014).
[56] N. Galegale, C. I. Shimabukuro, et al., Deep learning applied to stock prices: Epoch adjustment in
training an LSTM neural network, International Journal of Business and Management 19 (2024).
[57] A. Gilyén, S. Arunachalam, N. Wiebe, Optimizing quantum optimization algorithms via faster
quantum gradient computation, in: Proceedings of the Thirtieth Annual ACM-SIAM Symposium
on Discrete Algorithms, SIAM, 2019, pp. 1425–1444.
[58] M. Larocca, P. Czarnik, K. Sharma, G. Muraleedharan, P. J. Coles, M. Cerezo, Diagnosing barren
plateaus with tools from quantum optimal control, Quantum 6 (2022) 824.
[59] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, H. Neven, Barren plateaus in quantum
neural network training landscapes, Nature Communications 9 (2018) 4812.
[60] Y. Liu, Q. Li, J. Tan, Y. Shi, L. Shen, X. Cao, Understanding the stability-based generalization
of personalized federated learning, in: The Thirteenth International Conference on Learning
Representations, ????
[61] M. Tan, Q. Le, EfficientNetV2: Smaller models and faster training. arXiv 2021, arXiv preprint
arXiv:2104.00298 5 (2021).
[62] L. Deng, The MNIST database of handwritten digit images for machine learning research, IEEE
Signal Processing Magazine 29 (2012) 141–142.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. H.</given-names>
            <surname>Lim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. L.</given-names>
            <surname>Wood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-L.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>Hybrid quantum-classical convolutional neural networks</article-title>
          ,
          <source>Science China Physics, Mechanics &amp; Astronomy</source>
          <volume>64</volume>
          (
          <year>2021</year>
          )
          <fpage>290311</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.-Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-J.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-W.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-R.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <article-title>Deep q-learning with hybrid quantum neural network on solving maze problems</article-title>
          ,
          <source>Quantum Machine Intelligence</source>
          <volume>6</volume>
          (
          <year>2024</year>
          )
          <fpage>2</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <article-title>A multi-classification hybrid quantum neural network using an all-qubit multi-observable measurement strategy</article-title>
          ,
          <source>Entropy</source>
          <volume>24</volume>
          (
          <year>2022</year>
          )
          <fpage>394</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Biamonte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wittek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Pancotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rebentrost</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lloyd</surname>
          </string-name>
          ,
          <article-title>Quantum machine learning</article-title>
          ,
          <source>Nature</source>
          <volume>549</volume>
          (
          <year>2017</year>
          )
          <fpage>195</fpage>
          -
          <lpage>202</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Schuld</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Sinayskiy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Petruccione</surname>
          </string-name>
          ,
          <article-title>An introduction to quantum machine learning</article-title>
          ,
          <source>Contemporary Physics</source>
          <volume>56</volume>
          (
          <year>2015</year>
          )
          <fpage>172</fpage>
          -
          <lpage>185</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Senokosov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sedykh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sagingalieva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kyriacou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Melnikov</surname>
          </string-name>
          ,
          <article-title>Quantum machine learning for image classification</article-title>
          ,
          <source>Machine Learning: Science and Technology</source>
          <volume>5</volume>
          (
          <year>2024</year>
          )
          <fpage>015040</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>W.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sigov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ratkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Ivanov</surname>
          </string-name>
          ,
          <article-title>Quantum machine learning: Classifications, challenges, and solutions</article-title>
          ,
          <source>Journal of Industrial Information Integration</source>
          (
          <year>2024</year>
          )
          <fpage>100736</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>