<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>November</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Training Feed-Forward Neural Networks for Medical Image Registration Using Sine-Cosine and Teaching-Learning-Based Fusion</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tapas Sangiri</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Md Ajij</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science and Technology, University of North Bengal</institution>
          ,
<addr-line>Raja Rammohunpur, Darjeeling, 734013, West Bengal</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>2</volume>
      <fpage>8</fpage>
      <lpage>29</lpage>
      <abstract>
<p>This study presents a novel hybrid metaheuristic algorithm, Sine-Cosine Adaptive Teaching-Learning-Based Optimization (SCATLBO), designed to train Feed-Forward Neural Networks (FNNs) for mono- and multi-modal medical image registration. SCATLBO combines the strengths of the Sine-Cosine Algorithm (SCA) for exploration with Teaching-Learning-Based Optimization (TLBO) for exploitation, achieving a balance that enhances the algorithm's capability to avoid local minima and improve convergence rates. Medical image registration, essential for accurate medical analysis, benefits from this hybrid approach as it aligns complex multi-modal images effectively. In this work, SCATLBO was applied to train FNNs on breast MRI images from the Cancer Genome Atlas Breast Invasive Carcinoma (TCGA-BRCA) dataset. The performance of SCATLBO is benchmarked against several well-known metaheuristic algorithms, including TLBO, Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Grey Wolf Optimizer (GWO), and Evolution Strategy (ES), with evaluations based on Mean Squared Error (MSE) for mono-modal and Mutual Information (MI) for multi-modal registration. Experimental results demonstrate that SCATLBO outperforms other techniques in terms of accuracy, convergence speed, and robustness, establishing it as a promising tool for neural network-based image registration tasks. This work contributes to the advancement of metaheuristic training approaches for FNNs, with potential applications in diverse medical imaging fields.</p>
      </abstract>
      <kwd-group>
        <kwd>Medical Image Registration</kwd>
        <kwd>Metaheuristic Optimization</kwd>
        <kwd>Sine-Cosine Algorithm (SCA)</kwd>
        <kwd>Teaching-Learning-Based Optimization (TLBO)</kwd>
        <kwd>Feed-Forward Neural Network (FNN)</kwd>
        <kwd>Multimodal Image Alignment</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Image registration, which is very useful in medical applications, entails aligning various image files
inside a similar coordinate system to match imaging content. Comparing images captured from diferent
angles, at diferent times, or with diferent sensors or modalities [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. The fundamental components of
the human brain, biological neurons, serve as the inspiration for the machine learning model known
as the Artificial Neural Network (ANN) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Echoing the vast complexity of the human brain, artificial
neural systems form a maze of computational pathways that are primed for acquiring knowledge. From
computer vision systems deciphering visual scenes to clever algorithms mastering games of strategy, and from
medical machines diagnosing disorders to programs facilitating cross-language communication, neural
nets have demonstrated adaptable intelligence in many diverse fields. Whether discerning patterns in
social webs or recognizing speech, neural networks continue to surprise with their knack for teasing
out the intricate structures underlying immense troves of unrefined data, enabling novel insight and
innovative progress across a growing roster of promising applications. The most basic artificial neural
network capable of resolving non-linear problems is the Feed-forward Neural Network (FNN). Neuronal
connections in an FNN do not form cycles. Neurons are placed in three layers: input, hidden, and
output. Every neuron in one layer connects to all neurons in the subsequent layer.
Feed-forward neural networks can effectively classify data and forecast continuous values, learning
patterns, classes, or clusters within information to perform categorization and prediction. Supervised,
unsupervised, and reinforcement learning represent the three principal categories of machine instruction.
Whereas datasets for unsupervised learning comprise attribute values alone, training and testing
data for supervised learning pair attribute values with target values. Across many iterations or
epochs, the connections between neurons in an FNN are assigned weights, tuned for optimal functioning
on the problem at hand, and thereby enable learning. Some linkages may have great impact whilst
others contribute less. The most fruitful connections are reinforced through ongoing alteration as the
network is exposed to more information.
      </p>
      <p>
        Training methods can be classified into two categories: stochastic and exact methods. Exact methods
are classical mathematical methods based on the gradient. The training algorithm most commonly
used is back-propagation, which relies on gradient descent techniques [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The back-propagation algorithm
primarily uses the gradient descent method to determine the optimal weights. Beyond the gradient
descent algorithm itself, many other methods based on gradient data, such as Newton's method, the
quasi-Newton method, and the conjugate-gradient method, are also applicable. Although these algorithms are
quicker than other approaches, they suffer from local-minima entrapment. Such techniques typically
provide a local optimum (or perhaps a close local optimum) of the basin in which the original solution is
found. Consequently, initialization plays a crucial role in the final solution. Stochastic techniques
are often employed to facilitate the training of FNNs by mitigating the proclivity toward becoming
ensnared in local optima. Non-determinism is leveraged by metaheuristics, general-purpose stochastic
optimization algorithms, to escape the captivity of local optima. The malleability and derivative-free
nature of metaheuristics, permitting handling of non-continuous and non-differentiable activation
functions, represents an additional benefit. These advantages have rendered metaheuristics a
fascinating domain of inquiry for FNN training. In this paper, we present a workflow using the hybrid
SCATLBO algorithm for training Feed-forward Neural Networks (FNNs) to learn the mapping functions
corresponding to mono- and multi-modal image registration, and compare our results with some of
the state-of-the-art algorithms that are conventionally used for training such FNN systems, namely
Teaching-Learning-Based Optimization (TLBO)[
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ], Particle Swarm Optimization (PSO)[
        <xref ref-type="bibr" rid="ref7 ref8 ref9">7, 8, 9</xref>
        ], Ant Colony Optimization (ACO)[
        <xref ref-type="bibr" rid="ref10 ref11 ref12">10, 11, 12</xref>
        ], Grey Wolf Optimizer (GWO)[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], Evolution Strategy (ES)[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
The hybrid SCATLBO algorithm trains Feed-forward Neural Networks (FNNs) more effectively
than these other metaheuristic algorithms. Most of them are metaheuristics in the sense that they are
nature-inspired algorithms based on physical laws, biological evolution, neurobiological systems, or swarm
behaviour. Metaheuristics can be roughly divided into two broad categories: single-solution-based and
population-based. In single-solution metaheuristics, one candidate solution at a time explores the search
space, while in population-based methods multiple solutions search the space simultaneously.
The incorporation of SCA and TLBO exploits both exploration and exploitation abilities, which can increase
the training performance and offers a robust FNN for image registration problems.
      </p>
      <p>
        In this study, we explore a novel approach for training Feed-Forward Neural Networks (FNNs) using
a hybridized metaheuristic algorithm, combining Sine-Cosine Algorithm (SCA) [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and Teaching-Learning-Based
Optimization (TLBO), referred to as SCATLBO. This approach optimizes FNNs for
medical image registration, specifically aligning multi-modal medical images for enhanced analysis.
Below, we outline each section and the specific methodologies and concepts they address.
      </p>
      <p>We begin with the Methodology section (2), where we detail the phases of the SCATLBO algorithm,
including the Teaching Phase (Exploitation) (2.1), where the model learns from the best solutions, and
the Learning Phase (Exploration) (2.2), enhancing the algorithm’s robustness by promoting diverse
learning strategies. We also introduce the Sine-Cosine Modification (SCA) technique (2.3) to improve
exploration capabilities and avoid local minima.</p>
      <p>The Feed-Forward Neural Network (FNN) framework (2.4) is explained, including the calculation
of Mean Squared Error (MSE) (2.4.1) as a primary loss function for monomodal image registration and
Mutual Information (MI) (2.5) for multimodal scenarios. This section also discusses the dataset used in
this study (2.6) and how we configured parameters for the algorithm (2.7).</p>
      <p>In the Result and Discussion section (3), we analyze the algorithm’s performance. We assess the
statistical significance of our results using the Wilcoxon Signed-Rank Test (3.1) and make decisions
based on multiple criteria through the TOPSIS method (3.2).</p>
      <p>Finally, the Conclusion (4) highlights our main findings and suggests future research directions for
optimizing FNN structures and tuning parameters in advanced applications.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>The proposed methodology leverages SCATLBO (the Sine-Cosine Algorithm combined with
Teaching-Learning-Based Optimization) for training an FNN to enhance image registration accuracy. The workflow
of the proposed method is outlined in Figure 1. This process can be summarized as follows:
1. Read Input Image: The workflow starts by reading the input image, such as a DCE-MRI scan,
that requires registration.
2. Preprocessing the Image: The input image undergoes preprocessing, which involves
normalization, resizing, or other necessary transformations. This step ensures that the image data is
consistent and free of noise, preparing it for effective processing by the neural network.
3. Randomly Initialize FNN Weights: The FNN model is initialized with random weights, creating
a baseline model that can be iteratively refined. This initialization is essential for ensuring that
the model starts from an unbiased point.
4. Apply SCATLBO Optimization: The core of the proposed methodology involves applying
SCATLBO optimization to fine-tune the weights of the FNN. The SCATLBO algorithm enhances
the optimization process by combining the exploration capability of the sine-cosine mechanism
with the exploitation capability of TLBO. This hybrid approach allows the model to search the
weight space more effectively, finding optimal weights that improve the accuracy and robustness
of the registration.
5. Validation: After optimization, the model is evaluated through a validation process. The
registered image produced by the model is assessed to determine whether it meets specific accuracy criteria.
6. Optimized FNN Model: Once the model passes validation, the final output is an optimized FNN
model capable of performing high-quality image registration. This optimized model is now ready
for deployment in tasks requiring accurate and reliable image registration.</p>
      <p>This methodology outlines a systematic approach for developing an optimized FNN model through
SCATLBO, which iteratively adjusts the model’s parameters to achieve improved accuracy in image
registration. By combining adaptive mechanisms and hybrid optimization strategies, the SCATLBO-FNN
model provides an effective solution for tasks requiring precise alignment of medical images, improving
both performance and reliability in medical image analysis applications; a minimal code sketch of this
workflow follows.</p>
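      <p>A minimal, self-contained Python sketch of steps 1 through 6 is given below. The helper names, the random stand-in images, and the shift-based stand-in loss are illustrative assumptions rather than the implementation used in this study; the actual step 4 refines the population with SCATLBO (Algorithm 1).</p>
      <preformat>
import numpy as np

rng = np.random.default_rng(42)

def preprocess(img):
    # Step 2: normalize intensities to [0, 1] (resizing omitted).
    img = img.astype(np.float64)
    return (img - img.min()) / (np.ptp(img) + 1e-12)

def registration_loss(weights, fixed, moving):
    # Stand-in for steps 4-5: a real system would apply the
    # FNN-predicted transform to `moving` before comparing; here we
    # simply shift by the first weight and take the MSE.
    shifted = np.roll(moving, int(round(weights[0])), axis=0)
    return float(np.mean((fixed - shifted) ** 2))

# Step 1: read the input images (random stand-ins for DCE-MRI slices).
fixed = preprocess(rng.random((256, 256)))
moving = np.roll(fixed, 3, axis=0)

# Step 3: randomly initialize a population of candidate weight vectors.
population = rng.uniform(-5.0, 5.0, size=(25, 10))

# Step 4 (placeholder): evaluate candidates; SCATLBO would iterate here.
losses = [registration_loss(w, fixed, moving) for w in population]
best_weights = population[int(np.argmin(losses))]
print("best stand-in loss:", min(losses))
      </preformat>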
      <sec id="sec-2-1">
        <title>2.1. Teaching phase (Exploitation)</title>
        <p>In the TLBO algorithm, the Teacher phase adjusts the network parameters (such as weights and biases)
based on the best-performing solution in the population. This phase can be seen as an exploitation
phase, where the best solution tries to bring other solutions closer to its performance. The parameter
update equation for the teacher phase is given by:
$$W_{\text{new}} = W_{\text{old}} + r \times (W_{\text{best}} - T_F \times W_{\text{mean}}) \tag{1}$$
Where:
1. $W_{\text{new}}$ and $W_{\text{old}}$ are the updated and current weights.
2. $W_{\text{best}}$ is the weight of the best-performing solution (teacher).
3. $W_{\text{mean}}$ is the mean weight of the population.
4. $r$ is a random number between 0 and 1.
5. $T_F$ is a teaching factor, usually chosen as 1 or 2.</p>
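        <p>A minimal NumPy sketch of one teacher-phase sweep follows; treating each row of pop as a flattened weight vector and minimizing the fitness (e.g., MSE) are assumptions of this illustration, and greedy replacement is applied later, as in Algorithm 1.</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(0)

def teacher_phase(pop, fitness):
    # Eq. (1): move every learner toward the teacher (best solution).
    best = pop[np.argmin(fitness)]      # teacher: lowest loss
    mean = pop.mean(axis=0)             # mean weights of the population
    new_pop = np.empty_like(pop)
    for i, w in enumerate(pop):
        r = rng.random()                # random number in [0, 1]
        tf = rng.integers(1, 3)         # teaching factor: 1 or 2
        new_pop[i] = w + r * (best - tf * mean)
    return new_pop
        </preformat>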
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Learning phase (Exploration)</title>
        <p>In the Learning phase, individuals learn from each other by updating their weights based on the
difference between two randomly chosen individuals:
$$W_{\text{new}} = W_{\text{old}} + r \times (W_j - W_k) \tag{2}$$
where $W_j$ and $W_k$ are the weights of two randomly selected solutions from the population, and $r$ is a random
number.</p>
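        <p>A corresponding sketch of the learner phase is shown below; as in the teacher-phase sketch, the row-per-candidate layout is an assumption, and classic TLBO variants additionally check which of the two peers is fitter.</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(0)

def learner_phase(pop):
    # Eq. (2): each individual moves along the difference of two
    # distinct, randomly chosen peers.
    n = len(pop)
    new_pop = np.empty_like(pop)
    for i in range(n):
        j, k = rng.choice(n, size=2, replace=False)   # ensures j != k
        r = rng.random()                              # random in [0, 1]
        new_pop[i] = pop[i] + r * (pop[j] - pop[k])
    return new_pop
        </preformat>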
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Sine-Cosine Modification (SCA)</title>
        <p>The sine-cosine modification introduces a non-linear exploration mechanism that enhances global
search capability. The SCA operator is incorporated as follows:
$$W_{\text{new}} =
\begin{cases}
W_{\text{old}} + r \times \sin(\theta) \times |W_{\text{best}} - W_{\text{old}}| \\
W_{\text{old}} + r \times \cos(\theta) \times |W_{\text{best}} - W_{\text{old}}|
\end{cases} \tag{3}$$
where $\theta$ is an angle that modulates exploration using sine or cosine waves, and $r$ is a random number
that controls the intensity of exploration. The SCA phase helps balance exploration and exploitation by
using these sine and cosine functions, making the search space more diverse and preventing the network
from being trapped in local minima. The Feed-forward Neural Network (FNN) is trained to minimize a
registration loss, such as Mean Squared Error (MSE)[
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] between intensities of corresponding pixels
in the two images, or Mutual Information (MI) to capture the statistical dependency of the images in
multimodal registration. The intensity features are mapped to the ideal transformation parameters
using the FNN:
$$\theta = \mathcal{F}(I; W, b) \tag{4}$$
where the FNN, represented as $\mathcal{F}$, maps input intensity pairs $I$ to transformation parameters, and $W$ and $b$
represent the network’s weights and biases.
        </p>
        <p>A transformation $T$, parameterized by the output $\theta$, is applied in order to align the moving image $I_m$ with the fixed image $I_f$.</p>
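        <p>The sine-cosine perturbation of Eq. (3) can be sketched as follows; choosing between the two branches with equal probability and drawing a fresh angle per call are assumptions of this illustration.</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(0)

def sca_update(w, w_ref):
    # Eq. (3): perturb w around a reference solution using a randomly
    # modulated sine or cosine step.
    theta = rng.uniform(0.0, 2.0 * np.pi)   # random angle
    r = rng.random()                        # exploration intensity
    trig = np.sin(theta) if rng.random() &lt; 0.5 else np.cos(theta)
    return w + r * trig * np.abs(w_ref - w)
        </preformat>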
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Feed-forward Neural Network (FNN)</title>
        <p>A feed-forward neural network (FNN) is the most fundamental type of artificial neural network,
characterized by a layer-structured architecture where connections between neurons do not form cycles.
In FNNs, neurons are organized into three layers: the input layer, the hidden layer, and the output layer.
Data flows unidirectionally from one layer to the next, with each neuron in a layer connected to every
neuron in the subsequent layer, ensuring there are no backward connections, loops, or cycles.</p>
        <p>The output signals generated by artificial neurons are determined by applying an activation function,
frequently called a transfer function, to the weighted linear combination $\left(\sum_i w_i x_i\right) + b$ of inputs. By
introducing non-linearity through these activation functions, feed-forward neural networks gain the
ability to learn intricate, non-trivial patterns in vast amounts of data. Figure 2 depicts a basic schematic
of a simple feed-forward network, demonstrating how information flows through it in one direction
from input to output. Additionally, Figure 3 displays common activation functions along with their
corresponding graphs, which are integral to comprehending how these mathematical operators mold a
network’s behavior during the learning process. The interplay between weights, inputs, and activation
functions unlocks the potential of neural networks to model complex patterns beyond the capabilities of
shallow architectures like logistic regression.</p>
        <p>Rotation, translation, scaling, and other transformation parameters are represented by the vector
$\theta = [\theta_1, \theta_2, \ldots, \theta_n]$,
which the network produces according to Eq. (4), $\theta = \mathcal{F}(I; W, b)$.</p>
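        <p>A minimal forward pass for such a network is sketched below; the single hidden layer, the sigmoid activation, the input size of 2, and the output size of 3 are assumptions chosen only for illustration.</p>
        <preformat>
import numpy as np

def fnn_forward(x, W1, b1, W2, b2):
    # theta = F(I; W, b) (Eq. 4): map input intensities to
    # transformation parameters through one sigmoid hidden layer.
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))   # hidden activations
    return h @ W2 + b2                         # theta = [theta_1, ..., theta_n]

# Example with illustrative shapes: an intensity pair in, three
# transformation parameters out (e.g., rotation and two translations).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((2, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 3)), np.zeros(3)
theta = fnn_forward(rng.random(2), W1, b1, W2, b2)
        </preformat>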
        <p>2.4.1. Mean Squared Error (MSE)</p>
        <p>For monomodal registration, we used the Mean Squared Error (MSE), which measures the difference in
intensity between the fixed image $I_f$ and the transformed moving image $I'_m$:
$$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left( I_f(i) - I'_m(i) \right)^2 \tag{5}$$
where $I_f(i)$ and $I'_m(i)$ are the intensities at corresponding points in the fixed and moving images, and
$\mathcal{F}(I; W, b)$ represents the FNN model with weights $W$ and biases $b$. The goal of training is to minimize
this loss function:
$$\min_{W,\, b}\; \mathcal{L}\left(\mathcal{F}(I; W, b)\right) \tag{6}$$</p>
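        <p>In code, the monomodal loss of Eq. (5) is a one-liner; the sketch below assumes the moving image has already been warped by the predicted transformation.</p>
        <preformat>
import numpy as np

def mse(fixed, warped):
    # Eq. (5): mean squared intensity difference between the fixed
    # image and the transformed moving image.
    diff = fixed.astype(np.float64) - warped.astype(np.float64)
    return float(np.mean(diff ** 2))
        </preformat>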
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Mutual Information (MI)</title>
        <p>MI measures the shared information between the two images:
$$\mathrm{MI} = \sum_{i=1}^{N} \sum_{j=1}^{N} p\left(I_f(i), I'_m(j)\right) \log \frac{p\left(I_f(i), I'_m(j)\right)}{p\left(I_f(i)\right) \cdot p\left(I'_m(j)\right)} \tag{7}$$
Where
• $p(I_f(i))$ is the marginal probability of the intensity $I_f(i)$ in the fixed image $I_f$,
• $p(I'_m(j))$ is the marginal probability of the intensity $I'_m(j)$ in the transformed moving image $I'_m$,
• $p(I_f(i), I'_m(j))$ is the joint probability of the intensities $I_f(i)$ and $I'_m(j)$ occurring together.
For multi-modal registration, we used Mutual Information (MI).</p>
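        <p>Eq. (7) can be estimated from a joint intensity histogram, as in the sketch below; the 32-bin histogram is an assumption of this illustration, and zero-probability bins are skipped to avoid log(0).</p>
        <preformat>
import numpy as np

def mutual_information(fixed, warped, bins=32):
    # Joint histogram over corresponding pixel intensities.
    joint, _, _ = np.histogram2d(fixed.ravel(), warped.ravel(), bins=bins)
    p_xy = joint / joint.sum()                # joint probabilities
    p_x = p_xy.sum(axis=1, keepdims=True)     # marginal of fixed image
    p_y = p_xy.sum(axis=0, keepdims=True)     # marginal of moving image
    nz = p_xy &gt; 0                          # skip empty bins
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))
        </preformat>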
      </sec>
      <sec id="sec-2-6">
        <title>2.6. Dataset</title>
        <p>
          The Cancer Genome Atlas Breast Invasive Carcinoma (TCGA-BRCA) dataset, which is accessible via
the Cancer Imaging Archive (TCIA), provided 40 pairs of 2D T2-weighted DCE-MRI slices in total
[
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Every MR image with a resolution higher than 256 × 256 pixels was reduced to that size. An
experienced radiologist manually segmented the images to provide ground truth images, which served as
the gold standard for assessment. In this work, we register breast MRI images. Ten separate runs over 40
pairs of breast MR images are used to test this technique, with 10 images examined for each patient.
The mean and standard deviation of these registrations for four distinct affected individuals were
determined using data from the 10 separate runs.
        </p>
      </sec>
      <sec id="sec-2-7">
        <title>2.7. Configuring appropriate parameters for the algorithm</title>
        <p>In the experiment, we used SCATLBO, TLBO, GWO, PSO, ACO, and ES algorithms to test their
performance. In the TLBO algorithm, the population size is set to 25 potential solutions in each iteration.
WEPMax (Weighted Exploitation Probability Maximum) controls the maximum exploitation probability,
with a value of 1 indicating the highest level of exploitation. WEPMin (Weighted Exploitation Probability
Minimum) is set to the minimum level of exploitation.</p>
        <p>In PSO, the population size of 25 particles remains constant across iterations, much like TLBO. The
velocity of each particle is influenced by its previous velocity through the inertia weight $w$. Additionally,
particles are cognitively drawn towards their personal best location via the cognitive coefficient $c_1$ and
socially attracted towards the global best location through the social coefficient $c_2$.</p>
        <p>In GWO, the number of search agents is set to 40, and the maximum number of iterations is 50.
In ES, the population size is set to 40, and the number of neurons is set to 1.</p>
        <p>In ACO, the initial pheromone level ($\tau_0$) is set to a very small value of $1 \times 10^{-6}$. The pheromone
update constant ($Q$) is 20, while the evaporation constant ($\rho$) is 1. Pheromone decays globally at 0.9 and
locally at 0.5 per iteration. Solutions are constructed based on a pheromone strength ($\alpha$) of 1 and a
visibility ($\beta$) of 5.</p>
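        <p>For reference, these settings can be collected in one configuration block, as sketched below; entries marked None are parameters the text names without giving a numeric value.</p>
        <preformat>
# Parameter settings from Section 2.7, gathered for reproducibility.
settings = {
    "TLBO": {"pop_size": 25, "wep_max": 1.0, "wep_min": None},
    "PSO":  {"pop_size": 25, "inertia_w": None, "c1": None, "c2": None},
    "GWO":  {"search_agents": 40, "max_iter": 50},
    "ES":   {"pop_size": 40, "n_neurons": 1},
    "ACO":  {"tau0": 1e-6, "Q": 20, "rho": 1,
             "global_decay": 0.9, "local_decay": 0.5,
             "alpha": 1, "beta": 5},
}
        </preformat>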
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Result and Discussion</title>
      <p>The results demonstrate that the SCATLBO algorithm exhibits strong exploration capabilities, which
significantly contributes to its effectiveness in training FNNs. SCATLBO’s opposition-based learning
mechanism enhances its exploration by generating diverse solutions, allowing the algorithm to identify
promising new areas in the search space while avoiding local minima. This balance of exploration
and exploitation is essential for successful stochastic optimization, as it enables SCATLBO to navigate
the search space efectively and converge on optimal solutions without being trapped in suboptimal
regions.</p>
      <p>Algorithm 1: SCATLBO (Sine-Cosine Adaptive Teaching-Learning-Based Optimization)</p>
      <preformat>
 1: Input: objective function f(W), population size N, dimension d, max iterations t_max
 2: Initialize: population P = {W_1, W_2, ..., W_N} randomly
 3: Evaluate fitness f(W_i) for all individuals in the population
 4: for t = 1 to t_max do
 5:   Teacher Phase (Exploitation):
 6:   Identify the teacher W_best as the solution with the best fitness
 7:   Compute the mean of the population, W_mean
 8:   for each W_i do
 9:     Generate a random number r in [0, 1]
10:     Compute the teaching factor T_F = 1 or T_F = 2 (randomly chosen)
11:     Update W_i:  W_new = W_old + r * (W_best - T_F * W_mean)
12:     Apply SCA modification:
          W_new = W_new + r * sin(theta) * |W_best - W_new|
        or
          W_new = W_new + r * cos(theta) * |W_best - W_new|
        where theta in [0, 2*pi] is a random angle
13:   end for
14:   Learning Phase (Exploration):
15:   for each W_i do
16:     Select two random individuals W_j and W_k (j != k)
17:     Generate a random number r in [0, 1]
18:     Update W_i:  W_new = W_old + r * (W_j - W_k)
19:     Apply SCA modification:
          W_new = W_new + r * sin(theta) * |W_j - W_k|
        or
          W_new = W_new + r * cos(theta) * |W_j - W_k|
20:   end for
21:   Evaluate fitness f(W_i) for all updated solutions
22:   Replace W_i with W_new if f(W_new) &lt; f(W_i)
23:   Check stopping criteria (e.g., maximum iterations or desired accuracy)
24: end for
25: Output: best solution W_best and its fitness f(W_best)
      </preformat>
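      <p>A compact NumPy implementation of Algorithm 1 is sketched below; the search bounds, the 50/50 choice between the sine and cosine branches, and the sphere-function demo objective are assumptions of this sketch rather than details fixed by the algorithm.</p>
      <preformat>
import numpy as np

rng = np.random.default_rng(0)

def scatlbo(f, dim, pop_size=25, max_iter=50, lo=-1.0, hi=1.0):
    # f: objective to minimize, e.g. the registration MSE of an FNN
    # whose flattened weights form the decision vector.
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(w) for w in pop])

    def sca(w, span):
        # SCA modification: sine or cosine step scaled by |span|.
        theta, r = rng.uniform(0, 2 * np.pi), rng.random()
        trig = np.sin(theta) if rng.random() &lt; 0.5 else np.cos(theta)
        return w + r * trig * np.abs(span)

    def accept(i, w_new):
        # Greedy replacement (line 22 of Algorithm 1).
        f_new = f(w_new)
        if f_new &lt; fit[i]:
            pop[i], fit[i] = w_new, f_new

    for _ in range(max_iter):
        best = pop[fit.argmin()].copy()
        mean = pop.mean(axis=0)
        for i in range(pop_size):              # teacher phase + SCA
            r, tf = rng.random(), rng.integers(1, 3)
            w = pop[i] + r * (best - tf * mean)
            accept(i, sca(w, best - w))
        for i in range(pop_size):              # learner phase + SCA
            j, k = rng.choice(pop_size, 2, replace=False)
            w = pop[i] + rng.random() * (pop[j] - pop[k])
            accept(i, sca(w, pop[j] - pop[k]))
    b = int(fit.argmin())
    return pop[b], float(fit[b])

# Demo on a stand-in objective (the sphere function).
w_best, f_best = scatlbo(lambda w: float(np.sum(w ** 2)), dim=10)
print("best fitness:", f_best)
      </preformat>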
      <p>Tables 1 and 2 compare SCATLBO-FNN with other well-known metaheuristic algorithms
(Teaching-Learning-Based Optimization (TLBO), Particle Swarm Optimization (PSO), Ant Colony Optimization
(ACO), Grey Wolf Optimizer (GWO), and Evolution Strategy (ES)) using two key performance metrics:
Mean Squared Error (MSE) for mono-modal image registration and Mutual Information (MI) for
multi-modal registration. These metrics are critical for assessing the accuracy and stability of the registration
process, with lower MSE indicating better alignment for mono-modal images and higher MI signifying
greater statistical dependency in multi-modal cases.</p>
      <p>From the tables, it is evident that SCATLBO-FNN achieves the lowest average error (AVG) and highest
accuracy in both mono-modal (Table 1) and multi-modal (Table 2) registrations, indicating superior
alignment performance and consistent accuracy. Additionally, SCATLBO demonstrates a competitive
execution time compared to other algorithms, balancing performance efficiency with computational cost.
The low standard deviation (STD) values for SCATLBO also reflect its stability, showing less variation
across different trials, which is crucial in achieving reliable results in medical image registration.</p>
      <p>Figure 5 illustrates the convergence behavior of SCATLBO-FNN in both mono-modal and multi-modal
scenarios. The convergence graph shows that SCATLBO quickly minimizes the error, highlighting its
strong initial exploration phase. As the iterations proceed, the algorithm stabilizes, indicating effective
exploitation of promising solutions. This stability in convergence confirms that SCATLBO is less likely
to be trapped in local optima, allowing it to achieve optimal or near-optimal solutions across different
registration tasks.</p>
      <p>Overall, these comparative metrics underscore the robustness of SCATLBO-FNN. The hybrid approach,
which combines SCA’s exploration with TLBO’s exploitation, makes SCATLBO a powerful and efficient
tool for training FNNs in complex medical imaging tasks. This combination enables SCATLBO to
outperform other metaheuristic algorithms in terms of accuracy, stability, and convergence speed,
establishing it as a highly effective optimization technique for neural network-based image registration.</p>
      <sec id="sec-3-1">
        <title>3.1. Statistical significance</title>
        <p>
          The Wilcoxon Signed-Rank Test [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], a non-parametric statistical method, was employed to
assess significance, as is appropriate for paired data. Here, we performed this analysis on the average
and standard deviation of the MI and MSE values from our results. In particular, we estimated p-values
for the comparison between SCATLBO-FNN and all other techniques. A p-value below 0.05 is considered
statistically significant.
        </p>
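        <p>For reference, the test can be reproduced with SciPy's paired Wilcoxon routine; the arrays below are hypothetical per-run scores, not the values reported in Tables 3 and 4.</p>
        <preformat>
from scipy.stats import wilcoxon

# Hypothetical per-run MSE values for SCATLBO-FNN and one competitor.
scatlbo_runs = [0.012, 0.011, 0.013, 0.010, 0.012,
                0.011, 0.013, 0.012, 0.010, 0.011]
other_runs   = [0.018, 0.017, 0.019, 0.016, 0.020,
                0.018, 0.017, 0.019, 0.018, 0.017]

stat, p_value = wilcoxon(scatlbo_runs, other_runs)
print(f"p-value: {p_value:.4f}")   # below 0.05 indicates significance
        </preformat>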
        <p>Our results, shown in both mono-modality (Table 3) and multi-modality (Table 4) evaluations, yielded
a p-value of 0.002, which is well below the 0.05 threshold. This confirms that SCATLBO-FNN outperforms
other methods with statistical significance, validating its effectiveness in FNN optimization for image
registration tasks.</p>
        <p>Furthermore, Table 5 provides a comparative performance ranking of the algorithms based on their
overall efficiency and accuracy in training Feed-Forward Neural Networks (FNNs). SCATLBO-FNN
achieved the highest rank, indicating its superior performance across both mono-modal and multi-modal
registration tasks. This advantage is attributed to SCATLBO’s balanced exploitation and exploration
phases, achieved by combining the strengths of SCA for exploration and TLBO for exploitation.</p>
        <p>In contrast, other algorithms such as TLBO, PSO, ACO, GWO, and ES ranked lower due to slower
convergence or a tendency to become trapped in local minima. The rankings underscore the benefits of
SCATLBO’s hybrid approach, where the integration of SCA and TLBO enables faster and more accurate
convergence compared to standalone metaheuristics. This comparative ranking, together with the
Wilcoxon test results, supports SCATLBO as a robust and effective choice for FNN optimization in
medical image registration, offering greater accuracy and consistency than other established methods.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Decision-Making Based on Multiple Criteria</title>
        <p>
          The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is a widely used
multi-criteria decision-making (MCDM) method that facilitates the ranking and selection of optimal solutions
based on multiple evaluation standards [
          <xref ref-type="bibr" rid="ref19 ref20">19, 20</xref>
          ]. Developed in the 1980s, TOPSIS ranks options by
calculating their Euclidean distance from an ideal solution (the best possible outcome) and a
negative-ideal solution (the worst possible outcome). The option closest to the ideal and farthest from the
negative-ideal solution is assigned the highest ranking, making it the preferred choice.
        </p>
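        <p>A minimal TOPSIS sketch follows; the two-algorithm, three-criterion decision matrix, the equal weights, and the cost/benefit labels are illustrative assumptions rather than the values used in this study.</p>
        <preformat>
import numpy as np

def topsis(matrix, weights, benefit):
    # Rows: alternatives; columns: criteria. benefit[j] marks whether
    # a higher value is better for criterion j.
    m = matrix / np.linalg.norm(matrix, axis=0)     # vector-normalize
    v = m * weights                                 # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    nadir = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)       # distance to ideal
    d_neg = np.linalg.norm(v - nadir, axis=1)       # to negative-ideal
    return d_neg / (d_pos + d_neg)                  # closeness: higher ranks first

# Illustrative matrix: [accuracy (MI), convergence speed, error (MSE)].
scores = topsis(
    np.array([[0.92, 0.85, 0.011],    # SCATLBO (hypothetical)
              [0.88, 0.70, 0.018]]),  # competitor (hypothetical)
    weights=np.array([1 / 3, 1 / 3, 1 / 3]),
    benefit=np.array([True, True, False]),
)
print(scores)
        </preformat>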
        <p>In this study, TOPSIS was employed to evaluate and rank SCATLBO against other metaheuristic
algorithms based on key performance metrics such as accuracy, convergence speed, and error minimization.
By taking both ideal and worst-case scenarios into account, TOPSIS provides an objective framework to
compare SCATLBO’s effectiveness in optimizing Feed-Forward Neural Networks (FNNs) for medical
image registration tasks. This robust decision-making approach ensures that SCATLBO’s performance
is assessed comprehensively, validating it as a highly effective algorithm for complex, multi-modal
registration problems.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion and future work</title>
      <p>In this work, we introduced SCATLBO, a hybrid metaheuristic combining the Sine-Cosine Algorithm
(SCA) and Teaching-Learning-Based Optimization (TLBO), to train Feed-Forward Neural Networks
(FNNs) for medical image registration. SCATLBO was evaluated on the TCGA-BRCA dataset and
compared with five established algorithms, demonstrating superior accuracy, faster convergence, and
robustness against local minima. Its balanced exploration-exploitation strategy and opposition-based
learning contribute to its high performance in both mono-modal and multi-modal registration tasks.</p>
      <p>Future Work: SCATLBO could be extended to other neural architectures, such as CNNs, to handle
more complex imaging tasks. Additionally, future studies could explore adaptive mechanisms for
parameter tuning, as well as applications across diverse medical datasets to assess generalizability and
robustness in various medical imaging contexts.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>Thanks to the developers of ACM consolidated LaTeX styles https://github.com/borisveytsman/acmart
and to the developers of Elsevier updated LaTeX templates https://www.ctan.org/tex-archive/macros/
latex/contrib/els-cas-templates.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Haskins</surname>
          </string-name>
          , U. Kruger,
          <string-name>
            <given-names>P.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <article-title>Deep learning in medical image registration: a survey</article-title>
          ,
          <source>Machine Vision and Applications</source>
          <volume>31</volume>
          (
          <year>2020</year>
          )
          <article-title>8</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mambo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Djouani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hamam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>van Wyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Siarry</surname>
          </string-name>
          ,
          <article-title>A review on medical image registration techniques</article-title>
          ,
          <source>International Journal of Computer and Information Engineering</source>
          <volume>12</volume>
          (
          <year>2018</year>
          )
          <fpage>48</fpage>
          -
          <lpage>55</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Hertz</surname>
          </string-name>
          ,
          <article-title>Introduction to the Theory of Neural Computation</article-title>
          , CRC Press, Boca Raton,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>F. W.</given-names>
            <surname>Glover</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Kochenberger</surname>
          </string-name>
          ,
          <source>Handbook of Metaheuristics</source>
          , volume
          <volume>57</volume>
          , Springer Science &amp; Business Media, New York,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R. V.</given-names>
            <surname>Rao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. J.</given-names>
            <surname>Savsani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. P.</given-names>
            <surname>Vakharia</surname>
          </string-name>
          ,
          <article-title>Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems</article-title>
          , Computer-aided design
          <volume>43</volume>
          (
          <year>2011</year>
          )
          <fpage>303</fpage>
          -
          <lpage>315</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R. V.</given-names>
            <surname>Rao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. J.</given-names>
            <surname>Savsani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Vakharia</surname>
          </string-name>
          ,
          <article-title>Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems</article-title>
          ,
          <source>Information sciences 183</source>
          (
          <year>2012</year>
          )
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Mendes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cortez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rocha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Neves</surname>
          </string-name>
          ,
          <article-title>Particle swarms for feedforward neural network training</article-title>
          ,
          <source>in: Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No. 02CH37290)</source>
          , volume
          <volume>2</volume>
          , IEEE,
          <year>2002</year>
          , pp.
          <fpage>1895</fpage>
          -
          <lpage>1899</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Meissner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schmuker</surname>
          </string-name>
          , G. Schneider,
          <article-title>Optimized particle swarm optimization (opso) and its application to artificial neural network training</article-title>
          ,
          <source>BMC bioinformatics 7</source>
          (
          <year>2006</year>
          )
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.-T.</given-names>
            <surname>Tsai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-H.</given-names>
            <surname>Chou</surname>
          </string-name>
          , T.-K. Liu,
          <article-title>Tuning the structure and parameters of a neural network by using hybrid taguchi-genetic algorithm</article-title>
          ,
          <source>IEEE Transactions on Neural Networks</source>
          <volume>17</volume>
          (
          <year>2006</year>
          )
          <fpage>69</fpage>
          -
          <lpage>80</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C.</given-names>
            <surname>Blum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Socha</surname>
          </string-name>
          ,
          <article-title>Training feed-forward neural networks with ant colony optimization: An application to pattern classification</article-title>
          ,
          <source>in: Fifth International Conference on Hybrid Intelligent Systems (HIS'05)</source>
          , IEEE,
          <year>2005</year>
          , pp.
          <fpage>6</fpage>
          -pp.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>K.</given-names>
            <surname>Socha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Blum</surname>
          </string-name>
          ,
          <article-title>An ant colony optimization algorithm for continuous optimization: application to feed-forward neural network training</article-title>
          ,
          <source>Neural computing and applications 16</source>
          (
          <year>2007</year>
          )
          <fpage>235</fpage>
          -
          <lpage>247</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Dorigo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Maniezzo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Colorni</surname>
          </string-name>
          ,
          <article-title>Ant system: optimization by a colony of cooperating agents</article-title>
          ,
          <source>IEEE transactions on systems, man, and cybernetics, part b (cybernetics) 26</source>
          (
          <year>1996</year>
          )
          <fpage>29</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mirjalili</surname>
          </string-name>
          ,
          <article-title>How effective is the grey wolf optimizer in training multi-layer perceptrons</article-title>
          ,
          <source>Applied intelligence</source>
          <volume>43</volume>
          (
          <year>2015</year>
          )
          <fpage>150</fpage>
          -
          <lpage>161</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>N.</given-names>
            <surname>Pavlidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Tasoulis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. P.</given-names>
            <surname>Plagianakos</surname>
          </string-name>
          , G. Nikiforidis,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vrahatis</surname>
          </string-name>
          ,
          <article-title>Spiking neural network training using evolutionary algorithms</article-title>
          ,
          <source>in: Proceedings. 2005 IEEE International Joint Conference on Neural Networks</source>
          ,
          <year>2005</year>
          ., volume
          <volume>4</volume>
          , IEEE,
          <year>2005</year>
          , pp.
          <fpage>2190</fpage>
          -
          <lpage>2194</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mirjalili</surname>
          </string-name>
          ,
          <article-title>SCA: a sine cosine algorithm for solving optimization problems</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          <volume>96</volume>
          (
          <year>2016</year>
          )
          <fpage>120</fpage>
          -
          <lpage>133</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>D. E.</given-names>
            <surname>Rumelhart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. E.</given-names>
            <surname>Hinton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <article-title>Learning internal representations by error propagation</article-title>
          , in:
          <source>Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1</source>
          , eds. D. E. Rumelhart and J. L. McClelland, MIT Press, Cambridge, MA,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>K.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Vendt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Freymann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kirby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Koppel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Moore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Phillips</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Maffitt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pringle</surname>
          </string-name>
          , et al.,
          <article-title>The cancer imaging archive (tcia): maintaining and operating a public information repository</article-title>
          ,
          <source>Journal of digital imaging 26</source>
          (
          <year>2013</year>
          )
          <fpage>1045</fpage>
          -
          <lpage>1057</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>F.</given-names>
            <surname>Wilcoxon</surname>
          </string-name>
          ,
          <article-title>Individual comparisons by ranking methods</article-title>
          ,
          <source>in: Breakthroughs in Statistics: Methodology and Distribution</source>
          , Springer, New York,
          <year>1992</year>
          , pp.
          <fpage>196</fpage>
          -
          <lpage>202</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>C.-L.</given-names>
            <surname>Hwang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-J.</given-names>
            <surname>Lai</surname>
          </string-name>
          , T.-Y. Liu,
          <article-title>A new approach for multiple objective decision making</article-title>
          ,
          <source>Computers &amp; operations research 20</source>
          (
          <year>1993</year>
          )
          <fpage>889</fpage>
          -
          <lpage>899</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Panda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Jagadev</surname>
          </string-name>
          ,
          <article-title>Topsis in multi-criteria decision making: A survey</article-title>
          ,
          <source>in: 2018 2nd International Conference on Data Science and Business Analytics (ICDSBA)</source>
          , IEEE, Changsha, China,
          <year>2018</year>
          , pp.
          <fpage>51</fpage>
          -
          <lpage>54</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>