<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Improving the quality of cytological cell nuclei classification using ensembles⋆</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleh Pitsun</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Myroslav Shymchuk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>West Ukrainian National University</institution>
          ,
          <addr-line>11 Lvivska st., Ternopil, 46001</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>Biomedical image classification is a key component of any automated diagnostic system. A shortcoming of existing studies is the use of a limited number of machine learning algorithms for classification and data segmentation. The impact of image quality on classification, and especially segmentation, results is particularly noticeable when processing biomedical images. Since cell nuclei in biomedical images are difficult to process, using one or two algorithms for all types of images is insufficient. In this work, modern supervised and unsupervised machine learning algorithms are used to classify the quantitative characteristics of cell nuclei. An ensemble approach is proposed that combines several algorithms using the soft and hard voting principles. The proposed solution achieved an accuracy of 99.22% on the training sample and 83.12% on the test sample.</p>
      </abstract>
      <kwd-group>
        <kwd>CNN</kwd>
        <kwd>images</kwd>
        <kwd>parallel processing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Methods for classifying biomedical images are key to modern medical diagnostics, since
classification can increase process automation and improve the accuracy of detecting
pathological changes in cells. However, the performance of individual classification models may
decrease due to noise, artifacts, or significant variability in the data. To address these
problems, ensemble machine learning methods, such as Random Forest and Gradient Boosting, are
increasingly used.</p>
      <p>The relevance of this approach is due to the ability to combine the results of many weak models into a
single hybrid predictor, which provides increased accuracy and reliability of classification.
Using several algorithms also reduces errors and improves robustness to changes in the data.
The use of such methods in the classification of cytological, histological, and immunohistochemical
images is a promising direction for improving automatic diagnostics and decision support in
biomedical research and clinical practice.</p>
      <p>Calculating quantitative characteristics of cell nuclei in immunohistochemical, histological,
and cytological images is relevant because it enables an objective assessment of the morphological
changes accompanying pathological processes. The main parameters that describe cell nuclei are
area, perimeter, circularity, and the lengths of the major and minor axes. Automated quantitative
analysis increases the objectivity of the assessment and contributes to the development of
computer-aided decision-making systems in medical practice. Another advantage of this
approach to diagnosis is that there is no need to process large images with a large noise
component and erroneous data. Using quantitative characteristics for classification speeds up
the classification process and avoids the need for large processing resources. Hard
voting and soft voting algorithms play an important role in increasing the accuracy and reliability
of classification in machine learning tasks. They allow the results of a given set of base models
to be combined more efficiently, which in turn provides more accurate prediction.</p>
      <p>The object of the study is the quantitative characteristics of cell nuclei in biomedical images.</p>
      <p>The subject of the study is an ensemble method for classifying quantitative characteristics
that uses hard voting and soft voting.</p>
      <p>The purpose of this work is to develop an ensemble method for classifying the quantitative
characteristics of cell nuclei in biomedical images. A feature of the proposed work is the use of the PCA
algorithm, in combination with the ensemble method, to increase classification accuracy.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Problem statements</title>
      <p>The following tasks were performed in this work:
1. an analysis of machine learning algorithms for classifying quantitative characteristics of
objects was performed;
2. an analysis of approaches to calculating quantitative characteristics of cell nuclei in
biomedical images was performed;
3. a comparative analysis of machine learning algorithms using ensemble methods to obtain
the best result was performed.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Literature review</title>
      <p>
        In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], the architecture of an ensemble framework for data classification is presented, aimed at
improving classification quality. The scope of application of this framework is medicine. Many
medical datasets are imbalanced, so in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] the Undersampling Balanced Ensemble (USBE) algorithm
is proposed. As a result, the authors achieved better classification performance on two
different breast cancer datasets. Quantitative characteristics are the basic element of classification
tasks; however, in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] an approach is proposed that allows convolutional neural
networks to be used as one of the classification methods. The proposed approach improves accuracy
compared to SVM. In [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] the authors developed an ensemble method designed for classifying
medical datasets.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] the authors present a review and analysis of modern ensemble methods for forecasting.
The work also clearly highlights the problems and trends in this field. In [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], a method is proposed
that aims to minimize the subset of features to achieve a satisfactory diagnosis of a wide range of
diseases with the highest accuracy, sensitivity and specificity. The ensemble method is also used in
the analysis of medical datasets.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], the authors tested the effectiveness of the proposed ensemble learning method on nine
imbalanced medical datasets. The experimental results showed that this paradigm outperforms
other modern classification models. In [0], the authors classify the quantitative characteristics of
cell nuclei and use different approaches to classifying medical data. The authors also propose using
the existing approaches for further research in fields other than medicine. In the study [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], the raw
data set was first pre-processed, and then machine learning methods were applied, including an artificial
neural network, a decision tree, a support vector machine, naive Bayes, and k-nearest neighbors (KNN),
together with one ensemble method (which combines 30 KNN algorithms as weak learners). The prediction
result was obtained using majority voting over the ensemble outputs.
      </p>
      <p>
        The work [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] emphasizes the importance of obtaining and comparing the performance of
different types of ensemble machine learning models during electronic medical record screening.
Papers [
        <xref ref-type="bibr" rid="ref11">11-13</xref>
        ] present an analysis of modern approaches to the analysis of cell nucleus
characteristics, which allows us to highlight their features. In papers [14-18], modern techniques for
using ensemble methods to classify images of cell nuclei, in particular cytological and
histological ones, were analyzed. The analysis performed makes it possible to determine the main
algorithms for the ensembles. The application of MLOps practices for biomedical image classification is shown
in [19].
      </p>
      <p>The above analysis demonstrates the relevance of the application of ensemble methods in
medicine, in particular in the tasks of classifying data in the form of quantitative characteristics of
cell nuclei.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Materials and methods</title>
      <sec id="sec-4-1">
        <title>Calculation of quantitative characteristics of cell nuclei</title>
        <p>To obtain information about the quantitative characteristics of cell nuclei, the images were
processed using preprocessing and segmentation. The main characteristics of the cells are:
1. area;
2. perimeter;
3. length of the major and minor axes.</p>
        <p>After the calculations are performed, the results are stored in a csv file for further processing by
the classifier.</p>
        <p>Classification Algorithms</p>
        <p>Ensemble learning helps improve the performance of a machine learning model by combining
multiple models. This approach allows for better prediction performance compared to a single
model. The main causes of training errors are noise, bias, and variance. Ensemble helps minimize
these factors.</p>
        <p>Bagging is the application of the Bootstrap procedure to a high-variance machine learning
algorithm, typically a decision tree. Boosting is a sequential process where each subsequent model
tries to correct the errors of the previous model. Subsequent models depend on the previous model.</p>
        <p>Bagging and Boosting reduce the variance of a single estimate because they combine multiple
estimates from different models.</p>
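        <p>As an illustrative sketch (not code from this study), the contrast between bagging and boosting can be shown with scikit-learn on a synthetic feature table; the dataset and hyperparameters below are assumptions:</p>

```python
# Illustrative comparison of bagging vs. boosting on synthetic tabular data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a table of quantitative nucleus features.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagging: bootstrap resampling of a high-variance base learner (a tree).
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            random_state=0).fit(X_tr, y_tr)

# Boosting: each subsequent model corrects the errors of the previous ones.
boosting = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

acc_bag = bagging.score(X_te, y_te)
acc_boost = boosting.score(X_te, y_te)
```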
        <p>Logistic Regression</p>
        <p>The essence of regression analysis is to determine, analytically or experimentally, the coefficients
of the object features so as to minimize the total error between the values of the
model function and the experimental data over the entire input sample.</p>
        <p>
          Logistic regression is a type of multiple regression used to study the relationship between a
binary or categorical outcome and several influencing factors. The paper [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] demonstrates the basic
principle, the choice and role of the independent variable, the conditions of application, the model
estimation and diagnostics of multiple logistic regression. The goal of regression is to determine
whether an object belongs to one of two classes, where a set of object features represents the input
variable, and the output variable (analysis result) is binary and takes the values 0 or 1. The
advantages of logistic regression include:
1. Ease of implementation;
2. High efficiency of working with large data sets;
3. Relatively high quality of working with unbalanced data.
        </p>
        <p>
          The objective function of linear regression is defined as the minimization of the mean square
error between the predicted and actual values of the observed parameter [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]:
        </p>
        <p>F = (1/n) ∑ᵢ₌₁ⁿ ( ŷᵢ − yᵢ )² → min, (1)
where ŷᵢ is the predicted (analytical) value, linear with respect to the desired coefficients;
yᵢ is the actual value;
n is the data sample length.</p>
        <p>General linear regression model:</p>
        <p>ŷ = w₀ + w₁x₁ + w₂x₂ + … + wₘxₘ = ∑ᵢ₌₀ᵐ wᵢxᵢ = WᵀX ∈ ℝ, (2)
where Wᵀ = ( w₀, w₁, …, wₘ ) are the weights of the object features, wᵢ ∈ ℝ;
X = ( 1, x₁, x₂, …, xₘ ) are the object predictors;
m is the number of features of the object; ᵀ is the transposition symbol.</p>
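        <p>The least-squares objective and linear model above can be checked numerically. A minimal numpy sketch with hypothetical data (not the study's dataset; the coefficients and noise level are assumptions):</p>

```python
# Minimal numpy sketch of the linear model y_hat = W^T X and the MSE
# objective F = (1/n) * sum((y_hat_i - y_i)^2); data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 100                                              # m features, n observations
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, m))])  # X = (1, x1, ..., xm)
w_true = np.array([1.0, 2.0, -1.0, 0.5])                   # hypothetical true weights
y = X @ w_true + 0.01 * rng.normal(size=n)                 # observations with small noise

# Least-squares estimate of W minimizes the objective F.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ w_hat            # predicted (analytical) values
F = np.mean((y_hat - y) ** 2)
```

With low noise, the recovered weights are close to the true ones and F is near zero.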
        <p>Random Forest</p>
        <p>
          Random forest is an ensemble machine learning method used to solve classification, regression,
and other types of prediction problems. Its working principle is to create a set of decision trees
during model training, after which a class is selected for classification based on the majority of
votes. For regression, the average value of the predictions of all trees is calculated. The algorithm is
given in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>Algorithm for building a committee:</p>
        <p>1. Generate a random subsample of size n, with repetition, from the training sample;
2. Randomly select m predictors (features) from the M available;
3. Construct the decision tree by choosing the features on which each partition is performed not from all M features, but only from the m randomly selected ones;
4. Divide feature X into two classes, Xᵢ ≥ Sᵢ and Xᵢ &lt; Sᵢ;
5. Measure the homogeneity of the two new classes using the Gini criterion;
6. Take the value of the "split point" Sᵢ of feature X for which maximum class homogeneity is achieved;
7. Build the tree until the subsample is fully used, without applying the pruning procedure;
8. Return to step 1, generate a new sample, and repeat the procedure to build the next tree.</p>
        <p>Classification of objects is carried out by voting: each tree in the ensemble assigns the object to
one of the classes, and the class that receives the greatest number of votes from the trees is
chosen.</p>
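        <p>The committee-building procedure above maps onto scikit-learn's RandomForestClassifier. A hedged sketch on synthetic data (the dataset and hyperparameters are assumptions, not the study's configuration):</p>

```python
# Sketch of the committee-voting idea with a random forest: bootstrap
# sampling, per-split random feature selection, and Gini homogeneity.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=1)
forest = RandomForestClassifier(
    n_estimators=100,      # number of trees in the committee
    max_features="sqrt",   # m features tried at each split, out of M
    criterion="gini",      # homogeneity measured by the Gini criterion
    random_state=1,
).fit(X, y)

# Each tree votes; the predicted class is the majority vote.
votes = [tree.predict(X[:1]) for tree in forest.estimators_]
pred = forest.predict(X[:1])
```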
        <p>Gradient Boosting</p>
        <p>Gradient Boosting is a machine learning technique used to solve classification and regression
problems. The basic idea is that a collection of weak models working together can produce a more
accurate predictor. Gradient boosting combines several weak models to form a single strong
model. Although fast-learning algorithms can be difficult to optimize, their accuracy is
significantly improved by sequential combination, while models with a slower learning curve
adapt better to the statistical patterns in the data. Weak learners are added in such a way that each
subsequent learner takes into account the residuals obtained at the previous stage of model
development. The final model combines the results of all stages, forming a strong learner.
The residuals are calculated using a loss function.</p>
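        <p>The residual-fitting loop described above can be sketched by hand with shallow regression trees as weak learners. This is an illustrative toy on synthetic data, not the authors' configuration; the learning rate and tree depth are assumptions:</p>

```python
# Hand-rolled sketch of gradient boosting for regression: each new weak
# learner is fitted to the residuals of the current ensemble.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)   # noisy target

lr = 0.1                            # learning rate (assumed)
pred = np.full_like(y, y.mean())    # stage 0: constant model
learners = []
for _ in range(100):
    residual = y - pred                              # errors of the current ensemble
    stump = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    pred += lr * stump.predict(X)                    # next learner corrects residuals
    learners.append(stump)

mse = np.mean((y - pred) ** 2)      # loss of the combined strong learner
```

The loss drops far below that of the initial constant model as stages accumulate.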
      </sec>
      <sec id="sec-4-2">
        <title>Hard voting</title>
        <p>Hard voting is a simple algorithm for combining the predictions of multiple classifiers.
Each classifier first makes a prediction; the ensemble prediction is then
simply the majority vote of the individual classifiers.</p>
        <p>The simplicity of hard voting makes it a popular choice among machine learning practitioners. Hard
voting is also very efficient and can often outperform other ensemble learning algorithms. Finally,
hard voting can be used with any classifier, making it a versatile tool. However, hard voting can be less
robust to noise in the data, because it relies on the majority vote of the
individual classifiers: if the data is noisy, the majority vote is more likely to be wrong.</p>
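        <p>The majority-vote rule can be sketched in a few lines (the labels below are hypothetical, purely illustrative):</p>

```python
# Minimal sketch of hard voting: the ensemble label is the mode of the
# individual classifier outputs.
from collections import Counter

def hard_vote(labels):
    """Return the majority label among classifier predictions."""
    return Counter(labels).most_common(1)[0][0]

# Three classifiers predict: two say class 0, one says class 1.
y_hat = hard_vote([0, 0, 1])   # majority vote gives class 0
```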
        <p>The class label ŷ is predicted by the majority vote of each classifier Cⱼ:</p>
        <p>ŷ = mode { C₁(x), C₂(x), …, Cₘ(x) } (3)</p>
        <p>For example, if two classifiers predicted class 0 and one classifier predicted class 1:</p>
        <p>ŷ = mode { 0, 0, 1 } = 0 (4)</p>
        <p>In addition to the simple majority vote described above, a weighted majority vote can be
calculated by assigning a weight to each classifier:</p>
        <p>ŷ = arg maxᵢ ∑ⱼ₌₁ᵐ wⱼ χA( Cⱼ(x) = i ), (5)
where wⱼ is the weight assigned to the jth classifier, χA is the characteristic function
[ Cⱼ(x) = i ∈ A ], and A is the set of unique class labels.</p>
        <p>Soft Voting</p>
        <p>Soft voting is an algorithm that combines the predictions of multiple classifiers
based on their class probabilities. Each classifier assigns a probability to each class, and the
ensemble prediction is simply the class with the highest overall likelihood.</p>
        <p>Soft voting has several advantages over hard voting. First, it is often more accurate than hard voting.
Second, it is more robust to noise in the data. Third, it can be used with any classifier
that outputs class probabilities.</p>
        <p>Using uniform weights, the average probabilities are calculated:</p>
        <p>ŷ = arg maxᵢ ∑ⱼ₌₁ᵐ wⱼ pᵢⱼ (6)</p>
        <p>For two classes i₀ and i₁ this gives:
ŷ = arg max [ p(i₀ | X), p(i₁ | X) ] (7)</p>
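        <p>Both voting schemes are available in scikit-learn's VotingClassifier. A hedged sketch with hypothetical base models (the paper's exact ensemble composition may differ):</p>

```python
# Hard voting takes the majority class; soft voting averages the predicted
# class probabilities p_ij across classifiers and picks the arg max.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=6, random_state=3)
base = [("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=3)),
        ("dt", DecisionTreeClassifier(random_state=3))]

hard = VotingClassifier(base, voting="hard").fit(X, y)
soft = VotingClassifier(base, voting="soft").fit(X, y)  # needs predict_proba

acc_hard = hard.score(X, y)
acc_soft = soft.score(X, y)
```

Weights for the weighted majority vote of Eq. (5) can be passed via the `weights` parameter.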
        <p>A statistical description of the parameters of cytological examination of quantitative
characteristics of cell nuclei is given in Figure 3.</p>
        <p>The correlation matrix of the input data is shown in Figure 4.
The results of the classifiers without PCA are given in Table 2.</p>
        <p>The correlation between classes after PCA is shown in Figure 5.
The above description allows us to evaluate the input data better and understand the range of
values for each of the parameters after PCA processing.</p>
        <p>The correlation matrix of the input data after PCA is shown in Figure 6.</p>
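        <p>The PCA reduction step discussed above can be sketched with scikit-learn; the variance threshold and synthetic features here are assumptions, not values from the paper:</p>

```python
# Sketch of PCA-based dimensionality reduction before classification.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=8, random_state=4)

# Standardize, then keep enough components to explain 95% of the variance.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95).fit(X_std)
X_reduced = pca.transform(X_std)

n_kept = pca.n_components_   # components retained after reduction
```

The reduced features (decorrelated principal components) are then fed to the ensemble classifier.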
        <p>The classification results after using data reduction are shown in Table 3; the compared
classifiers include the hard voting ensemble, the decision tree, and SVM.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
      <p>As a result of the comparative analysis, it was found that the ensemble method based on the
Random Forest and Bagging classifier algorithms showed the best results compared to other
classical data classification algorithms.</p>
      <p>Based on the experimental approach, it was also found that the use of data reduction
significantly increases the classification accuracy.</p>
      <p>The classification accuracy using ensembles is 99.22% for the training sample and 83.12% for
the test sample. The worst results were shown by the approaches based on neural networks, the
support vector machine, and AdaBoost.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Firoz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Balusupati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Syed</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Ashraf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ramasamy</surname>
          </string-name>
          .
          <article-title>An Efficient, Ensemble-Based Classification Framework for Big Medical Data - Big Data</article-title>
          - Vol.
          <volume>10</volume>
          , No.
          <fpage>2</fpage>
          - 2022 https://doi.org/10.1089/big.
          <year>2021</year>
          .0132
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Krawczyk</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Schaefer</surname>
          </string-name>
          ,
          <article-title>"Ensemble fusion methods for medical data classification," 11th Symposium on Neural Network Applications in Electrical Engineering</article-title>
          , Belgrade, Serbia,
          <year>2012</year>
          , pp.
          <fpage>143</fpage>
          -
          <lpage>146</lpage>
          , doi: 10.1109/NEUREL.
          <year>2012</year>
          .6419993
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Nanni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Brahnam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Loreggia</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Barcellona</surname>
          </string-name>
          .
          <year>2023</year>
          .
          <article-title>"Heterogeneous Ensemble for Medical Data Classification" Analytics 2</article-title>
          , no.
          <volume>3</volume>
          :
          <fpage>676</fpage>
          -
          <lpage>693</lpage>
          . https://doi.org/10.3390/analytics2030037
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.R.</given-names>
            <surname>Namamula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chaytor</surname>
          </string-name>
          ,
          <article-title>Effective ensemble learning approach for large-scale medical data analytics</article-title>
          .
          <source>Int J Syst Assur Eng Manag</source>
          <volume>15</volume>
          ,
          <fpage>13</fpage>
          -
          <lpage>20</lpage>
          (
          <year>2024</year>
          ). https://doi.org/10.1007/s13198-021- 01552-7
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>O.</given-names>
            <surname>Sagi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Rokach</surname>
          </string-name>
          .
          <article-title>"Ensemble learning: A survey." Wiley interdisciplinary reviews: data mining and knowledge discovery 8</article-title>
          , no.
          <issue>4</issue>
          (
          <year>2018</year>
          )
          <article-title>: e1249</article-title>
          . https://doi.org/10.1002/widm.1249
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Al-Tashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Rais</surname>
          </string-name>
          and
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Abdulkadir</surname>
          </string-name>
          ,
          <article-title>"Hybrid Swarm Intelligence Algorithms with Ensemble Machine Learning for Medical Diagnosis,"</article-title>
          <source>2018 4th International Conference on Computer and Information Sciences (ICCOINS)</source>
          ,
          <source>Kuala Lumpur, Malaysia</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          , doi: 10.1109/ICCOINS.
          <year>2018</year>
          .
          <volume>8510615</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>N.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <article-title>"A Novel Ensemble Learning Paradigm for Medical Diagnosis With Imbalanced Data,"</article-title>
          <source>in IEEE Access</source>
          , vol.
          <volume>8</volume>
          , pp.
          <fpage>171263</fpage>
          -
          <lpage>171280</lpage>
          ,
          <year>2020</year>
          , doi: 10.1109/ACCESS.
          <year>2020</year>
          .3014362
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Sidey-Gibbons</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sidey-Gibbons</surname>
          </string-name>
          ,
          <article-title>Machine learning in medicine: a practical introduction</article-title>
          .
          <source>BMC Med Res Methodol</source>
          <volume>19</volume>
          ,
          <issue>64</issue>
          (
          <year>2019</year>
          ). https://doi.org/10.1186/s12874-019-0681-4
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Asghari Varzaneh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Shanbehzadeh</surname>
          </string-name>
          &amp; H.
          <string-name>
            <surname>Kazemi-Arpanahi</surname>
          </string-name>
          .
          <article-title>Prediction of successful aging using ensemble machine learning algorithms</article-title>
          .
          <source>BMC Med Inform Decis Mak</source>
          <volume>22</volume>
          ,
          <issue>258</issue>
          (
          <year>2022</year>
          ). https://doi.org/10.1186/s12911-022-02001-6
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C.</given-names>
            <surname>Stevens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lyons</surname>
          </string-name>
          ,
          <string-name>
            <surname>K. Dharmayat.</surname>
          </string-name>
          <article-title>Ensemble machine learning methods in screening electronic health records: A scoping review</article-title>
          .
          <source>DIGITAL HEALTH</source>
          .
          <year>2023</year>
          ;
          <article-title>9</article-title>
          . doi:
          <volume>10</volume>
          .1177/20552076231173225
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Aghera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. V.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Vaishnani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Oza</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Gohel</surname>
          </string-name>
          ,
          <article-title>"Segmentation of Nuclei in H&amp;E-Stained Histological Images using Deep Learning Framework: A Perspective on Ensemble Approach and Nuclei Count,"</article-title>
          2023 IEEE 11th Region 10 Humanitarian Technology Conference (R10-HTC), Rajkot, India, 2023, pp. 462-467, doi: 10.1109/R10-HTC57504.2023.10461806.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] O. Berezsky, O. Pitsun, T. Datsko, I. Tsmots, V. Teslyuk. Specified diagnosis of breast cancer on the basis of immunohistochemical images analysis. CEUR Workshop Proceedings, 2020, vol. 2753, pp. 129–135. https://ceur-ws.org/Vol-2753/short5.pdf</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] R. Saha, M. Bajger and G. Lee, "Prior Guided Segmentation and Nuclei Feature Based Abnormality Detection in Cervical Cells," 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE), Athens, Greece, 2019, pp. 742-746, doi: 10.1109/BIBE.2019.00139.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] A. B. Silva et al., "CNN Ensembles for Nuclei Segmentation on Histological Images of OED," 2023 IEEE 36th International Symposium on Computer-Based Medical Systems (CBMS), L'Aquila, Italy, 2023, pp. 601-604, doi: 10.1109/CBMS58004.2023.00286.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] A. B. Silva et al., "CNN Ensembles for Nuclei Instance Segmentation in OED Histological Images," 2025 IEEE 38th International Symposium on Computer-Based Medical Systems (CBMS), Madrid, Spain, 2025, pp. 369-374, doi: 10.1109/CBMS65348.2025.00082.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] P. Das, R. Sharma, S. Dey Roy, N. Nath and M. K. Bhowmik, "Ensemble Segmentation of Nucleus Regions from Histopathological Images towards Breast Abnormality Detection," 2022 25th International Conference on Computer and Information Technology (ICCIT), Cox's Bazar, Bangladesh, 2022, pp. 1137-1142, doi: 10.1109/ICCIT57492.2022.10055451.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] M. Kadaskar and N. Patil, "Nuclei Classification in Histopathology Images Using Fuzzy Ensemble of Convolutional Neural Networks," 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), Delhi, India, 2023, pp. 16, doi: 10.1109/ICCCNT56998.2023.10308315.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] O. Berezsky, O. Pitsun, N. Batryn, K. Berezska, L. Dubchak. Modern automated microscopy systems in oncology. CEUR Workshop Proceedings, 2018, vol. 2255, pp. 311–325. https://ceur-ws.org/Vol-2255/paper28.pdf</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] O. Berezsky, O. Pitsun, G. Melnyk, B. Derysh, P. Liashchynskyi. Application of MLOps Practices for Biomedical Image Classification. CEUR Workshop Proceedings, 2022, vol. 3302, pp. 69–77. https://ceur-ws.org/Vol-3302/short3.pdf</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>