<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>AI-Powered Platform for Comprehensive Diabetes Management</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yousra Beldjebel</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Meftah Zouai</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ahmed Aloui</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ilyes Naidji</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, Mohamed Khider University</institution>
          ,
          <addr-line>Biskra</addr-line>
          ,
          <country country="DZ">Algeria</country>
        </aff>
      </contrib-group>
      <abstract>
<p>This paper introduces an innovative AI-powered platform designed to enhance comprehensive diabetes management. The platform leverages advanced machine learning (ML) and deep learning (DL) algorithms to significantly improve the processes of diagnosis, continuous monitoring, and overall patient care. By utilizing a substantial dataset obtained from a Taipei Municipal medical center, the platform integrates a range of AI techniques, such as Logistic Regression, Decision Trees, Random Forest, K-Nearest Neighbor (KNN), Support Vector Machines (SVM), and Long Short-Term Memory (LSTM) networks. These algorithms work in tandem to provide accurate predictions and personalized insights into patient health. Key pre-processing steps ensure high data quality, including handling missing values, assessing the relevance of attributes, and balancing the dataset using the Synthetic Minority Over-sampling Technique (SMOTE). These measures enhance the robustness of the models, resulting in improved prediction accuracy and model performance. Notably, the Random Forest model emerged as a standout performer, achieving an impressive accuracy rate of 92.78%, significantly advancing the accuracy, sensitivity, and specificity of diabetes prediction. The platform is built with a scalable software architecture, complemented by an intuitive user interface that caters to a variety of clinical applications, making it a valuable tool for healthcare providers. This study highlights the transformative potential of AI in revolutionizing diabetes care, empowering clinicians to make informed decisions, and creating personalized treatment plans. Future research aims to expand the diversity of datasets, further refine the AI models, and incorporate real-time patient feedback to optimize the platform's effectiveness.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>Diabetes mellitus is a chronic metabolic disorder characterized by elevated blood sugar levels which, if inadequately managed, can result in severe health complications such as cardiovascular disease, neuropathy, nephropathy, and retinopathy [<xref ref-type="bibr" rid="ref1">1</xref>]. The prevalence of diabetes is steadily increasing worldwide, posing a significant public health challenge. According to the International Diabetes Federation, approximately 463 million adults were living with diabetes in 2019, with this number projected to rise to 700 million by 2045. This growing burden necessitates innovative approaches to improve the diagnosis, monitoring, and management of diabetes [<xref ref-type="bibr" rid="ref2">2</xref>].</p>
      <p>The management of diabetes involves multiple components, including early diagnosis, continuous monitoring of blood glucose levels, lifestyle modifications, and personalized treatment regimens [<xref ref-type="bibr" rid="ref3">3</xref>]. Traditional methods of diabetes management rely heavily on manual monitoring and periodic clinical visits, which can be cumbersome and less effective in providing real-time feedback [<xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>]. The advent of artificial intelligence (AI) and machine learning (ML) technologies [<xref ref-type="bibr" rid="ref7 ref8 ref9">7, 8, 9</xref>] has opened new avenues for improving diabetes care by enabling more accurate predictions, continuous monitoring, and personalized treatment strategies.</p>
      <p>AI technologies, particularly ML and deep learning (DL) [<xref ref-type="bibr" rid="ref10">10</xref>], have shown immense potential in revolutionizing healthcare. These technologies can analyze large datasets to uncover hidden patterns, predict outcomes, and provide actionable insights. In the context of diabetes management, AI can enhance various aspects such as early diagnosis through predictive modeling, real-time monitoring using wearable devices, and personalized treatment plans based on patient-specific data [<xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>].</p>
      <p>Recent studies have demonstrated the effectiveness of AI in diabetes diagnosis and monitoring. For instance, ML algorithms have been used to analyze patient data and predict the onset of diabetes with high accuracy. DL models, such as convolutional neural networks (CNNs) [<xref ref-type="bibr" rid="ref13">13</xref>] and long short-term memory (LSTM) networks [<xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>], have been applied to continuous glucose monitoring systems to provide real-time predictions of blood glucose levels. These advancements highlight the potential of AI to improve clinical outcomes and patient quality of life [<xref ref-type="bibr" rid="ref16">16, 17</xref>].</p>
      <p>This paper aims to present a comprehensive AI-powered platform for diabetes management that integrates various ML and DL algorithms to enhance diagnosis, monitoring, and overall management. The specific objectives of the study are as follows:</p>
      <p>1. Diagnosis: To evaluate and compare the performance of different ML algorithms in predicting diabetes using a comprehensive dataset.</p>
      <p>2. Monitoring: To develop and assess DL models for real-time blood glucose level prediction.</p>
      <p>3. Platform Development: To design a scalable and user-friendly software architecture that integrates the AI models and supports clinical application.</p>
      <p>4. Evaluation: To analyze the empirical findings in terms of accuracy, sensitivity, and specificity, and to discuss the implications for diabetes care and future research directions.</p>
      <p>The subsequent sections of this paper provide a detailed literature review, describe the methodology used in developing the platform, present the results and findings, explore the technical aspects of the software architecture, discuss the implications and future research directions, and conclude with the key takeaways from the study.</p>
      <p>SYSTEM 2025: 11th Sapienza Yearly Symposium of Technology, Engineering and Mathematics. Rome, June 4-6, 2025. Contact: beldjebelyousra@gmail.com (Y. Beldjebel); meftah.zouai@univ-biskra.dz (M. Zouai); a.aloui@univ-biskra.dz (A. Aloui); ilyes.naidji@univ-biskra.dz (I. Naidji). © 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p><bold>2. Literature Review</bold></p>
      <p>This section provides a brief overview of related work in the field of AI-driven diabetes management.</p>
      <p><bold>2.1. Diagnosis of diabetes</bold></p>
      <p>Chandrashekar D. K. et al. [<xref ref-type="bibr" rid="ref1">1</xref>] conducted a study on the prediction of gestational diabetes utilizing the PIMA Indian dataset from the UCI Machine Learning Repository, which comprises 8 features. The objective of the research was to evaluate the efficacy of several machine learning algorithms in predicting the onset of gestational diabetes in female patients. The algorithms tested included Naive Bayes (NB), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), K-Means Clustering, Artificial Neural Networks (ANN), and Random Forest (RF).</p>
      <p>The study reported varying degrees of accuracy for each algorithm. The Artificial Neural Network (ANN) achieved an accuracy of 72%, Support Vector Machine (SVM) attained 79%, K-Means Clustering and K-Nearest Neighbors (KNN) both reached 77%, Random Forest (RF) showed 80%, and Naive Bayes (NB) achieved the highest accuracy at 82%. This research highlights the significant potential of machine learning techniques in improving the prediction and early detection of gestational diabetes, offering valuable insights for developing more efficient diagnostic tools.</p>
      <p>Thotad et al. [<xref ref-type="bibr" rid="ref2">2</xref>], in their study, analyze machine learning-based classifiers to diagnose diabetes in India using data from the Indian Demographic and Health Survey (2019-21). The study demonstrates that the Random Forest algorithm achieved remarkable accuracy, with a classification accuracy of 95.35% after Principal Component Analysis (PCA) and 96.5% before PCA. Prior to using PCA, XGBoost achieved 95.33% accuracy, while SVM (RBF) obtained 74.83%. After applying PCA, SVM (RBF) maintained an accuracy of 74.14%, and XGBoost's accuracy slightly decreased to 93.33%. These findings indicate the reliable performance of the Random Forest model in diagnosing diabetes.</p>
      <p>Navya Pratyusha Miriyala et al. [18] suggested a diagnostic analysis of diabetes mellitus using a machine learning approach. The study utilized the Pima Indians Diabetes Dataset (PIDD) to train six different machine learning (ML) algorithms, including Naïve Bayes, KNN, Random Forest, Logistic Regression, Decision Tree, and eXtreme Gradient Boosting (XGBoost). According to the observed experimental data, the Decision Tree algorithm delivered an accuracy of 85.3%, while XGBoost provided the best accuracy at 88.2%. The study suggests that future work could focus on handling the sampling strategy to balance the data, as there is a slight imbalance present.</p>
      <p>Jobeda JK et al. [<xref ref-type="bibr" rid="ref11">11</xref>] suggested a comparison of machine learning algorithms for diabetes prediction using the Pima Indian Diabetes (PID) dataset, which contains data on 768 patients. They used seven different machine learning algorithms, including Decision Tree (DT), K-Nearest Neighbors (KNN), Random Forest (RF), Naïve Bayes (NB), AdaBoost (AB), Logistic Regression (LR), and Support Vector Machine (SVM). Every model offered an accuracy of at least 70%, with LR and SVM providing approximately 77-78% accuracy for both train/test split and K-fold cross-validation methods. Additionally, they tested a neural network (NN) model with varying hidden layers (1, 2, 3) and epochs (200, 400, 800). The best accuracy, achieved by the NN with two hidden layers and 400 epochs, was 88.6%.</p>
      <p>Sireesha et al. [<xref ref-type="bibr" rid="ref3">3</xref>] proposed implementing a model to detect diabetes using machine learning classifiers to achieve high accuracy with the Pima Indian Diabetes Dataset. They applied several classification algorithms, including K-Nearest Neighbor (KNN), Decision Tree (DT), Random Forest (RF), AdaBoost, Naive Bayes, and XGBoost. The results showed that the Decision Tree Classifier achieved 85.2% accuracy, the XGBoost Classifier 88.8%, the KNN Classifier 86.2%, the Random Forest Classifier 88.1%, the AdaBoost Classifier 87.7%, and the Naive Bayes Classifier 80.7%. Consequently, the study concluded that the XGBoost Classifier is the best among all the classifiers mentioned.</p>
      <p>Zhu et al. [19] recently conducted a comprehensive review of how deep learning is being utilized in diabetes care. They categorized their findings into three main areas: diagnosing diabetes, monitoring blood sugar levels, and identifying complications associated with the disease. The review included 40 studies that compared deep learning models with traditional machine learning methods, and found that deep learning models generally outperformed the traditional approaches.</p>
      <p>The review also examined how continuous glucose monitoring and artificial pancreas devices could aid in diabetes management. However, it highlighted the challenges of dealing with significant fluctuations in blood sugar levels and maintaining them within target ranges. The authors discussed various deep learning architectures, such as Deep Multilayer Perceptrons (DMLPs), Convolutional Neural Networks (CNNs) [20, 21], and Recurrent Neural Networks (RNNs), that have been used in diabetes research. They noted that these models excel at handling complex data but face issues such as limited data for training and the interpretability of their predictions.</p>
      <p>The authors concluded that future advancements in deep learning have the potential to significantly improve diabetes management strategies.</p>
      <p>Rahman et al. [<xref ref-type="bibr" rid="ref7">7</xref>] introduced an innovative method for detecting diabetes using a Convolutional Long Short-Term Memory (Conv-LSTM) model. This study was the first to apply this type of model for diabetes detection.</p>
      <p>The researchers utilized the Pima Indians Diabetes Database (PIDD) to test their Conv-LSTM model against three other well-known models: CNN-LSTM, Traditional LSTM (T-LSTM), and Convolutional Neural Network (CNN) [22]. They employed the Boruta method to identify the most significant features in the data, such as age, blood pressure, insulin, glucose, and BMI. The Conv-LSTM model achieved the highest performance with an accuracy of 97.26% when tested with cross-validation, outperforming the other models and previous techniques.</p>
      <p>The study underscores the importance of using advanced methods and feature selection techniques for diabetes prediction. The Conv-LSTM model addressed several issues inherent in other LSTM models, such as the vanishing gradient problem and challenges related to temporal data changes.</p>
      <p>Swapna G. et al. [23] present a methodology for the classification of diabetic and normal HRV signals using deep learning architectures. They employed a combination of convolutional neural networks (CNN) and long short-term memory (LSTM) networks applied to HRV data, achieving an accuracy of 95.1%. The study further improves upon this methodology by incorporating a support vector machine (SVM) for classification, which increased the accuracy to 95.7%.</p>
      <sec id="sec-1-1">
        <title>2.2. Monitoring of diabetes</title>
        <p>Rabbi et al. [24] performed a groundbreaking study on blood glucose prediction, employing a deep recurrent neural network (RNN) model coupled with long short-term memory (LSTM) stacking-based Kalman smoothing to address sensor failures. The goal of this method was to establish ground truth by comparing finger-prick blood glucose readings with expected continuous glucose monitoring (CGM) values. To evaluate the model, their study utilized the OhioT1DM dataset, which includes eight weeks of data from five T1D patients. The proposed method outperformed previous approaches, achieving root mean squared errors (RMSE) of 6.45 mg/dL and 17.24 mg/dL for the 30-minute and 60-minute prediction horizons, respectively.</p>
        <p>Hatice Vildan Dudukcu et al. [25] proposed a method for blood glucose prediction using deep neural networks with weighted decision-level fusion, leveraging patients' past BG data to address the challenge of accurately forecasting BG levels for diabetic patients. The authors employed three neural network architectures: Long Short-Term Memory (LSTM), WaveNet, and Gated Recurrent Units (GRU), and combined these models to enhance prediction accuracy by fusing their outputs. The study utilized the OhioT1DM dataset, which includes blood glucose history from 12 diabetic patients, and evaluated the performance of the models over 30-, 45-, and 60-minute prediction horizons. The results demonstrated that the fusion of the three models yielded the best results for short-term blood glucose prediction, with RMSE values of 21.90 mg/dL for 30 minutes, 29.12 mg/dL for 45 minutes, and 35.10 mg/dL for 60 minutes.</p>
        <p>Martinsson et al. [<xref ref-type="bibr" rid="ref16">16</xref>] employed long short-term memory (LSTM) networks, a variant of RNNs that effectively capture temporal dependencies in sequential data. Their model processes historical blood glucose measurements to predict future levels, requiring no additional feature engineering or complex data preprocessing. The study demonstrated that the model performs comparably to state-of-the-art methods on the OhioT1DM dataset, using metrics such as root-mean-squared error (RMSE) and the blood glucose-specific surveillance error grid (SEG) to evaluate performance. Furthermore, by incorporating a variance estimation method, the model generates a confidence measure in the form of a univariate Gaussian distribution for every prediction. This feature enhances the interpretability and reliability of the forecasts, allowing users to know when to exercise caution based on predicted accuracy. Because this method is computationally efficient, it can be used on devices with low computing capacity, such as cell phones and CGM devices.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>3. Methodology</title>
      <sec id="sec-2-1">
        <title>To develop a comprehensive diabetes management platform, we used a combination of AI techniques.</title>
        <sec id="sec-2-1-1">
          <title>3.1. Diagnosis of diabetes</title>
          <p>In the diagnosis of diabetes we used several machine learning and deep learning algorithms, as follows.</p>
          <p><bold>3.1.1. Logistic Regression (LR)</bold> LR models originated in the field of statistics. The approach has been adapted to problem statements involving binary classification. The primary goal of LR is to determine the coefficient values; the output is mapped to the range 0-1 by the logistic function, and the LR model then decides whether to predict a given data instance as class 0 or class 1. This method can be used to solve problems where there are several plausible explanatory variables for our predictions. The standard function of LR is shown in Equation 1 [26]:</p>
          <p>h_θ(x) = 1 / (1 + e^(−(θ_0 + θ_1·x)))  (1)</p>
          <p>Equation 1 represents the logistic decision for the projected data; θ_0 and θ_1 are the model constants for the data label y.</p>
          <p><bold>3.1.2. Decision Trees (DTs)</bold> DTs form a tree structure by determining thresholds for input features. The classifier creates judgment rules to forecast the target class or value [27].</p>
          <p><bold>3.1.3. Random Forest (RF)</bold> RF is a supervised learning method. The RF classifier consists of many decision trees built on different subsets of the provided dataset. To boost forecast accuracy, the algorithm averages the predictions over the subsets from each tree. Instead of depending on a single decision tree, RF uses the majority vote from all trees to forecast the result. Each node in a decision tree answers a query about the data [26].</p>
          <p><bold>3.1.4. K-Nearest Neighbor (K-NN)</bold> KNN is a popular machine learning algorithm that uses the supervised learning approach. According to Brownlee (2016b), K-NN is commonly used for regression and classification. The K-NN method compares the similarities between new and existing cases; the new case is allocated to the most similar category among the available possibilities [28].</p>
          <p><bold>3.1.5. Support Vector Machines (SVM)</bold> SVMs are non-parametric algorithms that solve regression and classification problems with linear and non-linear functions. These functions map input feature vectors into an n-dimensional space known as the feature space [27].</p>
          <p><bold>3.1.6. Artificial Neural Networks (ANN)</bold> ANNs mimic biological neural networks by connecting their artificial neurons in a manner akin to that of the brain. The brain, or neural network, is made up of connections between cells known as neurons [<xref ref-type="bibr" rid="ref17">29</xref>]. Information enters a biological neuron through its dendrites, is processed by the soma, and is then transferred via an axon [<xref ref-type="bibr" rid="ref18">30</xref>]. Artificial neurons, by contrast, are simply mathematical models (functions). This model comprises three simple sets of rules: multiplication, summation, and activation. Artificial neuron inputs are weighted, each input value being multiplied by its individual weight. The sum function in the middle of the artificial neuron adds all weighted inputs and the bias. At the exit of the artificial neuron, this total is passed through the activation function, also known as the transfer function.</p>
          <p><bold>3.1.7. Convolutional Neural Networks (CNN)</bold> CNNs are a powerful class of deep learning models that are widely applied to various tasks, including object detection, speech recognition, computer vision, image classification, and bioinformatics. A CNN is a feed-forward neural network that leverages convolutional structures to extract features from data. Unlike traditional methods, a CNN automatically learns and recognizes features in data without the need for manual feature extraction by humans. The design of CNNs is inspired by visual perception. The main components of a CNN include a convolutional layer, a pooling layer, and a fully connected layer [<xref ref-type="bibr" rid="ref19">31</xref>].</p>
        </sec>
      </sec>
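The five classical classifiers described above can be sketched with scikit-learn. This is a minimal illustration, not the authors' code: the synthetic dataset below stands in for the non-public Taipei outpatient data, and all hyperparameters are illustrative defaults.

```python
# Sketch: training the five classical diagnosis classifiers from Section 3.1.
# Synthetic data stands in for the (non-public) Taipei dataset; 8 features
# mirror the study's patient parameters, with a 2:1 healthy/diabetic split.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=8,
                           weights=[2 / 3, 1 / 3], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Scale features to [0, 1], as the paper does with MinMaxScaler.
scaler = MinMaxScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=42),
    "RF": RandomForestClassifier(n_estimators=100, random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
accuracies = {name: m.fit(X_train, y_train).score(X_test, y_test)
              for name, m in models.items()}
print(accuracies)  # held-out accuracy per model
```

On real data one would also report sensitivity and specificity (Section 4.1.1), since accuracy alone is misleading on an imbalanced cohort.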
      <sec id="sec-2-5">
        <title>3.1.8. Dataset</title>
        <p>This study used outpatient examination data from a Taipei Municipal medical center, with 15,000 women aged 20-80 as samples. These women were hospitalized between 2018 and 2020, and between 2021 and 2022, with or without a diabetes diagnosis. The study looked at eight patient parameters: number of pregnancies, plasma glucose level, diastolic blood pressure, skinfold thickness, insulin level, BMI, diabetes pedigree function, and age. The cohort contains 5,000 diabetic patients and 10,000 healthy patients. Initial inspection shows an imbalance in the dataset, with more non-diabetic instances than diabetic ones (Figure 1).</p>
      </sec>
      <sec id="sec-2-6">
        <title>3.1.9. Data Pre-processing</title>
        <p>Data pre-processing is a critical step to ensure the effectiveness of AI techniques; structured data is essential for accurate modeling and prediction.</p>
        <p>1. Handling Missing Values: we checked for and addressed any missing values in the dataset by either eliminating rows/columns with missing data or imputing them using statistical methods. In our dataset, there were no missing values, as shown in Figure 2.</p>
        <p>2. Determining Attribute Relevance: the relevance of each attribute was assessed using Pearson's correlation coefficient, illustrated in Figure 3. This method calculates a correlation coefficient between −1 and 1 to quantify the relationship between input and output properties. A coefficient value above 0.5 or below −0.5 indicates a substantial correlation, while a value close to zero indicates no correlation [<xref ref-type="bibr" rid="ref20">32</xref>].</p>
        <sec id="sec-2-6-1">
          <title>3.2. Monitoring of diabetes</title>
          <p><bold>3.2.1. AI Background</bold> An improved version of recurrent neural networks (RNN) called long short-term memory (LSTM) deals with the problem of storing long-term dependencies. The LSTM was first presented by Hochreiter and Schmidhuber in 1997. At a given time step, the current input and the output from the preceding time step are supplied to the LSTM unit, which produces an output that is forwarded to the subsequent time step. For categorization purposes, the last hidden layer of the last time step, and occasionally all hidden layers, are frequently used [<xref ref-type="bibr" rid="ref19">31</xref>]. An LSTM network's general architecture is shown in Figure 4.</p>
        </sec>
      </sec>
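The attribute-relevance step of Section 3.1.9 can be sketched with pandas. The column names and data below are illustrative, not the study's attributes or values:

```python
# Sketch: Pearson correlation between each attribute and the outcome, as in
# the attribute-relevance pre-processing step. Synthetic, illustrative data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500
glucose = rng.normal(120, 30, n)           # informative attribute
bmi = rng.normal(30, 6, n)                 # uninformative here
age = rng.integers(20, 80, n).astype(float)
# Outcome loosely driven by glucose, so its coefficient is clearly non-zero.
outcome = (glucose + rng.normal(0, 20, n) > 130).astype(int)

df = pd.DataFrame({"glucose": glucose, "bmi": bmi, "age": age,
                   "outcome": outcome})
corr = df.corr(method="pearson")["outcome"].drop("outcome")
print(corr.round(2))  # values in [-1, 1]; |r| > 0.5 flags a strong attribute
```

Attributes whose coefficient magnitude stays near zero are candidates for removal before training.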
      <sec id="sec-2-7">
        <title>The equations below describe how the gates are computed [33].</title>
        <p>By including three gated units (a forget gate, an input gate, and an output gate) that allow effective control over the memory of previous states, LSTM circumvents the vanishing gradient problem in RNNs. Based on the current input and the prior internal state, the input gate determines how to update the internal state. How much of the prior internal state should be lost is decided by the forget gate. Lastly, the output gate controls how much the internal state affects the output [<xref ref-type="bibr" rid="ref19">31</xref>]:</p>
        <p>f_t = σ(W_f·x_t + w_f·h_(t−1) + b_f)  (2)
i_t = σ(W_i·x_t + w_i·h_(t−1) + b_i)  (3)
o_t = σ(W_o·x_t + w_o·h_(t−1) + b_o)  (4)
c_t = f_t ⊗ c_(t−1) + i_t ⊗ tanh(W_c·x_t + w_c·h_(t−1) + b_c)  (5)
h_t = o_t ⊗ tanh(c_t)  (6)</p>
        <p>where the forget, input, and output gate vectors are denoted by f, i, and o, respectively, and W, w, b, and ⊗ represent the input weights, the recurrent weights, the bias, and element-wise multiplication, respectively.</p>
        <p><bold>3.2.2. Dataset</bold> We trained and evaluated our method on the OhioT1DM dataset, which was developed to advance research in blood glucose level prediction. This data was gathered over eight weeks from 12 individuals with type 1 diabetes. Each participant supplied self-reported life events, insulin delivery records, physiological sensor metrics, and continuous glucose monitoring (CGM) data, all anonymized under a random ID. The dataset facilitates machine learning research aimed at improving blood glucose level prediction accuracy, which is important for managing diabetes and developing artificial pancreas devices. The dataset contains extensive data points: CGM readings every 5 minutes, blood glucose levels from finger sticks, insulin doses (bolus and basal), self-reported meal times with carbohydrate estimates, exercise, sleep, work, stress, and illness records, along with physiological data from fitness bands. The first cohort used Basis Peak fitness bands, while the second cohort used Empatica Embrace bands, providing detailed metrics such as heart rate, skin temperature, galvanic skin response, and step count. To determine the ideal attribute set for the BG prediction model, we test each of these attributes individually. The quantity of training and test examples for every patient is displayed in Table 2 [24].</p>
      </sec>
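The gate equations (2)-(6) can be checked with a single LSTM time step written out in NumPy. The dimensions and random weights below are purely illustrative:

```python
# Sketch: one LSTM time step implementing Equations (2)-(6) directly.
# Sizes and random weights are illustrative, not the trained model.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                  # input and hidden sizes (illustrative)
x_t = rng.normal(size=n_in)         # current input
h_prev = np.zeros(n_hid)            # previous hidden state h_(t-1)
c_prev = np.zeros(n_hid)            # previous internal (cell) state c_(t-1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# W: input weights, w: recurrent weights, b: biases, for f, i, o and the cell.
W = {g: rng.normal(scale=0.1, size=(n_hid, n_in)) for g in "fioc"}
w = {g: rng.normal(scale=0.1, size=(n_hid, n_hid)) for g in "fioc"}
b = {g: np.zeros(n_hid) for g in "fioc"}

f_t = sigmoid(W["f"] @ x_t + w["f"] @ h_prev + b["f"])   # Eq. (2) forget gate
i_t = sigmoid(W["i"] @ x_t + w["i"] @ h_prev + b["i"])   # Eq. (3) input gate
o_t = sigmoid(W["o"] @ x_t + w["o"] @ h_prev + b["o"])   # Eq. (4) output gate
c_t = f_t * c_prev + i_t * np.tanh(
    W["c"] @ x_t + w["c"] @ h_prev + b["c"])             # Eq. (5) cell update
h_t = o_t * np.tanh(c_t)                                 # Eq. (6) new output
print(h_t.shape)
```

Element-wise multiplication (`*` on NumPy arrays) plays the role of ⊗ in the equations; the gate activations are squashed into (0, 1) by the sigmoid, so each gate acts as a soft on/off switch on the cell state.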
      <sec id="sec-2-8">
        <title>Rounding Timestamps: each timestamp in the collection is rounded to the nearest defined period (in this example, 120 minutes).</title>
        <p>Six attribute sets were then extracted:
• Study 1: Extracting glucose level.
• Study 2: Extracting glucose level and carbohydrates, to analyze their effect on glucose levels.
• Study 3: Extracting glucose level, carbohydrates, and steps.
• Study 4: Extracting glucose level, carbohydrates, and quality of sleep (1 for Poor, 2 for Fair, 3 for Good).
• Study 5: Extracting glucose level, carbohydrates, and intensity of exercise (on a scale of 1 to 10, with 10 being the most physically active).
• Study 6: Extracting glucose level, carbohydrates, quality of sleep, and intensity of exercise.</p>
        <p>(Figure 6: preprocessing steps.)</p>
      </sec>
      <sec id="sec-2-9">
        <title>Merging Data: the extracted data are merged into a single DataFrame indexed by the rounded timestamps.</title>
        <p>Handling Missing Values: missing values for extracted attributes other than the glucose level are filled with -1, indicating nothing was recorded at those times. Rows with missing glucose levels are dropped to maintain data integrity.</p>
      </sec>
      <sec id="sec-2-10">
        <title>Removing Duplicates: data is grouped by timestamp, and the maximum value for each group is retained to ensure each time span has a distinct entry.</title>
        <p>Data Scaling: the data is normalized using MinMaxScaler to ensure all features are on a similar scale.</p>
        <p>Loading and Splitting Data: the integrated data for each patient is loaded, with timestamps converted to datetime objects. The data is split into training (80%) and testing (20%) sets and then consolidated into two comprehensive data frames: integrated_train_data and integrated_test_data.</p>
      </sec>
      <sec id="sec-2-11">
        <title>4. Results and Findings</title>
        <sec id="sec-2-12">
          <title>4.1. Diagnosis of diabetes</title>
          <sec id="sec-2-13">
            <title>4.1.1. Performance criteria</title>
            <p>Model Evaluation Metrics: the following measures were utilized to assess the suggested models. When making predictions about occurrences, there will be four categories of outcomes [<xref ref-type="bibr" rid="ref22">34</xref>]: True Positives (TP), someone with diabetes who was predicted to have diabetes; False Positives (FP), a person without diabetes who was predicted to have it; False Negatives (FN), someone with diabetes who was not predicted to have it; and True Negatives (TN), a person without diabetes who was not predicted to have it.</p>
            <p>Accuracy (Acc.) refers to the overall performance of a classifier and its ability to properly predict data [18], as in Equation 7:</p>
            <p>Acc = (TP + TN) / (TP + TN + FP + FN)  (7)</p>
            <p>Sensitivity (SN) describes the classifier's positive results, as follows:</p>
            <p>Sensitivity (Recall) = TP / (TP + FN)  (8)</p>
            <p>Specificity (Sp.) refers to the negative results discovered by the classifier and is expressed as:</p>
            <p>Specificity (Sp.) = TN / (TN + FP)  (9)</p>
          </sec>
        </sec>
      </sec>
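The four outcome counts map directly to the evaluation metrics; a small sketch with illustrative counts (not the study's actual confusion matrix):

```python
# Sketch: evaluation metrics from confusion-matrix counts (Equations 7-11).
# The counts below are illustrative only.
def metrics(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn)            # Eq. (7) accuracy
    sensitivity = tp / (tp + fn)                     # Eq. (8) recall
    specificity = tn / (tn + fp)                     # Eq. (9)
    precision = tp / (tp + fp)                       # Eq. (10)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)  # Eq. (11)
    return acc, sensitivity, specificity, precision, f1

acc, sn, sp, pr, f1 = metrics(tp=80, tn=90, fp=10, fn=20)
print(round(acc, 3), round(sn, 3), round(sp, 3), round(pr, 3), round(f1, 3))
# 0.85 0.8 0.9 0.889 0.842
```

On an imbalanced cohort like the one in this study, sensitivity and specificity are more informative than accuracy alone, since a classifier that always predicts "healthy" would still score 66.7% accuracy.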
      <sec id="sec-2-18">
        <title>Precision (Pr.) is the ratio of true positive findings to all predicted positive results, represented as:</title>
        <p>Precision (Pr.) = TP / (TP + FP) (10)</p>
        <p>The F1-Score (F1) is the harmonic mean of precision and recall, with a range of [0, 1]. The F1-Score indicates classifier robustness, with the mathematical expression:</p>
        <p>F1 = 2 × (Precision × Recall) / (Precision + Recall) (11)</p>
        <p>4.1.2. Machine learning results</p>
      </sec>
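      <p>As a minimal illustration (not the authors' code), the metrics of formulas (7)–(11) can be computed directly from the four confusion-matrix counts; the counts used below are hypothetical:</p>

```python
# Sketch: evaluation metrics of Eqs. (7)-(11) from confusion-matrix counts.
# The counts (tp=80, tn=90, fp=10, fn=20) are hypothetical examples.
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    accuracy = (tp + tn) / (tp + tn + fp + fn)           # Eq. (7)
    sensitivity = tp / (tp + fn)                         # Eq. (8), i.e. recall
    specificity = tn / (tn + fp)                         # Eq. (9)
    precision = tp / (tp + fp)                           # Eq. (10)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)  # Eq. (11)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}

m = classification_metrics(tp=80, tn=90, fp=10, fn=20)
print(m["accuracy"])  # (80 + 90) / 200 = 0.85
```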
      <sec id="sec-2-19">
        <title>The ROC curve plot provides a visual comparison of the performance of the five machine learning models.</title>
        <p>The Area Under the Curve (AUC) measures the model’s ability to discriminate between classes, with a higher AUC indicating better performance. As shown in Figure 7, the logistic regression model (blue curve) has a moderate AUC, indicating adequate but not optimal performance. The decision tree model (orange curve) performs slightly better with an AUC of 0.89. The random forest model (green curve) demonstrates the best performance with an AUC of 0.98, indicating excellent classification power. The support vector machine (SVM) model (red curve) also performs well with an AUC of 0.94. The K-Nearest Neighbor (KNN) model (purple curve) shows good performance with an AUC of 0.92, although it is slightly less efficient than the random forest and SVM models.</p>
        <p>Based on the evaluation metrics in Figure 8, the Random Forest classifier stands out as the best-performing model for diabetes prediction in this study, achieving the highest accuracy (92.78%).</p>
        <p>4.1.3. Deep learning results</p>
        <p>Artificial Neural Network: Table 3 provides a detailed overview of the architecture and training details of the Artificial Neural Network (ANN) employed in this study, covering building the neural network, model compilation, callbacks, and enhancement strategies.</p>
        <p>Results Before Enhancement Strategies: the initial performance of the ANN was evaluated using the original architecture and training setup. The plots in Figures 9 and 10 indicate that the model performs well and learns efficiently. Both the training and validation accuracy curves show a consistent rise, beginning at 0.75 and attaining 0.93 after 50 epochs. The validation accuracy closely tracks the training accuracy, indicating high generalization with minimal overfitting. The red dashed line at 0.93 represents the best validation accuracy attained, and the model’s accuracy plateaus around this value after about 20 epochs, suggesting convergence. The loss curves for both training and validation data fall significantly in the early epochs before stabilizing at low values, showing effective learning and minimal overfitting. Overall, the closely aligned training and validation curves for both accuracy and loss show that the model generalizes effectively to unknown data.</p>
        <p>Results After Enhancement Strategies: following the implementation of enhancement strategies, the performance of the ANN was re-evaluated.</p>
        <p>Based on the plot in Figure 11, the validation accuracy curve, although plateauing after a certain epoch (around 30), still remains high throughout.</p>
        <p>This suggests the model has achieved a good level of performance on unseen data. Even though it might not be improving significantly after that point, it maintains a strong performance overall. From the classification report in Figure 12, we observe that the precision is high for both classes, at 0.94 for class 0 and 0.92 for class 1. Recall (how many of the actual positive cases the model predicted correctly) is also high for both classes, at 0.96 for class 0 and 0.88 for class 1. This means that the model is good at not missing actual positive cases. Finally, the accuracy is 0.94, which is also high, meaning that the model is performing well overall.</p>
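        <p>The AUC values compared in this section can be illustrated with a small sketch. It uses the rank-based (Mann–Whitney) interpretation of AUC, the probability that a randomly chosen positive case is scored above a randomly chosen negative one; the labels and scores below are hypothetical:</p>

```python
# Sketch (not the study's code): AUC via the Mann-Whitney formulation.
# A pair (positive, negative) counts 1 if the positive is ranked higher,
# 0.5 on a tie, 0 otherwise; AUC is the average over all such pairs.
def auc_score(y_true, y_score):
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [0, 0, 1, 1]
perfect = [0.1, 0.2, 0.8, 0.9]   # hypothetical well-separated scores
random_ = [0.5, 0.5, 0.5, 0.5]   # uninformative scores
print(auc_score(y_true, perfect))  # 1.0
print(auc_score(y_true, random_))  # 0.5
```

      <p>Scoring several fitted models on the same held-out labels this way yields the per-model AUCs (e.g. 0.89, 0.92, 0.94, 0.98) that the ROC plot compares visually.</p>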
      </sec>
      <sec id="sec-2-20">
        <title>Based on the plots in Figure 17, the validation accuracy (orange curve) reaches a high value of around 0.93, which is a positive sign.</title>
        <p>The training loss and validation loss generally decrease over time, which is what you expect as the model learns, and suggests the model is generalizing well to unseen data. Based on the confusion matrix and classification report, the model performs well in classifying occurrences, as evidenced by its 93% overall accuracy. It shows good recall (0.94) and precision (0.95) for class 0, indicating that true negatives can be identified effectively; however, there are some misclassifications as class 1. Class 1 precision (0.89) and recall (0.92) are marginally poorer, suggesting that some cases were incorrectly classified as class 0. Overall, the model works well on both classes; however, it could be even more accurate if optimized to correctly categorize instances of class 1.</p>
      </sec>
      <sec id="sec-2-21">
        <title>CNN model The table 4 summarizes these essential steps for building a CNN architecture, illustrating how each component contributes to the construction and training of the CNN model.</title>
        <p>The plot in Figure 15 shows the accuracy of the model over 100 epochs for both training and validation datasets. The validation accuracy is consistently higher than the training accuracy, indicating that the model is performing well on unseen data. In the right plot, both training and validation losses decrease rapidly, indicating effective learning. The close alignment of training and validation losses suggests the model generalizes well and does not suffer from significant overfitting. The confusion matrix (Figure 16) demonstrates that the model has a large number of correct predictions for both the negative and positive classes, indicating good performance.</p>
        <p>4.2.2. LSTM model</p>
      </sec>
      <sec id="sec-2-22">
        <title>Table 5 summarizes the compilation, training, and evaluation details for the LSTM model:</title>
        <p>Compiling the Model: optimizer Adam; learning rate 0.001; loss function Mean Squared Error (MSE).</p>
        <p>Training the Model: epochs 100; batch size 32; callbacks: Early Stopping (monitors validation loss, patience=10) and Model Checkpoint (saves the best model, based on validation loss, to best_model_g.keras).</p>
        <p>Evaluating the Model: metrics Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE).</p>
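        <p>A minimal pure-Python sketch (not the Keras callbacks themselves) of how the Early Stopping and Model Checkpoint behavior summarized in Table 5 interact; the validation-loss sequence below is hypothetical:</p>

```python
# Sketch of Early Stopping (patience=10, monitoring validation loss) combined
# with checkpointing the best model, mirroring the Table 5 callbacks.
def train_with_early_stopping(val_losses, patience=10):
    """Return (best_epoch, best_loss, stop_epoch) as the callbacks would."""
    best_loss, best_epoch, wait = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:                # checkpoint: remember best model
            best_loss, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:            # early stopping triggers
                return best_epoch, best_loss, epoch
    return best_epoch, best_loss, len(val_losses) - 1

# Hypothetical run: loss improves for 6 epochs, then plateaus at 0.5;
# training halts 10 epochs after the last improvement.
losses = [1.0, 0.8, 0.6, 0.5, 0.45, 0.44] + [0.5] * 20
print(train_with_early_stopping(losses, patience=10))  # (5, 0.44, 15)
```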
        <p>Based on Figure 24, in all studies the validation loss generally remains lower than the training loss after the initial epochs, indicating good generalization performance. The consistent patterns across different studies suggest robustness in the model training process.</p>
        <p>Figure 17: Classification report.</p>
        <p>4.2. Monitoring of diabetes</p>
        <p>4.2.1. Performance criteria</p>
        <p>We utilize two standard performance metrics: root-mean-square error (RMSE) and mean absolute error (MAE). Let y_i be the actual value, ŷ_i the predicted value, and n the sample size. Root-Mean-Square Error (RMSE): RMSE = sqrt((1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²) (12)</p>
        <p>Based on the Table 6 values of Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) for each study, a comparison of model performance can be made. Generally, as more features are added to the model, the MAE tends to decrease, indicating improved accuracy in predicting blood glucose levels. Models including sleep quality and exercise intensity show slightly lower MAE values compared to those with fewer features. Similarly, RMSE decreases as more features are incorporated, suggesting better overall predictive performance: models with more features show lower RMSE values, indicating more accurate predictions. Including additional features such as carbs, steps, sleep quality, and exercise intensity consistently improves prediction accuracy (lower MAE and RMSE). However, the differences between models with more features compared to those with fewer are relatively small but generally consistent.</p>
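        <p>The two monitoring metrics can be sketched as follows; the glucose readings (in mg/dL) are hypothetical and serve only to exercise the formulas:</p>

```python
# Sketch of the monitoring metrics: RMSE (Eq. 12) and MAE, computed over
# hypothetical actual vs. predicted blood glucose readings.
import math

def rmse(actual, predicted):
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def mae(actual, predicted):
    n = len(actual)
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / n

actual = [110.0, 150.0, 95.0, 130.0]      # hypothetical measured values
predicted = [105.0, 155.0, 100.0, 120.0]  # hypothetical model outputs
print(round(rmse(actual, predicted), 3))  # 6.614
print(mae(actual, predicted))             # 6.25
```

        <p>Because RMSE squares the errors before averaging, it penalizes the single 10 mg/dL miss more heavily than MAE does, which is why the two metrics are reported together.</p>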
      </sec>
    </sec>
    <sec id="sec-3">
      <title>5. Technical Exploration</title>
      <p>This section discusses the prototype implementation of the AI-powered platform for comprehensive diabetes management, focusing on its architecture, key components, and the functionalities that are important for ensuring the platform’s scalability, usability, and effectiveness in real-world clinical settings.</p>
      <sec id="sec-3-1">
        <title>5.1. Software Architecture</title>
        <p>The software architecture of the AI-powered diabetes management platform consists of several key components that collaborate to collect, process, store, and analyze data, providing valuable insights and supporting clinical decisions. The general architecture is illustrated in Figure 25 and includes the following steps:</p>
        <p>By combining machine learning and deep learning algorithms, the proposed system effectively predicts diabetes onset, monitors glucose levels, and assists healthcare professionals in providing personalized care.</p>
        <p>The performance evaluation indicates that the DNN
achieved a validation accuracy of 94%, showcasing its
robustness and generalization capabilities. Enhancement
strategies, including increased model complexity and
k-fold cross-validation, further improved the model’s
performance, ensuring minimal overfitting and high
precision. Similarly, the LSTM model demonstrated a strong
ability to predict blood glucose levels, with validation
losses indicating good generalization to unseen data.</p>
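        <p>As a sketch of the k-fold cross-validation mentioned among the enhancement strategies, the fold indices can be generated as follows (the dataset size here is hypothetical; the study applies 5-fold cross-validation to the clinical data):</p>

```python
# Sketch of k-fold cross-validation index generation: the dataset is split
# into k folds, each fold serving once as the validation set while the rest
# form the training set.
def k_fold_indices(n_samples: int, k: int = 5):
    """Yield (train_idx, val_idx) pairs covering the dataset k times."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, val
        start += size

folds = list(k_fold_indices(10, k=5))  # hypothetical 10-sample dataset
print(len(folds))     # 5
print(folds[0][1])    # [0, 1]
```

        <p>Averaging the validation accuracy over the k folds gives a more stable performance estimate than a single train/validation split, which is why it reduces overfitting risk in the reported results.</p>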
        <p>Moreover, the inclusion of user-friendly interfaces for
healthcare professionals and patients ensures that the
platform is accessible and practical for everyday use. This
fosters better communication between patients and
doctors, streamlining the management of health records,
prescriptions, and medical analyses.</p>
        <p>Future work will focus on expanding dataset diversity, refining AI models, and incorporating real-time patient feedback to further optimize the platform, ultimately improving clinical decision-making and personalizing treatment plans for diabetes care.</p>
        <p>The platform also provides insights to assist with clinical decisions, which may include changes to prescriptions, lifestyle advice, or scheduling further appointments.</p>
      </sec>
      <sec id="sec-3-2">
        <title>5.2. User Interface Design</title>
        <p>The foremost aim of the UI design is to create a clean and easy-to-use experience for users. This means making the interface simple to navigate, ensuring all functions are easy to find, and displaying data clearly. By focusing on a user-friendly layout, the platform aims to improve user engagement and satisfaction. The following figures offer an overview of the platform’s user interface:</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>6. Discussion</title>
    </sec>
    <sec id="sec-5">
      <title>7. Conclusion</title>
      <p>This study assessed various machine learning methods for diabetes management. Using 5-fold cross-validation, the second DNN architecture achieved the highest accuracy of 94%, demonstrating the effectiveness of deep learning techniques for diabetes prediction. Compared to our results, other studies have shown lower accuracies for most algorithms, with their Random Forest model achieving only 80% accuracy, a notable difference from our 93%. This discrepancy can be attributed to differences in feature selection, data preparation, or hyperparameter optimization methods.</p>
      <p>In monitoring results, models incorporating additional relevant features beyond glucose levels exhibited slightly better predictive performance in terms of MAE and RMSE. However, the differences between models were relatively minor, suggesting diminishing returns as more features are added.</p>
      <p>Declaration on Generative AI: during the preparation of this work, the authors used ChatGPT and Grammarly in order to: grammar and spelling check, paraphrase and reword. After using these tools/services, the authors reviewed and edited the content as needed and take full responsibility for the publication’s content.</p>
      <p>It is important to consider that differences in datasets used across studies can significantly impact results. Variations in data characteristics, such as sample size, demographics, and data quality, may influence machine learning model performance. While our study shows promising results, future research should focus on refining models, exploring advanced feature engineering methodologies, and validating these strategies across diverse datasets to ensure robustness and generalizability.</p>
      <p>Future work may also involve developing new AI models
for predicting diabetes-related complications and risk
assessments and integrating wearable devices for real-time
monitoring to enhance analytics capabilities and improve
prediction accuracy.</p>
      <sec id="sec-5-1">
        <title>Summary</title>
        <p>In summary, this study demonstrates the significant potential of AI-powered platforms in transforming diabetes management. By leveraging advanced machine learning and deep learning algorithms, the proposed system effectively predicts diabetes onset, monitors glucose levels, and assists healthcare professionals in providing personalized care.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R</given-names>
            <surname>. A. S. K. S. G. M. Chandrashekar D. K</surname>
          </string-name>
          ,
          <string-name>
            <surname>Imran Pasha</surname>
            <given-names>C</given-names>
          </string-name>
          ,
          <article-title>Diabetes prediction using machine learning algorithms</article-title>
          ,
          <source>Indian Scientific Journal Of Research In Engineering And Management</source>
          <volume>07</volume>
          (
          <year>2023</year>
          ). doi:10.55041/ijsrem17771.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>P. N. Thotad,</surname>
          </string-name>
          <article-title>A machine learning-based diagnosis and prediction of diabetes mellitus disease (</article-title>
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Sireesha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Prakash</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sumathi</surname>
          </string-name>
          ,
          <article-title>Implementing a model to detect diabetes prediction using machine learning classifiers</article-title>
          ,
          <source>Journal of Algebraic Statistics</source>
          <volume>13</volume>
          (
          <year>2022</year>
          )
          <fpage>558</fpage>
          -
          <lpage>566</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ponzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Puglisi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          , et al.,
          <article-title>Exploiting robots as healthcare resources for epidemics management and support caregivers</article-title>
          ,
          <source>in: CEUR Workshop Proceedings</source>
          , volume
          <volume>3686</volume>
          ,
          <string-name>
            <surname>CEUR-WS</surname>
          </string-name>
          ,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Nail</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Atoussi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Saadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <article-title>Real-time synchronisation of multiple fractional-order chaotic systems: an application care</article-title>
          <source>Informatics Research</source>
          <volume>4</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          . study in secure communication, Fractal and Frac- [17]
          <string-name>
            <given-names>S.</given-names>
            <surname>eddine Boukredine</surname>
          </string-name>
          , E. Mehallel,
          <string-name>
            <surname>A</surname>
          </string-name>
          . Boualleg, tional
          <volume>8</volume>
          (
          <year>2024</year>
          )
          <article-title>104</article-title>
          . O.
          <string-name>
            <surname>Baitiche</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Rabehi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Guermoui</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Douara</surname>
            ,
            <given-names>I. E.</given-names>
          </string-name>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>I.</given-names>
            <surname>Naidji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Guettala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tiber- Tibermacine</surname>
          </string-name>
          ,
          <article-title>Enhanced performance of microstrip macine</article-title>
          , et al.,
          <article-title>Semi-mind controlled robots based antenna arrays through concave modifications and on reinforcement learning for indoor application., cut-corner techniques</article-title>
          ,
          <source>ITEGAM-JETIA</source>
          <volume>11</volume>
          (
          <year>2025</year>
          ) in: ICYRIME,
          <year>2023</year>
          , pp.
          <fpage>51</fpage>
          -
          <lpage>59</lpage>
          .
          <fpage>65</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rahman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Islam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Mukti</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Saha</surname>
          </string-name>
          , A deep [18]
          <string-name>
            <given-names>N. P.</given-names>
            <surname>Miriyala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Kottapalli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. P.</given-names>
            <surname>Miriyala</surname>
          </string-name>
          ,
          <article-title>learning approach based on convolutional lstm for G</article-title>
          . Lorenzini,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ganteda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. A.</given-names>
            <surname>Bhogapurapu</surname>
          </string-name>
          , et al.,
          <string-name>
            <surname>detecting</surname>
            <given-names>diabetes</given-names>
          </string-name>
          ,
          <source>Computational biology and Diagnostic analysis of diabetes mellitus using machemistry 88</source>
          (
          <year>2020</year>
          )
          <article-title>107329. chine learning approach</article-title>
          , Revue d'Intelligence Arti-
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          , W. Guettala, ifcielle
          <volume>36</volume>
          (
          <year>2022</year>
          )
          <fpage>347</fpage>
          -
          <lpage>352</lpage>
          . C. Napoli,
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          , Enhancing sentiment anal- [19]
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Herrero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Georgiou</surname>
          </string-name>
          ,
          <article-title>Deep learning ysis on seed-iv dataset with vision transformers: A for diabetes: a systematic review</article-title>
          ,
          <source>IEEE Journal of comparative study, in: Proceedings of the 2023 11th Biomedical and Health Informatics</source>
          <volume>25</volume>
          (
          <year>2020</year>
          )
          <fpage>2744</fpage>
          - international conference on information technol-
          <volume>2757</volume>
          . ogy: IoT and smart city,
          <year>2023</year>
          , pp.
          <fpage>238</fpage>
          -
          <lpage>246</lpage>
          . [20]
          <string-name>
            <given-names>A.</given-names>
            <surname>TIBERMACINE</surname>
          </string-name>
          , W. GUETTALA, I. E. TIBERMA-
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bouchelaghem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Balsi</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          <article-title>Mo- CINE, Eficient one-stage deep learning for text roni, C. Napoli, Cross-domain machine learning detection in scene images., Electrotehnica, Elecapproaches using hyperspectral imaging for plas- tronica</article-title>
          ,
          <source>Automatica</source>
          <volume>72</volume>
          (
          <year>2024</year>
          ).
          <article-title>tics litter detection</article-title>
          , in: 2024 IEEE Mediterranean [21]
          <string-name>
            <given-names>B.</given-names>
            <surname>Nail</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Djaidir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          , and
          <string-name>
            <surname>Middle-East Geoscience</surname>
          </string-name>
          and
          <string-name>
            <surname>Remote Sensing N. Haidour</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Abdelaziz</surname>
          </string-name>
          ,
          <source>Gas turbine vibration Symposium (M2GARSS)</source>
          , IEEE,
          <year>2024</year>
          , pp.
          <fpage>36</fpage>
          -
          <lpage>40</lpage>
          .
          <article-title>monitoring based on real data and neuro-fuzzy sys-</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>N.</given-names>
            <surname>Boutarfaia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. E</surname>
          </string-name>
          . Tiber- tem,
          <source>Diagnostyka</source>
          <volume>25</volume>
          (
          <year>2024</year>
          ).
          <article-title>macine, Deep learning for eeg-based motor imagery</article-title>
          [22]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ladjal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bechouat</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          <article-title>Seclassification: Towards enhanced human-machine draoui</article-title>
          , C. Napoli,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rabehi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lalmi</surname>
          </string-name>
          ,
          <article-title>Hybrid modinteraction and assistive robotics, life 2 (</article-title>
          <year>2023</year>
          )
          <article-title>4. els for direct normal irradiance forecasting: A case</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Khanam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. Y.</given-names>
            <surname>Foo</surname>
          </string-name>
          ,
          <article-title>A comparison of machine study of ghardaia zone (algeria), Natural Hazards learning algorithms for diabetes prediction</article-title>
          ,
          <source>Ict</source>
          <volume>120</volume>
          (
          <year>2024</year>
          )
          <fpage>14703</fpage>
          -
          <lpage>14725</lpage>
          . Express 7 (
          <year>2021</year>
          )
          <fpage>432</fpage>
          -
          <lpage>439</lpage>
          . [23]
          <string-name>
            <given-names>G.</given-names>
            <surname>Swapna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Vinayakumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Soman</surname>
          </string-name>
          , Diabetes de-
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          <article-title>Zouai, tection using deep learning algorithms, ICT express A. Rabehi, Eeg classification using contrastive learn- 4 (</article-title>
          <year>2018</year>
          )
          <fpage>243</fpage>
          -
          <lpage>246</lpage>
          .
          <article-title>ing and riemannian tangent space representations</article-title>
          , in: 2024 International Conference on Telecommunications and Intelligent Systems (ICTIS), IEEE,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          . [24]
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Rabby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. I.</given-names>
            <surname>Hossen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Maida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Hei</surname>
          </string-name>
          ,
          <article-title>Stacked lstm based deep recurrent neural network with kalman smoothing for blood glucose prediction</article-title>
          ,
          <source>BMC Medical Informatics and Decision Making</source>
          <volume>21</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chebana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nahili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Starczewscki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <article-title>Analyzing eeg patterns in young adults exposed to different acrophobia levels: a vr study</article-title>
          ,
          <source>Frontiers in Human Neuroscience</source>
          <volume>18</volume>
          (
          <year>2024</year>
          )
          <fpage>1348154</fpage>
          . [25]
          <string-name>
            <given-names>H. V.</given-names>
            <surname>Dudukcu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Taskiran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Yildirim</surname>
          </string-name>
          ,
          <article-title>Blood glucose prediction with deep neural networks using weighted decision level fusion</article-title>
          ,
          <source>Biocybernetics and Biomedical Engineering</source>
          <volume>41</volume>
          (
          <year>2021</year>
          )
          <fpage>1208</fpage>
          -
          <lpage>1223</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <article-title>Enhancing eeg signal reconstruction in cross-domain adaptation using cyclegan</article-title>
          , in: 2024 International Conference on Telecommunications and Intelligent Systems (ICTIS), IEEE,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . [26]
          <string-name>
            <given-names>R.</given-names>
            <surname>Krishnamoorthi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. Z.</given-names>
            <surname>Almarzouki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Shukla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rizwan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kalpana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Tiwari</surname>
          </string-name>
          , et al.,
          <article-title>A novel diabetes healthcare disease prediction framework using machine learning techniques</article-title>
          ,
          <source>Journal of Healthcare Engineering</source>
          <volume>2022</volume>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Akrour</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Khamar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rabehi</surname>
          </string-name>
          ,
          <article-title>Comparative analysis of svm and cnn classifiers for eeg signal classification in response to different auditory stimuli</article-title>
          , in: 2024 International Conference on Telecommunications and Intelligent Systems (ICTIS), IEEE,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . [27]
          <string-name>
            <given-names>L.</given-names>
            <surname>Fregoso-Aparicio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Noguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Montesinos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>García-García</surname>
          </string-name>
          ,
          <article-title>Machine learning and deep learning predictive models for type 2 diabetes: a systematic review</article-title>
          ,
          <source>Diabetology &amp; Metabolic Syndrome</source>
          <volume>13</volume>
          (
          <year>2021</year>
          )
          <fpage>148</fpage>
          . [28]
          <string-name>
            <given-names>N.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ahammed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Islam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Uddin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Akhter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Talukder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. K.</given-names>
            <surname>Paul</surname>
          </string-name>
          ,
          <article-title>Machine learning based diabetes prediction and development of smart web application</article-title>
          ,
          <source>International Journal of Cognitive Computing in Engineering</source>
          <volume>2</volume>
          (
          <year>2021</year>
          )
          <fpage>229</fpage>
          -
          <lpage>241</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.</given-names>
            <surname>Martinsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Schliep</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Eliasson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Mogren</surname>
          </string-name>
          ,
          <article-title>Blood glucose prediction with variance estimation using recurrent neural networks</article-title>
          ,
          <source>Journal of Health-</source>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>K.</given-names>
            <surname>Shiruru</surname>
          </string-name>
          ,
          <article-title>An introduction to artificial neural network</article-title>
          ,
          <source>International Journal of Advance Research and Innovative Ideas in Education</source>
          <volume>1</volume>
          (
          <year>2016</year>
          )
          <fpage>27</fpage>
          -
          <lpage>30</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>A.</given-names>
            <surname>Krenker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bešter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kos</surname>
          </string-name>
          ,
          <article-title>Introduction to the artificial neural networks</article-title>
          ,
          <source>Artificial Neural Networks: Methodological Advances and Biomedical Applications</source>
          .
          <source>InTech</source>
          (
          <year>2011</year>
          )
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Shiri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Perumal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mustapha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <article-title>A comprehensive overview and comparative analysis on deep learning models: Cnn, rnn, lstm, gru</article-title>
          ,
          <source>arXiv preprint arXiv:2305.17473</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>L. P.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. D.</given-names>
            <surname>Tung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. T.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. N.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. Q.</given-names>
            <surname>Tran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. V.</given-names>
            <surname>Binh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. T. N.</given-names>
            <surname>Pham</surname>
          </string-name>
          ,
          <article-title>The utilization of machine learning algorithms for assisting physicians in the diagnosis of diabetes</article-title>
          ,
          <source>Diagnostics</source>
          <volume>13</volume>
          (
          <year>2023</year>
          )
          <fpage>2087</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>A.</given-names>
            <surname>Shrestha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mahmood</surname>
          </string-name>
          ,
          <article-title>Review of deep learning algorithms and architectures</article-title>
          ,
          <source>IEEE Access</source>
          <volume>7</volume>
          (
          <year>2019</year>
          )
          <fpage>53040</fpage>
          -
          <lpage>53065</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Chou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.-Y.</given-names>
            <surname>Hsu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-H.</given-names>
            <surname>Chou</surname>
          </string-name>
          ,
          <article-title>Predicting the onset of diabetes with machine learning methods</article-title>
          ,
          <source>Journal of Personalized Medicine</source>
          <volume>13</volume>
          (
          <year>2023</year>
          )
          <fpage>406</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>