<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Customized Convolutional Neural Networks for Plant Disease Detection on Leaf Images</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Walid Ben Mesmia</string-name>
          <email>benmesmiawalid77@yahoo.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ghazala Hcini</string-name>
          <email>hcinighazala@fstsbz.u-kairouan.tn</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Omar Mansouri</string-name>
          <email>omarmansouri858@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kamel Barkaoui</string-name>
          <email>kamel.barkaoui@cnam.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Imen Jdey</string-name>
          <email>imen.jdey@fstsbz.u-kairouan.tn</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CEDRIC-CNAM</institution>
          ,
          <addr-line>Paris</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Higher Institute of Management of Gabès, University of Gabès</institution>
          ,
          <addr-line>Gabès</addr-line>
          ,
          <country country="TN">Tunisia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>ReGIM-Lab. REsearch Group in Intelligent Machines (LR11ES48)</institution>
          ,
          <addr-line>ENIS, Sfax</addr-line>
          ,
          <country country="TN">Tunisia</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Sys'Com</institution>
          ,
          <addr-line>ENIT, Tunis</addr-line>
          ,
          <country country="TN">Tunisia</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Ensuring agricultural productivity and food security depends on effectively detecting and classifying plant leaf diseases. In this study, we present a tailored Convolutional Neural Network (CNN) approach specifically designed to identify plant leaf diseases. Our methodology is based on a carefully curated dataset that includes 29 distinct disease classes. Using transfer learning and fine-tuning techniques, we optimize the CNN architecture to suit the unique characteristics of the dataset, resulting in improved accuracy and robustness. Through extensive experimentation and evaluation, we demonstrate the effectiveness of our approach in accurately diagnosing plant leaf diseases across multiple classes: our model achieves an accuracy of 96.53% with a loss of 0.1. Comprehensive comparative analyses show that our customized CNN outperforms existing methods on a range of evaluation metrics. This study advances computer vision techniques for agriculture by providing a reliable and efficient solution for automated plant disease diagnosis. The proposed methodology holds promise for practical implementation in agricultural systems, enabling early disease detection and management to reduce crop losses and promote sustainable agricultural practices.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Plant diseases represent a major threat to global food security [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], leading to significant yield losses
and imposing heavy economic burdens on agricultural industries worldwide. Timely and accurate
detection of these diseases is critical for implementing effective intervention strategies to prevent
extensive crop damage. Recent advancements in computer vision and machine learning have shown
great promise in automating the detection and diagnosis of plant diseases through the analysis of leaf
images [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Convolutional Neural Networks (CNNs) have proven to be highly effective for image
classification tasks [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], including the detection of plant diseases. However, achieving optimal
performance in this area involves addressing several challenges, such as dataset heterogeneity, class
imbalance, and symptom variability across different plant species. Moreover, existing CNN architectures
often require customization to effectively handle the specific characteristics of plant disease datasets
[29]. In this paper, we introduce a customized CNN approach designed specifically for identifying plant
leaf diseases. Our study utilizes a modified dataset encompassing 29 distinct disease classes, representing
a wide variety of plant species and pathologies. Our contribution has two primary objectives: first,
to develop a robust and precise model capable of accurately distinguishing between multiple disease
classes; and second, to address the limitations of existing CNN architectures by tailoring the model
to better accommodate the unique characteristics of the plant disease dataset. By utilizing transfer
learning and fine-tuning techniques, we optimize the CNN architecture to adapt to the unique features
of the dataset, thereby improving both accuracy and robustness. Through extensive experimentation
and evaluation, we showcase the effectiveness of our approach in accurately diagnosing plant leaf
diseases across multiple classes. Furthermore, we conduct comprehensive comparative analyses to
highlight the advantages of our customized CNN approach over traditional methods. Overall, our
study aims to advance computer vision techniques in agriculture by offering a reliable and efficient
solution for automated plant disease diagnosis. By tackling the challenges associated with plant disease
classification, our customized CNN approach shows significant potential for practical implementation
in agricultural systems. This can facilitate early disease detection and management, helping to mitigate
crop losses and promote sustainable agricultural practices. The remainder of this paper is organized as
follows. The Problem Statement section outlines the challenges in plant disease detection and states the
specific aims of our research. The Related Works section reviews existing methodologies and explains how
our approach advances the field. The Methodology section details the specialized dataset, the design
of the customized CNN architecture, the training process, and the hyperparameter tuning techniques.
The Results and Discussion section presents the performance metrics of our model, discusses its
generalization and scalability, and addresses the challenges of deploying deep learning models on diverse
datasets, including issues of interpretability and transparency. Finally, the Conclusion and
Future Works section summarizes our findings and outlines potential avenues for further research
and practical implementation.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Problem Statement</title>
      <p>In the fields of agriculture and forestry, detecting plant diseases is vital for ensuring healthy crop
production and forest management [26]. However, traditional methods of disease detection have
significant disadvantages [27]: they rely on manual inspection and semi-automated processes that are
costly, time-consuming, prone to human error, and lacking in precision and specificity [28]. Because these
methods are neither exhaustive nor predictive, disease outbreaks are often detected late, leading to
delayed responses and substantial agricultural losses (Figure 1). Given these drawbacks, there is an
urgent need for an advanced solution that is automated, cost-effective, and precise, and that delivers
real-time results and accurate predictions with a low error rate. It should also be exhaustive in its
analysis, covering a wide range of potential diseases and environmental stressors. Artificial Intelligence
(AI) presents a promising alternative to traditional detection methods: AI systems can process large
amounts of data quickly and accurately, providing real-time insights and predictions that enable
proactive measures to protect plant health against various threats. Transitioning to AI-based detection
methods can therefore address the critical shortcomings of traditional approaches, offering a
comprehensive and efficient solution for managing plant diseases and adapting to environmental
changes.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Background of the study</title>
      <sec id="sec-3-1">
        <title>3.1. Deep learning theory</title>
        <p>
          The concept of Deep Learning (DL) was introduced in a seminal paper by Hinton et al., published
in Science in 2006 [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Deep learning involves the use of neural networks for data analysis and
feature learning [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. In this approach, multiple hidden layers are used to extract features from the
data. Each hidden layer acts as a perceptron, extracting low-level features that are then combined to
form more abstract, high-level features. This approach effectively addresses the issue of local minima.
Unlike traditional algorithms that rely on manually designed features, deep learning automates feature
extraction, which has generated significant interest among researchers. It has been successfully applied
in various domains, including computer vision [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], pattern recognition [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], speech recognition, natural
language processing, and recommendation systems. Conventional methods for image classification and
recognition rely on manually designed features, which only capture basic features and struggle to extract
deep, complex information from images. Deep learning overcomes this limitation by directly learning
from raw images, obtaining multi-level feature information that ranges from low-level to high-level
semantic features [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. Traditional algorithms for detecting plant diseases and pests primarily rely
on manually designed features, which is a challenging and experience-dependent process that cannot
automatically learn from images. On the other hand, deep learning models, with their multiple layers,
possess robust autonomous learning and feature representation capabilities, allowing for automatic
feature extraction in image classification and recognition tasks. This makes deep learning highly
promising in the field of plant disease and pest image recognition [32]. Currently, several well-known
deep neural network models have been developed within the realm of deep learning, including deep belief
networks (DBN), deep Boltzmann machines (DBM), stacked de-noising autoencoders (SDAE), and deep
CNNs. These models offer significant advantages over traditional manual feature extraction methods by
automating feature extraction from high-dimensional feature spaces in image recognition tasks [21]. As
the volume of training data increases and computational power improves, the representational power
of deep neural networks continues to grow. The rise of deep learning is transforming both industry
and academia, with deep neural network models consistently outperforming traditional approaches.
Among these models, deep convolutional neural networks have become the most popular framework in
recent years.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Convolutional neural network</title>
        <p>Convolutional Neural Networks (CNNs) are a type of deep learning model [22] that is particularly
well-suited for image classification tasks, such as detecting leaf diseases. The architecture of a CNN
consists of multiple layers, including convolutional, max pooling, normalization, and fully connected
layers. The initial layer is the input layer, followed by convolutional layers, which apply various 2D
filters to the image to extract features [23]. These features are then downsampled through pooling layers,
creating a more compact representation. Fully connected (FC) layers, whose weights are learned during
training, process the extracted features [24] and perform the final classification, such as recognizing
different plant diseases. The CNN learning process [30], [31] begins with training, using labeled images
as input. Once trained, the model can accurately identify different types of diseases.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Related Works</title>
      <p>
        We examine various studies that have applied machine learning and deep learning methods to plant
disease detection, focusing on their methodologies and findings and pinpointing the gaps our approach
seeks to address. Several works in the literature relate to our study; we summarize some of them as
follows:
∙ Xu et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] tackled the task of identifying corn leaf diseases (healthy, leaf blight, rust) in intricate
field settings with a scarcity of data. They introduced a CNN model leveraging VGG16 and
transfer learning. By utilizing weight parameters pre-trained on ImageNet, they attained an
average recognition accuracy of 95.33%.
∙ In (Hatuwal, B., and Thapa, D. (2020)) [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], the authors aimed to tackle significant crop losses
in developing nations like Nepal, stemming from the delayed identification of plant diseases.
They proposed a method to classify and predict plant diseases utilizing machine learning models,
including Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Random Forest Classifier
(RFC), and CNN. The study used Haralick texture features such as contrast, correlation, entropy,
and inverse difference moments for the SVM, KNN, and RFC models, while the CNN was directly fed
with images. Their findings revealed that the CNN achieved the highest accuracy at 97.89%, surpassing
RFC (87.43%), SVM (78.61%), and KNN (76.96%) across sixteen distinct image categories. These
results underscore the superior efficacy of CNNs in plant disease classification.
∙ In (Roy, A., and Patel, D. (2024))[
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], the authors tackled the prevalent issue of leaf diseases in
agriculture through disease classification employing deep learning techniques. They evaluated
the performance of VGG-16, VGG-19, InceptionV3, and DenseNet-121 architectures, providing
a comparative analysis. DenseNet-121 achieved the highest accuracy at 91.75%. The study
involved training on a dataset containing 13 different diseases, accompanied by an analysis
based on validation accuracy, loss, and the number of epochs. This research underscored
DenseNet-121's superiority over other deep learning models, affirming its efficacy in accurately
classifying leaf diseases.
∙ The work by (Lingwal et al. (2023)) [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] addresses the critical need for early detection and
classification of plant diseases to prevent their spread and minimize crop damage. By using deep
learning techniques, specifically CNNs, the study focuses on classifying tomato leaf diseases using
the PlantVillage dataset. This dataset includes nearly 16,000 leaf images, which were divided into
training, test, and validation sets with ratios of 70%, 20%, and 10%, respectively[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. The research
compares a custom-developed CNN model with four transfer learning models: DenseNet121,
ResNet50, Inception-V3, and VGG-16. Performance evaluations based on accuracy and
cross-entropy loss indicated that both VGG-16 and the custom CNN model achieved impressive results,
with validation accuracies of 90% and 83% on the test set, respectively[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. These findings
underscore the effectiveness of CNNs, especially transfer learning approaches, in accurately
classifying tomato leaf diseases, thereby providing a valuable tool for early disease detection in
agriculture[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
∙ The study by (Zheng et al. (2023)) [18] emphasizes the critical role of computer vision in detecting
plant diseases, particularly the necessity for accurate pattern recognition. A CNN was trained
on a dataset comprising 22,930 tomato leaf images. The baseline model achieved a training
accuracy of 90% [18]. Comparative analysis of various architectures, including VGG16, MobileNet,
and InceptionV3, revealed that MobileNet had the highest training accuracy of 91% and was the
most efficient [18]. Despite MobileNet's superior performance, the proposed CNN architecture
offers faster training due to its shallower design. This research lays the foundation for future
efforts in developing lightweight, fast, and accurate algorithms for classifying plant diseases,
ensuring their practical application in agriculture [18].
      </p>
      <p>The summarized studies encompass a variety of crops and diseases, including those affecting corn,
wheat, tomatoes, apples, and various fungal infections. They employ a range of methods, from traditional
machine learning algorithms such as SVM, KNN, and RFC, to advanced deep learning architectures
like VGG16, custom CNNs, DenseNet-121, MobileNet, and SqueezeNet. The dataset sizes vary considerably,
with image counts ranging from a few thousand to nearly 37,000. The reported accuracy metrics
generally demonstrate high effectiveness in disease detection. Studies utilizing advanced deep learning
methods such as CNNs and DenseNets typically achieve higher accuracy levels, highlighting their efficacy
in image-based plant disease detection. Table 1 summarizes these studies, including the accuracy metrics
used for model evaluation.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Methodology</title>
      <p>
        Our methodology (see Figure 2) is designed to ensure accurate classification of plant diseases through
a systematic process. First, we curate a customized dataset and preprocess it, carefully splitting it
into training and test sets. Data augmentation techniques are then applied to enhance the dataset,
improving the model’s robustness. We construct a CNN model with precision, selecting convolutional
and pooling layers to effectively extract key features while maintaining efficiency and accuracy. The
model is optimized to balance performance with a minimal parameter count. After construction, the
model is rigorously trained using advanced optimization techniques, followed by performance evaluation
on the test set to assess its effectiveness in real-world scenarios. The final model is deployed for swift and
accurate plant disease prediction, providing a reliable tool for early crop health detection. Throughout
the process, we carefully manage the dataset, model architecture, and parameter optimization to ensure
precise and efficient disease classification.
      </p>
      <table-wrap id="tbl1">
        <label>Table 1</label>
        <caption>
          <p>Summary of related studies on plant disease detection.</p>
        </caption>
        <table>
          <thead>
            <tr><th>Reference</th><th>Object</th><th>Method</th><th>Dataset</th><th>Performance</th></tr>
          </thead>
          <tbody>
            <tr><td>Xu et al. (2020) [<xref ref-type="bibr" rid="ref7">7</xref>]</td><td>Corn</td><td>VGG16</td><td>–</td><td>Accuracy = 95.33%</td></tr>
            <tr><td>Kiruthika et al. (2019) [<xref ref-type="bibr" rid="ref14">14</xref>]</td><td>Leaf</td><td>ANN</td><td>6108 images</td><td>Accuracy = 93.33%</td></tr>
            <tr><td>Hatuwal and Thapa (2020) [<xref ref-type="bibr" rid="ref15">15</xref>]</td><td>Leaf</td><td>SVM / KNN / RFC / CNN</td><td>36,958 images</td><td>Accuracy = 78.61% / 76.96% / 87.43% / 97.89%</td></tr>
            <tr><td>Roy et al. (2024) [<xref ref-type="bibr" rid="ref16">16</xref>]</td><td>Leaf</td><td>DenseNet-121</td><td>–</td><td>Accuracy = 91.75%</td></tr>
            <tr><td>Lingwal et al. (2023) [<xref ref-type="bibr" rid="ref17">17</xref>]</td><td>Tomato</td><td>VGG-16</td><td>16,000 images</td><td>Accuracy = 90%</td></tr>
            <tr><td>Shin et al. (2021) [19]</td><td>Fungal</td><td>SqueezeNet-MOD2</td><td>11,600 images</td><td>Accuracy = 92.61%</td></tr>
            <tr><td>Zheng et al. (2023) [18]</td><td>Tomato</td><td>MobileNet</td><td>22,930 images</td><td>Accuracy = 91%</td></tr>
            <tr><td>Zhong et al. (2020) [20]</td><td>Apple</td><td>DenseNet-121</td><td>2462 images</td><td>Accuracy = 92.29%</td></tr>
          </tbody>
        </table>
      </table-wrap>
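      <p>The data augmentation step described above can be sketched with simple label-preserving transforms. The specific transforms below (flips, 90-degree rotations, brightness jitter) are illustrative assumptions, since the text does not enumerate the exact pipeline used.</p>

```python
import numpy as np

def augment(img, rng):
    """Apply simple label-preserving augmentations to a leaf image.

    The particular transforms (flips, rotations, brightness jitter) are
    illustrative; the paper does not specify its exact augmentation pipeline.
    """
    if rng.random() >= 0.5:
        img = img[:, ::-1]                               # horizontal flip
    if rng.random() >= 0.5:
        img = img[::-1, :]                               # vertical flip
    img = np.rot90(img, k=int(rng.integers(0, 4)))       # random 90-degree rotation
    img = np.clip(img * rng.uniform(0.8, 1.2), 0, 255)   # brightness jitter
    return img

rng = np.random.default_rng(0)
leaf = np.full((256, 256, 3), 128.0)  # grey placeholder standing in for a leaf photo
augmented = augment(leaf, rng)
```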
      <sec id="sec-5-1">
        <title>5.1. Customized Dataset</title>
        <p>The database contains a collection of plant leaf images spanning 39 different classes (Figure 3). These
classes include healthy leaves as well as leaves affected by various diseases or health issues. Each class
represents a specific type of plant disease or health condition, such as apple scab, common corn rust, or
tomato leaf mold. The dataset consists of a total of 61,486 images. After modifying and cleaning the
database, we created a new dataset with 29 different classes, representing various plant diseases
and health states. The new dataset contains 31,573 images in total. To better manage model training,
we divided this dataset into two parts: a training set of 26,830 images and a test set of 4,743 images
(Table 2). The purpose of modifying and cleaning the database is to tailor it to our needs; the starting
point was an existing database covering 16 types of plants and their associated diseases. This
optimization streamlines processing on the server by avoiding the need to manage two separate processes.</p>
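        <p>A shuffled train/test split of the 31,573 cleaned images can be sketched as follows. The file names, random seed, and the 15% test fraction are illustrative assumptions; the paper's exact counts are 26,830 training and 4,743 test images.</p>

```python
import random

def split_dataset(items, test_frac=0.15, seed=42):
    """Shuffle a list of image paths and split it into train and test sets."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n_test = round(len(items) * test_frac)
    return items[n_test:], items[:n_test]

# Hypothetical file names standing in for the 31,573 cleaned images.
images = [f"img_{i:05d}.jpg" for i in range(31573)]
train_set, test_set = split_dataset(images)
```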
      </sec>
      <sec id="sec-5-2">
        <title>5.2. CNN architecture</title>
        <p>The CNN architecture, outlined in Table 3 and Figure 4, contains a total of eight layers. It starts with an
input layer that accommodates images of dimensions 256x256 pixels with three color channels (RGB).
The next layers alternate Conv2D layers for feature extraction with pooling layers. The first Conv2D
layer contains 32 filters of size 5x5, followed by a Rectified Linear Unit (ReLU) activation function.
This is followed by MaxPooling2D layers with pool sizes of 3x3 and 2x2 to downsample the spatial
dimensions of the feature maps. The subsequent two Conv2D layers contain 32 and 64 filters of size 3x3,
respectively, each with ReLU activation, and are followed by additional MaxPooling2D layers that further
downsample the feature maps. The seventh layer flattens the resulting feature maps into a one-dimensional
array. This is followed by two Dense layers with ReLU activation, containing 512 and 128 units, respectively.
Dropout regularization is applied to the first Dense layer to mitigate overfitting. The final layer is
the output layer, which contains 29 units with a Softmax activation function, facilitating multi-class
classification by outputting class probabilities.</p>
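        <p>The layer-by-layer feature-map shapes implied by this description can be checked with a small calculator. The 'valid' convolution padding and non-overlapping pooling strides are assumptions, since the text does not state them explicitly.</p>

```python
def conv2d(h, w, _c, filters, k):
    """'valid' convolution with stride 1: each spatial dim shrinks by k - 1."""
    return h - k + 1, w - k + 1, filters

def maxpool(h, w, c, p):
    """Non-overlapping pooling: stride equals pool size, floor division."""
    return h // p, w // p, c

shape = (256, 256, 3)                    # RGB input layer
shape = conv2d(*shape, filters=32, k=5)  # first Conv2D: 32 filters, 5x5
shape = maxpool(*shape, p=3)             # MaxPooling2D, pool size 3x3
shape = conv2d(*shape, filters=32, k=3)  # second Conv2D: 32 filters, 3x3
shape = maxpool(*shape, p=2)             # MaxPooling2D, pool size 2x2
shape = conv2d(*shape, filters=64, k=3)  # third Conv2D: 64 filters, 3x3
shape = maxpool(*shape, p=2)             # MaxPooling2D, pool size 2x2
flat = shape[0] * shape[1] * shape[2]    # Flatten before the Dense 512/128/29 head
```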
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Hyperparameters configuration</title>
        <p>The hyperparameters presented in Table 4 were carefully selected and fine-tuned through
a series of preliminary tests in order to optimize the performance of the model. These values were
determined using a systematic approach.</p>
        <p>• Image Dimensions: The image dimensions were chosen based on the typical size and aspect ratio
of the input images in the dataset. We experimented with various dimensions and found that
256 × 256 pixels struck a good balance between capturing important features and computational
efficiency.
• Batch Size: We tried different batch sizes and settled on 32 because it allowed for efficient training
without excessive memory consumption or compromising model performance.
• Number of Classes: The number of classes in the dataset was predetermined based on the nature
of the classification problem. In this case, there are 29 distinct classes representing different types
of plant diseases.</p>
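        <p>One practical consequence of the batch-size choice is the number of optimizer steps per epoch on the 26,830-image training set. The candidate sizes below are illustrative values a sweep might compare, not the paper's full search space.</p>

```python
import math

TRAIN_SIZE = 26830  # training images after the split (Section 5.1)

# Steps per epoch for candidate batch sizes; batch size 32 was retained.
steps_per_epoch = {bs: math.ceil(TRAIN_SIZE / bs) for bs in (16, 32, 64)}
```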
      </sec>
    </sec>
    <table-wrap id="tbl3">
      <label>Table 3</label>
      <caption>
        <p>Layer types of the proposed CNN architecture.</p>
      </caption>
      <table>
        <thead>
          <tr><th>Layer Type</th></tr>
        </thead>
        <tbody>
          <tr><td>Input (InputLayer)</td></tr>
          <tr><td>Conv2D</td></tr>
          <tr><td>MaxPooling2D</td></tr>
          <tr><td>Conv2D</td></tr>
          <tr><td>MaxPooling2D</td></tr>
          <tr><td>Conv2D</td></tr>
          <tr><td>MaxPooling2D</td></tr>
          <tr><td>Flatten</td></tr>
          <tr><td>Dense</td></tr>
          <tr><td>Dropout</td></tr>
          <tr><td>Dense</td></tr>
          <tr><td>Dense</td></tr>
        </tbody>
      </table>
    </table-wrap>
    <p>These specific values were selected through an iterative process of experimentation and evaluation,
ensuring optimal performance and generalization ability of the model.</p>
    <sec id="sec-7">
      <title>6. Results and discussion</title>
      <sec id="sec-7-1">
        <title>6.1. Results</title>
        <p>The customized CNN architecture applied to the customized dataset achieved outstanding
performance, with an accuracy of 96.53% and a loss value of 0.1. These results indicate that the model is highly
effective at classifying plant diseases, demonstrating both precision and reliability in its predictions.</p>
        <p>Figure 5 illustrates the performance of our customized CNN model on the training and validation
datasets.</p>
        <p>The confusion matrix (Figure 6) provides a detailed breakdown of the performance of our customized
CNN model on the test dataset. Each cell in the matrix represents the number of predictions made
by the model for each class, allowing us to see where the model performs well and where it makes
errors. The confusion matrix is a crucial tool for evaluating the performance of a classification
model. By examining the diagonal and off-diagonal values, we can identify which classes the model
predicts well and which ones it struggles with, guiding further improvements in the model or
dataset. It also allows us to calculate the following metrics for each class:
• Precision: how many of the predicted positive instances are actually correct.
• Recall: how many actual positive instances are correctly identified by the model.
• F1-Score: the harmonic mean of precision and recall, providing a single metric that balances
both.</p>
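        <p>These per-class metrics can be computed directly from the confusion matrix. The 3x3 matrix below is a toy illustration, not the paper's 29-class results.</p>

```python
import numpy as np

def per_class_metrics(cm):
    """Compute per-class precision, recall, and F1 from a confusion matrix.

    cm[i, j] is the number of samples of true class i predicted as class j.
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                                    # correct predictions per class
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)  # column sums: predicted counts
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)     # row sums: true counts
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision, recall, f1

# Toy 3-class confusion matrix (illustrative only).
cm = [[50, 2, 0],
      [3, 45, 2],
      [1, 0, 49]]
precision, recall, f1 = per_class_metrics(cm)
```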
        <p>By leveraging the insights from the confusion matrix, we can iteratively refine our model, making
targeted adjustments to enhance its overall performance and reliability.</p>
      </sec>
      <sec id="sec-7-2">
        <title>6.2. Discussion</title>
        <p>In this study, we have developed a customized CNN architecture tailored for a specialized dataset
containing images of various plant diseases. Our primary objective was to create a model capable of
accurately identifying different plant diseases, making it a vital tool for agricultural management and
disease control.</p>
        <sec id="sec-7-2-1">
          <title>6.2.1. Model Performance</title>
          <p>Our CNN model achieved an accuracy of 96.53% and a loss value of 0.1 on the test dataset.
These metrics underscore the model’s ability to learn and recognize the intricate patterns and features
associated with each plant disease class. The robustness of our model is further evidenced by
the training and validation accuracy and loss curves, as well as the confusion matrix, which illustrates
the model’s proficiency in correctly classifying the majority of plant disease images with minimal
misclassifications.</p>
        </sec>
        <sec id="sec-7-2-2">
          <title>6.2.2. Hyperparameter Tuning</title>
          <p>Finding the ideal hyperparameters for training a deep learning model, such as the learning rate, batch
size, and number of layers, can be a challenging and time-consuming task. It often requires extensive
experimentation.</p>
        </sec>
        <sec id="sec-7-2-3">
          <title>6.2.3. Interpretability and transparency</title>
          <p>Interpretability and transparency are essential when deploying machine learning models, particularly
in scenarios where understanding and justifying decisions is critical. Deep learning models like CNNs
are often considered "black boxes" due to their complexity, making it hard to interpret their internal
workings. This is a significant issue in fields like medical diagnosis or autonomous driving, where trust
in the model’s decisions is paramount. In our study, we tackled this challenge by creating a custom CNN
architecture for plant disease classification. Unlike pre-trained models with many layers and parameters,
our model is simpler, with fewer parameters, reducing complexity and enhancing interpretability. This
streamlined design makes it easier to understand how features are processed and which ones are
most influential in predictions. Additionally, the architecture incorporates domain-specific knowledge,
which aligns the model’s decisions with biological factors behind plant diseases, further increasing
transparency.</p>
        </sec>
        <sec id="sec-7-2-4">
          <title>6.2.4. Scalability and Adaptability</title>
          <p>• Adaptation to New Data: A significant challenge is ensuring that the model can be easily adapted
to new data and different contexts, such as new plant diseases or variations, without requiring
extensive retraining.
• Model Scalability: The model should be able to handle a growing dataset and the addition of new
classes without extensive reengineering, while still maintaining its performance.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>7. Conclusion</title>
      <p>In this study, we have developed a customized Convolutional Neural Network for identifying plant
diseases using a specialized dataset. The model achieved an impressive accuracy of 96.53% and a low
loss value of 0.1, demonstrating its robustness and potential for practical agricultural applications.
Moving forward, expanding the dataset to include more plant species and diseases, as well as images
from different geographic regions, will improve the model’s generalizability. Additionally, testing the
model under real-world conditions and employing advanced hyperparameter tuning techniques can
further enhance its performance. Exploring model pruning and quantization techniques can also help
reduce computational requirements, allowing for deployment on mobile and low-power edge devices.
To make the model accessible to farmers and agricultural professionals, it is crucial to integrate it with
Internet of Things (IoT) devices for real-time monitoring. Furthermore, developing user-friendly mobile
and web applications will facilitate the use of the model. These efforts will contribute to agricultural
sustainability and food security. In conclusion, our customized CNN model holds great promise for
plant disease detection. With ongoing development and refinement, it has the potential to become a
crucial tool in global agriculture.</p>
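Post-training quantization, mentioned above as a route to mobile and edge deployment, maps float32 weights to 8-bit integers through a per-tensor scale and zero point. The functions below are a minimal pure-Python sketch of the affine int8 scheme (the function names are ours, not from any specific library):

```python
def quantize_int8(weights):
    """Affine (asymmetric) int8 quantization of a list of float weights.

    Returns (q, scale, zero_point) such that w ≈ (q - zero_point) * scale.
    """
    lo, hi = min(weights), max(weights)
    lo, hi = min(lo, 0.0), max(hi, 0.0)     # keep 0.0 exactly representable
    scale = (hi - lo) / 255.0 or 1.0        # 255 steps span the int8 range
    zero_point = round(-128 - lo / scale)   # int8 code that represents 0.0
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 codes back to approximate float weights."""
    return [(v - zero_point) * scale for v in q]

w = [-0.42, 0.0, 0.17, 0.31, -0.05]
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
# Round-trip error is bounded by half a quantization step (scale / 2):
assert all(abs(a - b) <= s / 2 for a, b in zip(w, w_hat))
```

A real deployment would also quantize activations and typically use per-channel scales, but the 4x storage reduction (float32 to int8) already follows from this scheme.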
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-10">
      <title>References (continued)</title>
      <p>[18] Zheng, Ji, and Minjie Du. "Study on tomato disease classification based on leaf image recognition based on deep learning technology." International Journal of Advanced Computer Science and Applications 14.4 (2023).</p>
      <p>[19] Shin, Jaemyung, et al. "A deep learning approach for RGB image-based powdery mildew disease detection on strawberry leaves." Computers and Electronics in Agriculture 183 (2021): 106042.</p>
      <p>[20] Zhong, Yong, and Ming Zhao. "Research on deep learning in apple leaf disease recognition." Computers and Electronics in Agriculture 168 (2020): 105146.</p>
      <p>[21] Ben Mesmia, Walid, and Kamel Barkaoui. "Production Chain Modeling based on Learning Flow Stochastic Petri Nets." Soft Computing (2024), https://doi.org/10.1007/s00500-024-09865-y</p>
      <p>[22] Krichen, Moez. "Convolutional neural networks: A survey." Computers 12.8 (2023): 151.</p>
      <p>[23] Hcini, Ghazala, Imen Jdey, and Habib Dhahri. "Investigating Deep Learning for Early Detection and Decision-Making in Alzheimer’s Disease: A Comprehensive Review." Neural Processing Letters 56.3 (2024): 1-38.</p>
      <p>[24] Hcini, Ghazala, Imen Jdey, and Hela Ltifi. "Improving Malaria Detection Using L1 Regularization Neural Network." JUCS: Journal of Universal Computer Science 28.10 (2022).</p>
      <p>[25] Topaloglu, Ismail. "Deep learning based convolutional neural network structured new image classification approach for eye disease identification." Scientia Iranica 30.5 (2023): 1731-1742.</p>
      <p>[26] Abbas, Aqleem, et al. "Drones in plant disease assessment, efficient monitoring, and detection: a way forward to smart agriculture." Agronomy 13.6 (2023): 1524.</p>
      <p>[27] Ray, Monalisa, et al. "Fungal disease detection in plants: Traditional assays, novel diagnostic techniques and biosensors." Biosensors and Bioelectronics 87 (2017): 708-723.</p>
      <p>[28] Ngugi, Lawrence C., Moataz Abelwahab, and Mohammed Abo-Zahhad. "Recent advances in image processing techniques for automated leaf pest and disease recognition – A review." Information Processing in Agriculture 8.1 (2021): 27-51.</p>
      <p>[29] Mastour, I., Sliman, L., Charroux, B., Djemaa, R. B., and Barkaoui, K. "Privacy-preserving Collaborative Computation: Methods, Challenges, and Directions." 2023 International Conference on Computer and Applications (ICCA). IEEE, 2023. 1-6.</p>
      <p>[30] Jdey, Imen. "Trusted smart irrigation system based on fuzzy IoT and blockchain." International Conference on Service-Oriented Computing. Cham: Springer Nature Switzerland, 2022.</p>
      <p>[31] Jdey, Imen, et al. "Fuzzy fusion system for radar target recognition." International Journal of Computer Applications &amp; Information Technology 1.3 (2012): 136-142.</p>
      <p>[32] Hcini, Ghazala, Imen Jdey, and Hela Ltifi. "HSV-Net: a custom CNN for malaria detection with enhanced color representation." 2023 International Conference on Cyberworlds (CW). IEEE, 2023.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Fones</surname>
            ,
            <given-names>Helen N.</given-names>
          </string-name>
          , et al.
          <article-title>"Threats to global food security from emerging fungal and oomycete crop pathogens."</article-title>
          <source>Nature Food 1.6</source>
          (
          <year>2020</year>
          ):
          <fpage>332</fpage>
          -
          <lpage>342</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>Vipin Kumar</given-names>
          </string-name>
          , et al.
          <article-title>"Current status of plant diseases and food security."</article-title>
          <source>Food Security and Plant Disease Management</source>
          . Woodhead Publishing,
          <year>2021</year>
          .
          <fpage>19</fpage>
          -
          <lpage>35</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Wani</surname>
            ,
            <given-names>Javaid Ahmad</given-names>
          </string-name>
          , et al.
          <article-title>"Machine learning and deep learning based computational techniques in automatic agricultural diseases detection: Methodologies, applications, and challenges."</article-title>
          <source>Archives of Computational Methods in Engineering 29.1</source>
          (
          <year>2022</year>
          ):
          <fpage>641</fpage>
          -
          <lpage>677</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Ouhami</surname>
            ,
            <given-names>Maryam</given-names>
          </string-name>
          , et al.
          <article-title>"Computer vision, IoT and data fusion for crop disease detection using machine learning: A survey and ongoing research."</article-title>
          <source>Remote Sensing 13.13</source>
          (
          <year>2021</year>
          ):
          <fpage>2486</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>Leiyu</given-names>
          </string-name>
          , et al.
          <article-title>"Review of image classification algorithms based on convolutional neural networks."</article-title>
          <source>Remote Sensing 13.22</source>
          (
          <year>2021</year>
          ):
          <fpage>4712</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Hossain</surname>
            ,
            <given-names>Md Anwar</given-names>
          </string-name>
          , and Md Shahriar Alam Sajib.
          <article-title>"Classification of image using convolutional neural network (CNN)."</article-title>
          <source>Global Journal of Computer Science and Technology 19.2</source>
          (
          <year>2019</year>
          ):
          <fpage>13</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>J. H.</given-names>
          </string-name>
          , et al.
          <article-title>"Recognition of corn leaf spot and rust based on transfer learning with convolutional neural network."</article-title>
          <source>Transactions of the CSAM 51.2</source>
          (
          <year>2020</year>
          ):
          <fpage>230</fpage>
          -
          <lpage>236</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Hinton</surname>
            ,
            <given-names>Geoffrey E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Salakhutdinov</surname>
            ,
            <given-names>Ruslan R.</given-names>
          </string-name>
          .
          <article-title>"Reducing the dimensionality of data with neural networks."</article-title>
          <source>Science 313</source>
          .5786 (
          <year>2006</year>
          ):
          <fpage>504</fpage>
          -
          <lpage>507</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Aggarwal</surname>
            ,
            <given-names>Charu C.</given-names>
          </string-name>
          .
          <article-title>Neural networks and deep learning</article-title>
          . Vol.
          <volume>10</volume>
          . No. 978. Cham: Springer,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Venkateswara</surname>
            ,
            <given-names>Hemanth</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chakraborty</surname>
            ,
            <given-names>Shayok</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Panchanathan</surname>
            ,
            <given-names>Sethuraman</given-names>
          </string-name>
          .
          <article-title>"Deep-learning systems for domain adaptation in computer vision: Learning transferable feature representations."</article-title>
          <source>IEEE Signal Processing Magazine 34.6</source>
          (
          <year>2017</year>
          ):
          <fpage>117</fpage>
          -
          <lpage>129</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Abiodun</surname>
            ,
            <given-names>Oludare Isaac</given-names>
          </string-name>
          , et al.
          <article-title>"Comprehensive review of artificial neural network applications to pattern recognition."</article-title>
          <source>IEEE Access 7</source>
          (
          <year>2019</year>
          ):
          <fpage>158820</fpage>
          -
          <lpage>158846</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Jdey</surname>
            ,
            <given-names>Imen</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hcini</surname>
            ,
            <given-names>Ghazala</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Ltifi</surname>
            ,
            <given-names>Hela</given-names>
          </string-name>
          .
          <article-title>"Deep Learning and Machine Learning for Malaria Detection: Overview, Challenges and Future Directions."</article-title>
          <source>International Journal of Information Technology &amp; Decision Making</source>
          (
          <year>2023</year>
          ):
          <fpage>1</fpage>
          -
          <lpage>32</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Hang</surname>
            ,
            <given-names>Jie</given-names>
          </string-name>
          , et al.
          <article-title>"Classification of plant leaf diseases based on improved convolutional neural network."</article-title>
          <source>Sensors</source>
          <volume>19</volume>
          .19 (
          <year>2019</year>
          ):
          <fpage>4161</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Kiruthika</surname>
            ,
            <given-names>S. Usha</given-names>
          </string-name>
          , et al.
          <article-title>"Detection and classification of paddy crop disease using deep learning techniques."</article-title>
          <source>International Journal of Recent Technology and Engineering 8</source>
          .3 (
          <year>2019</year>
          ):
          <fpage>4353</fpage>
          -
          <lpage>4359</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Hatuwal</surname>
            ,
            <given-names>B. K.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shakya</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Joshi</surname>
          </string-name>
          .
          <article-title>Plant leaf disease recognition using random forest, KNN, SVM and CNN</article-title>
          .
          <source>Polibits</source>
          <volume>62</volume>
          ,
          <fpage>13</fpage>
          -
          <lpage>19</lpage>
          . doi: 10.17562/PB-62-2,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Roy</surname>
            ,
            <given-names>Vikas Kumar</given-names>
          </string-name>
          , et al.
          <article-title>"Leaf disease recognition: Comparative analysis of various convolutional neural network algorithms." Intelligent Prognostics for Engineering Systems with Machine Learning Techniques</article-title>
          . CRC Press,
          <year>2024</year>
          .
          <fpage>59</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Lingwal</surname>
            ,
            <given-names>Surabhi</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bhatia</surname>
            ,
            <given-names>Komal Kumar</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>Manjeet</given-names>
          </string-name>
          .
          <article-title>"Deep convolutional neural network approach for tomato leaf disease classification."</article-title>
          <source>Machine Learning, Image Processing, Network Security and Data Sciences: Select Proceedings of 3rd International Conference on MIND 2021</source>
          . Singapore: Springer Nature Singapore,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>