<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Comparison of Convolutional Neural Networks using Transfer Learning for Cannabis Seed Classification</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Giovanni Acampora</string-name>
          <email>giovanni.acampora@unina.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carme Barrot-Feixat</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michele Di Nunzio</string-name>
          <email>michele.dinunzio@ub.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marianna Santoro</string-name>
          <email>marianna.santoro18@libero.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Autilia Vitiello</string-name>
          <email>autilia.vitiello@unina.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Convolutional Neural Networks, Transfer Learning, Seed Classification, Cannabis detection</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Physics “Ettore Pancini”, University of Naples Federico II</institution>
          ,
          <addr-line>80126 Naples</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Forensic Genetics Laboratory, Legal Medicine Unit, University of Barcelona</institution>
          ,
          <addr-line>08007 Barcelona</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>IRCCS Policlinico di Sant'Orsola, Azienda Ospedaliero-Universitaria di Bologna</institution>
          ,
          <addr-line>40138 Bologna</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>Cannabis has attracted significant attention in recent years due to its medicinal and recreational properties. However, certain cannabis varieties contain high levels of tetrahydrocannabinol (THC), which can pose health risks. This underscores the need for reliable methods to detect and classify different cannabis varieties. Traditional manual classification and sorting techniques are often time-consuming and error-prone, highlighting the potential of artificial intelligence (AI) as an effective alternative. Despite its promise, the application of AI, especially deep learning, faces challenges such as the limited availability of labeled data and the necessity to deploy models on mobile devices with constrained computational resources. To address these issues, this study explores the use of transfer learning for classifying cannabis seeds. Transfer learning mitigates data scarcity by relying on pre-trained models, while computational efficiency is tackled by selecting architectures optimized for mobile environments or characterized by relatively low resource demands. The experimental evaluation, involving a new dataset comprising two cannabis seed varieties, demonstrates that deep learning models employing transfer learning can achieve high classification performance, even under resource-limited conditions.</p>
      </abstract>
      <kwd-group>
        <kwd>Convolutional Neural Networks</kwd>
        <kwd>Transfer Learning</kwd>
        <kwd>Seed Classification</kwd>
        <kwd>Cannabis detection</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Cannabis is the botanical name of a genus within the Cannabaceae, the same plant family that contains
hops. The genus includes three species, Cannabis sativa, Cannabis ruderalis and Cannabis indica [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
In particular, Cannabis sativa is one of the oldest cultivated crops for various purposes such as food,
medicine, and fiber. It is thought to have originated in central Asia near the northwest Himalayas and
has spread throughout the world. All Cannabis plants share the presence of cannabinoids, or more
specifically phytocannabinoids. Tetrahydrocannabinol (THC) is the most well-known cannabinoid. It
shows considerable medical benefits. Indeed, THC relieves symptoms of sleep disorders, anxiety, and
insomnia and acts as an antidepressant. However, high doses of THC can impair thinking, concentration,
perception, and mental function, potentially leading to behavioral disorders, hallucinations, delusions,
or psychosis. Because of these side effects and risks, cannabis plants and related seeds with high levels
of THC are illegal in many countries [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ][
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Hence, there is a growing demand for efficient and accurate
methods to detect and classify cannabis seeds [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Conventional manual classification and sorting procedures are time-consuming, labor-intensive,
and prone to human error. In some ways, DNA-based forensic botany techniques have outperformed
conventional chemical methods in the analysis of cannabis. Nevertheless, prior efforts to find and
confirm genetic markers on cannabinoid synthase genes for crop type identification have run into problems [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        In response to these challenges, artificial intelligence is emerging as a promising solution. In the literature,
deep learning methods have been widely used for the classification of crop seeds [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. For example, in
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], a Convolutional Neural Network-based model is used to classify soybean seeds. More recently,
deep learning has been emerging for cannabis seed classification. As an example, a non-destructive
assessment of hemp seed vigor using machine learning and deep learning models with hyperspectral
imaging is proposed in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Another example is reported in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], where Islam et al. developed a deep
learning approach to distinguish among 17 different cannabis seed varieties using RetinaNet and Faster
R-CNN, achieving good performance.
      </p>
      <p>Unfortunately, one major challenge in deploying deep learning models, especially for specialized
tasks such as identifying cannabis seed types, is the limited availability of labeled data. Deep neural
networks, particularly convolutional architectures like AlexNet, GoogLeNet, or RetinaNet, typically
require large datasets to generalize well. Moreover, to make the use of these models practical, deep
learning methods should run on mobile devices. However, mobile and embedded devices face hardware
limitations that restrict the deployment of complex deep learning models, such as limited memory and
storage and low processing power. The combination of a small dataset and limited computational power
complicates the use of conventional deep learning approaches on mobile platforms. Even if a model
is trained offline on a high-performance server, deploying it to a resource-constrained environment
requires adaptation.</p>
      <p>
        Starting from these considerations, in this work, a comparison of CNN architectures using transfer
learning has been carried out to classify cannabis seeds. Transfer learning is one of the most widely
used pretraining techniques: a baseline model is first trained on a large dataset and
then fine-tuned on a smaller target dataset. To create a solid foundation model, transfer learning
uses parameters already trained on other source data rather than explicitly training the model on a
comparatively small target dataset [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. This characteristic of transfer learning significantly reduces
the need for large amounts of training data, which is typically required by deep learning methods
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Moreover, to better reflect real-world scenarios where CNNs are expected to run on mobile
devices, this study focuses on four CNN architectures, MobileNet [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], EfficientNet [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], Xception [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ],
and NASNetMobile [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], which are either specifically optimized for mobile use or are characterized
by relatively low computational cost. The proposed models were tested in an experimental session
involving a new dataset composed of hemp and marijuana seeds and evaluated through standard
performance metrics for classification tasks.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Methods</title>
      <p>
        In our study, four different state-of-the-art pre-trained CNN architectures have been used to extract
features and classify images of cannabis seeds: MobileNet [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], EfficientNet [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], Xception [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] and
NasNetMobile [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>
        The MobileNet family comprises multiple variants of CNNs designed to be lightweight and efficient,
making them ideal for use on mobile and embedded devices with limited computational resources.
It breaks down the standard convolution into two separate operations, depth-wise convolution layers
and point-wise convolution layers, reducing the number of parameters and computations required
while maintaining or even improving accuracy. MobileNetV2 [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] is the architecture chosen in our
experiments. It keeps efficiency and offers better performance compared to the original MobileNet
thanks to inverted residual blocks and linear bottleneck layers. MobileNetV2 includes a set of
hyper-parameters that allow developers to balance accuracy, speed, and resource usage depending on
their specific needs. Among these, one of the most important is the width multiplier,
often denoted as α. This parameter controls the number of channels (or filters) in each layer of
the network. By setting α to a value less than 1 (such as 0.75 or 0.5), the model becomes narrower,
significantly reducing both the number of parameters and the amount of computation. This comes at
the cost of some reduction in accuracy. Conversely, values greater than 1 (like 1.3 or 1.4) can be used to
create larger models for higher accuracy when computational resources are not a constraint.
      </p>
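      <p>As a minimal sketch (assuming TensorFlow/Keras is available), the width multiplier α is exposed as the alpha argument of MobileNetV2 in Keras Applications; the 224 × 224 input shape is illustrative.</p>
      <preformat>
import tensorflow as tf

# Narrower variant: alpha=0.75 shrinks the number of filters in every layer,
# trading some accuracy for fewer parameters and less computation.
narrow = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), alpha=0.75,
    include_top=False, weights="imagenet")

# Wider variant: alpha=1.4 enlarges the network for higher accuracy
# when computational resources are not a constraint.
wide = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), alpha=1.4,
    include_top=False, weights="imagenet")

print(narrow.count_params(), wide.count_params())  # narrow is far smaller
      </preformat>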
      <p>EfficientNet is a family of CNN architectures developed by Google AI, designed to achieve high
accuracy with significantly fewer parameters and lower computational cost compared to traditional
models. The key innovation behind EfficientNet is the use of a compound scaling method, which
uniformly scales the network’s depth, width, and resolution using a set of fixed scaling coefficients.
EfficientNetB0, part of the EfficientNet family, is the variant used in our experiments. This variant is
specifically designed to achieve high performance while being computationally efficient. EfficientNetB0
starts with a stem convolution and is followed by seven stages of Mobile Inverted Bottleneck Convolution
(MBConv) blocks, each with specific kernel sizes, strides, and expansion ratios. Inside the convolutional
layers and MBConv blocks, the activation function used is Swish. At the output layer, a softmax
function is typically used.</p>
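      <p>As a worked illustration of compound scaling, the base coefficients α = 1.2 (depth), β = 1.1 (width), and γ = 1.15 (resolution) reported in the EfficientNet paper [13] are raised to a common compound coefficient φ; the helper below is an illustrative sketch, not part of any library API.</p>
      <preformat>
# Compound scaling (EfficientNet): depth, width and resolution grow together.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # base coefficients from the paper

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# phi = 0 corresponds to the B0 baseline; larger phi yields B1, B2, ...
for phi in range(4):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
      </preformat>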
      <p>Xception is a deep convolutional neural network architecture that takes the original Inception idea to
its logical extreme by completely replacing Inception modules with depthwise separable convolutions.
In detail, Xception’s architecture is composed of three main flows: 1) Entry Flow composed of a few
regular convolutions followed by depthwise separable convolutions; 2) Middle Flow composed of
identical blocks of depthwise separable convolutions with residual connections inspired by the success
in ResNet; 3) Exit Flow composed of a final set of depthwise separable convolutions and max-pooling,
followed by global average pooling and fully connected layers. This modular structure aids in learning
hierarchical representations and facilitates the flow of information through the network.</p>
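      <p>A minimal Keras sketch of the repeated middle-flow unit described above: three depthwise separable convolutions with a residual connection; the 728-filter width and the input shape are illustrative values.</p>
      <preformat>
import tensorflow as tf
from tensorflow.keras import layers

def middle_flow_block(x, filters=728):
    """One Xception-style middle-flow block: three depthwise separable
    convolutions plus a residual (skip) connection."""
    residual = x
    for _ in range(3):
        x = layers.ReLU()(x)
        # SeparableConv2D = depthwise convolution followed by a 1x1
        # point-wise convolution, replacing a full standard convolution.
        x = layers.SeparableConv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
    return layers.Add()([x, residual])

inputs = tf.keras.Input(shape=(19, 19, 728))  # illustrative feature-map shape
model = tf.keras.Model(inputs, middle_flow_block(inputs))
      </preformat>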
      <p>NasNetMobile is a convolutional neural network designed using Neural Architecture Search (NAS),
i.e. an automated process that uses machine learning to discover high-performing architectures. NAS
frames the problem of finding the best CNN architecture as a Reinforcement Learning problem
through its three main constituents: search space, search strategy, and performance estimation. The
search space defines the set of possible architectural choices that can be explored. The search strategy
determines how the NAS algorithm explores the search space to find the best architecture. Once a
candidate architecture is selected, the performance estimation strategy measures its quality.</p>
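      <p>A deliberately simplified sketch of the three NAS constituents; NASNet itself trains a recurrent-network controller with reinforcement learning, whereas this toy version uses plain random search over a hypothetical search space.</p>
      <preformat>
import random

# Search space: the set of architectural choices that can be explored.
SEARCH_SPACE = {
    "kernel_size": [3, 5, 7],
    "num_filters": [16, 32, 64],
    "num_blocks": [2, 4, 6],
}

def sample_architecture():
    """Search strategy (random here; NASNet uses an RL-trained controller)."""
    return {name: random.choice(options)
            for name, options in SEARCH_SPACE.items()}

def estimate_performance(arch):
    """Performance estimation: train and evaluate the candidate (stubbed)."""
    return random.random()  # placeholder for validation accuracy

best_arch, best_score = None, -1.0
for _ in range(20):
    arch = sample_architecture()
    score = estimate_performance(arch)
    if score > best_score:
        best_arch, best_score = arch, score
print(best_arch, best_score)
      </preformat>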
    </sec>
    <sec id="sec-3">
      <title>3. Experiments and Results</title>
      <p>This section is devoted to describing data, experimental setup and results of our comparison study.</p>
      <sec id="sec-3-1">
        <title>3.1. Dataset</title>
        <p>A total of 84 cannabis seeds, including marijuana and hemp from various geographical origins, were
used in this study. Seeds were legally purchased online or from authorized cannabis shops:
• Italian hemp (N = 10), CB Weed (Le Marche);
• Czech hemp (N = 10), Cannandorra Hemp (CZ-BIO-002);
• Danish hemp (N = 10), Raab Vitalfood (DE-OKO-001);
• Danish hemp (N = 10), Reformhaus Hemp (DE-OKO-003);
• Spanish hemp (N = 10), Gramso (Comunidad Valenciana);
• Marijuana (N = 34), Royal Queen Seeds.</p>
        <p>Despite their diverse origins, all samples were included in a binary classification task (hemp vs.
marijuana).</p>
        <p>Each seed was imaged using a fully automated inverted fluorescence microscope (Leica DMI6000b).
Images were captured at 4000 × 3000 pixels (96 dpi, both horizontal and vertical resolution) and saved
in .JPG format.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Experimental setup</title>
        <p>
          Various preprocessing methods were used on our dataset to improve the quality of the input data, make
the model training more efficient, and ensure model generalization. These images were pre-processed
in the following sequence (a minimal sketch of this pipeline is shown after the list):
• Data labeling to categorize hemp and marijuana classes;
• Transformation of non-numerical labels into an ordinal encoding scheme;
• Seed segmentation using the Rembg [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] tool, based on U2-Net, to remove the background while
keeping salient seed features;
• Resizing from 4000 × 3000 pixels to 1000 × 750 pixels for training efficiency;
• Data normalization (pixel values scaled between 0 and 1) to make the network learn faster
and converge more efficiently.
        </p>
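        <p>A minimal sketch of this preprocessing pipeline, assuming the rembg and Pillow packages; the file name and label mapping are illustrative.</p>
        <preformat>
import numpy as np
from PIL import Image
from rembg import remove

LABELS = {"hemp": 0, "marijuana": 1}  # ordinal encoding of the two classes

def preprocess(path):
    """Segment the seed, resize, and scale pixel values to the range 0-1."""
    img = Image.open(path).convert("RGB")    # original 4000 x 3000 image
    img = remove(img)                        # U2-Net background removal
    img = img.convert("RGB").resize((1000, 750))
    return np.asarray(img, dtype=np.float32) / 255.0

x = preprocess("seed_001.jpg")               # illustrative file name
y = LABELS["hemp"]
        </preformat>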
        <p>Figs. 1 and 2 show the effect of the preprocessing operations on the images, after segmentation and
resizing. From the images it is possible to deduce that the transformations have not worsened the
quality of the images and that the salient features are still clearly visible and optimized for the training process.</p>
        <p>[Figure panels: (a) original hemp seed; (b) hemp seed after preprocessing.]</p>
        <p>After these initial image preprocessing steps, we split the dataset in the ratio of 80:20 into training
and test folders, ensuring class balancing. Validation data have been obtained from the training data with a
ratio of 90:10 for training and validation folders, for monitoring the model’s generalization performance.
For feature extraction, we have used multiple pre-trained CNN architectures, as mentioned before, from
Keras Applications. Default hyper-parameters were used. In particular, α = 1 for MobileNetV2 allows a
good trade-off between performance and computational cost.</p>
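        <p>A minimal sketch of the stratified 80:20 and 90:10 splits, assuming scikit-learn; the variable names and the fixed random seed are illustrative, with X and y holding the preprocessed images and encoded labels.</p>
        <preformat>
from sklearn.model_selection import train_test_split

# 80:20 split into training and test sets, stratified to keep class balance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42)

# 90:10 split of the training data to obtain a validation set.
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.10, stratify=y_train, random_state=42)
        </preformat>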
        <p>
          We get pre-trained weights alongside each model. All of them were trained on the well-known
ImageNet dataset [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], which consists of 1000 object categories. There are more than a
million images in ImageNet’s training set, around fifty thousand in its validation set, and one hundred
thousand in its test set. Freezing the pre-trained layers allows the network to focus on learning
new, task-specific features while leveraging the knowledge already encoded in the pre-trained model.
Using transfer learning-based models has the advantage of reducing model training time, generalization
errors, and the need for huge datasets. In the transfer learning scenario, the following
steps were computed (a minimal sketch is shown after the list):
• Retrieving the layers of the pre-trained model and downloading the pre-trained weights;
• Freezing the layers to avoid the weights being re-initialized;
• Adding new trainable layers on top of the frozen layers that will turn old features into predictions on
our new dataset;
• Training the new layers on our seed dataset.
        </p>
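        <p>A minimal Keras sketch of the first two steps for one backbone (MobileNetV2); the input size is illustrative, and steps 3 and 4 appear in the sketch after the next paragraph.</p>
        <preformat>
import tensorflow as tf

# Steps 1-2: retrieve the layers of the pre-trained model together with its
# ImageNet weights, dropping the original 1000-class classification top.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),   # illustrative input size
    include_top=False, weights="imagenet")

# Freeze the layers so the pre-trained weights are not re-initialized
# or updated while the new layers are trained.
base.trainable = False
        </preformat>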
        <p>The layers added in our networks are the GlobalAveragePooling2D and Dense layers. The
GlobalAveragePooling2D layer is used to extract spatial information from feature maps. It is instrumental in
reducing parameters and simplifying model architectures, as it contains fewer parameters than the
Flatten layer, which reduces the risk of overfitting and helps build a more efficient model. The final layer
of the model is the Dense layer. The goal is to use the pre-trained model, or a part of it, to pre-process
images and extract essential features, and to pass these features to this new classifier with no need to retrain
the base model. The Dense layer contains 1 neuron, because there are two possible classes, hemp and
marijuana, and a Sigmoid activation function to output a probability between 0 and 1.</p>
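        <p>A minimal sketch of the new head on top of the frozen base from the previous sketch; the Adam optimizer and binary cross-entropy loss are assumptions not stated in the text.</p>
        <preformat>
from tensorflow.keras import layers, models

# Step 3: add new trainable layers on top of the frozen base; global average
# pooling collapses the feature maps, then one sigmoid unit classifies them.
model = models.Sequential([
    base,                                   # frozen pre-trained backbone
    layers.GlobalAveragePooling2D(),        # fewer parameters than Flatten
    layers.Dense(1, activation="sigmoid"),  # probability between 0 and 1
])

# Step 4: train only the new layers on the seed dataset.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20)
        </preformat>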
        <p>Details of the training settings are given in Table 1. In this configuration scenario, all 4 models
contained frozen and trainable parameters, whose sizes are displayed in Table 2.</p>
        <p>Performance evaluation metrics such as Accuracy, Precision, Recall, and F-score were used to
evaluate the models created for our binary classification task. These metrics were obtained from the
confusion matrices, where true positives, false positives, true negatives and false negatives are reported.
It is important to note that, in a forensic context, if the positive class is defined as representing illegal
seeds, the presence of false negatives in cannabis seed classification means that some illegal seeds may
go undetected, potentially leading to serious operational consequences.</p>
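        <p>A minimal sketch of how these metrics can be computed, assuming scikit-learn; the 0.5 decision threshold is an assumption, and the weighted averaging matches how Table 3 aggregates per-class scores by support.</p>
        <preformat>
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support)

# Threshold the sigmoid outputs to obtain hard class predictions.
y_pred = (model.predict(X_test).ravel() > 0.5).astype(int)

# Confusion matrix: rows are true classes, columns are predicted classes.
print(confusion_matrix(y_test, y_pred))

# Weighted averaging takes the support of each class into account.
acc = accuracy_score(y_test, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_test, y_pred, average="weighted")
print(f"Accuracy={acc:.2f} Precision={prec:.2f} Recall={rec:.2f} F1={f1:.2f}")
        </preformat>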
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Results</title>
        <p>Table 3 shows the training and test performance metrics of each compared model, computed as a weighted
average (taking the support of each class into account) on the acquired dataset.</p>
        <p>[Fig. 3: training and validation loss curves for (a) MobileNetV2, (b) Xception, (c) NasNetMobile, (d) EfficientNetB0.]</p>
        <p>EfficientNetB0 produced worse outcomes on both training and testing data. Neither meaningful
patterns nor effective generalization were learned by the model. Due to the small dataset size and lack
of training time, the model may not have learned the relevant patterns in our dataset, which could lead
to poor performance. As shown in Fig. 3, the validation loss remains high after 20 epochs, suggesting the
need for more epochs and data-augmentation techniques to fine-tune the weights properly.</p>
        <p>MobileNetV2, Xception and NasNetMobile models confirmed their strength even in the presence
of small data, and they are able to converge, overcoming the trade-off between the amount of data and
model complexity. As shown in Table 3, MobileNetV2 and Xception achieved the best metrics for
marijuana/hemp classification, achieving a 100% score on every performance metric on the test data. They are
followed by NasNetMobile, with lower performance in terms of Accuracy, Recall, F1-score, and Precision
(82% for all metrics). The above metrics are calculated with the help of the confusion matrices reported in
Fig. 4, which provide a detailed breakdown of how well each classification model performs. The confusion
matrices confirm the robustness of the models, with the exception of the EfficientNet architecture
which, as can be seen, incorrectly classifies all marijuana samples as hemp seeds.</p>
        <p>[Fig. 4: confusion matrices for (a) MobileNetV2, (b) Xception, (c) NasNetMobile, (d) EfficientNetB0.]</p>
        <p>To conclude, considering also the number of trainable parameters in the comparison, the MobileNetV2
model emerges as the most efficient choice, as it achieves the same performance as Xception while
requiring significantly fewer parameters to be trained.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>As part of our study, four state-of-the-art CNN-based transfer learning models, namely MobileNetV2,
Xception, EfficientNetB0 and NasNetMobile, were trained on a new dataset of marijuana and hemp
seeds. All four architectures are designed with efficiency in mind, specifically focusing on achieving
high accuracy while keeping the model size and computational cost low. The best accuracy results
were obtained with the MobileNetV2 and Xception architectures, reaching 100% accuracy on test data and
correctly classifying each seed class. However, when considering the number of trainable parameters,
MobileNetV2 proves to be the best option, matching Xception’s performance with a much smaller model
size. In general, except for EfficientNet, the considered CNN architectures using transfer learning offer
good performance and computational efficiency, making them a good choice for mobile applications,
including in the forensic context.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This study was funded by Ministero dell’Università e della Ricerca (MUR) of Italy in the context of
project denoted as BLOODSTAIN in the program PRIN 2022 (grant number E53D23008040001).</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Schilling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Melzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. F.</given-names>
            <surname>McCabe</surname>
          </string-name>
          , Cannabis sativa,
          <source>Current Biology</source>
          <volume>30</volume>
          (
          <year>2020</year>
          )
          <fpage>R8</fpage>
          -
          <lpage>R9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H. P.</given-names>
            <surname>Devkota</surname>
          </string-name>
          ,
          <article-title>Hemp (Cannabis sativa L.) - taxonomy, distribution and uses</article-title>
          , in:
          <source>Revolutionizing the potential of hemp and its products in changing the global economy</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Rhyu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Yoon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-D.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Beneficial effects of cannabidiol from cannabis</article-title>
          ,
          <source>Applied Biological Chemistry</source>
          <volume>67</volume>
          (
          <year>2024</year>
          )
          <fpage>32</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.</given-names>
            <surname>Islam</surname>
          </string-name>
          , T. T. Sarker,
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Lakhssassi</surname>
          </string-name>
          ,
          <article-title>Detection and classification of cannabis seeds using retinanet and faster r-cnn</article-title>
          ,
          <source>Seeds</source>
          <volume>3</volume>
          (
          <year>2024</year>
          )
          <fpage>456</fpage>
          -
          <lpage>478</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] Y.-C. Cheng, R. Houston,
          <article-title>The development of two fast genotyping assays for the differentiation of hemp from marijuana</article-title>
          ,
          <source>Journal of Forensic Sciences</source>
          <volume>70</volume>
          (
          <year>2025</year>
          )
          <fpage>49</fpage>
          -
          <lpage>60</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>V.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. S. S.</given-names>
            <surname>Aydav</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Minz</surname>
          </string-name>
          ,
          <article-title>Crop seeds classification using traditional machine learning and deep learning techniques: A comprehensive survey</article-title>
          ,
          <source>SN Computer Science</source>
          <volume>5</volume>
          (
          <year>2024</year>
          )
          <fpage>1031</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Teng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <article-title>Deep learning based soybean seed classification</article-title>
          ,
          <source>Computers and Electronics in Agriculture</source>
          <volume>202</volume>
          (
          <year>2022</year>
          )
          <fpage>107393</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Onwimol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Chakranon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Wonggasem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wongchaisuwat</surname>
          </string-name>
          ,
          <article-title>Non-destructive assessment of hemp seed vigor using machine learning and deep learning models with hyperspectral imaging</article-title>
          ,
          <source>Journal of Agriculture and Food Research</source>
          (
          <year>2025</year>
          )
          <fpage>101836</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Islam</surname>
          </string-name>
          , T. T. Sarker,
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Lakhssassi</surname>
          </string-name>
          ,
          <article-title>Detection and classification of cannabis seeds using retinanet and faster r-cnn</article-title>
          ,
          <source>Seeds</source>
          <volume>3</volume>
          (
          <year>2024</year>
          )
          <fpage>456</fpage>
          -
          <lpage>478</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Alzubaidi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Duan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gu</surname>
          </string-name>
          ,
          <article-title>A comparison review of transfer learning and self-supervised learning: Definitions, applications, advantages and limitations</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>242</volume>
          (
          <year>2024</year>
          )
          <fpage>122807</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>A survey on deep transfer learning</article-title>
          ,
          <source>in: Artificial Neural Networks and Machine Learning-ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7</source>
          ,
          <year>2018</year>
          , Proceedings,
          <source>Part III 27</source>
          , Springer,
          <year>2018</year>
          , pp.
          <fpage>270</fpage>
          -
          <lpage>279</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A. G.</given-names>
            <surname>Howard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kalenichenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Weyand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Andreetto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Adam</surname>
          </string-name>
          ,
          <article-title>MobileNets: Efficient convolutional neural networks for mobile vision applications</article-title>
          ,
          <source>arXiv preprint arXiv:1704.04861</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <article-title>EfficientNet: Rethinking model scaling for convolutional neural networks</article-title>
          ,
          <source>in: International conference on machine learning, PMLR</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>6105</fpage>
          -
          <lpage>6114</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>F.</given-names>
            <surname>Chollet</surname>
          </string-name>
          ,
          <article-title>Xception: Deep learning with depthwise separable convolutions</article-title>
          ,
          <source>in: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1251</fpage>
          -
          <lpage>1258</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>B.</given-names>
            <surname>Zoph</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vasudevan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shlens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <article-title>Learning transferable architectures for scalable image recognition</article-title>
          ,
          <source>in: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>8697</fpage>
          -
          <lpage>8710</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sandler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Howard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zhmoginov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.-C.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>MobileNetV2: Inverted residuals and linear bottlenecks</article-title>
          ,
          <source>in: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>4510</fpage>
          -
          <lpage>4520</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gatis</surname>
          </string-name>
          , rembg,
          <year>2025</year>
          . URL: https://github.com/danielgatis/rembg.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>O.</given-names>
            <surname>Russakovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Krause</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Satheesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Karpathy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Khosla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bernstein</surname>
          </string-name>
          , et al.,
          <article-title>ImageNet large scale visual recognition challenge</article-title>
          ,
          <source>International Journal of Computer Vision</source>
          <volume>115</volume>
          (
          <year>2015</year>
          )
          <fpage>211</fpage>
          -
          <lpage>252</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>