<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">An Explainable Convolutional Neural Network for the Detection of Drug Abuse</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
				<date type="published" when="2024-10-20">20 October 2024</date>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Giulia</forename><surname>Tufo</surname></persName>
							<email>giulia.tufo@uniroma1.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Basic and Applied Sciences for Engineering</orgName>
								<orgName type="institution">Università degli Studi Roma La Sapienza</orgName>
								<address>
									<addrLine>Via Antonio Scarpa 14</addrLine>
									<settlement>Roma</settlement>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Meriam</forename><surname>Zribi</surname></persName>
							<email>meriam.zribi@uniroma1.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Basic and Applied Sciences for Engineering</orgName>
								<orgName type="institution">Università degli Studi Roma La Sapienza</orgName>
								<address>
									<addrLine>Via Antonio Scarpa 14</addrLine>
									<settlement>Roma</settlement>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Paolo</forename><surname>Pagliuca</surname></persName>
							<email>paolo.pagliuca@istc.cnr.it</email>
							<affiliation key="aff1">
								<orgName type="department">Institute of Cognitive Sciences and Technologies</orgName>
								<orgName type="institution">National Research Council (CNR)</orgName>
								<address>
									<addrLine>Via Gian Domenico Romagnosi 18/A</addrLine>
									<settlement>Roma</settlement>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Francesca</forename><surname>Pitolli</surname></persName>
							<email>francesca.pitolli@uniroma1.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Basic and Applied Sciences for Engineering</orgName>
								<orgName type="institution">Università degli Studi Roma La Sapienza</orgName>
								<address>
									<addrLine>Via Antonio Scarpa 14</addrLine>
									<settlement>Roma</settlement>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">An Explainable Convolutional Neural Network for the Detection of Drug Abuse</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
						<imprint>
							<date type="published" when="2024-10-20">20 October 2024</date>
						</imprint>
					</monogr>
					<idno type="MD5">FD7D60A4C09252F4B906BE3A61FE9DD3</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:11+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Drug abuse detection</term>
					<term>Lateral-flow tests</term>
					<term>Explainability</term>
					<term>Convolutional Neural Networks</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The spread of Artificial Intelligence methods in many contexts is undeniable. Different models have been proposed and applied to real-world applications in sectors such as economy, industry, medicine, healthcare and sports. Nevertheless, the reasons why such techniques work are often not investigated in depth, thus posing questions about explainability, transparency and trust. In this work, we introduce a novel Deep Learning approach to the problem of drug abuse detection. Specifically, we design a Convolutional Neural Network model that analyzes lateral-flow tests and discriminates between normal and abnormal assays. Moreover, we provide evidence regarding the attributes that enable our model to address the considered task, aiming to identify which parts of the input exert a significant influence on the network's output. This understanding is crucial for applying our methodology in real-world scenarios. The results obtained demonstrate the validity of our approach. In particular, the proposed model achieves excellent accuracy in the classification of lateral-flow tests and outperforms two state-of-the-art deep networks. Additionally, we provide supporting data for the model's explainability, ensuring a precise understanding of the relationship between attributes and output, a key factor in comprehending the internal workings of the neural network.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Artificial Intelligence (AI) is a field in considerable and continuous expansion that has become part of our lives and has spread to many sectors, such as economy <ref type="bibr" target="#b0">[1]</ref>, industry <ref type="bibr" target="#b1">[2]</ref>, sports <ref type="bibr" target="#b2">[3]</ref>, medicine <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref> and healthcare <ref type="bibr" target="#b5">[6]</ref>. Focusing on the latter two fields, AI provides valid support for helping doctors and other professionals make diagnoses <ref type="bibr" target="#b6">[7]</ref> and predictions <ref type="bibr" target="#b7">[8]</ref>, and explain and analyze medical data <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref>. Moreover, the use of assistive robots in rehabilitation and elderly monitoring is widespread nowadays <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref>.</p><p>In particular, AI is a unique tool for analyzing huge amounts of data effectively and on time <ref type="bibr" target="#b13">[14]</ref>. Human evaluation, constrained by factors such as subjectivity, limited computational capacity, past and personal experience, fatigue, stress, and data quality (e.g., image resolution and/or lighting conditions), may produce inaccurate predictions and/or faults. Especially in medicine and healthcare, error minimization is paramount, since errors might affect the diagnosis of potential diseases, prompt interventions, rehabilitation therapies and other aspects. 
A widely applied approach to medical data analysis and classification relies on the use of Convolutional Neural Networks (CNNs) <ref type="bibr" target="#b14">[15]</ref><ref type="bibr" target="#b15">[16]</ref><ref type="bibr" target="#b16">[17]</ref>. CNNs enable the analysis of broad datasets containing thousands of samples faster than human operators. Notwithstanding the abundance of examples, a major concern is the lack of explainability of some models proposed in the literature, a challenge that involves even the developers themselves. This is critical in the fields of medicine and healthcare, where the use of explainable AI approaches is pivotal <ref type="bibr" target="#b17">[18]</ref><ref type="bibr" target="#b18">[19]</ref><ref type="bibr" target="#b19">[20]</ref>.</p><p>In this work, we analyze the issue of detecting the presence of substances/drugs in rapid lateral-flow tests (Fig. <ref type="figure" target="#fig_0">1</ref>) <ref type="bibr" target="#b20">[21]</ref>. Similar works investigating this topic are reported in <ref type="bibr" target="#b21">[22]</ref><ref type="bibr" target="#b22">[23]</ref><ref type="bibr" target="#b23">[24]</ref>. In particular, in <ref type="bibr" target="#b22">[23]</ref> the authors propose an image processing algorithm combined with a Least Squares Support Vector Machine (LS-SVM) to investigate pH indicator paper assays. Their approach achieves excellent performance in terms of accuracy.</p><p>The analysis of test results is generally performed by human operators. As stated above, the interpretation of the test is affected by factors like the subjectivity of the operator and/or her/his physical and mental condition, with consequent possible errors. Instead, we propose a novel Computer Vision (CV) approach based on the use of a deep CNN. 
Specifically, we employed the model introduced in <ref type="bibr" target="#b24">[25,</ref><ref type="bibr" target="#b25">26]</ref> with the addition of pooling layers <ref type="bibr" target="#b26">[27]</ref> in the convolutional part of the network. The use of pooling allows us to reduce the complexity of the problem without losing accuracy. Indeed, pooling helps the model become invariant to small translations of the input <ref type="bibr" target="#b26">[27]</ref>. The model must distinguish between normal and abnormal results in lateral-flow tests analyzed for the detection of drug abuse. The primary goal of the model is to verify the suitability of the sample, ensuring it has not been compromised in any way. Once the sample's suitability is confirmed, the analysis can proceed to investigate the presence of narcotic substances. The cartridge undergoes a color change upon contact with the human biological sample (urine). Based on the detected color gradation, one can conclude whether the sample has been adulterated. A test is considered "abnormal" if any of the adulterants is not compliant with the corresponding guide (Fig. <ref type="figure" target="#fig_1">2</ref>).</p><p>While the detection of strips in rapid check tests with Deep Learning techniques has already been addressed in the literature <ref type="bibr" target="#b21">[22,</ref><ref type="bibr" target="#b27">[28]</ref><ref type="bibr" target="#b28">[29]</ref><ref type="bibr" target="#b29">[30]</ref>, to our knowledge the use of a CNN model to verify that the biological sample is indeed urine and has not been tampered with in the adulterant section of lateral-flow tests has not been investigated. The collected results indicate the validity of our approach: the proposed model manages to discriminate between normal and abnormal tests. 
Moreover, we discuss the reasons why such a model is effective, thus providing evidence of its explainability, which represents a paramount property for successfully applying the methodology in real-world scenarios. The main contributions of our work can be summarized as follows:</p><p>• we propose a novel Deep Learning (DL) approach to address the issues related to the visual inspection of lateral-flow tests, which are generally examined by human operators, with a focus on the adulterant section of the assay; • we apply the recently proposed ConvNet3_4 model <ref type="bibr" target="#b24">[25,</ref><ref type="bibr" target="#b25">26]</ref> to discriminate between normal and abnormal tests; • we achieve an excellent classification capability, proving the validity of the model; • we compare the model with two state-of-the-art deep networks and demonstrate the superiority of our approach; • we provide a thorough analysis of the relevant features extracted by the model in order to associate the proper output class to each image.</p><p>The remainder of the article is structured as follows: section 2 contains a description of the methodology we applied with respect to the considered problem. Results of our experiments are provided in section 3. Finally, our conclusions and final remarks are reported in section 4.</p></div>
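The translation invariance conferred by pooling can be illustrated with a minimal NumPy sketch (ours, not the authors' ConvNet3_4 implementation): a 2 × 2 max-pooled feature map is unchanged by a one-pixel translation of an activation, which is why pooling makes the classifier robust to small misalignments of the assay in the image.

```python
import numpy as np

def max_pool2d(x, k=2):
    """Non-overlapping k x k max pooling over a 2D feature map."""
    h, w = x.shape[0] // k * k, x.shape[1] // k * k   # drop any ragged border
    return x[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

# A feature map with a single activation, and the same map shifted by one pixel.
a = np.zeros((8, 8)); a[2, 2] = 1.0
b = np.zeros((8, 8)); b[3, 3] = 1.0                   # one-pixel diagonal shift

print(np.array_equal(max_pool2d(a), max_pool2d(b)))   # True: pooled maps coincide
```

Each 2 × 2 pooling stage also halves both spatial dimensions, which is the complexity reduction mentioned above: the fully connected part of the network then operates on a quarter of the activations.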
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Materials and methods</head><p>As stated in section 1, our study focuses on assessing the suitability of samples through the analysis of lateral-flow tests for the detection of substance abuse (Fig. <ref type="figure" target="#fig_0">1</ref>). All images used were obtained from a specialized medical devices company and were labeled by professional laboratory technicians. An example of the images obtained with this setup is shown in Fig. <ref type="figure" target="#fig_2">3</ref>.</p><p>Our analysis focuses on the suitability of the sample by examining the portion of the test image containing the six different adulterants (see Fig. <ref type="figure" target="#fig_3">4</ref> left, highlighted area in the blue rectangle): Specific Gravity (SG), pH, Oxidant (OX), Creatinine (CRE), Nitrite (NI), and Glutaraldehyde (GL). Since some adulterants yielded samples belonging to a single class only (i.e., always normal or always abnormal), we concentrated our analysis on the three adulterants -pH, OX, and GL -for which we managed to collect data belonging to both classes. Consequently, we created a dataset consisting of 181 images. Fig. <ref type="figure" target="#fig_3">4</ref> right provides an example of the input image, where we cropped the specific portion of interest and filled the remaining part with black pixels. The size of the pictures is 215 × 225 pixels. Our model was then trained exclusively on images containing these components.</p><p>Given the small size of our dataset, we employed data augmentation <ref type="bibr" target="#b30">[31]</ref>, which is crucial to avoid overfitting when the amount of available data is limited <ref type="bibr" target="#b31">[32]</ref>. Furthermore, due to the difficulty of collecting normal assays, the original dataset is unbalanced between the two classes, with 133 images of abnormal tests and only 48 pictures of normal assays (a ratio of around 2.77). 
The class imbalance problem is a major concern in Machine Learning (ML) and Deep Learning (DL) <ref type="bibr" target="#b32">[33]</ref>. In fact, training models on unbalanced data may result in learning mostly from the larger class, with consequent sub-optimal performance and poor generalization capabilities (for instance, a model could associate one class with all the input data regardless of the image features). Aiming to mitigate this issue, we first apply transformations so as to balance the two types of data (see Table <ref type="table" target="#tab_0">1</ref>), thus obtaining 300 images equally split between the two classes, 80% of which constitute the training set while the remaining 20% form the test set. The balancing operation has been performed in such a way that each image and its variation(s) cannot appear in both training and test sets, hence excluding train-test contamination. Then, we use the RandomAdjustSharpness transformation <ref type="bibr" target="#b33">[34]</ref> (parameters: 𝑓 𝑎𝑐𝑡𝑜𝑟 ∈ [0, 0.25, 0.5, 0.75, 1.25, 1.5, 2, 2.5, 3]; 𝑝𝑟𝑜𝑏 = 1.0) to widen the set of input images in both training and test sets. The type of transformations employed to increase the number of data has been chosen carefully, taking into account the specific nature of the problem and the criticality of modifying image colors (see Fig. <ref type="figure" target="#fig_1">2</ref>). Overall, the final training set consists of 2400 images, while the final test set contains 600 images.</p><p>Our model has been trained 10 times, each run starting from a different network weight initialization. The use of multiple replications minimizes the risk of overestimating the model's performance due to lucky conditions. Training lasts 50 epochs, the learning rate is set to 10 −4 and the batch size to 16. The model's optimizer is Adam <ref type="bibr" target="#b34">[35]</ref> with weight decay, whose value is 10 −2 . The size of the pooling filters is 2 × 2. 
The experimental parameters have been derived from <ref type="bibr" target="#b25">[26]</ref> and are empirically determined. Before training our model, we applied the k-fold cross-validation technique <ref type="bibr" target="#b35">[36]</ref> to verify whether our deep network is suitable for the considered problem and to mitigate the data overfitting issue <ref type="bibr" target="#b36">[37]</ref>. We set 𝑘 = 5 and measured the average accuracy of the model, using the Cross-Entropy (CE) loss as the training metric. The average accuracy obtained during the cross-validation phase is 92.917%, that is, the proposed model achieves sufficiently good categorization performance and correctly discriminates between normal and abnormal tests. Therefore, we can state that our model is suitable for the considered problem.</p><p>Aiming to demonstrate the novelty and efficacy of the proposed model, we perform a comparison with the DenseNet121 <ref type="bibr" target="#b37">[38]</ref> and ResNet18 <ref type="bibr" target="#b38">[39]</ref> pre-trained networks, which represent two state-of-the-art models. We choose these networks since their numbers of trainable parameters are of a similar order of magnitude to that of our approach, as we will illustrate in the next section. This allows us to perform a fair evaluation of the presented model.</p></div>
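The constraint that an original image and its augmented variation(s) never straddle the training/test boundary can be enforced with a group-aware split. The following is a generic sketch of that idea (function and variable names are ours, not taken from the paper's code): partitioning happens at the level of source images, and every variant inherits its source's split.

```python
import random

def group_split(items, groups, test_frac=0.2, seed=0):
    """Split items so that all items sharing a group id (i.e., all augmented
    variants of the same source image) land in exactly one of the two splits."""
    uniq = sorted(set(groups))
    random.Random(seed).shuffle(uniq)          # randomize which sources go to test
    n_test = max(1, round(len(uniq) * test_frac))
    test_groups = set(uniq[:n_test])
    train = [x for x, g in zip(items, groups) if g not in test_groups]
    test = [x for x, g in zip(items, groups) if g in test_groups]
    return train, test

# Toy example: 6 images derived from 3 source assays (group ids 0, 1, 2).
items = ["img0", "img0_aug", "img1", "img1_aug", "img2", "img2_aug"]
groups = [0, 0, 1, 1, 2, 2]
train, test = group_split(items, groups, test_frac=0.34)
print(train, test)
```

Splitting by source image rather than by individual picture is what rules out the optimistic bias that would arise if a sharpness-adjusted copy of a training image appeared in the test set.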
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results</head><p>In this section we provide the outcomes of our experiments. As for the k-fold cross-validation phase, we employed the CE loss as a performance measure. Table <ref type="table" target="#tab_0">1</ref> lists the transformations applied to the original images in order to balance the pictures between the two output classes (i.e., "normal" and "abnormal"): specifically, we generated 17 additional images of abnormal tests and 102 of normal assays, so that the resulting set contains 300 images equally distributed between the two classes. For further details about the transformations, the reader is referred to <ref type="bibr" target="#b33">[34]</ref>. As mentioned above, we compared the outcomes of our model with those achieved with the DenseNet121 and ResNet18 networks. The analysis is illustrated in Table <ref type="table" target="#tab_1">2</ref> and reveals that the ConvNet3_4 model is notably superior to both DenseNet121 and ResNet18 with respect to accuracy: the pre-trained networks manage to correctly classify only around half of the images, i.e., they are not able to discriminate between the two possible output classes (see also Fig. <ref type="table">B</ref>.1). This result is in line with those reported in <ref type="bibr" target="#b25">[26]</ref>. Moreover, our ConvNet3_4 model is also remarkably better than DenseNet121 and ResNet18 in terms of efficiency (see the significant discrepancy concerning the training time in Table <ref type="table" target="#tab_1">2</ref>), which represents a pivotal property for a model's applicability in real scenarios.</p></div>
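The accuracy figures in Table 2 and the AUC of the ROC curve reported for the best model can be recomputed from raw predictions with a few lines of code; below is a library-free sketch with helper names of our own choosing, not the authors' evaluation code.

```python
def accuracy(labels, preds):
    """Fraction of correctly classified samples."""
    return sum(y == p for y, p in zip(labels, preds)) / len(labels)

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive is scored above a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfectly separating scorer yields accuracy 1.0 and AUC 1.0,
# matching the behavior reported for the best ConvNet3_4 model.
labels = [0, 0, 1, 1]
scores = [0.1, 0.2, 0.8, 0.9]
print(accuracy(labels, [round(s) for s in scores]), auc(labels, scores))  # 1.0 1.0
```

The rank-based formulation avoids building an explicit ROC curve and makes the "perfect classifier" claim concrete: AUC = 1.0 holds exactly when every normal assay is scored above every abnormal one (or vice versa, per class convention).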
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The results presented so far demonstrate that our model is suitable for addressing the considered problem and achieves excellent performance in terms of classification capability. Nonetheless, as we stated in section 1, a worthwhile aspect in the fields of medicine and healthcare is the explainability of the proposed models, which is necessary to practically employ them in real-application cases. To this end, we performed a feature analysis by using two different techniques: Saliency <ref type="bibr" target="#b39">[40]</ref> and Integrated Gradients (IG) <ref type="bibr" target="#b40">[41]</ref>. Both methods are widely employed to interpret the outcomes of a model's classification <ref type="bibr" target="#b41">[42]</ref><ref type="bibr" target="#b42">[43]</ref><ref type="bibr" target="#b43">[44]</ref>. The former method allows the identification of the parts of the image that contribute most to the output prediction. An example of the feature map extracted through the Saliency method is shown in Fig. <ref type="figure">7</ref> middle. Saliency values range from 0 (absence of saliency) to 1 (positive saliency), as indicated in Fig. <ref type="figure">7</ref> middle. For a more detailed description of the Saliency algorithm, the reader is referred to <ref type="bibr" target="#b39">[40]</ref>. Conversely, the IG method identifies the regions of the image that most influenced the model's classification decision by considering the entire input-output trajectory and the reference input distribution (baselines) used in the attribution calculation. As pointed out in <ref type="bibr" target="#b40">[41]</ref>, the IG method first considers the input image 𝑥 and a baseline 𝑥 ′ , which is an input characterized by the absence of features. Then, a straight-line path from 𝑥 ′ to 𝑥 is taken into account. IG computes the gradients at all points along this path. 
The integrated gradients are obtained as the cumulative sum of these gradients. Put more formally, if we denote our CNN model by 𝐹 : R 𝑛 → [0, 1], the integrated gradient along the 𝑖 𝑡ℎ dimension is calculated as:</p><formula xml:id="formula_0">𝐼𝐺 𝑖 (𝑥) = (𝑥 𝑖 − 𝑥 ′ 𝑖 ) × ∫︁ 1 𝛼=0 𝜕𝐹 (𝑥 ′ + 𝛼 × (𝑥 − 𝑥 ′ )) 𝜕𝑥 𝑖 𝑑𝛼<label>(1)</label></formula><p>where 𝜕𝐹 (𝑥) 𝜕𝑥 𝑖 indicates the gradient of 𝐹 (𝑥) along the 𝑖 𝑡ℎ dimension. Overall, the IG method enables the detection of the portions of the picture providing positive (parts in green, see Fig. <ref type="figure">7 right</ref>) and negative (parts in red, see Fig. <ref type="figure">7</ref> right) contributions to the output prediction. Fig. <ref type="figure">7</ref> shows the image of a lateral-flow test categorized as "normal" (Fig. <ref type="figure">7</ref> left), the Saliency feature map (Fig. <ref type="figure">7</ref> middle) and the Integrated Gradients heat map (Fig. <ref type="figure">7</ref> right). Fig. <ref type="figure">8</ref> contains the same data for an "abnormal" assay. The colorbars below the heat maps specify, respectively, the intensity of the saliency and the magnitude of the importance attribution of each region of the image to the model's prediction.</p><p>By examining the feature maps associated with a normal lateral-flow test (Fig. <ref type="figure">7</ref>), we can observe that the Saliency method returns a positive saliency for the pH and GL adulterants and a slightly positive saliency for the OX adulterant (Fig. <ref type="figure">7 middle</ref>). Concerning the Integrated Gradients technique, the heat map displays a positive attribution for the pH and GL adulterants and a slightly positive attribution for the OX adulterant (see Fig. <ref type="figure">7 right</ref>). Therefore, in the case of a normal assay, the model assigns the same importance to the portions of the image containing the considered adulterants. 
This outcome is in line with the actual test result.</p><p>If we look at the relevant features highlighted by the Integrated Gradients technique for an abnormal assay, we can see that a slightly positive attribution is conferred to the pH, OX and GL adulterants (Fig. <ref type="figure">8</ref> right). As far as the Saliency algorithm is concerned, it returns a positive saliency for the GL adulterant and a slightly positive attribution for the pH and OX adulterants (Fig. <ref type="figure">8 middle</ref>). Also in this case, the results are consistent with the actual test outcome, which indicates non-compliance for the OX and GL adulterants. Overall, our findings suggest that the proposed model primarily relies on the portions of the image containing the pH, OX and GL adulterants. However, the amount of contribution strongly depends on the test result (i.e., normal or abnormal) and the specific color gradation of the adulterants. Indeed, except for one case, the OX adulterant is characterized by soft nuances tending to be similar to the background color of the image. Similarly, the normality and abnormality of the pH adulterant are defined based on subtle color gradations. Therefore, distinguishing between the two cases might be challenging even for laboratory operators. Finally, it is worth noting that cropping the picture does not affect the classification capability of the model. Indeed, the use of black pixels providing no information allows the model to focus only on the remaining parts of the image, which contain the relevant data.</p><p>To summarize, our outcomes demonstrate the capability of the ConvNet3_4 model to extract the most relevant features of the input image in order to generate a precise prediction of the output class. In particular, the identification of the portions containing the adulterants as the key elements of the input image implies that the model is capable of making assumptions from a medical point of view. 
Indeed, discriminating between the color nuances of the considered adulterants (see Fig. <ref type="figure" target="#fig_1">2</ref>) is far from trivial even for experienced and well-trained operators.</p></div>
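Eq. (1) is computed in practice by approximating the path integral with a Riemann sum over the straight-line path from the baseline to the input, as described in the original IG paper. The sketch below is a toy illustration of that approximation, with an analytic gradient standing in for the CNN's backward pass; all names are illustrative, not taken from the paper's code.

```python
import numpy as np

def integrated_gradients(grad_F, x, x_base, steps=50):
    """Midpoint Riemann-sum approximation of Eq. (1):
    IG_i = (x_i - x'_i) * integral over alpha in [0, 1] of
           dF/dx_i evaluated at x' + alpha * (x - x')."""
    alphas = (np.arange(steps) + 0.5) / steps        # midpoints of [0, 1]
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_F(x_base + a * (x - x_base))   # gradient along the path
    return (x - x_base) * total / steps

# Toy model F(x) = x0^2 + x1 with analytic gradient [2*x0, 1];
# the baseline x' = 0 plays the role of the all-black image.
grad_F = lambda x: np.array([2.0 * x[0], 1.0])
x, x_base = np.array([1.0, 1.0]), np.zeros(2)
ig = integrated_gradients(grad_F, x, x_base)
# Completeness axiom: attributions sum to F(x) - F(x') = 2.0 - 0.0
print(ig, ig.sum())
```

The completeness check is a useful sanity test when applying IG to a real network: the per-pixel attributions shown in the heat maps of Figs. 7 and 8 must sum (up to discretization error) to the difference between the model's output on the assay image and on the baseline.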
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Discussion and conclusions</head><p>The spread of AI methods poses questions about the explainability of such models, particularly when they are applied in real-world contexts. Especially in medicine and healthcare, using explainable and trustworthy approaches is paramount in order to help doctors and other professionals make diagnoses of possible diseases, design adequate therapies for prevention or rehabilitation, and explain and collect historical data. The analysis of huge amounts of medical data is generally addressed through DL methods, and CNNs represent a widespread tool, although they are often tailored to specific applications. This represents a major obstacle to the development of cross-cutting tools. In this work, we proposed a novel approach to the problem of automatically classifying lateral-flow tests for drug abuse detection. Specifically, we considered the adulterant section of an assay and trained a CNN model to categorize tests (i.e., normal or abnormal) by analyzing the pH, OX and GL adulterants only. We used the network introduced in <ref type="bibr" target="#b24">[25,</ref><ref type="bibr" target="#b25">26]</ref> with the addition of pooling layers in the convolutional part of the model. The use of pooling enables the development of slim networks that can be used in real-world scenarios, particularly when dealing with limited hardware resources. We verified the suitability of the model through a 5-fold cross-validation and ran the training 10 times. We collected promising results on the chosen task, with an excellent average accuracy. The proposed approach is also notably superior to two state-of-the-art deep networks. Moreover, we provided evidence of our model's explainability by performing a feature analysis. 
Our outcomes reveal the importance of some portions of the input image (those containing the adulterants), while other parts affect the final prediction only partially.</p><p>In spite of the good results achieved, further research is needed to generalize our approach. First, we are collecting samples so as to broaden our analyses to the SG, CRE and NI adulterants, which are outside the scope of this work, and to increase the size of our dataset. Having a significantly higher number of images is pivotal for applying the model in a real case, since medical data usually include hundreds or thousands of pictures. However, extending the analysis to all adulterants implies considering input images of different sizes, with an unavoidable effect on the model's performance. Furthermore, the ConvNet3_4 model proves effective at dealing with the considered problem, with a very good classification capability, and outperforms the DenseNet121 and ResNet18 pre-trained networks. Nonetheless, we observe the presence of oscillations during training due to the sensitivity to specific batches of input images. Aiming to address this undesired behavior, in the future we will consider the possibility of adapting the learning rate and/or the weight decay during training. In addition, with respect to the explainability issue, we provide evidence of the reasons behind the success of our model. Nonetheless, because the</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Example of a lateral flow test for drug abuse detection.</figDesc><graphic coords="2,193.47,84.19,208.35,179.76" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Adulterant guide chart. For each adulterant, the list of colors identifying normality and abnormality of the test is provided. Image taken from [13].</figDesc><graphic coords="2,89.29,303.37,416.77,115.77" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Image showing the entire setup: the lateral flow test is inserted into the device. Acquisition is made through a camera placed at the front.</figDesc><graphic coords="4,193.47,84.19,208.32,117.18" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Left: Adulterant section of the lateral flow test (area inside the blue rectangle). Right: Example of image used for model training. It highlights the portion of the adulterant section considered.The adulterants taken into account are: pH, OX and GL. We filled the image with a black box in order to minimize the impact of meaningless pixels on the model's prediction.</figDesc><graphic coords="4,365.35,254.12,93.76,98.12" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Fig. 5 ,</head><label>5</label><figDesc>left shows the CE loss of the model during training. As it can be observed, the training error quickly decreases and stabilizes from the epoch 20 (see Fig. 5 left, blue curve). Conversely, the test error increases in the first 10 epochs (see Fig. 5 left, red curve), then it starts decreasing and almost stabilizes from epoch 20-25. The peak at epoch 40 (see Fig. 5 left, red curve) is due</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5 :Figure 6 :Figure 7 :Figure 8 :</head><label>5678</label><figDesc>Figure 5: Model classification capability during training. Left: Error curve during training. Data are obtained by averaging 10 replications of the experiment. Right: Number of images correctly classified during training. Curves show how many images are correctly categorized as "normal" in each replication (labeled as S1, S2, . . . , S10). Top: data referring to training set. Bottom: data collected on the images belonging to test set.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head></head><label></label><figDesc>the relevant features from the input images and predict the corresponding output class. The oscillations of the CE during training may be due to the randomization of the order of input images across epochs. If we analyze the number of lateral-flow assays correctly categorized in each replication (see Fig.5right), we can observe how all the networks manage to properly associate each test in the training set to the right class in around 20 epochs (Fig.5right, top figure). The classification of images in the test set is subject to oscillations and stabilizes after around 20 epochs except for one replication (Fig.5right, bottom figure). This outcome is not surprising since the latter set is used as a tool for validating the model. By considering the best model only, Fig.6left explains whether and how the model categorizes the lateral-flow assays in the test set. The latter consists of 300 images of abnormal tests and 300 pictures of normal assays (see Fig.6left). As it can be seen, the model manages to correctly classify all the 600 images in the test set. Fig.6right illustrates the Receiver Operating Characteristic (ROC) curve of the best model, which plots the true positive rate against the false positive rate. Because the Area Under Curve (AUC) is 1.0, our best model corresponds to a perfect classifier.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc></figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Comparison of the accuracy and efficiency of the ConvNet3_4, DenseNet121 and ResNet18 models. For ConvNet3_4, we considered the best model in the comparison. Bold values denote the best outcomes (for both Model parameters and Training time, lower is better). Notably, the ConvNet3_4 model requires considerably less training time than the other two networks.</figDesc><table><row><cell></cell><cell cols="3">ConvNet3_4 DenseNet121 ResNet18</cell></row><row><cell>Accuracy</cell><cell>100%</cell><cell>52.5%</cell><cell>50.667%</cell></row><row><cell>Model parameters</cell><cell>3464527</cell><cell>6955906</cell><cell>11177538</cell></row><row><cell>Training time (s)</cell><cell>513</cell><cell>11668</cell><cell>3790</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Visualization of the feature maps revealed that the model sometimes identifies as regions of interest areas that should not influence it (e.g., the portions of the cartridge container surrounding the adulterants, see Figs. 7-8). In future developments, we therefore plan to create a mask that, overlaid on the original image, will eliminate areas of low interest, allowing the model to focus solely on relevant regions. Finally, we are investigating the applicability of our model to other datasets in the medical and healthcare fields, with the aim of generalizing the validity of the approach.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0" />			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A role of artificial intelligence in the context of economy: Bibliometric analysis and systematic literature review</title>
		<author>
			<persName><forename type="first">M</forename><surname>Jrad</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Membrane Science and Technology</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="1563" to="1586" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Artificial intelligence for industry 4.0: Systematic review of applications, challenges, and opportunities</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Jan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Ahamed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Mayer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Patel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Grossmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Stumptner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kuusk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">216</biblScope>
			<biblScope unit="page">119456</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Araújo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Couceiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Seifert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Sarmento</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Davids</surname></persName>
		</author>
		<title level="m">Artificial intelligence in sport performance analysis</title>
				<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Artificial intelligence and machine learning in clinical medicine</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Haug</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Drazen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">New England Journal of Medicine</title>
		<imprint>
			<biblScope unit="volume">388</biblScope>
			<biblScope unit="page" from="1201" to="1208" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">O</forename><surname>Marques</surname></persName>
		</author>
		<title level="m">Artificial intelligence and medicine: The big picture</title>
				<imprint>
			<publisher>CRC Press</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="1" to="17" />
		</imprint>
	</monogr>
	<note>AI for Radiology</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Artificial intelligence in healthcare: a review on predicting clinical needs</title>
		<author>
			<persName><forename type="first">D</forename><surname>Houfani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Slatnia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kazar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Saouli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Merizig</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Healthcare Management</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="267" to="275" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Artificial intelligence for medical diagnosis</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">G</forename><surname>Richens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Buchard</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Artificial Intelligence in Medicine</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="181" to="201" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Clinical data analysis for prediction of cardiovascular disease using machine learning techniques</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">G</forename><surname>Nadakinamani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Reyana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kautish</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vibith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">F</forename><surname>Abdelwahab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">W</forename><surname>Mohamed</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Intelligence and Neuroscience</title>
		<imprint>
			<biblScope unit="page">2022</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Artificial intelligence for clinical interpretation of bedside chest radiographs</title>
		<author>
			<persName><forename type="first">F</forename><surname>Khader</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Müller-Franzes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Huck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Schad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Keil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Barzakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schulze-Hagen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Pedersoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Schulz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Radiology</title>
		<imprint>
			<biblScope unit="volume">307</biblScope>
			<biblScope unit="page">e220510</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Automated interpretation of clinical electroencephalograms using artificial intelligence</title>
		<author>
			<persName><forename type="first">J</forename><surname>Tveit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Aurlien</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Plis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">D</forename><surname>Calhoun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">O</forename><surname>Tatum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">L</forename><surname>Schomer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Arntsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Cox</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fahoum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">B</forename><surname>Gallentine</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">JAMA Neurology</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">ENRICHME: Perception and interaction of an assistive robot for the elderly at home</title>
		<author>
			<persName><forename type="first">S</forename><surname>Coşar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fernandez-Carmona</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Agrigoroaie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Pages</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Ferland</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Bellotto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tapus</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Social Robotics</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="779" to="805" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">G</forename><surname>Onofrio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sancarlo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Raciti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Reforgiato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mangiacotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Russo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Ricciardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vitanza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Cantucci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Presutti</surname></persName>
		</author>
		<title level="m">Mario project: experimentation in the hospital setting</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="289" to="303" />
		</imprint>
	</monogr>
	<note>Ambient Assisted Living: Italian Forum 2017 8</note>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<orgName>Craig Medical</orgName>
		</author>
		<ptr target="https://www.craigmedical.com/Drug_5Panel_DSC-Adulterant.htm" />
		<title level="m">Adulterant Validity Chart Interpretation, Rapidcheck pro 10 dsc with adulterant check</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Applications of deep learning for the analysis of medical data</title>
		<author>
			<persName><forename type="first">H.-J</forename><surname>Jang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-O</forename><surname>Cho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Archives of Pharmacal Research</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="page" from="492" to="504" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Convolutional neural networks in medical image understanding: a survey</title>
		<author>
			<persName><forename type="first">D</forename><surname>Sarvamangala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">V</forename><surname>Kulkarni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Evolutionary Intelligence</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="1" to="22" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">A comprehensive survey on convolutional neural network in medical image analysis</title>
		<author>
			<persName><forename type="first">X</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-H</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y.-D</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimedia Tools and Applications</title>
		<imprint>
			<biblScope unit="volume">81</biblScope>
			<biblScope unit="page" from="41361" to="41405" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Convolutional neural networks for medical image analysis: state-of-the-art, comparisons, improvement and perspectives</title>
		<author>
			<persName><forename type="first">H</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">T</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Armstrong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Deen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">444</biblScope>
			<biblScope unit="page" from="92" to="110" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Explainability and artificial intelligence in medicine</title>
		<author>
			<persName><forename type="first">S</forename><surname>Reddy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Lancet Digital Health</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="e214" to="e215" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Towards explainable artificial intelligence</title>
		<author>
			<persName><forename type="first">W</forename><surname>Samek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-R</forename><surname>Müller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Explainable AI: interpreting, explaining and visualizing deep learning</title>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="5" to="22" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Explainable artificial intelligence (XAI) in deep learning-based medical image analysis</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">H</forename><surname>Van Der Velden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">J</forename><surname>Kuijf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">G</forename><surname>Gilhuijs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Viergever</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Medical Image Analysis</title>
		<imprint>
			<biblScope unit="volume">79</biblScope>
			<biblScope unit="page">102470</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Advanced computer vision techniques for drug abuse detection</title>
		<author>
			<persName><forename type="first">G</forename><surname>Tufo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zribi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Pitolli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pagliuca</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">21st edition of the IMACS world congress (IMACS2023)</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page">226</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Automated low-cost smartphone-based lateral flow saliva test reader for drugs-of-abuse detection</title>
		<author>
			<persName><forename type="first">A</forename><surname>Carrio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Sampedro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Sanchez-Lopez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pimienta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Campoy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="29569" to="29593" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Intelligent image-based colourimetric tests using machine learning framework for lateral flow assays</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">H</forename><surname>Tania</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">T</forename><surname>Lwin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Shabut</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Najlah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Hossain</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">139</biblScope>
			<biblScope unit="page">112843</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Deep learning of HIV field-based rapid tests</title>
		<author>
			<persName><forename type="first">V</forename><surname>Turbé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Herbst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mngomezulu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Meshkinfamfard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Dlamini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mhlongo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Smit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Cherepanova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Shimada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Budd</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature Medicine</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page" from="1165" to="1170" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Convolutional neural networks for the automatic control of consumables for analytical laboratories</title>
		<author>
			<persName><forename type="first">M</forename><surname>Zribi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pagliuca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Pitolli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">BUILD-IT2023 workshop</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="95" to="97" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">A computer vision-based quality assessment technique for the automatic control of consumables for analytical laboratories</title>
		<author>
			<persName><forename type="first">M</forename><surname>Zribi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pagliuca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Pitolli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>in press</note>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<title level="m" type="main">Deep learning</title>
		<author>
			<persName><forename type="first">I</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Courville</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>MIT press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Development of a smartphone-based lateral-flow imaging system using machine-learning classifiers for detection of Salmonella spp</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">J</forename><surname>Min</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Mina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Deering</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Bae</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Microbiological Methods</title>
		<imprint>
			<biblScope unit="volume">188</biblScope>
			<biblScope unit="page">106288</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">SERS-based lateral flow assay combined with machine learning for highly sensitive quantitative analysis of Escherichia coli O157:H7</title>
		<author>
			<persName><forename type="first">S</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Qiu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Analytical and Bioanalytical Chemistry</title>
		<imprint>
			<biblScope unit="volume">412</biblScope>
			<biblScope unit="page" from="7881" to="7890" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Dual-mode fluorescent/intelligent lateral flow immunoassay based on machine learning algorithm for ultrasensitive analysis of chloroacetamide herbicides</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Analytical Chemistry</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">The calculation of posterior distributions by data augmentation</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Tanner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">H</forename><surname>Wong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the American Statistical Association</title>
		<imprint>
			<biblScope unit="volume">82</biblScope>
			<biblScope unit="page" from="528" to="540" />
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">A survey on image data augmentation for deep learning</title>
		<author>
			<persName><forename type="first">C</forename><surname>Shorten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">M</forename><surname>Khoshgoftaar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Big Data</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="1" to="48" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">The class imbalance problem: A systematic study</title>
		<author>
			<persName><forename type="first">N</forename><surname>Japkowicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Stephen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Intelligent Data Analysis</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="429" to="449" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<ptr target="https://pytorch.org/vision/main/auto_examples/plot_transforms.html" />
		<title level="m">PyTorch Illustration of transforms</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Kingma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ba</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1412.6980</idno>
		<title level="m">Adam: A method for stochastic optimization</title>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Selecting a classification method by cross-validation</title>
		<author>
			<persName><forename type="first">C</forename><surname>Schaffer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="135" to="143" />
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Cross validation for model selection: a review with examples from ecology</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Yates</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Aandahl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Richards</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">W</forename><surname>Brook</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Ecological Monographs</title>
		<imprint>
			<biblScope unit="volume">93</biblScope>
			<biblScope unit="page">e1557</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Densely connected convolutional networks</title>
		<author>
			<persName><forename type="first">G</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Van Der Maaten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">Q</forename><surname>Weinberger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="4700" to="4708" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Deep residual learning for image recognition</title>
		<author>
			<persName><forename type="first">K</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="770" to="778" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>Simonyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vedaldi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zisserman</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1312.6034</idno>
		<title level="m">Deep inside convolutional networks: Visualising image classification models and saliency maps</title>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Axiomatic attribution for deep networks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Sundararajan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Taly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Yan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Machine Learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="3319" to="3328" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">Explaining deep neural network using layerwise relevance propagation and integrated gradients</title>
		<author>
			<persName><forename type="first">I</forename><surname>Čík</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">D</forename><surname>Rasamoelina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sinčák</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI)</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="381" to="386" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Visual saliency detection based on multiscale deep CNN features</title>
		<author>
			<persName><forename type="first">G</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="5012" to="5024" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<analytic>
		<title level="a" type="main">Integrated gradients for feature assessment in point cloud-based data sets</title>
		<author>
			<persName><forename type="first">M</forename><surname>Schwegler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Reiterer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Algorithms</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page">316</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
