<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Data Poisoning Attacks in the Training Phase of Machine Learning Models: A Review</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Mugdha</forename><surname>Srivastava</surname></persName>
							<email>mugdha.srivastava@dkit.ie</email>
							<affiliation key="aff0">
								<orgName type="department">Regulated Software Research Centre (RSRC)</orgName>
								<orgName type="institution">Dundalk Institute of Technology (DkIT)</orgName>
								<address>
									<settlement>Dundalk</settlement>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Abhishek</forename><surname>Kaushik</surname></persName>
							<email>abhishek.kaushik@dkit.ie</email>
							<affiliation key="aff0">
								<orgName type="department">Regulated Software Research Centre (RSRC)</orgName>
								<orgName type="institution">Dundalk Institute of Technology (DkIT)</orgName>
								<address>
									<settlement>Dundalk</settlement>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Róisín</forename><surname>Loughran</surname></persName>
							<email>roisin.loughran@dkit.ie</email>
							<affiliation key="aff0">
								<orgName type="department">Regulated Software Research Centre (RSRC)</orgName>
								<orgName type="institution">Dundalk Institute of Technology (DkIT)</orgName>
								<address>
									<settlement>Dundalk</settlement>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Kevin</forename><surname>McDaid</surname></persName>
							<email>kevin.mcdaid@dkit.ie</email>
							<affiliation key="aff0">
								<orgName type="department">Regulated Software Research Centre (RSRC)</orgName>
								<orgName type="institution">Dundalk Institute of Technology (DkIT)</orgName>
								<address>
									<settlement>Dundalk</settlement>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Data Poisoning Attacks in the Training Phase of Machine Learning Models: A Review</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">9DCF302FCB84470CB8FBF8D268AEFBA7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:13+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Data poisoning</term>
					<term>artificial intelligence</term>
					<term>machine learning</term>
					<term>deep learning</term>
					<term>cybersecurity</term>
					<term>adversarial attacks</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Data Poisoning Attacks (DPAs) can severely impact the performance of Machine Learning (ML) models by manipulating training datasets to introduce errors or biases. The integrity of ML models is crucial for user safety and trust, especially as these models increasingly influence key decision-making processes in safety-critical sectors like finance, healthcare, and law enforcement. As ML technology advances, so do the vulnerabilities of these systems, making the reliability of training data vital for ensuring accurate and dependable model outcomes. This review examines the growing threat of DPAs on ML systems at the training stage, categorizing these attacks into label manipulation, data injection, feature space manipulation, and relationship manipulation. By exploring multiple types of attacks and providing relevant examples, this analysis aims to raise awareness about the significant risks posed by compromised data, which can lead to widespread mistrust in ML systems and cause considerable harm, including financial losses, legal liabilities, and even threats to human lives.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Machine Learning (ML) models demonstrate outstanding effectiveness in addressing a variety of complex data classification and analysis problems. Because ML models can recognize patterns in data and make predictions, they have transformed several sectors such as healthcare by facilitating advanced data analytics, personalized medicine, and predictive modelling <ref type="bibr" target="#b0">[1]</ref>. However, adversarial attacks have consistently exposed critical vulnerabilities in such systems, highlighting the need for robust security measures to safeguard the integrity and reliability of these applications in every domain <ref type="bibr" target="#b1">[2]</ref>.</p><p>Data Poisoning Attacks (DPAs), a subset of adversarial attacks, signify a substantial threat to the integrity of ML models because of the multiple pathways in which they can introduce vulnerabilities to a system where accurate and reliable predictions are crucial <ref type="bibr" target="#b2">[3]</ref>. Attackers may introduce erroneous or misleading data points, subtly altering class distributions or introducing noise, which can also lead to biased or incorrect predictions <ref type="bibr" target="#b3">[4]</ref>. For example, Figure <ref type="figure" target="#fig_0">1</ref> (a) and (b) show a model built to identify dogs. The model in Figure <ref type="figure" target="#fig_0">1</ref> (a), trained on clean data, correctly classifies a dog. The model in Figure <ref type="figure" target="#fig_0">1</ref> (b) is trained on a poisoned data point (marked with red dots and given a different label). Although this training data point clearly looks like a dog to the human eye, it is registered as a cat because both the label and the image have been poisoned. This causes the model to misclassify during testing and can have severe implications when models are trained in real-time.</p><p>In this paper, we focus on DPAs primarily at the training stage of an ML model because these attacks are growing more nuanced as ML technology evolves <ref type="bibr" target="#b4">[5]</ref>. We aim to categorize and analyse various DPAs and assess their impact on ML models to establish real-world consequences. We illustrate these attack types using the Breast Cancer Wisconsin (Diagnostic) Dataset <ref type="bibr" target="#b5">[6]</ref>, which provides a practical scenario for understanding how such attacks can alter model performance. The rest of the paper is structured as follows: Section 2 describes previous work related to this research. Section 3 gives an overview of data poisoning, describes the four groups into which DPAs can be divided, enumerates the attack types within each group, and formulates these attacks using a medical dataset. Section 4 analyses the impacts of these attacks. Section 5 discusses emerging solutions to DPAs. Section 6 concludes the paper and outlines future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related work</head><p>Research on DPAs in ML has gained significant attention due to the vulnerabilities these attacks expose in various AI applications. One study categorizes different attack scenarios and discusses mitigation strategies, emphasizing the interplay between data poisoning and the trustworthiness of AI systems <ref type="bibr" target="#b6">[7]</ref>. However, it describes only three types of training attacks, i.e. non-targeted, targeted, and backdoor poisoning.</p><p>One survey offers a taxonomy of DPAs and an experimental assessment that focuses on the necessity of robust Federated Learning (FL) <ref type="bibr" target="#b7">[8]</ref>. This study is limited in scope as it only addresses four types of training attacks specific to FL, namely label-flipping attacks, poisoning sample attacks, backdoor attacks, and untargeted attacks, thereby reducing its overall comprehensiveness and limiting its utility for broader applications. Tian et al. offer an overview of poisoning attacks and countermeasures in centralized and federated learning <ref type="bibr" target="#b8">[9]</ref>. They categorize attack methods by their goals, analyse the differences and connections among techniques, and present countermeasures with their pros and cons. Their analysis is constrained by its examination of only three types of DPAs in centralized learning and FL. By mentioning nine types of input attacks, the study by Surekha et al. offers a broader perspective on DPAs across multiple types of ML than the previous studies but lacks in-depth explanation of these attacks <ref type="bibr" target="#b9">[10]</ref>.</p><p>A study by Emanuele Cinà et al. provides a comprehensive systematization of DPAs, reviewing over one hundred papers in the field over the past fifteen years <ref type="bibr" target="#b10">[11]</ref>. They describe five types of attacks, limited to computer vision, and further perform threat modelling on them. 
The work done by Goldblum et al. provides an extensive list of DPAs during the training phase <ref type="bibr" target="#b11">[12]</ref>. They discuss eight different attack types and which kinds of models each attack can target. Another study provides a comprehensive overview of attacks and defences but does not adequately address the rapid evolution of attack strategies, risking obsolescence of the proposed defences <ref type="bibr" target="#b12">[13]</ref>.</p><p>This study presents seventeen distinct DPAs during the training phase, covering multiple domains within ML. These DPAs are further classified into four groups for enhanced clarity and distinction. Each type is illustrated using the Breast Cancer Wisconsin (Diagnostic) Dataset.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Overview of data poisoning attacks</head><p>DPAs detrimentally affect ML systems by intentionally altering the training data to corrupt model performance or change model behaviour <ref type="bibr" target="#b13">[14]</ref>. These attacks involve introducing malicious data points or modifying existing ones, skewing the training process to favour the attacker's goals <ref type="bibr" target="#b11">[12]</ref>. As ML models are increasingly integrated into various industries, understanding and mitigating the risks associated with data poisoning is crucial for maintaining the integrity and reliability of these systems <ref type="bibr" target="#b14">[15]</ref>. The impact of these attacks can vary from minor performance reduction to severe consequences, depending on the context in which the ML model is employed. DPAs can be classified into several distinct groups, each exploiting different vulnerabilities in the ML training process (see Figure <ref type="figure" target="#fig_1">2</ref>). These groups include label manipulation, where incorrect labels are assigned to training data <ref type="bibr" target="#b15">[16]</ref>; data injection, which involves adding fraudulent data points <ref type="bibr" target="#b16">[17]</ref>; feature space manipulation, where the features of the data are altered to mislead the model <ref type="bibr" target="#b17">[18]</ref>; and relationship (or context) manipulation, which disrupts the underlying relationships between data points <ref type="bibr" target="#b18">[19]</ref>. As shown in Figure <ref type="figure" target="#fig_1">2</ref>, the different groups can be further divided into the following types.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Label manipulation attacks</head><p>Label manipulation attacks in ML involve various strategies that aim to compromise the integrity of a model's training data, thereby skewing its outcomes. One common approach is label flipping, where attackers maliciously alter the labels of training samples to mislead the model into making incorrect predictions <ref type="bibr" target="#b19">[20]</ref>. Another technique is targeted poisoning, which focuses on specific cases or categories within the dataset, intending to skew the model's results towards erroneous outputs <ref type="bibr" target="#b10">[11]</ref>. Additionally, clean-label attacks involve introducing subtle changes to the training data that appear harmless but are strategically crafted to cause model errors <ref type="bibr" target="#b20">[21]</ref>.</p></div>
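The label-flipping strategy described above can be sketched in a few lines. This is a minimal, illustrative example over a synthetic stand-in for the diagnostic labels; the function name `flip_labels` and its parameters are our own, not taken from any cited work.

```python
import numpy as np

def flip_labels(y, fraction=0.1, seed=0):
    """Label flipping: invert a random fraction of binary labels (0 <-> 1)."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # y_i = 1 -> 0 and vice versa
    return y_poisoned

# Stand-in for the diagnostic labels: 1 = malignant, 0 = benign
y = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_poisoned = flip_labels(y, fraction=0.2)
print((y != y_poisoned).sum())  # 2 labels flipped
```

A targeted variant would select `idx` by a feature condition (e.g. samples with rare characteristics) rather than at random.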
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Data injection attacks</head><p>Data injection attacks encompass various techniques used to manipulate and break ML models. One such method is outlier injection, which involves adding extreme feature values to distort the model's learning process <ref type="bibr" target="#b21">[22]</ref>. Backdoor attacks (or Trojan Attacks) embed specific trigger patterns in data to control the model's behaviour upon activation <ref type="bibr" target="#b11">[12]</ref>. Another approach is gradient ascent, where data is crafted to maximize the model's error rate during training <ref type="bibr" target="#b22">[23]</ref>. Availability attacks focus on inserting noise into the training data, hindering the model's learning process and reducing its accuracy <ref type="bibr" target="#b23">[24]</ref>. In contrast, integrity attacks involve making subtle changes to data, leading to a gradual decline in the model's performance <ref type="bibr" target="#b24">[25]</ref>. Data obfuscation disguises the attack by altering data in ways that appear plausible, making it difficult to detect <ref type="bibr" target="#b25">[26]</ref>. Finally, false data injection creates fictitious records to skew the model's predictions, further compromising its reliability <ref type="bibr" target="#b26">[27]</ref>.</p></div>
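Of the injection techniques above, the backdoor attack is the most structured, so we sketch it here. This is a toy illustration with synthetic data; `insert_backdoor`, the trigger column, and the trigger value are all illustrative choices of ours, not a prescribed method from the literature.

```python
import numpy as np

def insert_backdoor(X, y, trigger_col=0, trigger_value=99.0, n_poison=3, seed=0):
    """Backdoor (Trojan) sketch: stamp a trigger pattern into a few
    training samples and relabel them as the attacker's target class.
    A model trained on (Xp, yp) can learn to output the target class
    whenever the trigger appears at inference time."""
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    idx = rng.choice(len(X), size=n_poison, replace=False)
    Xp[idx, trigger_col] = trigger_value  # the trigger pattern p
    yp[idx] = 0                           # attacker-chosen label (benign)
    return Xp, yp

X = np.random.default_rng(1).normal(size=(10, 4))
y = np.ones(10, dtype=int)  # all malignant before poisoning
Xp, yp = insert_backdoor(X, y)
print((yp == 0).sum(), (Xp[:, 0] == 99.0).sum())  # 3 3
```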
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Feature space manipulation attacks</head><p>Feature space manipulation encompasses several techniques that adversaries use to compromise ML models. One such technique is feature collision, which involves creating features that seem harmless but cause the input data's characteristics to overlap or "collide" with those of other features, disrupting how the model interprets and learns from the data <ref type="bibr" target="#b27">[28]</ref>. Another method is subpopulation attacks, which target specific demographic groups within the dataset to exploit vulnerabilities associated with those subpopulations <ref type="bibr" target="#b28">[29]</ref>. Generative Adversarial Network (GAN)-based poisoning uses a generative model to produce synthetic data that poisons the model, degrading its performance or causing it to make incorrect predictions <ref type="bibr" target="#b29">[30]</ref>. Replica injection involves duplicating examples within the training data, which can bias the model and lead to overfitting on certain patterns <ref type="bibr" target="#b30">[31]</ref>. Semantic poisoning, on the other hand, changes feature relationships to mislead the model by altering the underlying data semantics without altering its appearance <ref type="bibr" target="#b31">[32]</ref>. Lastly, constructive interference refers to the manipulation of decision boundaries through manufactured examples, aiming to disrupt the model's ability to accurately classify data by strategically influencing its learning process <ref type="bibr" target="#b32">[33]</ref>.</p></div>
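The feature-collision idea can be sketched as follows: new points are generated close to malignant samples in feature space but carry the benign label. The data and the function `feature_collision` are illustrative stand-ins, not an implementation from the cited work.

```python
import numpy as np

def feature_collision(X, y, n_new=2, eps=0.01, seed=0):
    """Feature-collision sketch: craft new points that sit almost on top
    of malignant samples in feature space but carry the benign label,
    blurring the decision boundary around those samples."""
    rng = np.random.default_rng(seed)
    malignant = X[y == 1]
    base = malignant[rng.choice(len(malignant), size=n_new, replace=False)]
    X_new = base + eps * rng.normal(size=base.shape)  # x_j close to malignant(X)
    y_new = np.zeros(n_new, dtype=int)                # but labelled benign
    return np.vstack([X, X_new]), np.concatenate([y, y_new])

X = np.random.default_rng(2).normal(size=(8, 3))
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])
Xp, yp = feature_collision(X, y)
print(Xp.shape, yp.shape)  # (10, 3) (10,)
```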
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Relationship manipulation attack</head><p>Causal poisoning involves deliberately altering correlations between datapoints to mislead causal inference models <ref type="bibr" target="#b33">[34]</ref>. This technique can manipulate the perceived relationships within data, leading to wrong conclusions about cause-and-effect dynamics.</p></div>
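A minimal sketch of causal poisoning, assuming a single feature (a stand-in for cell texture) whose correlation with the label the attacker wants to invert; the function `causal_poison` and its shift-based mechanism are illustrative choices of ours.

```python
import numpy as np

def causal_poison(X, y, feature=0, strength=10.0):
    """Causal-poisoning sketch: shift one feature against its true
    correlation with the label, inverting the learned relationship
    (e.g. between cell texture and malignancy)."""
    Xp = X.copy()
    Xp[y == 0, feature] += strength  # push benign samples up
    Xp[y == 1, feature] -= strength  # push malignant samples down
    return Xp

rng = np.random.default_rng(3)
y = np.array([1, 1, 1, 0, 0, 0])
X = rng.normal(size=(6, 2))
X[:, 0] += 4.0 * y                  # feature 0 positively correlated with y
Xp = causal_poison(X, y)
orig_corr = np.corrcoef(X[:, 0], y)[0, 1]
new_corr = np.corrcoef(Xp[:, 0], y)[0, 1]
print(orig_corr > 0, new_corr < 0)  # the correlation sign is inverted
```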
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Attack examples using a medical dataset</head><p>We explore each of the DPA types using examples based on the Breast Cancer Wisconsin (Diagnostic) Dataset. This dataset contains features extracted from breast cancer cell images, where each instance is labelled as either "benign" or "malignant." We use this dataset as a consistent reference for all examples.</p><p>In the dataset (see Table <ref type="table">1</ref>), let X represent the feature matrix, where each row x i corresponds to the features of an individual sample, and let Y represent the label vector, where y i corresponds to the label of x i , with y i = 1 for malignant and y i = 0 for benign. Let x j be a new data point that does not already exist in the dataset.</p><p>All the functions used are denoted in bold and italics to maintain consistency and clarity in the explanation.</p></div>
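The notation just introduced can be made concrete with a toy stand-in: the numeric values below are illustrative only, not drawn from the real dataset.

```python
import numpy as np

# Toy stand-in for the feature matrix X and label vector Y of Table 1
X = np.array([[14.2, 20.1],   # x_0: features of sample 0
              [11.8, 15.3],   # x_1
              [18.6, 25.7]])  # x_2
Y = np.array([1, 0, 1])       # y_i = 1 for malignant, y_i = 0 for benign

# A new data point x_j that does not already exist in the dataset
x_j = np.array([40.0, 55.0])
assert not any(np.array_equal(x_j, x_i) for x_i in X)

# Outlier-injection condition: max(x_j) far exceeds max(X)
print(x_j.max() > X.max())  # True
```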
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1: Overview of Attacks and Formulations</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Name of Attack Example Formulation</head><p>Label Flipping Changing the label of certain data points from malignant (1) to benign (0) or vice versa, confusing the model during training and causing it to make incorrect predictions on test data.</p><p>Change labels: y i = 1 → y i = 0 for some i where x i exhibits malignant characteristics. Targeted Poisoning Altering the labels of specific cancer cases with rare cell features, flipping their diagnosis from malignant to benign. This can cause the model to perform poorly in these rare but crucial cases.</p><p>Modify y i = 1 → y i = 0 for samples with rare features x i = rare(X).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Clean-label Attacks</head><p>Adding benign samples that have similar features to malignant samples but keeping their label as benign. This confuses the model during inference when it encounters similar patterns in malignant cases.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Add x i ≈ malignant(X), keep y i = 0.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Outlier Injection</head><p>Adding outlier data points with impossible or unrealistic feature values, such as extremely high or low measurements, which could skew the model's understanding of what constitutes benign and malignant tumours.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Add x j with max(x j ) ≫ max(X) or min(x j ) ≪ min(X).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Backdoor Attacks</head><p>Inserting a specific pattern (i.e., a particular combination of feature values, for example a small watermark) in some training data labelled as benign. The model learns to associate this pattern with benign cases, even if it appears in future malignant inputs.</p><p>Insert pattern p in x i , label as y i = 0 even if x i = malignant(X).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Gradient Ascent Attacks</head><p>Modifying data points to increase the model's error. This can be achieved by creating data samples that maximize prediction errors, thus degrading overall model performance.</p><formula xml:id="formula_0">Modify x i to x ′ i such that ∇L(f (x ′ i ), y i ) &gt; ∇L(f (x i ), y i ),</formula><p>where ∇L is the gradient of the loss function L and f is the model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Availability Attacks</head><p>Introducing enough noisy data points that confuse the learning process, causing the model to fail to generalize and effectively classify the actual cases.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Introduce noise: x j = noise(X), y j = random(0, 1).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Integrity Attacks</head><p>Subtly altering specific features of benign data to appear malignant. This could cause the model to falsely classify future benign datapoints as malignant, leading to over-treatments.</p><p>Alter x i → x ′ i where x ′ i ≈ malignant(X), y i = 0.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Data Obfuscation</head><p>Slightly modifying the features of current benign samples so that they still look plausible but cause damage to the model performance, misclassifying them as malignant.</p><p>Slightly change x i to x ′ i such that d(x i , x ′ i ) &lt; ϵ, y i = 0, where d() represents the distance function and ϵ is the plausibility threshold. False Data Injection Adding fictitious patient records with fake measurements and labels to corrupt the training data, leading the model to learn incorrect patterns.</p><p>Add fictitious samples (x j , y j ) with random x j and y j .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Poisoning via Feature Collision</head><p>Creating new training data points that share feature space similarities with malignant samples but label them as benign, causing confusion and misclassification.</p><p>Generate x j such that x j ≈ malignant(X) but set y j = 0.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Subpopulation Attacks</head><p>Targeting training data of a specific patient demographic (e.g., older patients) by adding noise to that subgroup, causing the model to underperform on this specific population.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Add noise to x i where x i = older_patients(X).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>GAN-based Poisoning</head><p>Using GANs to generate synthetic images of benign tumours that mimic the feature distribution of malignant cases, causing misclassification during inference.</p><p>Use GAN to create x GAN ∼ benign(X) but resembles malignant(X).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Replica Injection</head><p>Duplicating certain benign examples multiple times in the dataset to bias the model towards classifying similar features as benign, even when they may be malignant.</p><p>Duplicate x i where y i = 0 multiple times to bias the model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Semantic Poisoning</head><p>Altering benign data samples by changing feature relationships (e.g., modifying cell size ratios) to mislead the model into incorrect conclusions about what defines malignancy.</p><p>Alter</p><formula xml:id="formula_1">x i such that relationship new (x i ) ≠ relationship original (x i ), maintain y i = 0.</formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Constructive Interference</head><p>Introducing data that causes the model to construct incorrect decision boundaries in feature space, such as mixing malignant and benign features in novel ways to confuse the model.</p><p>Introduce x i such that it lies between decision boundary of benign and malignant, y i = 0.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Causal Poisoning</head><p>Introducing data that changes the learned relationships between variables. For example, altering the correlation between cell texture and malignancy, misleading the model into incorrect causal inferences.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Modify data</head><formula xml:id="formula_2">x i such that corr new (texture(x i ), y i ) ≠ corr original (texture(x i ), y i ).</formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Impact of data poisoning</head><p>Data poisoning is a critical challenge in the development and deployment of ML models as it renders the model ineffective in making sound and reliable decisions <ref type="bibr" target="#b34">[35]</ref>. For example, to poison Gmail's spam filtering mechanism, attackers sent millions of emails to confuse Gmail's spam filters, allowing malicious emails to bypass detection <ref type="bibr" target="#b35">[36]</ref>. In 2016, Microsoft's AI chatbot Tay was shut down hours after launch when malicious users fed it offensive tweets, causing it to post inappropriate content <ref type="bibr" target="#b35">[36]</ref>. Researchers have demonstrated that Google's AI image recognition system can be deceived by adversarial attacks, where subtly modified images, such as a 3D-printed turtle altered to appear as a rifle, cause the AI to misidentify objects <ref type="bibr" target="#b36">[37]</ref>. A firm reportedly manipulated a Tesla's AI system to drive into oncoming traffic by poisoning the training data used for its navigation and decision-making processes <ref type="bibr" target="#b37">[38]</ref>. In 2023, a new application called Nightshade emerged; artists are using it to undermine generative AI models by deliberately corrupting their training data, aiming to expose and counteract the impact of AI on their creative work <ref type="bibr" target="#b38">[39]</ref>.</p><p>Model performance in critical scenarios, such as healthcare, can directly impact patient care and safety <ref type="bibr" target="#b39">[40]</ref>. Even a small percentage of poisoned data can disproportionately affect a model's accuracy, leading to poor performance, misdiagnoses, and incorrect treatment recommendations. For instance, a poisoned model might incorrectly identify benign tumours as malignant or fail to recognize serious conditions, leading to inappropriate treatment plans. 
As a result, healthcare providers may be reluctant to adopt these systems, fearing potential inaccuracies and the associated liabilities <ref type="bibr" target="#b40">[41]</ref>.</p><p>Data poisoning poses significant risks to ML models in the financial sector as poisoned data can lead to incorrect predictions and decisions in areas like fraud detection, credit scoring, and algorithmic trading <ref type="bibr" target="#b41">[42]</ref>. For instance, if an ML model is trained on manipulated data, it may incorrectly classify fraudulent transactions as legitimate, leading to substantial financial losses for institutions. Similarly, poisoned data can skew credit scoring models, resulting in unfair lending practices that either deny credit to worthy applicants or approve loans for high-risk individuals, increasing default rates. In algorithmic trading, data poisoning can cause models to make erroneous buy or sell decisions, leading to market manipulation and significant financial instability. These vulnerabilities undermine the integrity of financial operations and diminish trust in such systems, which can result in increased regulatory scrutiny and legal liabilities for financial institutions.</p><p>An ML model trained on poisoned data that specifically targets a certain demographic can inadvertently perpetuate or even amplify biases that were not initially present <ref type="bibr" target="#b42">[43]</ref>. When the poisoned data skews the representation of a particular demographic, the model may develop biased decision-making processes that disproportionately affect that group <ref type="bibr" target="#b43">[44]</ref>. This can result in unfair outcomes, such as biased hiring algorithms or discriminatory loan approval systems, where the biases introduced during training become automated, perpetuating systemic inequalities. 
Even if the original data was free of such biases, the poisoned data can introduce new harmful patterns that the model then enforces in its predictions and decisions.</p><p>Backdoors embedded in ML models can pose a serious threat by not only manipulating model behaviour but also by enabling the extraction of sensitive training data. This data, often containing personal or confidential information, can be exploited by attackers to enhance social engineering tactics <ref type="bibr" target="#b44">[45]</ref>. For instance, if a backdoor allows access to detailed training data, attackers can gather specific insights about individuals, such as their preferences, behaviours, or personal details. Armed with this information, they can craft highly convincing phishing emails or fraudulent messages tailored to exploit the victim's vulnerabilities. This misuse of extracted data significantly amplifies the effectiveness of social engineering attacks, making them more persuasive and harder to detect.</p><p>Data poisoning during the training of ML models can significantly impact public trust and perception of technology <ref type="bibr" target="#b45">[46]</ref>. When poisoned data skews a model's outputs, it can undermine confidence in AI systems, especially in critical sectors like healthcare, finance, and law enforcement where reliability and fairness are crucial. This erosion of trust can lead to decreased adoption of AI technologies and heightened scrutiny of their ethical implications. Additionally, compromised models can strain social services by misallocating resources, thereby deepening disparities in access to essential services <ref type="bibr" target="#b46">[47]</ref>. The economic impact includes potential financial losses and damage to a company's reputation, which can deter investment in AI research and development, ultimately affecting innovation and economic growth in the tech industry.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Discussion</head><p>As data poisoning becomes a more prominent threat, emerging defence mechanisms are being developed to protect ML models. Techniques such as adversarial training, formal verification and role-based access controls, training data sanitization, robust statistical methods, and advanced anomaly detection algorithms are at the forefront of these efforts <ref type="bibr" target="#b47">[48]</ref>. Adversarial training involves exposing models to potential attacks during the training phase, allowing them to learn from and resist these threats <ref type="bibr" target="#b48">[49]</ref>. Robust statistical methods aim to enhance the resilience of models by employing techniques that reduce sensitivity to corrupted data points <ref type="bibr" target="#b49">[50]</ref>. Additionally, anomaly detection algorithms are becoming increasingly sophisticated, capable of identifying unusual patterns that may indicate data poisoning <ref type="bibr" target="#b50">[51]</ref>. These technological advances aim to fortify ML systems against poisoning attacks, enabling them to maintain performance and reliability even in the face of malicious interference.</p><p>Healthcare, traditionally a slow adopter of cutting-edge technology, has been particularly vulnerable to these evolving threats <ref type="bibr" target="#b51">[52]</ref>. Unlike sectors such as finance or cybersecurity, which have rapidly integrated ML innovations, medical systems often operate with legacy infrastructures that are less adaptable to new technologies <ref type="bibr" target="#b52">[53]</ref>. The sensitivity of health data and the strict regulatory environments further complicate the integration of advanced ML systems, creating a gap where vulnerabilities can easily be exploited <ref type="bibr" target="#b53">[54]</ref>.</p><p>Moreover, the rapid pace of change in ML technology worsens these vulnerabilities. 
New algorithms and models are being developed at breakneck speed, outpacing every sector's ability to implement robust security measures effectively <ref type="bibr" target="#b54">[55]</ref>. Every sector therefore needs to accelerate its adoption of technological advancements while simultaneously strengthening its cybersecurity posture to guard against the growing threat of data poisoning.</p></div>
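To make the anomaly-detection defence direction concrete, the sketch below flags injected training points with scikit-learn's IsolationForest. It is a minimal illustration under our own assumptions (synthetic two-cluster data, a deliberately distant poison cluster, and an illustrative contamination rate), not a method taken from the surveyed works.

```python
# Illustrative sketch: screening a training set for injected (poisoned)
# points with an anomaly detector before model training.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Clean training data: two compact feature clusters.
clean = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(200, 2)),
    rng.normal(loc=4.0, scale=0.5, size=(200, 2)),
])

# Injected poison: a handful of far-off points slipped in by an attacker.
poison = rng.normal(loc=10.0, scale=0.3, size=(10, 2))
training_set = np.vstack([clean, poison])

# Fit an Isolation Forest; fit_predict returns -1 for anomalies, 1 for inliers.
detector = IsolationForest(contamination=0.03, random_state=0)
labels = detector.fit_predict(training_set)

flagged = set(np.where(labels == -1)[0].tolist())
poison_indices = set(range(len(clean), len(training_set)))
caught = poison_indices & flagged
print(f"flagged {len(flagged)} points, caught {len(caught)} of 10 injected poisons")
```

In practice the contamination rate is unknown and poisons crafted to sit near the clean distribution are far harder to isolate, which is why such filters are combined with the other defences discussed above.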
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion and future work</head><p>DPAs pose a significant challenge to the reliability, safety, and ethical application of ML systems. In this paper, we have systematically categorized DPAs into four distinct groups and seventeen specific types, providing a comprehensive framework for understanding the diverse nature of these threats. Furthermore, we have presented clear examples of these attacks, leveraging a medical dataset to demonstrate their practical implications and to facilitate more rigorous analysis.</p><p>Our future work will focus on developing robust defence mechanisms that can preemptively identify and neutralize DPAs before they affect ML models. This includes further research into real-time monitoring systems that can detect and respond to DPA threats using technologies such as adversarial training and blockchain.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Example of training phase DPA (a) model trained on clean data (b) model trained on poisoned data</figDesc><graphic coords="2,127.16,65.61,338.49,130.55" type="bitmap" /></figure>
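As a minimal illustration of the clean-versus-poisoned comparison depicted in Figure 1, the sketch below trains the same classifier on clean and on label-flipped versions of the Breast Cancer Wisconsin dataset. It is our own illustrative sketch (the 40% flip rate, classifier, and split are arbitrary choices), not a reproduction of the paper's experiments.

```python
# Illustrative label-flipping attack: compare a model trained on clean
# labels with one trained after an attacker flips 40% of training labels.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Attacker flips 40% of the binary training labels.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_tr), size=int(0.4 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

clean_model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
poisoned_model = LogisticRegression(max_iter=5000).fit(X_tr, y_poisoned)

clean_acc = clean_model.score(X_te, y_te)
poisoned_acc = poisoned_model.score(X_te, y_te)
print(f"clean accuracy: {clean_acc:.3f}  poisoned accuracy: {poisoned_acc:.3f}")
```

The held-out accuracy gap between the two models is the degradation the attack buys; targeted or clean-label variants discussed in the taxonomy achieve their goals with far smaller, stealthier perturbations.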
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Data poisoning groups and types</figDesc><graphic coords="3,138.45,196.03,315.92,206.22" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This publication has emanated from research conducted with the financial support of Research Ireland under Grant number 21/FFP-A/9255.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Artificial intelligence and machine learning in precision medicine: A paradigm shift in big data analysis</title>
		<author>
			<persName><forename type="first">M</forename><surname>Sahu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">K</forename><surname>Ambasta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Kumar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Progress in molecular biology and translational science</title>
		<imprint>
			<biblScope unit="volume">190</biblScope>
			<biblScope unit="page" from="57" to="100" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">O</forename><surname>Ibitoye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Abou-Khamis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Shehaby</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Matrawy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">O</forename><surname>Shafiq</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1911.02621</idno>
		<title level="m">The threat of adversarial attacks on machine learning in network security-a survey</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Secure and robust machine learning for healthcare: A survey</title>
		<author>
			<persName><forename type="first">A</forename><surname>Qayyum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Qadir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bilal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Al-Fuqaha</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Reviews in Biomedical Engineering</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="156" to="180" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Adversarial learning targeting deep neural network classification: A comprehensive review of defenses against attacks</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Miller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Xiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Kesidis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the IEEE</title>
		<imprint>
			<biblScope unit="volume">108</biblScope>
			<biblScope unit="page" from="402" to="433" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A survey on attacks and their countermeasures in deep learning: Applications in deep neural networks, federated, transfer, and deep reinforcement learning</title>
		<author>
			<persName><forename type="first">H</forename><surname>Ali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Harrington</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Salazar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Al Ameedi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Khan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">R</forename><surname>Butt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-H</forename><surname>Cho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">W</forename><surname>Wolberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Mangasarian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Street</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Street</surname></persName>
		</author>
		<title level="m">Breast cancer wisconsin (diagnostic)</title>
				<imprint>
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
	<note>UCI Machine Learning Repository</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Machine learning security against data poisoning: Are we there yet?</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">E</forename><surname>Cinà</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Grosse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Demontis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Biggio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Roli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pelillo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer</title>
		<imprint>
			<biblScope unit="volume">57</biblScope>
			<biblScope unit="page" from="26" to="34" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Sagar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-S</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">W</forename><surname>Loke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Choi</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2301.05795</idno>
		<title level="m">Poisoning attacks and defenses in federated learning: A survey</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A comprehensive survey on poisoning attacks and countermeasures in machine learning</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Tian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Cui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="1" to="35" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A comprehensive analysis of poisoning attack and defence strategies in machine learning techniques</title>
		<author>
			<persName><forename type="first">M</forename><surname>Surekha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Sagar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Khemchandani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Computing, Power and Communication Technologies (IC2PCT)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="1662" to="1668" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Wild patterns reloaded: A survey of machine learning security against training data poisoning</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">E</forename><surname>Cinà</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Grosse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Demontis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Vascon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zellinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">A</forename><surname>Moser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Oprea</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Biggio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pelillo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Roli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="1" to="39" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses</title>
		<author>
			<persName><forename type="first">M</forename><surname>Goldblum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Tsipras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schwarzschild</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mądry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Goldstein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="1563" to="1580" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">The path to defence: A roadmap to characterising data poisoning attacks on victim models</title>
		<author>
			<persName><forename type="first">T</forename><surname>Chaalan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kamruzzaman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Gondal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys</title>
		<imprint>
			<biblScope unit="volume">56</biblScope>
			<biblScope unit="page" from="1" to="39" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Threats to training: A survey of poisoning attacks and defenses on machine learning systems</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Qin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ren</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="1" to="36" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Data security issues in deep learning: Attacks, countermeasures, and opportunities</title>
		<author>
			<persName><forename type="first">G</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">H</forename><surname>Deng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Communications Magazine</title>
		<imprint>
			<biblScope unit="volume">57</biblScope>
			<biblScope unit="page" from="116" to="122" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Noisy label learning for security defects</title>
		<author>
			<persName><forename type="first">R</forename><surname>Croft</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Babar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 19th International Conference on Mining Software Repositories</title>
				<meeting>the 19th International Conference on Mining Software Repositories</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="435" to="447" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Safeguarding station data integrity: A comprehensive study on detecting and mitigating false data injection through advanced machine learning techniques</title>
		<author>
			<persName><forename type="first">J</forename><surname>Shirini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Shaik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sahithi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Reddy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Jyothi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Subramanyam</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Educational Administration: Theory and Practice</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="page" from="1316" to="1324" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A survey on image data augmentation for deep learning</title>
		<author>
			<persName><forename type="first">C</forename><surname>Shorten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">M</forename><surname>Khoshgoftaar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Big Data</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="1" to="48" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Inductive biases for deep learning of higher-level cognition</title>
		<author>
			<persName><forename type="first">A</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the Royal Society A</title>
		<imprint>
			<biblScope unit="volume">478</biblScope>
			<biblScope unit="page">20210068</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Analysis of label-flip poisoning attack on machine learning based malware detector</title>
		<author>
			<persName><forename type="first">K</forename><surname>Aryal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Abdelsalam</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Big Data (Big Data)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="4236" to="4245" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">H</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ngoc-Hieu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T.-A</forename><surname>Ta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Nguyen-Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-S</forename><surname>Wong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Thanh-Tung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">D</forename><surname>Doan</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2407.10825</idno>
		<title level="m">Wicked oddities: Selectively poisoning for effective clean-label backdoor attacks</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Detection of profile injection attacks in social recommender systems using outlier analysis</title>
		<author>
			<persName><forename type="first">A</forename><surname>Davoudi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chatterjee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Big Data (Big Data)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="2714" to="2719" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Exploring adversarial attack in spiking neural networks with spike-compatible gradient</title>
		<author>
			<persName><forename type="first">L</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Ding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Xie</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks and Learning Systems</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="page" from="2569" to="2583" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Re-thinking data availability attacks against deep neural networks</title>
		<author>
			<persName><forename type="first">B</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Yi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE/CVF Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="12215" to="12224" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Data integrity attacks and their impacts on SCADA control system</title>
		<author>
			<persName><forename type="first">S</forename><surname>Sridhar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Manimaran</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE PES General Meeting</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Probabilistic obfuscation through covert channels</title>
		<author>
			<persName><forename type="first">J</forename><surname>Stephens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Yadegari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Collberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Debray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Scheidegger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2018 IEEE European Symposium on Security and Privacy (EuroS&amp;P)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="243" to="257" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Design of false data injection attacks in cyber-physical systems</title>
		<author>
			<persName><forename type="first">S</forename><surname>Padhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Turuk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Sciences</title>
		<imprint>
			<biblScope unit="volume">608</biblScope>
			<biblScope unit="page" from="825" to="843" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">An overview of backdoor attacks against deep neural networks and possible defences</title>
		<author>
			<persName><forename type="first">W</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Tondi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Barni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Open Journal of Signal Processing</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="261" to="287" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Subpopulation data poisoning attacks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Jagielski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Severi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Harger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Oprea</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security</title>
				<meeting>the 2021 ACM SIGSAC Conference on Computer and Communications Security</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="3104" to="3122" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<monogr>
		<author>
			<persName><forename type="first">X</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Guan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2401.08984</idno>
		<title level="m">A gan-based data poisoning framework against anomaly detection in vertical federated learning</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Identifying statistical bias in dataset replication</title>
		<author>
			<persName><forename type="first">L</forename><surname>Engstrom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ilyas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Santurkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Tsipras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Steinhardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Madry</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Machine Learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="2922" to="2932" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Mass: Model-agnostic, semantic and stealthy data poisoning attack on knowledge graph embedding</title>
		<author>
			<persName><forename type="first">X</forename><surname>You</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Sheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Feng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM Web Conference 2023</title>
				<meeting>the ACM Web Conference 2023</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="2000" to="2010" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Decision boundary analysis of adversarial examples</title>
		<author>
			<persName><forename type="first">W</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Song</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Learning Representations</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Poisoning programs by un-repairing code: security concerns of AI-generated code</title>
		<author>
			<persName><forename type="first">C</forename><surname>Improta</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 34th International Symposium on Software Reliability Engineering Workshops (ISSREW)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="128" to="131" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<title level="m" type="main">Mitigating unfairness and adversarial attacks in machine learning</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Abebe</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<monogr>
		<author>
			<persName><surname>Mathco</surname></persName>
		</author>
		<ptr target="https://mathco.com/blog/data-poisoning-and-its-impact-on-the-ai-ecosystem/" />
		<title level="m">Data poisoning and its impact on the AI ecosystem</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<monogr>
		<title level="m" type="main">Google&apos;s AI thinks this turtle looks like a gun, which is a problem</title>
		<author>
			<persName><forename type="first">J</forename><surname>Vincent</surname></persName>
		</author>
		<ptr target="https://www.theverge.com/2017/11/2/16597276/google-ai-image-attacks-adversarial-turtle-rifle-3d-printed" />
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<monogr>
		<title level="m" type="main">Military artificial intelligence can be easily and dangerously fooled</title>
		<author>
			<persName><surname>MIT Technology Review</surname></persName>
		</author>
		<ptr target="https://www.technologyreview.com/2019/10/21/132277/military-artificial-intelligence-can-be-easily-and-dangerously-fooled/" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<monogr>
		<title level="m" type="main">This new data poisoning tool lets artists fight back against generative AI</title>
		<author>
			<persName><surname>MIT Technology Review</surname></persName>
		</author>
		<ptr target="https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Hidden risks of machine learning applied to healthcare: unintended feedback loops between models and future data causing model degradation</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Adam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-H</forename><forename type="middle">K</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Haibe-Kains</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Goldenberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Machine Learning for Healthcare Conference</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="710" to="731" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Artificial intelligence and clinical decision support: clinicians&apos; perspectives on trust, trustworthiness, and liability</title>
		<author>
			<persName><forename type="first">C</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Thornton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Wyatt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Medical Law Review</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="501" to="520" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">Algorithms in future insurance markets</title>
		<author>
			<persName><forename type="first">M</forename><surname>Śmietanka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Koshiyama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Treleaven</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Data Science and Big Data Analytics</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="1" to="19" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<monogr>
		<title level="m" type="main">Trustworthy Graph Learning</title>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
		<respStmt>
			<orgName>Stevens Institute of Technology</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Ph.D. thesis</note>
</biblStruct>

<biblStruct xml:id="b43">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Franco</surname></persName>
		</author>
		<title level="m">Towards trustworthiness in artificial intelligence: Pushing for explainable, fair, robust, and private supervised machine learning</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b44">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Qiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F.-Y</forename><surname>Wang</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2407.15912</idno>
		<title level="m">The shadow of fraud: The emerging danger of AI-powered social engineering and its possible cure</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b45">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><surname>Toreini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Aitken</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">P</forename><surname>Coopamootoo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Elliott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">G</forename><surname>Zelaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Missier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Van Moorsel</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2007.08911</idno>
		<title level="m">Technologies for trustworthy machine learning: A survey in a socio-technical context</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b46">
	<analytic>
		<title level="a" type="main">Addressing machine learning bias to foster energy justice</title>
		<author>
			<persName><forename type="first">C.-F</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Napolitano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Kar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Yao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Energy Research &amp; Social Science</title>
		<imprint>
			<biblScope unit="volume">116</biblScope>
			<biblScope unit="page">103653</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b47">
	<analytic>
		<title level="a" type="main">A systematic review of adversarial machine learning attacks, defensive controls and technologies</title>
		<author>
			<persName><forename type="first">J</forename><surname>Malik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Muthalagu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">M</forename><surname>Pawar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b48">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Najafirad</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2007.00753</idno>
		<title level="m">Opportunities and challenges in deep learning adversarial robustness: A survey</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b49">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Hendrycks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Mu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">D</forename><surname>Cubuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zoph</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gilmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lakshminarayanan</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1912.02781</idno>
		<title level="m">AugMix: A simple data processing method to improve robustness and uncertainty</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b50">
	<analytic>
		<title level="a" type="main">A topological data analysis approach for detecting data poisoning attacks against machine learning based network intrusion detection systems</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">F</forename><surname>Monkam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>De Lucia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">D</forename><surname>Bastian</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers &amp; Security</title>
		<imprint>
			<biblScope unit="page">103929</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b51">
	<analytic>
		<title level="a" type="main">Balancing privacy and progress: a review of privacy challenges, systemic oversight, and patient perceptions in AI-driven healthcare</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Williamson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Prybutok</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page">675</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b52">
	<analytic>
		<title level="a" type="main">Implementing machine learning for enhanced critical infrastructure protection: A framework-centric approach for legacy systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Grunt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Potejko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Wiedza Obronna</title>
		<imprint>
			<biblScope unit="volume">286</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b53">
	<analytic>
		<title level="a" type="main">Security and privacy of internet of medical things: A contemporary review in the age of surveillance, botnets, and adversarial ML</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">U</forename><surname>Rasool</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">F</forename><surname>Ahmad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Rafique</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Qayyum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Qadir</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Network and Computer Applications</title>
		<imprint>
			<biblScope unit="volume">201</biblScope>
			<biblScope unit="page">103332</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b54">
	<analytic>
		<title level="a" type="main">Balancing innovation and regulation in the age of generative artificial intelligence</title>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">C</forename><surname>Wu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Information Policy</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
