<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Application of a Nine-Variate Prediction Ellipsoid for Normalized Data and Machine Learning Algorithms for Keystroke Dynamics Recognition</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Sergiy</forename><surname>Prykhodko</surname></persName>
							<email>prykhodko@nuos.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Admiral Makarov National University of Shipbuilding</orgName>
								<address>
									<addrLine>Heroes of Ukraine Ave., 9</addrLine>
									<postCode>54007</postCode>
									<settlement>Mykolaiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Odesa Polytechnic National University</orgName>
								<address>
									<addrLine>Shevchenko Ave., 1</addrLine>
									<postCode>65044</postCode>
									<settlement>Odesa</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Artem</forename><surname>Trukhov</surname></persName>
							<email>artem.trukhov@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Admiral Makarov National University of Shipbuilding</orgName>
								<address>
									<addrLine>Heroes of Ukraine Ave., 9</addrLine>
									<postCode>54007</postCode>
									<settlement>Mykolaiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Application of a Nine-Variate Prediction Ellipsoid for Normalized Data and Machine Learning Algorithms for Keystroke Dynamics Recognition</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">CF73C1CD5E42280454E920190D3CF8F7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:49+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>keystroke dynamics</term>
					<term>multivariate normal distribution</term>
					<term>Box-Cox transformation</term>
					<term>machine learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Keystroke dynamics recognition is a crucial element in enhancing security, enabling personalized user authentication, and supporting various identity verification systems. This study offers a comparative analysis of a nine-variate prediction ellipsoid for normalized data and machine learning algorithms, specifically an autoencoder, isolation forest, and one-class support vector machine, for keystroke dynamics recognition. Traditional methods often assume a multivariate normal distribution. However, real-world keystroke data typically deviate from this assumption, negatively impacting model performance. To address this, the dataset was normalized using the multivariate Box-Cox transformation, allowing the construction of a decision rule based on a nine-variate prediction ellipsoid for normalized data.</p><p>The results revealed that the application of the Box-Cox transformation significantly enhanced both the accuracy and robustness of the prediction ellipsoid. Although all models demonstrated strong performance, the nine-variate prediction ellipsoid for normalized data consistently outperformed the machine learning algorithms. The study highlights the importance of careful feature selection and multivariate normalizing transformations in keystroke dynamics recognition. Future studies could benefit from broader datasets that include a wider range of user behaviors, such as variations in environmental factors and longer key sequences.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In recent years, keystroke dynamics recognition has become an effective method for biometric authentication. By analyzing the unique patterns and rhythms individuals demonstrate while typing, such as keystroke duration and the intervals between key presses <ref type="bibr" target="#b0">[1]</ref>, it becomes possible to create a distinctive typing profile for each user. Unlike traditional biometric methods like fingerprint or facial recognition, keystroke dynamics offers a non-intrusive and continuous form of authentication <ref type="bibr" target="#b1">[2]</ref>. This makes it especially appealing for secure applications such as online banking, login systems, and access control.</p><p>The keystroke recognition process involves several essential stages to ensure accurate user authentication. It begins with the collection of a dataset, typically consisting of timestamps for keypress and key release events. From this raw data, key attributes such as hold times and inter-key intervals are extracted, which reveal the unique typing behavior of the user. A critical preprocessing step is the detection and removal of outliers: data points that significantly deviate from the expected behavior and could otherwise distort the results <ref type="bibr" target="#b2">[3]</ref>. This step is vital for creating a cleaner dataset and improving model accuracy. Once preprocessing is complete, classification models are applied to recognize new data inputs.</p><p>In traditional recognition tasks, classification typically involves assigning an object to one of several predefined categories. However, in the context of keystroke dynamics, one-class classification is more frequently employed. Unlike standard classification methods, which rely on a balanced dataset with both positive and negative examples, one-class classification focuses on modeling the target class without the need for negative samples. 
This approach is particularly beneficial in authentication systems, where the goal is to continuously verify that the current user matches the known profile, rather than distinguishing between multiple users <ref type="bibr" target="#b3">[4]</ref>. Closely related to outlier detection, one-class classification evaluates new data to determine if it aligns with the target profile, flagging any deviations as potential anomalies <ref type="bibr" target="#b4">[5]</ref>.</p><p>Prediction ellipsoids <ref type="bibr" target="#b5">[6]</ref> and machine learning algorithms <ref type="bibr" target="#b6">[7]</ref> are commonly utilized in the field of pattern recognition. The study aims to compare these models in keystroke dynamics recognition, assessing their performance, robustness, and applicability in real-world settings.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Literature review</head><p>Mathematical modeling techniques are pivotal in the field of keystroke dynamics recognition, aimed at improving accuracy and reliability. Recent advancements have integrated a range of approaches. Tree-based models, like random forests <ref type="bibr" target="#b7">[8]</ref>, classify data by constructing hierarchical structures, and learning feature splits that effectively differentiate between classes. Support vector-based methods <ref type="bibr" target="#b8">[9]</ref> focus on maximizing the margin between classes to create optimal decision boundaries, while neural network models <ref type="bibr" target="#b9">[10]</ref> capture complex patterns in keystroke data by processing information through multiple interconnected layers of nodes.</p><p>However, for user authentication systems, one-class classification is more commonly employed <ref type="bibr" target="#b10">[11]</ref>. Among the leading techniques are prediction ellipsoids and machine learning algorithms such as one-class support vector machine (OCSVM) <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13]</ref>, isolation forest (IF) <ref type="bibr" target="#b13">[14]</ref>, and autoencoder (AE) <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16]</ref>. OCSVM learns a decision boundary that separates target data points from outliers while maximizing the margin within the feature space. IF is an ensemble method that isolates anomalies by randomly selecting features and partitioning the data until anomalous points are isolated in smaller partitions, requiring fewer splits for target points. AE, as a neural network, learns an efficient representation of data by encoding inputs into a lower-dimensional space and reconstructing them. 
Anomalies are flagged by evaluating reconstruction errors, where higher discrepancies suggest potential outliers.</p><p>The use of prediction ellipsoids relies on the assumption that data conforms to a multivariate normal distribution <ref type="bibr" target="#b16">[17]</ref>. In practice, however, this assumption often does not hold for real-world keystroke data <ref type="bibr" target="#b17">[18]</ref>. To address this, normalization transformations are applied, adjusting the data to more closely align with a multivariate normal distribution and thereby improving the model's accuracy and robustness <ref type="bibr" target="#b18">[19]</ref><ref type="bibr" target="#b19">[20]</ref>. Techniques like univariate transformations (e.g., logarithmic or Box-Cox transformation) operate on individual features, while multivariate transformations, such as the multivariate Box-Cox transformation, consider relationships between features for a more holistic normalization approach.</p><p>This study focuses on comparing a prediction ellipsoid for normalized data and machine learning algorithms such as OCSVM, IF, and AE, which are widely used and offer distinct approaches to one-class classification. In the context of keystroke dynamics recognition, accuracy and efficiency are critical, making it essential to evaluate the effectiveness of different approaches. While a prediction ellipsoid offers interpretability and computational efficiency, it can encounter limitations when dealing with non-Gaussian data distributions. On the other hand, machine learning algorithms such as OCSVM, IF, and AE provide alternative techniques, each with its own advantages and challenges when applied to keystroke dynamics.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Materials and methods</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Keystroke dynamics dataset</head><p>In keystroke dynamics recognition, the quality and structure of the dataset play a crucial role in determining the performance and accuracy of the applied algorithms. A typical keystroke dynamics dataset records various temporal characteristics of an individual's typing behavior, including metrics such as the duration of key presses and the intervals between consecutive key events.</p><p>This study utilizes the CMU keystroke dynamics dataset, which captures detailed typing data. The dataset records various keystroke timing features in seconds, including how long each key is pressed and the intervals between key presses. Data collection was conducted over eight distinct sessions per subject, with at least one day between sessions. Each session required subjects to type the password 50 times, resulting in 400 samples per individual and a total of 20,400 samples across all participants.</p><p>The dataset is organized by subject identifier, session number, repetition count, and 31 timing features. Columns are labeled to reflect specific keystroke metrics: H.key denotes the hold time for a particular key, measuring the duration from key press to release. DD.key1.key2 represents the keydown-keydown interval, i.e., the time between pressing two consecutive keys, while UD.key1.key2 indicates the keyup-keydown interval, measuring the time between releasing one key and pressing the next. Notably, UD times can be negative in some cases, and the sum of H times and UD times corresponds to the DD time for a given digraph.</p><p>To simplify the modeling process, this study focuses on 9 key properties, the hold times of particular keys, forming the feature vector: X = { H.t, H.i, H.e, H.5, H.R, H.o, H.a, H.n, H.l }.</p></div>
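To make the column layout described above concrete, the sketch below builds a tiny synthetic frame following the naming scheme (subject, sessionIndex, rep, H.key, DD.key1.key2, ...) and selects the nine hold-time columns. The data values and the helper name `extract_hold_times` are illustrative, not taken from the actual CMU files.

```python
import pandas as pd

# Toy stand-in for the CMU dataset layout (timing values are invented):
rows = [
    {"subject": "s015", "sessionIndex": 1, "rep": 1,
     "H.t": 0.075, "H.i": 0.070, "H.e": 0.078, "H.5": 0.063, "H.R": 0.069,
     "H.o": 0.088, "H.a": 0.086, "H.n": 0.075, "H.l": 0.075, "DD.t.i": 0.21},
    {"subject": "s004", "sessionIndex": 1, "rep": 1,
     "H.t": 0.091, "H.i": 0.066, "H.e": 0.080, "H.5": 0.071, "H.R": 0.064,
     "H.o": 0.093, "H.a": 0.082, "H.n": 0.071, "H.l": 0.069, "DD.t.i": 0.19},
]
df = pd.DataFrame(rows)

# The nine hold-time features forming the vector X in Section 3.1.
HOLD_FEATURES = ["H.t", "H.i", "H.e", "H.5", "H.R", "H.o", "H.a", "H.n", "H.l"]

def extract_hold_times(frame, subject):
    """Select the nine hold-time columns for one subject as a NumPy array."""
    return frame.loc[frame["subject"] == subject, HOLD_FEATURES].to_numpy()

X = extract_hold_times(df, "s015")
print(X.shape)  # (1, 9)
```

With the real dataset, the same selection would be applied after loading the CSV file, yielding a 400 x 9 matrix per subject.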
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Outlier removal</head><p>After extracting feature vectors, the subsequent step involves detecting and removing outliers. This process is crucial because outliers can distort the analysis and undermine the performance of recognition models. By eliminating these anomalies, the dataset is refined, ensuring that the data better reflects typical user behavior, which in turn enhances model training.</p><p>One commonly used method for outlier detection is based on the squared Mahalanobis distance (SMD). However, SMD assumes that the data follows a multivariate normal distribution, which might not always be the case. To verify this assumption, it is necessary to assess the data's normality through statistical tests like the Mardia test, which is used in the study <ref type="bibr" target="#b20">[21]</ref>. This test evaluates two aspects of multivariate normality: skewness 𝛽 1 and kurtosis 𝛽 2 .</p><p>The Mardia test calculates skewness scaled by 𝑁/6, which follows a chi-square distribution with 𝑝(𝑝 + 1)(𝑝 + 2)/6 degrees of freedom, where 𝑝 is the number of variables and 𝑁 is the sample size. Kurtosis is compared to the normal distribution, with a mean of 𝑝(𝑝 + 2) and a variance of 8𝑝(𝑝 + 2)/𝑁. By comparing the calculated skewness and kurtosis values with those expected under a normal distribution, the test helps identify significant deviations from multivariate normality. If the data deviates significantly, normalization is required to transform a non-Gaussian vector 𝑋 = (𝑋 1 , 𝑋 2 , … , 𝑋 9 )^T into a Gaussian vector 𝑍 = (𝑍 1 , 𝑍 2 , … , 𝑍 9 )^T.</p><p>Normalization transformations are essential in data analysis and machine learning, as they help stabilize variance, reduce skewness, and better align data with a multivariate Gaussian distribution. Univariate transformations, such as logarithmic transformations and the univariate Box-Cox transformation (BCT), are typically applied to individual features. 
The logarithmic transformation is effective for stabilizing variance in positively skewed data, while the univariate BCT can handle both positive and negative skewness by optimizing a power parameter for each feature. However, univariate transformations treat each feature in isolation even when features are interdependent, and the BCT can be sensitive to outliers due to the complexity of parameter estimation.</p><p>In contrast, multivariate transformations like the multivariate BCT consider the relationships between multiple features. The multivariate Box-Cox transformation builds upon the principles of the univariate Box-Cox transformation but applies it across multiple variables at once:</p><formula xml:id="formula_0">Z_j = (X_j^{λ_j} − 1)/λ_j, λ_j ≠ 0; Z_j = ln(X_j), λ_j = 0.<label>(1)</label></formula><p>While it is more computationally demanding, this transformation preserves correlations between variables, offering a more robust approach to normalizing complex datasets. The multivariate BCT improves the alignment of data with a multivariate normal distribution by optimizing parameters through methods such as maximizing the log-likelihood of transformed data, as discussed in the study <ref type="bibr" target="#b20">[21]</ref>. Once the multivariate BCT is applied, the Mardia test should be repeated to verify the success of the normalization.</p><p>After normalization, outlier removal is performed iteratively using the SMD method, removing one data point per iteration based on the largest distance. This ensures that the most extreme values are eliminated first, leading to a cleaner and more representative dataset for subsequent analysis.</p></div>
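Equation (1) can be sketched as a component-wise transform. This shows only the forward map; the maximum-likelihood estimation of the λ vector used in the paper is a separate optimization step not shown here, and the helper names are illustrative.

```python
import numpy as np

def box_cox(x, lam):
    """Component-wise Box-Cox transform, equation (1):
    (x^lam - 1)/lam when lam != 0, ln(x) when lam == 0."""
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.log(x)
    return (x ** lam - 1.0) / lam

def multivariate_box_cox(X, lambdas):
    """Apply the per-component Box-Cox map to each column of X.

    Only the forward transform of the multivariate BCT; the lambda vector
    would normally come from maximizing the log-likelihood of the data.
    """
    X = np.asarray(X, dtype=float)
    return np.column_stack([box_cox(X[:, j], lam) for j, lam in enumerate(lambdas)])

X = np.array([[0.075, 0.070],
              [0.088, 0.086]])
Z = multivariate_box_cox(X, [1.0, 0.0])  # lam=1 shifts, lam=0 takes logs
```

With lam = 1 the transform reduces to x − 1, and with lam = 0 it reduces to ln(x), which makes the two branches of equation (1) easy to sanity-check.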
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Prediction ellipsoid</head><p>A prediction ellipsoid is a multivariate tool used to assess whether a data point belongs to a specific target class. It operates by calculating the SMD for each point, which forms the left side of the comparison equation. This distance is then measured against a critical value derived from the chi-square distribution, which serves as the right side of the equation <ref type="bibr" target="#b21">[22]</ref>:</p><formula xml:id="formula_1">(X − X̄)^T S_X^{−1} (X − X̄) = χ²_{9, 0.005}.<label>(2)</label></formula><p>The SMD follows a chi-square distribution with degrees of freedom corresponding to the number of features in the data, which in this case is 9. This allows for the calculation of a critical value based on the desired significance level, commonly set at 0.005 for one-class classification tasks. If a data point's SMD exceeds this critical value, the point is classified as an anomaly, meaning it is likely part of a different class. If the SMD falls below the threshold, the point is considered an instance of the target class.</p><p>In cases where the data does not follow a normal distribution, a normalization process is implemented before constructing the nine-variate prediction ellipsoid, which is represented by the equation:</p><formula xml:id="formula_2">(Z − Z̄)^T S_Z^{−1} (Z − Z̄) = χ²_{9, 0.005}.<label>(3)</label></formula><p>For 9 degrees of freedom at a significance level of 0.005, the chi-square distribution provides a critical value of 23.59. Any data point with an SMD below this value is deemed to lie within the ellipsoid, signifying its membership in the target class.</p></div>
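A minimal sketch of this decision rule, assuming scipy is available: the SMD of a point is compared against the chi-square critical value, which is approximately 23.59 for 9 degrees of freedom at a 0.005 significance level, matching the text. The helper names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def fit_ellipsoid(Z):
    """Estimate the mean vector and covariance matrix from training data."""
    Z = np.asarray(Z, dtype=float)
    return Z.mean(axis=0), np.cov(Z, rowvar=False)

def in_ellipsoid(z, mean, cov, alpha=0.005):
    """Decision rule (3): the point belongs to the target class iff its SMD
    does not exceed the chi-square critical value."""
    d = np.asarray(z, dtype=float) - mean
    smd = d @ np.linalg.inv(cov) @ d
    return bool(smd <= chi2.ppf(1.0 - alpha, df=d.size))

# For 9 degrees of freedom at alpha = 0.005 the threshold matches the text.
print(round(chi2.ppf(0.995, df=9), 2))  # 23.59
```

The center of the ellipsoid always has SMD = 0, so it is trivially classified as the target class, while points far from the mean in Mahalanobis terms are rejected.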
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Machine learning algorithms</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.1.">One-class support vector machine</head><p>OCSVM constructs a decision boundary that separates target data from the rest of the feature space by finding a hyperplane with the maximum margin. This boundary is optimized by maximizing the distance between the hyperplane and the origin within a high-dimensional feature space. The OCSVM employs an implicit transformation function, denoted as φ(•), which is a non-linear projection evaluated through a kernel function. This kernel function maps the original feature space into a potentially higher-dimensional one: 𝑘(𝑥, 𝑦) = φ(𝑥) • φ(𝑦) <ref type="bibr" target="#b22">[23]</ref>.</p><p>Several kernel functions are commonly used in OCSVM. The linear kernel computes dot products in the original feature space, making it ideal for linearly separable data. The polynomial kernel captures non-linear relationships by raising dot products to specific powers, allowing it to model more complex decision boundaries. The radial basis function kernel, using a Gaussian function, effectively captures intricate relationships, particularly in cases where data is not linearly separable. The sigmoid kernel, based on the hyperbolic tangent function, excels at capturing non-linear patterns, making it useful for handling complex relationships between features and classes.</p><p>The decision boundary that OCSVM learns is defined by the following equation:</p><p>𝑔(𝑥) = 𝜔 𝑇 φ(𝑥) − 𝜌, where 𝜔 represents the normal vector of the hyperplane, and 𝜌 is the bias term.</p><p>OCSVM is formulated as a quadratic optimization problem, aiming to minimize the weight vector 𝜔 while maximizing the margin, subject to specific constraints. 
The optimization problem can be expressed as:</p><formula xml:id="formula_3">min_{ω, ξ, ρ} (1/2)‖ω‖² − ρ + (1/(νN)) ∑_{i=1}^{N} ξ_i,</formula><p>subject to: ω^T φ(x_i) ≥ ρ − ξ_i, ξ_i ≥ 0, where ξ_i are slack variables that account for separation errors, and ν ∈ (0, 1] is the regularization parameter, which controls the balance between the number of outliers and the number of support vectors.</p><p>The optimization problem is typically solved in its dual form, producing a decision function that classifies new data points as either belonging to the target class or as anomalies. The final decision function is:</p><formula xml:id="formula_4">f(x) = sgn(g(x)).</formula><p>The function returns a positive value for data points belonging to the target class and a negative value for anomalies.</p></div>
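As a hedged illustration of the configuration described later in Section 4.3 (RBF kernel, gamma="auto", ν bounding the training-outlier fraction), scikit-learn's OneClassSVM can be trained on synthetic hold-time-like data. The data, the ν value, and the impostor point below are placeholders, not the paper's experiment.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Synthetic stand-in for one user's nine hold-time features.
X_train = rng.normal(loc=0.08, scale=0.01, size=(200, 9))

# RBF kernel, gamma = 1/n_features ("auto"), nu bounding the outlier fraction.
ocsvm = OneClassSVM(kernel="rbf", gamma="auto", nu=0.05).fit(X_train)

impostor = np.full((1, 9), 0.5)  # far outside the training cloud
print(ocsvm.predict(impostor))   # [-1]  (flagged as an anomaly)
```

The sign convention matches the decision function f(x) = sgn(g(x)) above: +1 for points inside the learned boundary, −1 for anomalies.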
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.2.">Isolation forest</head><p>Unlike traditional methods that rely on modeling target points, IF takes a distinctive approach by focusing directly on isolating anomalies. This technique works by constructing isolation trees, where internal nodes represent features and their split values, and leaf nodes represent individual data points. The construction of isolation trees begins by randomly selecting a feature and a corresponding split value within its range. This random process continues until each data point is isolated in its own leaf node or until a specified maximum tree depth is reached <ref type="bibr" target="#b23">[24]</ref>. The strength of the method lies in the resulting distribution of path lengths: anomalies, being easier to isolate since they reside in sparser regions of the feature space, require fewer splits from root to leaf nodes compared to normal data points. As a result, the average path length from the root to the leaf node for each data point is calculated across all trees in the forest.</p><p>The anomaly score for each data point is derived based on its average path length using the following formula:</p><formula xml:id="formula_5">s(x, n) = 2^(−E(h(x)) / c(n)),</formula><p>where 𝐸(ℎ(𝑥)) is the average path length of data point 𝑥 across 𝑡 isolation trees:</p><formula xml:id="formula_6">E(h(x)) = (1/t) ∑_{i=1}^{t} h_i(x),</formula><p>and 𝑐(𝑛) represents the average path length of an unsuccessful search in a binary tree:</p><formula xml:id="formula_7">c(n) = 2H(n − 1) − 2(n − 1)/n,</formula><p>where 𝐻(𝑖) = ln(𝑖) + γ and γ is the Euler-Mascheroni constant. Data points with shorter path lengths, closer to the root of the tree, are more likely to be anomalies, while those with longer paths are considered targets. Based on the anomaly scores, a threshold is set to classify data points as either anomalies or normal. Points with scores above the threshold are flagged as anomalies, while those below are classified as normal data points.</p></div>
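The path-length normalization c(n) and the anomaly score s(x, n) defined above can be computed directly; the sketch below assumes the standard H(i) ≈ ln(i) + γ approximation from the isolation forest literature.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def harmonic(i):
    """Approximate harmonic number H(i) = ln(i) + gamma."""
    return math.log(i) + EULER_GAMMA

def c(n):
    """Average path length of an unsuccessful search in a binary tree of n points."""
    if n <= 1:
        return 0.0
    return 2.0 * harmonic(n - 1) - 2.0 * (n - 1) / n

def anomaly_score(avg_path_length, n):
    """s(x, n) = 2^(-E(h(x)) / c(n)); scores near 1 indicate anomalies."""
    return 2.0 ** (-avg_path_length / c(n))

# Short average paths (easy to isolate) score higher than long ones.
print(round(anomaly_score(2, 256), 2), round(anomaly_score(12, 256), 2))
```

With a subsample size of 256 (the max_samples value used in Section 4.3), a point isolated after only two splits scores well above 0.5, while a deeply buried point scores well below it, which is exactly the thresholding behavior the text describes.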
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.3.">Autoencoder</head><p>An autoencoder is a type of artificial neural network designed for learning efficient data representations, dimensionality reduction, and anomaly detection. As an unsupervised learning method, it consists of two key components: an encoder and a decoder.</p><p>The primary goal of an autoencoder is to learn a compressed and meaningful representation of the input data. The encoder's function is to map the input data into latent space, effectively compressing the data into a lower-dimensional form. This is typically achieved through a series of layers, where each layer applies non-linear transformations to the input data. The resulting latent space captures the most relevant features and patterns of the input, condensing its essential information. The decoder, on the other hand, is tasked with reconstructing the original input data from its latent space representation. Its architecture generally mirrors that of the encoder, but in reverse, and it applies a series of non-linear transformations to map the latent representation back into the original data format <ref type="bibr" target="#b24">[25]</ref>.</p><p>During training, the autoencoder aims to minimize reconstruction error, which quantifies the difference between the original input and the reconstructed output. This is typically achieved by optimizing a loss function, such as mean squared error or binary cross-entropy, using gradient-based methods like backpropagation.</p><p>In recognition tasks, the autoencoder is trained using only instances of the target class, allowing it to learn the typical patterns and structure of normal data. When the autoencoder encounters new data, it will reconstruct the input with a low error if it belongs to the target class. However, if the input represents an anomaly, the reconstruction error will be higher, as the autoencoder is not well-equipped to accurately reconstruct unfamiliar instances. 
By establishing a threshold for the reconstruction error, anomalies can be detected and distinguished from normal instances.</p></div>
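A rough stand-in for this workflow: scikit-learn's MLPRegressor trained to reproduce its own input plays the role of a small autoencoder. The paper does not specify an architecture, so the bottleneck size, the synthetic training data, and the threshold percentile below are all assumptions for this sketch.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Toy "target class": 9-D points with strongly correlated coordinates,
# so a narrow bottleneck can capture their structure.
base = rng.normal(size=(300, 1))
X_train = base @ rng.normal(size=(1, 9)) + 0.05 * rng.normal(size=(300, 9))

# An MLP regressing X onto itself acts as a small autoencoder;
# hidden_layer_sizes=(3,) is the assumed bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0)
ae.fit(X_train, X_train)

def reconstruction_error(model, X):
    """Per-sample mean squared reconstruction error."""
    return np.mean((model.predict(X) - X) ** 2, axis=1)

# Threshold set from the training-error distribution (assumed 99th percentile).
threshold = np.percentile(reconstruction_error(ae, X_train), 99)
anomaly = rng.normal(loc=5.0, size=(1, 9))  # off the learned manifold
flagged = reconstruction_error(ae, anomaly)[0] > threshold
```

A dedicated deep-learning framework would normally be used instead; the point here is only the thresholding logic, where inputs the model reconstructs poorly are flagged as anomalies.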
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Evaluation metrics</head><p>In one-class classification, where the objective is to differentiate between target instances and anomalies, evaluation metrics such as specificity, recall, precision, F1 score, and accuracy are crucial for assessing model performance <ref type="bibr" target="#b25">[26]</ref>.</p><p>These metrics are derived from the classification outcomes, which can be categorized into four groups: true positives (TP), representing correctly identified anomalies; false positives (FP), indicating instances mistakenly classified as anomalies; true negatives (TN), denoting correctly identified target instances; and false negatives (FN), reflecting actual anomalies that were misclassified as target instances.</p><p>Specificity measures the proportion of accurately identified target instances out of all target instances:</p><formula xml:id="formula_8">Specificity = TN / (TN + FP).</formula><p>Recall gauges the model's ability to detect all actual anomalies, measuring the proportion of true anomalies correctly identified out of all existing anomalies:</p><formula xml:id="formula_9">Recall = TP / (TP + FN).</formula><p>Precision assesses the model's reliability when identifying anomalies, showing the proportion of true anomalies among all instances classified as anomalies:</p><formula xml:id="formula_10">Precision = TP / (TP + FP).</formula><p>F1 score provides a balanced evaluation by calculating the harmonic mean of precision and recall, offering a single metric that accounts for both aspects:</p><formula xml:id="formula_11">F1 score = 2 · Precision · Recall / (Precision + Recall).</formula><p>Finally, the accuracy metric measures the overall correctness of the classification, taking both target instances and anomalies into account: Accuracy = (TP + TN) / (TP + TN + FP + FN).</p><p>After constructing the models, they will be evaluated using these metrics, enabling a comprehensive analysis of their performance.</p></div>
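The five metrics follow directly from the confusion counts; a small helper (the function name and the example counts are illustrative):

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the five one-class metrics from Section 3.5."""
    specificity = tn / (tn + fp)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"specificity": specificity, "recall": recall,
            "precision": precision, "f1": f1, "accuracy": accuracy}

m = classification_metrics(tp=90, fp=5, tn=95, fn=10)
print(round(m["recall"], 2), round(m["specificity"], 2))  # 0.9 0.95
```

Note that the harmonic mean makes F1 drop sharply when either precision or recall is low, which is why it is reported alongside accuracy.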
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Data preparation and outlier removal</head><p>For the experiments, data with the identifier s015 was randomly selected for analysis, while data from s004 was used as a test set to evaluate the recognition of keystroke dynamics from a different individual. Outlier detection began by assessing whether the s015 dataset adhered to a multivariate normal distribution. The Mardia test revealed significant deviations: the test statistic for multivariate skewness 𝑁𝛽 1 /6, at 391.54, exceeded the chi-square critical value of 215.53 for 165 degrees of freedom at a significance level of 0.005. Similarly, the multivariate kurtosis statistic 𝛽 2 , with a value of 113.32, surpassed the critical value of 102.62 (mean 99, variance 1.98, significance level 0.005), indicating non-normality and necessitating further normalization.</p><p>Normalization parameters were estimated using the maximum likelihood method, yielding the following estimates for the multivariate BCT:</p><formula xml:id="formula_12">λ 1 ̂ = 0.9939, λ 2 ̂ = 1.3605, λ 3 ̂ = 1.2202, λ 4 ̂ = 1.7521,</formula><p>After applying the nine-variate Box-Cox transformation with components (1), the Mardia test was performed again. The skewness statistic 𝑁𝛽 1 /6 was reduced to 212.07, which is below the chi-square threshold of 215.53, but the kurtosis statistic 𝛽 2 remained slightly elevated at 109.01, still above the critical value of 102.62. Despite some remaining non-normality, primarily due to outliers, the transformed dataset better approximated a multivariate normal distribution, improving the conditions for using SMD.</p><p>Subsequently, SMD was computed for each feature vector to identify potential outliers. These distances were compared to the chi-square critical value of 23.59 for 9 degrees of freedom at a 0.005 significance level. Any vectors with SMD exceeding this value were classified as outliers. 
The most extreme outlier, vector number 295 with an SMD of 37.44, was removed.</p><p>This process of outlier removal was iteratively repeated until all extreme points were excluded. After eliminating 6 outliers, the multivariate kurtosis statistic finally fell below the critical value, confirming that outliers had a substantial impact on the dataset's distribution.</p><p>Table <ref type="table" target="#tab_0">1</ref> lists the SMD values and the corresponding indices for each outlier that was removed. This iterative process continued until no further significant outliers were detected, resulting in a refined dataset that was less affected by extreme values.</p><p>To mitigate any potential bias related to the order of the data, the final sample was randomly shuffled to ensure an even distribution across the training and test sets. The shuffled data was then split into two equal parts, with 195 vectors in each set.</p><p>The training set was utilized to build both the prediction ellipsoid and the machine learning models, allowing them to capture the underlying patterns and relationships within the data. Meanwhile, the test set was reserved to assess the performance of the models on data not previously encountered during training.</p><p>Following this outlier removal process, the final set was obtained with the following vector of means: 𝑋 ̅ = {0.07525; 0.07022; 0.07823; 0.063; 0.06911; 0.08829; 0.08605; 0.07505; 0.0751}. Table <ref type="table">2</ref> presents the covariance matrix.</p></div>
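The iterative SMD-based removal described above can be sketched as follows, assuming scipy for the chi-square critical value. The synthetic data and the planted outlier are illustrative, not the s015 sample; note that the recorded indices refer to positions in the shrinking array.

```python
import numpy as np
from scipy.stats import chi2

def remove_outliers_smd(X, alpha=0.005):
    """Iteratively drop the point with the largest SMD while that SMD
    exceeds the chi-square critical value (Sections 3.2 and 4.1)."""
    X = np.asarray(X, dtype=float)
    crit = chi2.ppf(1 - alpha, df=X.shape[1])  # 23.59 for 9 features
    removed = []
    while True:
        mean = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        d = X - mean
        smd = np.einsum("ij,jk,ik->i", d, cov_inv, d)  # SMD per row
        worst = int(np.argmax(smd))
        if smd[worst] <= crit:
            return X, removed
        removed.append((worst, float(smd[worst])))  # index in current array
        X = np.delete(X, worst, axis=0)

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 9))
X[0] = 12.0  # plant one extreme vector
cleaned, removed = remove_outliers_smd(X)
```

Removing one point per iteration and then re-estimating the mean and covariance matters: a single extreme vector inflates the covariance and can mask other outliers, which is exactly why the paper repeats the Mardia test after each stage.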
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 2</head><p>The covariance matrix of the final set.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 3</head><p>The covariance matrix of the training set.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Prediction ellipsoid construction</head><p>The prediction ellipsoid should be constructed using data that follows a normal distribution, so verifying the data's normality is a necessary first step. Based on the Mardia test results, the multivariate distribution of this training sample deviates from normality. The test statistic for multivariate skewness 𝑁𝛽 1 /6 is 286.99, exceeding the critical value of 215.53 from the chi-square distribution for 165 degrees of freedom at a 0.005 significance level. Additionally, the test statistic for multivariate kurtosis 𝛽 2 is 105.43, also exceeding the critical value of 104.19, given a mean of 99, a variance of 4.062, and a 0.005 significance level.</p><p>To address this non-normality, the training set is normalized using a nine-variate BCT. The optimal parameters for this transformation were estimated using the maximum likelihood method:</p><formula xml:id="formula_15">λ 1 ̂ = 1.3676, λ 2 ̂ = 1.4807, λ 3 ̂ = 1.078, λ 4 ̂ = 1.7393, λ 5 ̂ = 2.1004, λ 6 ̂ = 1.1498, λ 7 ̂ = 1.566, λ 8 ̂ = 1.1685, λ 9 ̂ = 2.1146.</formula><p>After applying the BCT with components (1), the normalized training set has a mean vector 𝑍 ̅ = {0.70932; 0.66184; 0.86764; -0.57016; -0.47427; 0.81642; 0.62417; -0.81443; -0.47084}. The covariance matrix 𝑆 𝑍 is presented in Table <ref type="table">4</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 4</head><p>The covariance matrix of the normalized training set (matrix entries garbled in extraction and omitted).</p><p>The Mardia test performed on the normalized training set indicates conformity with multivariate normality. The test statistic for multivariate skewness 𝑁𝛽₁/6 is 175.47, which is below the critical value of 215.53. Similarly, the test statistic for multivariate kurtosis 𝛽₂ is 99.76, which does not exceed the critical value of 104.19, confirming that the normalized set follows a multivariate normal distribution.</p></div>
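The Mardia statistics quoted above can be computed with a short sketch of the standard formulas (the function name is hypothetical). For p = 9 variates, the skewness statistic 𝑁𝛽₁/6 is referred to a chi-square distribution with p(p+1)(p+2)/6 = 165 degrees of freedom, and the kurtosis 𝛽₂ to a normal distribution with mean p(p+2) = 99.

```python
import numpy as np

def mardia_statistics(X):
    """Return Mardia's multivariate skewness statistic N*b1/6 and the
    multivariate kurtosis b2 for an (N, p) sample."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    diff = X - X.mean(axis=0)
    # MLE (biased) covariance, as is standard for Mardia's statistics
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    G = diff @ S_inv @ diff.T            # matrix of g_ij cross terms
    b1 = (G ** 3).sum() / n ** 2         # multivariate skewness
    b2 = (np.diag(G) ** 2).mean()        # multivariate kurtosis
    return n * b1 / 6.0, b2
```

Under multivariate normality b2 fluctuates around p(p+2), which matches the reported mean of 99 for the nine-variate case.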
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Implementation of machine learning algorithms</head><p>This section outlines the implementation of the machine learning algorithms used to recognize the keystroke dynamics data. Specifically, we explore the One-Class Support Vector Machine, Isolation Forest, and autoencoder models, each selected for their unique capabilities in anomaly detection and one-class classification.</p><p>The OCSVM is implemented in Python using the OneClassSVM object from the scikit-learn library. This implementation allows for the customization of several critical parameters, including nu, which determines the acceptable proportion of training errors and establishes an upper limit for the fraction of outliers in the training dataset. The radial basis function kernel was chosen for its flexibility in modeling non-linear relationships among data points. The gamma parameter is set to "auto," so that its value is computed automatically as the inverse of the number of features; gamma controls the range of influence of each training example, with lower values extending the influence and higher values localizing it.</p><p>The IF algorithm, also implemented through scikit-learn <ref type="bibr" target="#b26">[27]</ref>, provides several tunable parameters for optimizing performance. A key parameter is the contamination level, which defines the threshold for categorizing new data points as either target or anomalous. After experimentation, a contamination value of 0.05 was determined to effectively balance the detection of true anomalies against false positives.</p><p>Additional significant parameters include n_estimators, the number of decision trees in the forest (set to 100); max_samples, the maximum number of samples per tree (set to 256); and max_features, the maximum number of features considered when splitting each node (set to 1.0 to utilize all features). 
To classify a sample as either target or anomalous, we compare the anomaly score against a defined threshold. The scores range from negative to positive values; negative scores indicate a higher likelihood of being a target, while positive scores suggest a greater probability of being anomalous. The selection of the threshold value is application-dependent; in this analysis, a threshold of 0 yielded optimal results.</p><p>For the AE model, we utilized TensorFlow and Keras, leveraging their combined strengths in flexibility, scalability, and ease of use. Keras, as a high-level API for building neural networks atop TensorFlow, simplifies the process of constructing and training models, while TensorFlow provides the underlying computational framework, ensuring efficient performance during training and inference.</p><p>Before passing the data into the neural network, min-max normalization is applied to each feature individually, scaling all features to the range [0, 1]. This standardizes the features, promoting stable and efficient learning.</p><p>The AE architecture consists of an input layer configured to accept a nine-variate representation of the data. The model includes fully connected layers for the encoding and decoding operations. During the encoding phase, the input data is compressed into a lower-dimensional representation, progressively reducing the dimensionality from 9 to 8 and then to 6, creating a bottleneck in the network structure. This bottleneck layer compels the model to capture the essential features of the input data while minimizing redundancy <ref type="bibr" target="#b27">[28]</ref>.</p><p>Each encoding layer employs rectified linear unit (ReLU) activation functions, introducing the non-linearity needed to extract complex features. 
The decoding phase reverses this process, expanding the dimensionality back to 8 and ultimately to the original 9 dimensions, using ReLU activation functions to retain the learned non-linear relationships. The final layer uses a sigmoid activation function to constrain output values to the range [0, 1], a common choice for reconstruction and binary classification tasks that require smooth and interpretable outputs. The structure of the AE is illustrated in Figure <ref type="figure" target="#fig_3">1</ref>. To train the model, we employed the Adam optimizer in conjunction with binary cross-entropy loss, a standard objective for reconstruction tasks aimed at minimizing the discrepancy between the original and reconstructed data. The Adam optimizer combines the strengths of AdaGrad and RMSProp <ref type="bibr" target="#b28">[29]</ref>, dynamically adjusting the learning rate during training for faster convergence and improved performance. The binary cross-entropy loss effectively measures the difference between the input and the reconstructed output. Training runs for 25 epochs with a batch size of 16. Shuffling the data at each epoch introduces variability, preventing the model from memorizing the training sequence and thus enhancing generalization.</p></div>
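A minimal sketch of the scikit-learn settings described in this section; the training array and the OCSVM nu value are hypothetical stand-ins (the text does not state nu), while the RBF kernel, gamma="auto", contamination = 0.05, n_estimators = 100, max_samples = 256, and max_features = 1.0 follow the text:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
X_train = rng.normal(size=(195, 9))  # stand-in for the 195-vector training set

# One-Class SVM: RBF kernel, gamma="auto" uses 1/n_features;
# nu bounds the fraction of training errors (value here is illustrative).
ocsvm = OneClassSVM(kernel="rbf", gamma="auto", nu=0.05).fit(X_train)

# Isolation Forest with the parameters quoted in the text; max_samples
# is capped at the sample size because 256 exceeds the 195 vectors here.
iforest = IsolationForest(
    n_estimators=100,
    max_samples=min(256, len(X_train)),
    max_features=1.0,
    contamination=0.05,
    random_state=0,
).fit(X_train)

# predict() labels each vector +1 (target) or -1 (anomalous);
# with contamination=0.05 roughly 5% of training vectors are flagged.
labels = iforest.predict(X_train)
```

Note that scikit-learn's sign convention for the raw decision scores may differ from the threshold convention described in the text, so any custom thresholding should be validated against `predict()`.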
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results</head><p>Table <ref type="table" target="#tab_2">5</ref> displays a comparison of the recognition performance of the Prediction Ellipsoid for Non-Gaussian Data (PENGD) (1), the Prediction Ellipsoid for Normalized Data (PEND) <ref type="bibr" target="#b6">(7)</ref>, the One-Class Support Vector Machine (OCSVM), the Isolation Forest (IF), and the Autoencoder (AE). All models evaluated in this study demonstrate commendable performance in keystroke dynamics recognition. However, the PENGD has the lowest accuracy among the models assessed, indicating that while it can capture some patterns, it struggles with more complex datasets, particularly because of the challenges posed by non-Gaussian data distributions. Both the OCSVM and the AE exhibit very good performance across multiple metrics, reflecting their ability to identify true anomalies with high precision and recall. These models effectively leverage their respective architectures to capture intricate relationships within the data. In contrast, the IF did not perform as well as the other models.</p><p>Ultimately, the PEND emerged as the best-performing model, achieving the highest scores across the key evaluation metrics. This reinforces the significance of normalization transformations in enhancing prediction ellipsoid models for recognition tasks, particularly in scenarios involving non-Gaussian data distributions.</p></div>
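The metrics in Table 5 follow the usual confusion-matrix definitions; a minimal helper (the function name and the example counts are hypothetical) makes the relationships explicit:

```python
def classification_metrics(tp, tn, fp, fn):
    """Specificity, recall, precision, F1 score and accuracy
    computed from confusion-matrix counts."""
    specificity = tn / (tn + fp)
    recall = tp / (tp + fn)          # sensitivity / true-positive rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return specificity, recall, precision, f1, accuracy
```

As a consistency check against Table 5, the PEND row satisfies F1 = 2PR/(P + R): 2·0.9974·0.9700/(0.9974 + 0.9700) ≈ 0.9835, matching the reported value.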
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Discussion</head><p>All models in this study exhibit strong performance in keystroke dynamics recognition, but PEND stands out as the best performer. Its precision, recall, and F1 score are the highest, demonstrating its ability to handle keystroke dynamics recognition tasks with remarkable accuracy. The performance of OCSVM and AE is also notable, offering very good results, while IF lags slightly behind the others.</p><p>The findings underscore that applying the nine-variate BCT played a critical role in boosting model performance, particularly by improving how the models handle non-Gaussian data. Multivariate transformations like the BCT take into account the correlations between variables, allowing for a more accurate and comprehensive prediction ellipsoid. This, in turn, enhances the model's ability to identify intricate patterns in the data, improving both its accuracy and reliability.</p><p>However, there are certain disadvantages to using a prediction ellipsoid for normalized data. A robust model typically requires a dataset of at least 100 instances, which can be a challenge for smaller datasets. Additionally, selecting the most appropriate normalization transformation can be complex, especially for datasets with intricate distributions or a large number of outliers. Another important factor is the choice of significance level, as this influences the efficiency and reliability of the prediction ellipsoid.</p><p>Limitations also arise from the outlier removal process, as deleting 10 outliers during preprocessing may cause the model to miss some underlying patterns in the data. 
To mitigate this, more advanced normalization techniques, such as the Johnson transformation, could be considered to better align the model with the dataset's distribution, improving its ability to generalize across all relevant data points.</p><p>The primary aim of this paper was to address the challenge posed by non-Gaussian data distributions in the context of biometric identification based on keystroke dynamics. We emphasized the importance of normalization techniques, specifically the multivariate Box-Cox transformation, in enhancing model accuracy with such data.</p><p>The dataset used in this study represents a 10-character password length, which may not be optimal for real-world applications. A password length of 20-22 characters, without the use of uppercase characters, is generally considered preferable, as it allows for more comprehensive feature extraction. Beyond keystroke length and character variety, several contextual factors <ref type="bibr">[30]</ref> were not considered in this research; however, these factors could play an important role in biometric identification based on keystroke dynamics.</p><p>In future research, a broader dataset that includes data reflecting the impact of environmental factors could be used, along with extended key sequences. The inclusion of these factors would provide a more realistic representation of user behavior. Additionally, the application of other normalization techniques, such as the Johnson transformation, could further enhance model accuracy by addressing the complexity of non-Gaussian distributions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusions</head><p>The focus of this paper was to address the challenges associated with non-Gaussian data distributions in the context of keystroke dynamics recognition. The study compared the performance of prediction ellipsoid models and machine learning algorithms, including OCSVM, IF, and AE. All models demonstrated a high probability of recognition. Notably, the prediction ellipsoid for non-Gaussian data had the lowest accuracy, highlighting the challenges posed by complex datasets. However, by applying the multivariate BCT, the prediction ellipsoid model for normalized data showed significant performance improvements, emphasizing the critical role of normalization when addressing non-Gaussian data distributions. The BCT not only improved the overall accuracy but also deepened the understanding of data patterns by accounting for correlations between variables, ultimately leading to a more precise prediction ellipsoid.</p><p>Despite these advancements, the study identified certain limitations and challenges. One significant drawback is the necessity for a large dataset, as constructing a reliable prediction ellipsoid model generally requires at least 100 instances. Furthermore, selecting the optimal normalization transformation remains a complex task, especially when dealing with datasets that contain outliers or exhibit highly intricate distributions. Another challenge lies in determining the appropriate significance level, which directly affects the reliability and efficiency of the prediction ellipsoid.</p><p>Looking ahead, future research could expand the dataset to include environmental factors, as well as extended key sequences, to provide a more realistic representation of user behavior.</p><p>The incorporation of alternative normalization techniques, such as the Johnson transformation, could further enhance model accuracy by addressing the impact of non-Gaussian data. 
Further investigation into model complexity and feature selection for both prediction ellipsoid models and machine learning algorithms could offer valuable insights for improving keystroke dynamics recognition.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>Covariance matrix entries (continuation of Table 2); numeric values garbled in extraction and omitted.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>Covariance matrix entries (continuation of Table 3); numeric values garbled in extraction and omitted.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Autoencoder structure.</figDesc><graphic coords="10,134.60,407.20,345.62,268.55" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc></figDesc><table><row><cell>Removed anomalies</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell>SMD</cell><cell>Vector number</cell><cell></cell><cell>SMD</cell><cell>Vector number</cell></row><row><cell>1</cell><cell>37.44</cell><cell>295</cell><cell>6</cell><cell>26.963</cell><cell>323</cell></row><row><cell>2</cell><cell>36.962</cell><cell>160</cell><cell>7</cell><cell>26.868</cell><cell>45</cell></row><row><cell>3</cell><cell>30.742</cell><cell>306</cell><cell>8</cell><cell>25.776</cell><cell>263</cell></row><row><cell>4</cell><cell>28.833</cell><cell>388</cell><cell>9</cell><cell>24.515</cell><cell>294</cell></row><row><cell>5</cell><cell>28.662</cell><cell>214</cell><cell>10</cell><cell>23</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 3</head><label>3</label><figDesc>presents the covariance matrix of the training set, which has the mean vector 𝑋̄ = {0.07635; …}.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 5</head><label>5</label><figDesc>Comparison of models</figDesc><table><row><cell>Model</cell><cell>Specificity</cell><cell>Recall</cell><cell>Precision</cell><cell>F1 score</cell><cell>Accuracy</cell></row><row><cell>PENGD</cell><cell>0.9795</cell><cell>0.9225</cell><cell>0.9893</cell><cell>0.9547</cell><cell>0.9412</cell></row><row><cell>PEND</cell><cell>0.9949</cell><cell>0.9700</cell><cell>0.9974</cell><cell>0.9835</cell><cell>0.9782</cell></row><row><cell>OCSVM</cell><cell>0.9744</cell><cell>0.9675</cell><cell>0.9872</cell><cell>0.9773</cell><cell>0.9697</cell></row><row><cell>IF</cell><cell>0.9333</cell><cell>0.9500</cell><cell>0.9669</cell><cell>0.9584</cell><cell>0.9445</cell></row><row><cell>AE</cell><cell>0.9641</cell><cell>0.9625</cell><cell>0.9821</cell><cell>0.9722</cell><cell>0.9630</cell></row></table></figure>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Declaration on Generative AI</head><p>The authors have not employed any Generative AI tools.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Comparing machine learning classifiers for continuous authentication on mobile devices by keystroke dynamics</title>
		<author>
			<persName><forename type="first">L</forename><surname>De-Marcos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Martínez-Herráiz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Junquera-Sánchez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cilleruelo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Pages-Arévalo</surname></persName>
		</author>
		<idno type="DOI">10.3390/electronics10141622</idno>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Accurate Continuous and Non-intrusive User Authentication with Multivariate Keystroke Streaming</title>
		<author>
			<persName><forename type="first">A</forename><surname>Alshehri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Coenen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bollegala</surname></persName>
		</author>
		<idno type="DOI">10.5220/0006497200610070</idno>
	</analytic>
	<monogr>
		<title level="m">9th International Conference on Knowledge Discovery and Information Retrieval</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="61" to="70" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Outlier detection for keystroke biometric user authentication</title>
		<author>
			<persName><forename type="first">G</forename><surname>Ismail</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Salem</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Abd</forename><forename type="middle">El</forename><surname>Ghany</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Aldakheel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Abbas</surname></persName>
		</author>
		<idno type="DOI">10.7717/peerj-cs.2086</idno>
	</analytic>
	<monogr>
		<title level="j">PeerJ Computer Science</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Keystroke dynamics-based authentication using unique keypad</title>
		<author>
			<persName><forename type="first">M</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Shin</surname></persName>
		</author>
		<idno type="DOI">10.3390/s21062242</idno>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page">2242</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">On the evaluation of outlier detection and one-class classification: a comparative study of algorithms, model selection, and ensembles</title>
		<author>
			<persName><forename type="first">H</forename><surname>Marques</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Swersky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sander</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Campello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zimek</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10618-023-00931-x</idno>
	</analytic>
	<monogr>
		<title level="j">Data Min Knowl Disc</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page">1517</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Evaluation of one-class classifiers for fault detection: Mahalanobis classifiers and the Mahalanobis Taguchi system</title>
		<author>
			<persName><forename type="first">S</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Jung</surname></persName>
		</author>
		<idno type="DOI">10.3390/pr9081450</idno>
	</analytic>
	<monogr>
		<title level="j">Processes</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page">1450</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Machine learning and deep learning for fixed-text keystroke dynamics</title>
		<author>
			<persName><forename type="first">H</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Stamp</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2107.00507</idno>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence for Cybersecurity</title>
		<imprint>
			<biblScope unit="page" from="309" to="329" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A three-step authentication model for mobile phone user using keystroke dynamics</title>
		<author>
			<persName><forename type="first">B</forename><surname>Saini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nayyar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kaur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Bhatia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>El-Sappagh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hu</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2020.3008019</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="125909" to="125922" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">CDAS: A continuous dynamic authentication system</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Chen</surname></persName>
		</author>
		<idno type="DOI">10.1145/3316615.3316691</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 8th International Conference on Software and Computer Applications</title>
				<meeting>the 2019 8th International Conference on Software and Computer Applications</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="447" to="452" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title/>
		<idno type="DOI">10.48550/arXiv.2307.05529</idno>
		<idno type="arXiv">arXiv:2307.05529</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A comprehensive review of keystroke dynamics-based authentication mechanism</title>
		<author>
			<persName><forename type="first">N</forename><surname>Raul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Shankarmani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Joshi</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-981-15-0324-5_13</idno>
	</analytic>
	<monogr>
		<title level="m">International Conference on Innovative Computing and Communications. Advances in Intelligent Systems and Computing</title>
				<meeting><address><addrLine>Singapore</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">1059</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Time-frequency analysis of keystroke dynamics for user authentication</title>
		<author>
			<persName><forename type="first">R</forename><surname>Toosi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Akhaee</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.future.2020.09.027</idno>
	</analytic>
	<monogr>
		<title level="j">Future Generation Computer Systems</title>
		<imprint>
			<biblScope unit="volume">115</biblScope>
			<biblScope unit="page" from="438" to="447" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">A hybrid method for keystroke biometric user identification</title>
		<author>
			<persName><forename type="first">K</forename><surname>Ml Ali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Thakur</surname></persName>
		</author>
		<author>
			<persName><surname>Obaidat</surname></persName>
		</author>
		<idno type="DOI">10.3390/electronics11172782</idno>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">17</biblScope>
			<biblScope unit="page">2782</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Enhancing user authentication through keystroke dynamics analysis using isolation forest algorithm</title>
		<author>
			<persName><forename type="first">I</forename><surname>Meenakshisundaram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Karunanithi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Sahana</surname></persName>
		</author>
		<idno type="DOI">10.1109/ic-ETITE58242.2024.10493648</idno>
	</analytic>
	<monogr>
		<title level="m">Second International Conference on Emerging Trends in Information Technology and Engineering (ICETITE)</title>
				<imprint>
			<date type="published" when="2024">2024. 2024</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Free text keystroke dynamics-based authentication with continuous learning: a case study</title>
		<author>
			<persName><forename type="first">F</forename><surname>Trad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hussein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chehab</surname></persName>
		</author>
		<idno type="DOI">10.1109/IUCC-CIT-DSCI-SmartCNS57392.2022.00031</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 21st International Conference on Ubiquitous Computing and Communications (IUCC/CIT/DSCI/SmartCNS)</title>
				<meeting><address><addrLine>Chongqing, China</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022. 2022</date>
			<biblScope unit="page" from="125" to="131" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Keystroke dynamics using auto encoders</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Patel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ouazzane</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vassilev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Faruqi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Walker</surname></persName>
		</author>
		<idno type="DOI">10.1109/CyberSecPODS.2019.8885203</idno>
	</analytic>
	<monogr>
		<title level="m">2019 International Conference on Cyber Security and Protection of Digital Services (Cyber Security)</title>
				<meeting><address><addrLine>Oxford, UK</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Application of transformed prediction ellipsoids for outlier detection in multivariate non-gaussian data</title>
		<author>
			<persName><forename type="first">S</forename><surname>Prykhodko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Makarova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Prykhodko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pukhalevych</surname></persName>
		</author>
		<idno type="DOI">10.1109/TCSET49122.2020.235454</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 15th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (TCSET)</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="359" to="362" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Examining the distribution of keystroke dynamics features on computer, tablet and mobile phone platforms</title>
		<author>
			<persName><forename type="first">O</forename><surname>Oyebola</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-981-99-0835-6_43</idno>
	</analytic>
	<monogr>
		<title level="m">Mobile Computing and Sustainable Informatics: Proceedings of ICMCSI 2023</title>
				<meeting><address><addrLine>Singapore</addrLine></address></meeting>
		<imprint>
			<publisher>Springer Nature Singapore</publisher>
			<biblScope unit="page" from="613" to="620" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Real-world keystroke dynamics are a potentially valid biomarker for clinical disability in multiple sclerosis</title>
		<author>
			<persName><forename type="first">K</forename><surname>Lam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Meijer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Loonstra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Coerver</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Twose</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Redeman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Killestein</surname></persName>
		</author>
		<idno type="DOI">10.1177/1352458520968797</idno>
	</analytic>
	<monogr>
		<title level="j">Multiple Sclerosis Journal</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="1421" to="1431" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Estimating the size of web apps created using the CakePHP framework by nonlinear regression models with three predictors</title>
		<author>
			<persName><forename type="first">S</forename><surname>Prykhodko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Prykhodko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Shutko</surname></persName>
		</author>
		<idno type="DOI">10.1109/CSIT52700.2021.9648680</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 16th International Conference on Computer Sciences and Information Technologies (CSIT)</title>
				<meeting><address><addrLine>Lviv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="333" to="336" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Application of a ten-variate prediction ellipsoid for normalized data and machine learning algorithms for face recognition</title>
		<author>
			<persName><forename type="first">S</forename><surname>Prykhodko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Trukhov</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-3702/paper30.pdf" />
	</analytic>
	<monogr>
		<title level="m">Selected Papers of the Seventh International Workshop on Computer Modeling and Intelligent Systems (CMIS-2024)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<meeting><address><addrLine>Zaporizhzhia, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024-05-03">May 3, 2024</date>
			<biblScope unit="volume">3702</biblScope>
			<biblScope unit="page" from="362" to="375" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Mahalanobis distances for ecological niche modelling and outlier detection: implications of sample size, error, and bias for selecting and parameterising a multivariate location and scatter method</title>
		<author>
			<persName><forename type="first">T</forename><surname>Etherington</surname></persName>
		</author>
		<idno type="DOI">10.7717/peerj.11436</idno>
	</analytic>
	<monogr>
		<title level="j">PeerJ</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">One-class SVM based outlier detection strategy to detect thin interlayer debondings within pavement structures using Ground Penetrating Radar data</title>
		<author>
			<persName><forename type="first">S</forename><surname>Todkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Baltazart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ihamouten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Dérobert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Guilbert</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.jappgeo.2021.104392</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Applied Geophysics</title>
		<imprint>
			<biblScope unit="volume">192</biblScope>
			<biblScope unit="page">104392</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Generalized isolation forest for anomaly detection</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lesouple</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Baudoin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Spigai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Tourneret</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.patrec.2021.05.022</idno>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition Letters</title>
		<imprint>
			<biblScope unit="volume">149</biblScope>
			<biblScope unit="page" from="109" to="119" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Improving performance of autoencoder-based network anomaly detection on NSL-KDD dataset</title>
		<author>
			<persName><forename type="first">W</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Jang-Jaccard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sabrina</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2021.3116612</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="140136" to="140146" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Financial Fraud: A review of anomaly detection techniques and recent advances</title>
		<author>
			<persName><forename type="first">W</forename><surname>Hilal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Gadsden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yawney</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.eswa.2021.116429</idno>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">193</biblScope>
			<biblScope unit="page">116429</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Anomaly detection for data streams based on isolation forest using scikit-multiflow</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">U</forename><surname>Togbe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Barry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Boly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Chabchoub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chiky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Montiel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">V</forename><surname>Tran</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-58811-3_2</idno>
	</analytic>
	<monogr>
		<title level="m">Computational Science and Its Applications ICCSA 2020: 20th International Conference</title>
		<title level="s">Proceedings, Part IV</title>
		<meeting><address><addrLine>Cagliari, Italy</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2020-07">July 1-4, 2020</date>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="page" from="15" to="30" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">An overview of deep learning architecture of deep neural networks and autoencoders</title>
		<author>
			<persName><forename type="first">M</forename><surname>Sewak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Sahay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Rathore</surname></persName>
		</author>
		<idno type="DOI">10.1166/jctn.2020.8648</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Computational and Theoretical Nanoscience</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="182" to="188" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">A comparison of adaptive moment estimation and rmsprop optimisation techniques for wildlife animal classification using convolutional neural networks</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">H</forename><surname>Kartowisastro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Latupapua</surname></persName>
		</author>
		<idno type="DOI">10.18280/ria.370424</idno>
	</analytic>
	<monogr>
		<title level="j">Revue d&apos;Intelligence Artificielle</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="1023" to="1030" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Interactive biometric identification system based on the keystroke dynamic</title>
		<author>
			<persName><forename type="first">S</forename><surname>Bilan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bilan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bilan</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-48378-4_3</idno>
	</analytic>
	<monogr>
		<title level="m">Biometric Identification Technologies Based on Modern Data Mining Methods</title>
				<editor>
			<persName><forename type="first">S</forename><surname>Bilan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Elhoseny</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Hemanth</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="39" to="58" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
