<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Comparison of Classifiers for Predicting Heart Attack in Patients *</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Oliwia</forename><surname>Cimała</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Applied Mathematics</orgName>
								<orgName type="institution">Silesian University of Technology</orgName>
								<address>
									<addrLine>Kaszubska 23</addrLine>
									<postCode>44100</postCode>
									<settlement>Gliwice</settlement>
									<country key="PL">POLAND</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Maria</forename><surname>Bocheńska</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Applied Mathematics</orgName>
								<orgName type="institution">Silesian University of Technology</orgName>
								<address>
									<addrLine>Kaszubska 23</addrLine>
									<postCode>44100</postCode>
									<settlement>Gliwice</settlement>
									<country key="PL">POLAND</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="department">Information Society</orgName>
								<orgName type="institution">University Studies</orgName>
								<address>
									<addrLine>2024, May 17</addrLine>
									<settlement>Kaunas</settlement>
									<country key="LT">Lithuania</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Comparison of Classifiers for Predicting Heart Attack in Patients *</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">0A192567CBB709E975E8A31D904707FC</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:28+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Soft Set Classifier</term>
					<term>Naive Bayes</term>
					<term>K-Nearest Neighbors</term>
					<term>Heart Attack Prediction</term>
					<term>Machine Learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Heart attack prediction plays a pivotal role in patient health. When responding quickly to a health issue, there are two options: running many tests on a patient to find out what is wrong, or comparing the patient's information with that of other patients to classify the case and narrow the search to the right field. This study presents a comprehensive comparison of three classification algorithms - Soft Set Classifier, Naive Bayes, and K-Nearest Neighbors (KNN) - for predicting heart attack in patients. Through experimentation with different variations of these algorithms, including custom implementations, the project evaluates their effectiveness in recognizing a high or low chance of heart attack. Methodologically, the project explores the nuances of each algorithm, discussing their underlying principles and implementation details. Experimental results reveal insights into the performance of each algorithm, providing valuable considerations for practical applications. Additionally, the project discusses the significance of precision, recall, F1-score, and accuracy metrics in assessing algorithm performance. Overall, this study contributes to advancing heart attack prediction technology, offering valuable insights into algorithmic approaches.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The heart is vital to the body's function, acting as a powerful pump that circulates blood, oxygen, and essential nutrients throughout the body. This cardiovascular system ensures that all bodily tissues receive the resources they need to operate effectively. Consequently, any issues with the heart can disrupt the normal functioning of other organs and systems, leading to widespread health problems <ref type="bibr" target="#b0">[1]</ref>. Heart disease is responsible for about one-third of all human deaths worldwide <ref type="bibr" target="#b1">[2]</ref>, making accurate and timely diagnosis critical for effective treatment. Traditional diagnostic methods often rely on various tests and clinical evaluations, which can be time-consuming and costly. With the advancement of machine learning, there is an increasing interest in developing automated systems for predicting heart disease using patient data <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5]</ref>.</p><p>Existing solutions leverage different algorithms to achieve this goal, including logistic regression, decision trees, random forests, voting ensembles, and neural networks <ref type="bibr" target="#b5">[6]</ref>. However, our study focuses on comparing three distinct classifiers: the Soft Set Classifier <ref type="bibr" target="#b6">[7]</ref>, Naive Bayes <ref type="bibr" target="#b7">[8]</ref>, and K-Nearest Neighbors (KNN) <ref type="bibr" target="#b8">[9]</ref>. Each of these algorithms offers unique advantages and challenges, which we explore in the context of heart disease prediction. To take a closer look at the applied classifiers, the following paragraphs briefly describe them and illustrate the differences between these calculation methods. The Soft Set classifier is a flexible and general mathematical tool used for handling uncertainty in data. It does not rely on predefined probabilities or distances, making it particularly useful in situations where traditional probabilistic or distance-based models like Naive Bayes or K-Nearest Neighbors (KNN) may not perform well. The classifier iteratively adjusts the membership values based on the training data, thus enabling it to handle imprecise and vague information effectively. The model's adaptability to various forms of uncertainty makes it a valuable tool in fields where data ambiguity is prevalent. The Naive Bayes classifier is a probabilistic machine learning model based on Bayes' theorem, which calculates the probability of a certain class given a set of features. It assumes that the features are conditionally independent, hence "naive." K-Nearest Neighbors (KNN) is a non-parametric supervised learning algorithm used for classification and regression tasks. In KNN, the class of a new data point is determined by the majority class among its k nearest neighbors in the feature space. It is simple to implement and understand but can be computationally expensive for large datasets, as it requires storing all training data and computing distances for each prediction. All three algorithms have varying time consumption, with K-Nearest Neighbors (KNN) being the most computationally expensive due to its need to calculate distances for each prediction. When implementing the algorithms, we follow the same structure for each class: every class contains a fit and a predict function and, where needed, helper functions such as distance or score for a given sample. Let us now briefly explain each of the applied algorithms and the thought process behind their selection. The first classifier is the Soft Set classifier, which we implemented independently. The second, the Naive Bayes classifier, comes from the library, adjusted slightly to match the structure of the others (its Bayes class also exposes fit and predict functions). The third classifier is a K-Nearest Neighbors algorithm, in this instance written by us, following open-access models with the aim of achieving the highest possible accuracy. After performing the calculations, each algorithm displays a confusion matrix and a table reporting how effectively it identifies a low or high probability of heart attack.</p></div>
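The shared fit/predict class layout described above can be sketched as a small base class. This is a hypothetical sketch of the convention, not the paper's actual code; the class and method names beyond fit, predict, and score are our assumptions.

```python
import numpy as np

class BaseClassifier:
    """Sketch of the shared layout: every classifier exposes fit() and
    predict(), with optional helpers such as score()."""

    def fit(self, X_train, y_train):
        raise NotImplementedError

    def predict(self, X_test):
        raise NotImplementedError

    def score(self, X_test, y_test):
        # Accuracy: fraction of correctly predicted samples.
        return float(np.mean(self.predict(X_test) == y_test))
```

Each concrete classifier then only needs to supply its own fit and predict logic while reusing the common scoring helper.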
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Methodology</head><p>This section details the methodologies used for each classifier, including their mathematical foundations and implementation specifics.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Soft Set Classifier</head><p>The Soft Set Classifier, from a mathematical perspective, assigns to each element of the set X a value from the interval [-1, 1], representing the degree of membership of that element to the set X. A membership value of 1 indicates assignment to the negative class, while a membership value of -1 indicates assignment to the positive class.</p></div>
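The training loop of Algorithm 1 and the prediction step of Algorithm 2 can be sketched as follows. The classify() step is not fully specified in the text, so it is assumed here to be a dot product of the weight vector with the sample; the default hyperparameter values are likewise our assumptions.

```python
import numpy as np

class SoftSetClassifier:
    """Sketch of Algorithms 1 and 2: a margin-based update of the
    membership/weight vector Y, with sign() as the final label."""

    def __init__(self, n_iters=1000, lambda_param=0.01):
        self.n_iters = n_iters
        self.lambda_param = lambda_param
        self.Y = None  # weight (membership) vector

    def classify(self, x):
        # Assumed classification score: dot product with the weights.
        return np.dot(self.Y, x)

    def fit(self, X_train, y_train):
        # Initialize Y to zeros of length equal to the number of features.
        self.Y = np.zeros(X_train.shape[1])
        for _ in range(self.n_iters):
            for x_i, y_i in zip(X_train, y_train):
                # Update only when the margin condition is violated.
                if y_i * self.classify(x_i) <= 1:
                    self.Y = self.Y + y_i * x_i - 2 * self.lambda_param * self.Y
        return self

    def predict(self, X_test):
        # Label is the sign of the classification score (-1 or +1).
        return np.sign(X_test @ self.Y)
```

The regularization term -2λY shrinks the weights on every violating sample, which keeps the membership values bounded during training.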
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Naive Bayes Classifier</head><p>The Naive Bayes classifier is based on Bayes' theorem and assumes that the features are conditionally independent given the class label. The classifier computes 𝑃(𝑦|𝑋) = 𝑃(𝑋|𝑦)𝑃(𝑦) / 𝑃(𝑋), where 𝑃(𝑦|𝑋) is the posterior probability of class 𝑦 given feature vector 𝑋, and predicts the class with the highest posterior.</p></div>
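The paper uses the library's Gaussian Naive Bayes wrapped in a fit/predict class (Algorithm 3). To illustrate what that library call computes, here is a hedged from-scratch sketch of the Gaussian posterior in log space; it is not the implementation actually used in the study.

```python
import numpy as np

class NaiveBayes:
    """Sketch of Gaussian Naive Bayes: per-class feature means,
    variances, and priors, combined via log P(y) + sum log P(x_j|y)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mean, self.var, self.prior = {}, {}, {}
        for c in self.classes:
            Xc = X[y == c]
            self.mean[c] = Xc.mean(axis=0)
            self.var[c] = Xc.var(axis=0) + 1e-9  # avoid division by zero
            self.prior[c] = len(Xc) / len(X)
        return self

    def predict(self, X):
        preds = []
        for x in X:
            # Log-posterior (up to the constant P(X)) for each class.
            scores = [
                np.log(self.prior[c])
                - 0.5 * np.sum(np.log(2 * np.pi * self.var[c]))
                - 0.5 * np.sum((x - self.mean[c]) ** 2 / self.var[c])
                for c in self.classes
            ]
            preds.append(self.classes[int(np.argmax(scores))])
        return np.array(preds)
```

Working in log space avoids numerical underflow when many feature likelihoods are multiplied together.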
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">K-Nearest Neighbors (KNN) Classifier</head><p>The KNN classifier classifies a sample based on the majority label among its 𝑘-nearest neighbors in the training set. The distance metric used is typically the Euclidean distance 𝑑(𝑥, 𝑥′) = √(∑ᵢ (𝑥ᵢ - 𝑥′ᵢ)²).</p></div>
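The custom KNN described here (Euclidean distances to every training sample, then a majority vote over the k nearest, as in Algorithm 4) can be sketched as follows. The default k and the class name are our assumptions, not the paper's actual code.

```python
import numpy as np
from collections import Counter

class KNN:
    """Sketch of the KNN classifier: store the training data, then
    classify each test sample by a vote of its k nearest neighbors."""

    def __init__(self, k=3):
        self.k = k

    def fit(self, X_train, y_train):
        # KNN is lazy: fitting just stores the training set.
        self.X_train = np.asarray(X_train)
        self.y_train = np.asarray(y_train)
        return self

    def distance(self, a, b):
        # Euclidean distance between two feature vectors.
        return np.sqrt(np.sum((a - b) ** 2))

    def predict(self, X_test):
        preds = []
        for x in np.asarray(X_test):
            dists = [self.distance(x, t) for t in self.X_train]
            nearest = np.argsort(dists)[: self.k]
            # Majority vote among the k nearest training labels.
            vote = Counter(self.y_train[nearest]).most_common(1)[0][0]
            preds.append(vote)
        return np.array(preds)
```

This structure makes the cost noted earlier explicit: every prediction scans the entire training set, which is why KNN is the most expensive of the three at test time.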
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Experiments</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Dataset Description</head><p>The dataset includes records of patients along with their medical attributes and the presence or absence of heart disease. The dataset contains 13 columns with different attributes: age, sex, number of major vessels, chest pain type, resting blood pressure, cholesterol, maximum heart rate achieved, fasting blood sugar, resting electrocardiographic results, exercise, slope, thal rate, and the target variable (the last column, against which predictions are compared). All records were first normalized and then subjected to further tests. The normalization function operated on the basic min-max algorithm <ref type="bibr" target="#b9">[10]</ref>.</p></div>
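The basic min-max normalization mentioned above rescales each column to [0, 1] using that column's own minimum and maximum. A minimal sketch, with the function name being our assumption:

```python
import numpy as np

def min_max_normalize(X):
    """Rescale each column of X to [0, 1]: (x - min) / (max - min)."""
    X = np.asarray(X, dtype=float)
    col_min = X.min(axis=0)
    col_range = X.max(axis=0) - col_min
    # A constant column has zero range; map it to 0 instead of dividing by 0.
    col_range[col_range == 0] = 1.0
    return (X - col_min) / col_range
```

Normalizing all attributes to a common scale matters especially for KNN, whose Euclidean distances would otherwise be dominated by large-valued attributes such as cholesterol.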
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Data Splitting and Testing</head><p>To evaluate the performance of our classifiers, we split the dataset into a training set and a test set. This is a crucial step to ensure that the model can generalize well to unseen data. We used the 'train_test_split' function from the 'sklearn.model_selection' library for this purpose.</p><p>X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.35, random_state=42)</p><p>This function performs the following tasks:</p><p>• Input Parameters:</p><p>-X: the feature matrix containing the input data for all samples.</p><p>-y: the target vector containing the labels for all samples.</p><p>-test_size=0.35: specifies the proportion of the dataset to include in the test split. (Here, 35% of the data is allocated for testing, and the remaining 65% is used for training.)</p><p>-random_state=42: this parameter ensures reproducibility of the results. By setting a specific random state, we ensure that the same split is generated every time the code is run.</p><p>• Outputs:</p><p>-X_train: the feature matrix for the training set.</p><p>-X_test: the feature matrix for the test set.</p><p>-y_train: the target vector for the training set.</p><p>-y_test: the target vector for the test set.</p><p>By splitting the data into training and testing sets, we can train the model on one subset of the data and evaluate its performance on another, independent subset. This approach helps in assessing how well the model can generalize to new, unseen data and is an essential part of model validation in machine learning.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Results Analysis</head><p>To compare the performance of the applied algorithms, we utilized the metrics module from the 'sklearn' library. The dataset, containing numerical values across 13 different attributes (medical data of the patient) with a total length of 303 records, was divided into training and testing sets in a 65:35 ratio. For each algorithm, we compared the following parameters:</p><p>• precision - the ratio of correctly predicted elements of a class to all elements marked as that class</p><p>• recall - a measure of how many elements from a given class were correctly recognized</p><p>• f1-score - the harmonic mean of precision and recall</p><p>• support - the number of occurrences of each class in the dataset</p><p>• accuracy - the ratio of correctly classified samples to all cases in the test set</p><p>Meaning of labels:</p><p>• TP - true positive - cases that were correctly classified as positive by the classifier</p><p>• TN - true negative - cases that were correctly classified as negative by the classifier</p><p>• FP - false positive - an error where the test result incorrectly indicates the presence of a condition when it is not present</p><p>• FN - false negative - an error where the test result incorrectly indicates the absence of a condition when it is actually present</p></div>
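The metrics listed above follow directly from the four confusion counts. A plain-Python sketch (the study itself uses sklearn.metrics):

```python
def precision(tp, fp):
    # Of everything predicted positive, how much really is positive.
    return tp / (tp + fp)

def recall(tp, fn):
    # Of everything actually positive, how much was found.
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall.
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

def accuracy(tp, tn, fp, fn):
    # Fraction of all test cases classified correctly.
    return (tp + tn) / (tp + tn + fp + fn)
```

For example, with TP = 8, FP = 2, FN = 2, TN = 8, all four metrics come out to 0.8.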
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Results</head><p>As the results show, the confusion matrix outputs 0 and 1 (Fig. <ref type="figure" target="#fig_0">1</ref>), where 0 denotes a low chance of heart attack and 1 a higher chance. In the classification report, produced by the 'sklearn' library, the 0 value is changed to -1 (Tab.: 1, 2, 3). Analyzing the results shown in the matrix and tables, we can observe that all three algorithms have lower precision when classifying the low chance of heart attack.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>This study presented a comparative analysis of three different classifiers for heart disease prediction. The Soft Set Classifier, while effective in handling uncertainty, showed moderate accuracy of 70%. The Naive Bayes classifier demonstrated high accuracy of 83%, making it a strong candidate for medical diagnostics. The K-Nearest Neighbors classifier also performed well, with an accuracy of 84%. These results provide valuable insights into the strengths and limitations of each classifier, guiding future research and application in medical diagnostics. When interpreting these results, we must remember that the Naive Bayes classifier was not written by us. We can only speculate about the results an independently written Naive Bayes algorithm would give, or what a library-based K-Nearest Neighbors or Soft Set classifier would yield. Future improvements include writing our own Naive Bayes algorithm and checking its accuracy, and reworking the Soft Set algorithm so that it reaches higher accuracy.
In addition, to boost accuracy we can compare all three algorithms against their library counterparts and eliminate the weak points that keep accuracy from being as high as needed.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Algorithm 1 :</head><label>1</label><figDesc>Soft Set Classifier Input: Training set 𝑋 train , Training labels 𝑦 train , Number of iterations 𝑛 iters , Regularization parameter 𝜆 param Output: Fitted model Y 1 Initialize weight vector Y to zeros of length equal to the number of features; 2 for iteration in range 𝑛 iters do 3 for each sample 𝑥 i , 𝑦 i in 𝑋 train , 𝑦 train do 4 if 𝑦 i * classify(𝑥 i ) ≤ 1 then 5 Update Y by Y ← Y + 𝑦 i * 𝑥 i -2 * 𝜆 param * Y 6 Return Fitted weight vector Y Algorithm 2: Soft Set Prediction Input: Test set 𝑋 test , Fitted weight vector Y Output: Predicted labels 𝑦 pred 1 for each sample 𝑥 i in 𝑋 test do 2 Compute classification score classification ← classify(𝑥 i ); 3 Assign label 𝑦 pred ← sign (classification); 4 return Predicted labels 𝑦 pred</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Algorithm 3 :1 1 :2 2 : 3 3 :</head><label>31233</label><figDesc>Naive Bayes Input: Training set 𝑋 𝑡𝑟𝑎𝑖𝑛 , Training labels 𝑦 𝑡𝑟𝑎𝑖𝑛 , Test set 𝑋 𝑡𝑒𝑠𝑡 Output: Predicted labels 𝑦 𝑝𝑟𝑒𝑑 Step Initialize the Gaussian Naive Bayes model; Step Fit the model with the training data 𝑋 𝑡𝑟𝑎𝑖𝑛 and 𝑦 𝑡𝑟𝑎𝑖𝑛 ; Step Predict the labels for 𝑋 𝑡𝑒𝑠𝑡 using the trained model;</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Algorithm 4 : 3</head><label>43</label><figDesc>KNN Algorithm Input: Training set 𝑋 𝑡𝑟𝑎𝑖𝑛 , Training labels 𝑦 𝑡𝑟𝑎𝑖𝑛 , Test set 𝑋 𝑡𝑒𝑠𝑡 , Number of neighbors 𝑘 Output: Predicted labels 𝑦 𝑝𝑟𝑒𝑑 1 for each sample 𝑥 in 𝑋 𝑡𝑒𝑠𝑡 do 2 Compute distances between 𝑥 and all samples in 𝑋 𝑡𝑟𝑎𝑖𝑛 ; Identify the 𝑘-nearest neighbors; 4 Assign the label based on the majority vote of the neighbors;</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Comparison of Different Classifiers</figDesc><graphic coords="6,203.15,343.70,181.40,137.15" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Accuracy when model is trained with KNN: 84.11214953271028</figDesc><table><row><cell>Class</cell><cell cols="4">Precision Recall F1-score Support</cell></row><row><cell>-1.0</cell><cell>0.78</cell><cell>0.86</cell><cell>0.82</cell><cell>44</cell></row><row><cell>1.0</cell><cell>0.90</cell><cell>0.83</cell><cell>0.86</cell><cell>63</cell></row><row><cell>Accuracy Macro avg</cell><cell>0.84</cell><cell>0.84</cell><cell>0.84 0.84</cell><cell>107 107</cell></row><row><cell>Weighted avg</cell><cell>0.85</cell><cell>0.84</cell><cell>0.84</cell><cell>107</cell></row><row><cell>Table 2</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell cols="3">Accuracy when model is trained with Bayes: 83.17757009345794</cell><cell></cell><cell></cell></row><row><cell>Class</cell><cell cols="4">Precision Recall F1-score Support</cell></row><row><cell>-1.0</cell><cell>0.76</cell><cell>0.86</cell><cell>0.81</cell><cell>44</cell></row><row><cell>1.0</cell><cell>0.89</cell><cell>0.81</cell><cell>0.85</cell><cell>63</cell></row><row><cell>Accuracy Macro avg</cell><cell>0.83</cell><cell>0.84</cell><cell>0.83 0.83</cell><cell>107 107</cell></row><row><cell>Weighted avg</cell><cell>0.84</cell><cell>0.83</cell><cell>0.83</cell><cell>107</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 3</head><label>3</label><figDesc>Accuracy when model is trained with Soft Set: 70.09345794392523</figDesc><table><row><cell>Class</cell><cell cols="4">Precision Recall F1-score Support</cell></row><row><cell>-1.0 1.0</cell><cell>0.60 0.83</cell><cell>0.82 0.62</cell><cell>0.69 0.71</cell><cell>44 63</cell></row><row><cell>Accuracy Macro avg</cell><cell>0.71</cell><cell>0.72</cell><cell>0.70 0.70</cell><cell>107 107</cell></row><row><cell>Weighted avg</cell><cell>0.74</cell><cell>0.70</cell><cell>0.70</cell><cell>107</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A comparative study of machine learning algorithms for the prediction of heart disease</title>
		<author>
			<persName><forename type="first">H</forename><surname>Arghandabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Shams</surname></persName>
		</author>
		<idno type="DOI">10.22214/ijraset.2020.32591</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal for Research in Applied Science and Engineering Technology</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="677" to="683" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Diagnosis of heart using genetic algorithm based trained recurrent fuzzy neural networks</title>
		<author>
			<persName><forename type="first">K</forename><surname>Uyar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ilhan</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.procs.2017.11.283</idno>
	</analytic>
	<monogr>
		<title level="j">Procedia Computer Science</title>
		<imprint>
			<biblScope unit="volume">120</biblScope>
			<biblScope unit="page" from="588" to="593" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Development of ai-based prediction of heart attack risk as an element of preventive medicine</title>
		<author>
			<persName><forename type="first">I</forename><surname>Rojek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Kotlarz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kozielski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jagodziński</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Królikowski</surname></persName>
		</author>
		<idno type="DOI">10.3390/electronics13020272</idno>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Heart attack prediction using machine learning algorithms</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J A</forename><surname>Laxamana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M M</forename><surname>Vale</surname></persName>
		</author>
		<idno type="DOI">10.52783/jes.2474</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Electrical Systems</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="page" from="1428" to="1436" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>license CC BY-ND 4.0</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A machine learning approach for heart attack prediction</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shrivastava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">P</forename><surname>Upadhyay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Chaurasia</surname></persName>
		</author>
		<idno type="DOI">10.35940/ijeat.F3043.0810621</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Engineering and Advanced Technology</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="124" to="134" />
			<date type="published" when="2021">2021</date>
		</imprint>
		<respStmt>
			<orgName>Central University Bihar, Babasaheb Bhimrao Ambedkar Central University Lucknow</orgName>
		</respStmt>
	</monogr>
	<note>mahatma Gandhi</note>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Analyzing the Effectiveness of Several Machine Learning Methods for Heart Attack Prediction</title>
		<author>
			<persName><forename type="first">K</forename><surname>Oliullah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barros</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Whaiduzzaman</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-981-19-9483-8_19</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="225" to="236" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Three classes of soft functions via soft-open sets and soft-closed sets</title>
		<author>
			<persName><forename type="first">P</forename><surname>Majeed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Shareef</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">M</forename><surname>Darwesh</surname></persName>
		</author>
		<idno type="DOI">10.31185/wjps.288</idno>
	</analytic>
	<monogr>
		<title level="j">Wasit Journal of Pure Sciences</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="1" to="17" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">An analysis of bayesian classifiers</title>
		<author>
			<persName><forename type="first">P</forename><surname>Langley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Iba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Thompson</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1992">1992</date>
			<biblScope unit="volume">90</biblScope>
			<biblScope unit="page" from="223" to="228" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Grey wolf optimizer combined with k-nn algorithm for clustering problem</title>
		<author>
			<persName><forename type="first">K</forename><surname>Prokop</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IVUS 2022: 27th International Conference on Information Technology</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A novel approach for data feature weighting using correla-tion coefficients and min-max normalization</title>
		<author>
			<persName><forename type="first">M</forename><surname>Shantal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Othman</surname></persName>
		</author>
		<idno type="DOI">10.3390/sym15122185</idno>
	</analytic>
	<monogr>
		<title level="j">Symmetry</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page">2185</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
