<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Analysis of selected algorithms for the classification of space objects *</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Radosław</forename><surname>Jędrzejczyk</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Applied Mathematics</orgName>
								<orgName type="institution">Silesian University of Technology</orgName>
								<address>
									<addrLine>Kaszubska 23</addrLine>
									<postCode>44100</postCode>
									<settlement>Gliwice</settlement>
									<country key="PL">POLAND</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Katarzyna</forename><surname>Kłeczek</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Applied Mathematics</orgName>
								<orgName type="institution">Silesian University of Technology</orgName>
								<address>
									<addrLine>Kaszubska 23</addrLine>
									<postCode>44100</postCode>
									<settlement>Gliwice</settlement>
									<country key="PL">POLAND</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="department">Information Society</orgName>
								<orgName type="institution">University Studies</orgName>
								<address>
									<addrLine>2024, May 17</addrLine>
									<settlement>Kaunas</settlement>
									<country key="LT">Lithuania</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Analysis of selected algorithms for the classification of space objects *</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">24AD974966892CDBD88A5A0422014729</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:28+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>knn</term>
					<term>naive bayes</term>
					<term>decision trees</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>With the rise of available astronomical data, captured at numerous facilities around the world, a need for faster and more sophisticated data analysis methods emerges. Data captured during repeated observations of large numbers of objects in the sky can reach large volumes very quickly, making it impossible for scientists to analyse by hand. This raises the need for fast and reliable automated methods of data processing, which can be found in computer science research. Leveraging algorithms used in different areas of research is crucial for processing information about celestial bodies. In this work, we apply machine learning methods from the computer science domain to an astronomy problem. We lay out three different machine learning algorithms, along with their inner workings, and show how they can be applied to astronomy problems. We show how these algorithms can be used to speed up the processing of large volumes of data, and how they can help scientists classify celestial bodies. We investigate how each algorithm performs and try to find the best-performing one for the problem of classifying different objects based on their characteristics.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In modern astronomy, the ever-growing amount of data is both a problem and an opportunity. Formulating and validating many theories requires scientists to go through huge databases, which has become impossible to do by hand. At the same time, the increasing capabilities of ground-based observatories and space telescopes are providing us with many sky surveys containing petabytes of quality data <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. This data-intensive situation encourages the investigation of new methodologies, big data tools and techniques, providing a great environment for the development of astroinformatics <ref type="bibr" target="#b2">[3]</ref>.</p><p>Machine learning has a significant impact on this new reality <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b5">6]</ref>. It provides many tools that can be used to swiftly classify huge amounts of data, which we explore in this paper. We go through algorithms such as Decision Tree <ref type="bibr" target="#b6">[7]</ref>, Naive Bayes <ref type="bibr" target="#b7">[8]</ref> and K-Nearest Neighbors <ref type="bibr" target="#b8">[9]</ref> and analyse their accuracy in distinguishing between different objects.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Methodology</head><p>First, we need to transform our data into a convenient form. Non-numerical data will simply be mapped to numbers by associating a separate integer with each value. Numerical data, on the other hand, will be rescaled using min-max normalization.</p><p>We will compare the performance of different algorithms on the task of classifying stellar objects. For the comparison, we have chosen:</p><p>• KNN (K-Nearest Neighbors) classification.</p><p>• Decision tree model.</p><p>• Naive Bayes.</p></div>
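A minimal sketch of the preprocessing described above, using pandas; the column names here are hypothetical stand-ins, not the actual SDSS-IV fields:

```python
import pandas as pd

def preprocess(df: pd.DataFrame, target: str) -> pd.DataFrame:
    """Map non-numerical columns to integer codes and min-max
    normalize numerical feature columns into [0, 1]."""
    out = df.copy()
    for col in out.columns:
        if out[col].dtype == object:
            # associate a separate integer with each distinct value
            out[col] = out[col].astype("category").cat.codes
        elif col != target:
            lo, hi = out[col].min(), out[col].max()
            # min-max normalization: (x - min) / (max - min)
            out[col] = (out[col] - lo) / (hi - lo) if hi > lo else 0.0
    return out

# tiny illustrative frame: one numeric channel, one class column
demo = pd.DataFrame({"u": [10.0, 20.0, 30.0],
                     "class": ["GALAXY", "QSO", "STAR"]})
print(preprocess(demo, "class"))
```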
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Mathematical Model for K-Nearest Neighbors (K-NN)</head><p>Assume we have a training dataset consisting of $N$ data points: $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, where $x_i$ is the feature vector for the $i$-th point, and $y_i$ is the class label (for classification) or value (for regression).</p><p>We can then calculate a distance metric, typically the Euclidean distance $d$ between two points $x$ and $z$, defined as: $d(x, z) = \sqrt{\sum_{j=1}^{m} (x_j - z_j)^2}$, where $x$ and $z$ are feature vectors of dimension $m$. To classify a new point $x$, we compute the distances between $x$ and all points in the training set, then select the $K$ nearest neighbours and assign a class label based on the majority vote.</p><p>The parameter $K$ is a crucial hyperparameter of the KNN algorithm. A small $K$ can lead to overfitting, while a large $K$ can lead to underfitting. The optimal value of $K$ is often selected using cross-validation.</p></div>
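The distance computation and majority vote just described can be sketched directly; this is a small self-contained illustration with made-up points, not the implementation used in the paper:

```python
import numpy as np
from collections import Counter

def knn_predict(x_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest
    training points under the Euclidean distance."""
    # d(x, z) = sqrt(sum_j (x_j - z_j)^2) against every training point
    dists = np.sqrt(((x_train - x_new) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]          # indexes of the K closest points
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]        # dominating label

# two tight clusters with labels 0 and 1
X = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.05, 0.0])))  # → 0
```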
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Mathematical Model for Decision Tree</head><p>Assume we have a training dataset consisting of $N$ data points: $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, where $x_i$ is the feature vector for the $i$-th point, and $y_i$ is the class label (for classification) or value (for regression). A decision tree is then a tree-like model in which internal nodes represent tests on features, branches represent the outcomes of those tests, and leaf nodes represent class labels.</p><p>To build a decision tree, we recursively split the data at each node. The choice of split is based on a criterion that maximizes the separation of the classes or reduces the prediction error.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Common criteria include:</head><p>Gini Index: $Gini(D) = 1 - \sum_{k=1}^{K} p_k^2$, where $p_k$ is the proportion of instances of class $k$ in the dataset $D$.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Information Gain:</head><p>$IG(D, A) = Entropy(D) - \sum_{v \in Values(A)} \frac{|D_v|}{|D|}\, Entropy(D_v)$, where $Entropy(D)$ is given by $Entropy(D) = -\sum_{k=1}^{K} p_k \log_2 p_k$ and $D_v$ is the subset of $D$ where attribute $A$ has value $v$.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Mean Squared Error (MSE):</head><p>$MSE(D) = \frac{1}{|D|} \sum_{i \in D} (y_i - \bar{y})^2$, where $\bar{y}$ is the mean of the values in the dataset $D$.</p></div>
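The splitting criteria above can each be computed in a few lines; a sketch with tiny hand-made label sets (illustrative only):

```python
import numpy as np

def gini(labels):
    # Gini(D) = 1 - sum_k p_k^2
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(1.0 - (p ** 2).sum())

def entropy(labels):
    # Entropy(D) = -sum_k p_k * log2(p_k)
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(labels, attribute_values):
    # IG(D, A) = Entropy(D) - sum_v |D_v|/|D| * Entropy(D_v)
    labels = np.asarray(labels)
    attribute_values = np.asarray(attribute_values)
    total = entropy(labels)
    for v in np.unique(attribute_values):
        mask = attribute_values == v
        total -= mask.mean() * entropy(labels[mask])
    return total

print(gini([1, 1, 1, 1]))      # pure node → 0.0
print(gini([0, 0, 1, 1]))      # 50/50 split → 0.5
print(entropy([0, 0, 1, 1]))   # 50/50 split → 1.0 bit
```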
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Mathematical Model for Naive Bayes</head><p>Assume we have a training dataset consisting of $N$ data points: $D = \{(x_1, y_1), \ldots, (x_N, y_N)\}$, where $x_i = (x_{i1}, x_{i2}, \ldots, x_{im})$ is the feature vector for the $i$-th point, and $y_i$ is the class label from a set of classes $\{C_1, C_2, \ldots, C_K\}$.</p><p>The Naive Bayes algorithm is based on Bayes' Theorem: $P(C_k \mid x) = \frac{P(x \mid C_k)\, P(C_k)}{P(x)}$.</p><p>Additionally, we will look for the best number of neighbours for the KNN classifier. We will use a few libraries to handle our operations: Sklearn [10] will provide the algorithm implementations, saving us a lot of time and ensuring that we can go through relatively big databases in reasonable time. Pandas <ref type="bibr" target="#b10">[11]</ref> will provide the data structure (DataFrame). Seaborn <ref type="bibr" target="#b11">[12]</ref> and Matplotlib <ref type="bibr" target="#b12">[13]</ref> will be used for visualizations, graphs, etc.</p><p>In order to find the best constant for KNN, we will run the classification in a simple loop, looking for the best solution. Generally speaking, as this number increases our accuracy should decrease, so this approach is reasonable and should not take too much time. In the end, we present the confusion matrix for each of our solutions, and we consider only two metrics:</p><p>• Accuracy (Equation <ref type="formula">15</ref>) -to measure how many correct classifications we get.</p><p>• False categorization -to check whether any of the classes are more often confused with others.</p></div>
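The loop for finding the best KNN constant can be sketched with scikit-learn as follows; the synthetic dataset and the tested range of k are illustrative assumptions, not the paper's data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# synthetic 3-class data standing in for the real features
X, y = make_classification(n_samples=300, n_features=5, n_informative=4,
                           n_redundant=1, n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

accuracies = []
for k in range(1, 16):  # loop up to some "significant number" of neighbours
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    accuracies.append(accuracy_score(y_te, clf.predict(X_te)))

best_k = 1 + accuracies.index(max(accuracies))  # index of the biggest accuracy
print(best_k, max(accuracies))
```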
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Accuracy</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Experiments</head><p>For our dataset, we have chosen data from Sloan Digital Sky Survey DR17 <ref type="bibr" target="#b13">[14]</ref> (accessed from [15]), which was the fourth phase of the Sloan Digital Sky Survey (we will call it SDSS-IV from now on). It contains 100000 observations, each containing (quoting <ref type="bibr" target="#b14">[16]</ref>):</p><p>• obj_ID = Object Identifier, the unique value that identifies the object in the image catalogue used by the CAS • alpha = Right Ascension angle (at J2000 epoch)  Some of that information will not be used for our classification, as it is contained in SDSS-IV for cataloguing purposes (such as object identifiers). We will focus on: the coordinates alpha and delta; data from the filtered channels u, g, r, i and z; and class, which is the target of our classification efforts.</p><p>After mapping and normalisation, our data in the ultraviolet, green and infrared channels presented a strange pattern, where essentially all values accumulated near 1.0. Upon further inspection it turned out that one of the observed objects had some abnormal values (equal to -9999); we removed it from our dataset and then proceeded. Now we will have a look at the correlation matrix (figure <ref type="figure" target="#fig_2">1a</ref>) and address some of the relations:</p><p>• Coordinates have neutral relations with all the other data.</p><p>• Ultraviolet and green relation: green light is part of the spectrum of many stars similar to the Sun (G-type main-sequence stars). Those stars also happen to emit a significant part of their radiation as ultraviolet. An additional effect, which can also explain the moderate relation with infrared and near-infrared light, is the absorption of light by interstellar gas, which then re-emits it at those wavelengths (heat radiation) <ref type="bibr" target="#b15">[17]</ref>. 
• Infrared, near-infrared and red data have a strong relation: red stars are typically colder, but they still emit a lot of infrared radiation. An additional factor, the absorption and re-emission of light, was mentioned above. • The moderate relation of red, near-infrared and infrared light with redshift can be explained by many objects detected as red having their colour shifted by phenomena such as the Doppler effect. This relation might be absent from the other detectors, as light other than infrared might have been cut off by stardust or shifted strongly enough not to be detected at all <ref type="bibr" target="#b15">[17]</ref>.</p><p>In general, it is easy to notice the strong relations between red and infrared light. This phenomenon might be related to the extinction of light in space, which is more pronounced for shorter wavelengths. The coordinates of our objects are mostly related to each other (but it is still a very weak relation). They also have a purely neutral relation with most of the data from the detectors, so we are going to drop them. Our final correlation matrix is shown for the sake of clarity in figure <ref type="figure" target="#fig_2">1b</ref>.</p><p>Additionally, we provide histograms for the SDSS-IV data, plotted onto one figure, excluding redshift, which is shown separately for clarity (figures 2a and 2b). We also have a look at the number of each of the individual objects in our data (figure <ref type="figure" target="#fig_4">2c</ref>): we can notice a significant dominance of galaxies. Quasars and stars are similar in number, with a small margin for stars.</p><p>We split our data into train and test sets with a ratio of 0.2. After running the calculations mentioned in the chapter before, we get:  • For KNN we get 96.465% accuracy, which was best for a number of neighbours equal to 3, as shown in figure <ref type="figure" target="#fig_5">3</ref>, with the confusion matrix in figure 4a. 
• The decision tree achieved 96.78% accuracy (confusion matrix in figure <ref type="figure" target="#fig_6">4b</ref>).</p><p>• Naive Bayes achieved the lowest accuracy of 92.11% (confusion matrix in figure <ref type="figure" target="#fig_6">4c</ref>).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>The behaviour of KNN accuracy was, as expected, decreasing with respect to its constant. On the other hand, all the analysed algorithms achieved good accuracy (above 90%). The Bayes algorithm turned out to have some problems distinguishing between galaxies and quasars (almost 1000 wrongly classified galaxies), although the two other algorithms also struggled there. KNN seems to deal best with this problem, recognising ever so slightly more QSO objects than the others, but it has more mismatches, recognising some of the galaxies as stars. None of the algorithms had any problems recognising stars and rarely ever mismatched them.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Algorithms 2 and 3</head><label>2, 3</label><figDesc>Algorithm 2: Decision Tree Algorithm. Data: Training dataset x_t, set of attributes A, class attribute y_t. Result: Decision tree. 1 begin 2 Create a root node t; 3 if all instances in x_t belong to the same class y then 4 Label t as leaf node with class y; 5 else 6 if A is empty then 7 Label t as leaf node with majority class in x_t; 8 Choose attribute a from A that best classifies instances in D; 9 Label node t as attribute a; 10 Remove a from A; 11 for each value v of a do 12 Add a branch to t corresponding to v; 13 Let x be the subset of instances in x_t with value v for attribute a; 14 if x is empty then 15 Label the corresponding branch with the majority class in x_t; 16 else label the corresponding branch using Decision Tree Algorithm(x, A, y_t); 17 Return Decision tree; where: $P(C_k \mid x)$ is the posterior probability of class $C_k$ given feature vector $x$, $P(x \mid C_k)$ is the likelihood of feature vector $x$ given class $C_k$, $P(C_k)$ is the prior probability of class $C_k$, and $P(x)$ is the evidence or marginal likelihood of feature vector $x$.
The "naive" assumption is that the features are conditionally independent given the class label: $P(x \mid C_k) = \prod_{j=1}^{m} P(x_j \mid C_k)$. The goal is to predict the class label $\hat{y}$ for a new instance $x$ by maximizing the posterior probability: $\hat{y} = \arg\max_{k} P(C_k \mid x)$. Using Bayes' Theorem and the naive assumption, we can write: $\hat{y} = \arg\max_{k} P(C_k) \prod_{j=1}^{m} P(x_j \mid C_k)$. The probabilities $P(C_k)$ and $P(x_j \mid C_k)$ need to be estimated from the training data; the prior probability of class $C_k$ is estimated as $P(C_k) = N_k / N$, where $N_k$ is the number of instances in class $C_k$. For continuous features, a common approach is to assume a Gaussian distribution: $P(x_j \mid C_k) = \frac{1}{\sqrt{2\pi\sigma_{jk}^2}} \exp\!\left(-\frac{(x_j - \mu_{jk})^2}{2\sigma_{jk}^2}\right)$, where $\mu_{jk}$ and $\sigma_{jk}^2$ are the mean and variance of the feature $x_j$ for class $C_k$. Algorithm 3: Naive Bayes Algorithm. Data: Training dataset x_t, class attribute y_t. Result: Classifier model. 1 begin 2 for each class y in y_t do 3 Calculate prior probability P(y); 4 for each attribute a do 5 Calculate conditional probability P(a | y); 6 return Classifier model;</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>Correlation matrix for SDSS-IV data. (b) Final correlation matrix for SDSS-IV data.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Comparison of correlation matrices for SDSS-IV data.</figDesc><graphic coords="6,97.95,157.20,199.00,137.15" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>delta = Declination angle (at J2000 epoch) • u = Ultraviolet filter in the photometric system • g = Green filter in the photometric system • r = Red filter in the photometric system • i = Near Infrared filter in the photometric system • z = Infrared filter in the photometric system • run_ID = Run Number used to identify the specific scan • rerun_ID = Rerun Number to specify how the image was processed • cam_col = Camera column to identify the scanline within the run • field_ID = Field number to identify each field • spec_obj_ID = Unique ID used for optical spectroscopic objects (this means that 2 different observations with the same spec_obj_ID must share the output class) • class = object class (galaxy, star or quasar object) • redshift = redshift value based on the increase in wavelength • plate = plate ID, identifies each plate in SDSS • MJD = Modified Julian Date, used to indicate when a given piece of SDSS data was taken • fiber_ID = fiber ID that identifies the fiber that pointed the light at the focal plane in each observation (a) Histograms for all normalized data, excluding redshift. (b) Histogram for normalised redshift. (c) Number of different objects (0 -galaxies, 1 -QSOs, 2 -stars).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Various histograms showing data distributions and object counts.</figDesc><graphic coords="7,204.20,337.20,188.75,142.55" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Accuracy of KNN.</figDesc><graphic coords="8,195.30,463.00,204.50,149.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Comparison of confusion matrices for KNN, ID3, and Naive Bayes Algorithms (0 -galaxies, 1 -QSOs, 2 -stars).</figDesc><graphic coords="9,205.05,286.90,184.10,149.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Algorithm 1: KNN Algorithm Data: Training data 𝑥_𝑡, training classes 𝑦_𝑡, class to be classified 𝑙𝑎𝑏𝑒𝑙_𝑠𝑒𝑎𝑟𝑐ℎ𝑒𝑑, test data 𝑥, algorithm constant 'k' 𝑛𝑒𝑖𝑔ℎ𝑏𝑜𝑢𝑟𝑠_𝑛𝑢𝑚𝑏𝑒𝑟 Result: Predictions 1 for each 𝑡𝑒𝑠𝑡𝑒𝑑 in 𝑥 do 2</head><label></label><figDesc>𝑑𝑖𝑠𝑡𝑎𝑛𝑐𝑒𝑠 ← distance between 𝑡𝑒𝑠𝑡𝑒𝑑 and each in 𝑥_𝑡;</figDesc><table><row><cell>3 4 5 6</cell><cell>𝑐𝑙𝑜𝑠𝑒𝑠𝑡_𝑖𝑛𝑑𝑒𝑥𝑒𝑠 ← indexes of 𝑛𝑒𝑖𝑔ℎ𝑏𝑜𝑢𝑟𝑠_𝑛𝑢𝑚𝑏𝑒𝑟 closest neighbours; 𝑙𝑎𝑏𝑒𝑙𝑠 ← classes of closest neighbours; 𝑟𝑒𝑠𝑢𝑙𝑡 ← dominating label in 𝑙𝑎𝑏𝑒𝑙𝑠; Add 𝑟𝑒𝑠𝑢𝑙𝑡 to prediction list;</cell></row><row><cell cols="2">7 Create data structure with predictions, by choosing indexes of the test data; return Predictions</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Algorithm 4 :</head><label>4</label><figDesc>Loop for finding the best constant for KNN</figDesc><table /><note>Data: Training data 𝑥_𝑡, training classes 𝑦_𝑡, class to be classified 𝑙𝑎𝑏𝑒𝑙_𝑠𝑒𝑎𝑟𝑐ℎ𝑒𝑑, test data 𝑥</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Result: Best constant 1 begin 2</head><label></label><figDesc>Feed KNN algorithm with 𝑥_𝑡 and 𝑦_𝑡 data.; Check accuracy 𝑛 and add it to the list 𝑁 .;</figDesc><table><row><cell>7</cell><cell>Increase KNN constant by 1.;</cell></row></table><note>3Set KNN constant as 1.; 4 while KNN constant is lower than significant number do 5 Classify 𝑥 using KNN.; 6 8 𝑛 𝑚𝑎𝑥 = index of biggest 𝑛 in 𝑁 .; 9 return 𝑛 𝑚𝑎𝑥</note></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Cnn architecture comparison for radio galaxy classification</title>
		<author>
			<persName><forename type="first">B</forename><surname>Becker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vaccari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Prescott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Grobler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Monthly Notices of the Royal Astronomical Society</title>
		<imprint>
			<biblScope unit="volume">503</biblScope>
			<biblScope unit="page" from="1828" to="1846" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Lstm and cnn application for core-collapse supernova search in gravitational wave real data</title>
		<author>
			<persName><forename type="first">A</forename><surname>Iess</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Cuoco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Morawski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Nicolaou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Lahav</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Astronomy &amp; Astrophysics</title>
		<imprint>
			<biblScope unit="volume">669</biblScope>
			<biblScope unit="page">A42</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Astronomy in the big data era</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">Z</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="DOI">10.5334/dsj-2015-011</idno>
	</analytic>
	<monogr>
		<title level="j">Data Science Journal</title>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Performance analysis and prediction for mobile internet-of-things (iot) networks: a cnn approach</title>
		<author>
			<persName><forename type="first">L</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">A</forename><surname>Gulliver</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Internet of Things Journal</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="13355" to="13366" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Fuzzy logic type-2 intelligent moisture control system</title>
		<author>
			<persName><forename type="first">M</forename><surname>Woźniak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Szczotka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sikora</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zielonka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">238</biblScope>
			<biblScope unit="page">121581</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Strengthening the perception of the virtual worlds in a virtual reality environment</title>
		<author>
			<persName><forename type="first">D</forename><surname>Połap</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kęsik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Winnicka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Woźniak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ISA transactions</title>
		<imprint>
			<biblScope unit="volume">102</biblScope>
			<biblScope unit="page" from="397" to="406" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Soft trees with neural components as image-processing technique for archeological excavations</title>
		<author>
			<persName><forename type="first">M</forename><surname>Woźniak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Połap</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Personal and Ubiquitous Computing</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="363" to="375" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Naive bayes: applications, variations and vulnerabilities: a review of literature with code snippets for implementation</title>
		<author>
			<persName><forename type="first">I</forename><surname>Wickramasinghe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kalutarage</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Soft Computing</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="2277" to="2293" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Survey on exact knn queries over high-dimensional data space</title>
		<author>
			<persName><forename type="first">N</forename><surname>Ukey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page">629</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<ptr target="https://scikit-learn.org/stable/" />
		<title level="m">Package of scikit-learn</title>
				<imprint>
			<date type="published" when="2024-05-17">2024. 2024-05-17</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<ptr target="https://pandas.pydata.org/" />
		<title level="m">Pandas library</title>
				<imprint>
			<date type="published" when="2024-05-17">2024. 2024-05-17</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<ptr target="https://seaborn.pydata.org/" />
		<title level="m">Seaborn library</title>
				<imprint>
			<date type="published" when="2024-05-17">2024. 2024-05-17</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<ptr target="https://matplotlib.org/" />
		<title level="m">Matplotlib library</title>
				<imprint>
			<date type="published" when="2023">2023. 2024-05-17</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<ptr target="https://www.sdss4.org/dr17/" />
		<title level="m">Original source of data release 17 from sloan digital sky survey</title>
				<imprint>
			<date type="published" when="2022">2022. 2024-05-18</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Stellar classification dataset -sdss17</title>
		<author>
			<persName><surname>Fedesoriano</surname></persName>
		</author>
		<ptr target="https://www.kaggle.com/fedesoriano/stellar-classification-dataset-sdss17" />
		<imprint>
			<date type="published" when="2022-05-18">2022. May 18, 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<ptr target="https://www.skyatnightmagazine.com/space-science/infrared-astronomy" />
		<title level="m">Article about infrared imaging</title>
				<imprint>
			<date type="published" when="2024-05-18">2024. 2024-05-18</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
