<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Implementation of swarm intelligence methods for preprocessing in neuroevolution synthesis</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Serhii</forename><surname>Leoshchenko</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">National University &quot;Zaporizhzhia Polytechnic&quot;</orgName>
								<address>
									<addrLine>Zhukovskogo street 64</addrLine>
									<postCode>69063</postCode>
									<settlement>Zaporizhzhia</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrii</forename><surname>Oliinyk</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">National University &quot;Zaporizhzhia Polytechnic&quot;</orgName>
								<address>
									<addrLine>Zhukovskogo street 64</addrLine>
									<postCode>69063</postCode>
									<settlement>Zaporizhzhia</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sergey</forename><surname>Subbotin</surname></persName>
							<email>subbotin@zntu.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">National University &quot;Zaporizhzhia Polytechnic&quot;</orgName>
								<address>
									<addrLine>Zhukovskogo street 64</addrLine>
									<postCode>69063</postCode>
									<settlement>Zaporizhzhia</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tetiana</forename><surname>Kolpakova</surname></persName>
							<email>t.o.kolpakova@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">National University &quot;Zaporizhzhia Polytechnic&quot;</orgName>
								<address>
									<addrLine>Zhukovskogo street 64</addrLine>
									<postCode>69063</postCode>
									<settlement>Zaporizhzhia</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Implementation of swarm intelligence methods for preprocessing in neuroevolution synthesis</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">68FEAEB6012F54F6DC20BA61B0F125CB</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:22+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Preprocessing</term>
					<term>swarm intelligence</term>
					<term>neurosynthesis</term>
					<term>neuromodels</term>
					<term>machine learning</term>
					<term>artificial neural networks</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Using swarm intelligence techniques for data preprocessing over neuroevolution synthesis of artificial neural networks (ANNs) can provide a number of advantages. Therefore, swarm analysis techniques such as particle swarm optimization (PSO) or ant colony optimization (ACO) can effectively identify the most relevant traits from multidimensional data. This reduces the dimension of the input space, mitigating the Curse of dimension and improving the effectiveness of ANNs training. In addition, swarm analysis methods can filter out noisy or mismatched data points and detect outliers, increasing ANNs resistance to noisy input data and expanding generalization capabilities. In general, the introduction of swarm intelligence techniques for data preprocessing prior to neuroevolution ANNs synthesis results in improved model performance, reduced computational complexity, and improved generalization capabilities, making it an appropriate approach in machine learning tasks. This paper is proposed to consider the implementation of swarm intelligence methods for preprocessing to improve the performance of neuromodel synthesis.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Today, ANN-based neuromodels are widely used to automate and solve many processes and tasks of human activity (industry, medicine, operational processes, etc.). ANNs are computational models based on the structure and functioning of biological neurons in the human brain. They consist of interconnected nodes (neurons) organized in layers, with each neuron performing simple calculations and transmitting signals to connected neurons <ref type="bibr" target="#b0">[1]</ref>.</p><p>Let's look at how ANNs are used in diagnostic tasks. 1. Medical image analysis: Anns are used to analyze medical images such as X-rays, magnetic resonance imaging, computed tomography, and histopathological images. Convolutional neural networks (CNNs), a special type of deep ANNs designed for processing spatial data, are particularly effective in tasks such as tumor detection, organ segmentation, and pathology classification. CNNs can automatically study image features, providing accurate diagnostics and anomaly detection <ref type="bibr" target="#b0">[1]</ref>.</p><p>2. Disease diagnosis: Anns are used to diagnose various diseases by analyzing patient data such as symptoms, medical history, laboratory tests, and genetic information. Recurrent neural networks (RNNs) and long-term short-term memory networks (LSTMs), thanks to their structural features (the presence of feedback and gates, which allows to store the context/history of states), are able to process sequential data, used for tasks such as predicting disease progression, identifying risk factors, and diagnosing conditions such as heart disease, diabetes, and cancer <ref type="bibr" target="#b0">[1]</ref>.</p><p>3. Medical signal processing: Anns are used to process physiological signals such as an electrocardiogram (ECG), electroencephalogram (EEG), and electromyogram (EMG). 
They are used for tasks such as detecting arrhythmias, predicting seizures, classifying sleep stages, and analyzing muscle activity. Given that the input data are the results of clinical examinations, models that best represent time series are used, in particular RNNs and LSTM networks, which capture temporal dependencies and patterns in sequential data, making them suitable for analyzing medical signals <ref type="bibr" target="#b1">[2]</ref>.</p><p>Preprocessing refers to a set of methods applied to raw data before it is passed to a machine learning model such as an ANN. These methods aim to convert the data to a format better suited to the chosen training method and to improve the overall performance and effectiveness of the model. Preprocessing includes steps such as data cleaning, normalization, feature scaling, dimensionality reduction, and handling missing values or outliers <ref type="bibr" target="#b3">[4]</ref>.</p><p>Preprocessing is crucial for ANN synthesis and training for several reasons. 1. Normalization and scaling: ANNs often work better when input features are normalized or scaled to a similar range. Preprocessing techniques such as min-max scaling or Z-score normalization ensure that features contribute equally to the learning process, preventing some features from dominating it due to differences in scale <ref type="bibr" target="#b4">[5]</ref>.</p><p>Swarm intelligence is an area of research inspired by the collective behavior of decentralized, self-organizing systems in nature, such as ant colonies, flocks of birds, and schools of fish. Swarm intelligence techniques aim to solve complex problems by coordinating the actions of multiple agents, each following simple rules, without the need for centralized control or global knowledge.</p><p>Some key characteristics of swarm intelligence include <ref type="bibr" target="#b5">[6]</ref>. 
1. Self-organization: swarm-based systems exhibit emergent behavior, in which global patterns and solutions arise from local interactions between individual agents without explicit coordination.</p><p>2. Distributed decision-making: each agent makes decisions based on local information and interaction with nearby neighbors, rather than relying on centralized control or external leadership.</p><p>3. Adaptation and robustness: swarm systems are often adaptive and resistant to environmental changes or disturbances; they can self-adjust and reorganize in response to dynamic conditions, providing stability and flexibility.</p><p>4. Exploration and exploitation: swarm intelligence techniques combine exploration of the search space to identify new solutions with exploitation of known solutions to optimize performance effectively.</p><p>Swarm intelligence techniques can also be applied to neurosynthesis, that is, the computer-aided design or synthesis of ANNs using optimization techniques. These methods can help explore the wide space of possible neural network architectures, optimize network parameters, and improve network performance. Some uses of swarm intelligence techniques for neurosynthesis include <ref type="bibr" target="#b6">[7]</ref>, <ref type="bibr" target="#b7">[8]</ref>:</p><p>
Architecture search: swarm intelligence techniques such as particle swarm optimization (PSO), ant colony optimization (ACO), or genetic algorithms (GAs) can be used to find optimal neural network architectures by exploring the space of possible configurations, including the number of layers, neuron types, connectivity patterns, and other hyperparameters.</p><p>Hyperparameter optimization: swarm intelligence techniques can optimize neural network hyperparameters, such as the learning rate, activation functions, regularization parameters, and network topology, to improve performance and generalization.</p><p>Training optimization: swarm intelligence algorithms can help optimize the learning process of neural networks by tuning parameters related to the optimization algorithm (such as gradient descent variants), convergence criteria, and data preprocessing techniques, resulting in faster convergence and improved performance.</p><p>Ensemble learning: swarm intelligence can be used to optimize the creation of ensemble models that train and combine multiple neural networks to improve prediction accuracy and reliability.</p><p>Overall, swarm intelligence techniques offer powerful neurosynthesis tools to automate the design and optimization of neural networks for a variety of applications, including pattern recognition, classification, regression, and control tasks <ref type="bibr" target="#b7">[8]</ref>.</p></div>
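The PSO variant mentioned above can be illustrated with a minimal sketch. All function names and parameter values here are our own illustrative choices, not taken from the cited works:

```python
import random

random.seed(1)  # for reproducibility of this illustrative run

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise `objective` over the box [-5, 5]^dim with a basic particle swarm."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if pbest_val[i] > val:
                pbest[i], pbest_val[i] = pos[i][:], val
                if gbest_val > val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimise the sphere function in three dimensions.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

The velocity update blends inertia with pulls toward each particle's personal best and the swarm's global best, which is exactly the exploration/exploitation balance discussed above.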
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Works</head><p>Let's analyze why preprocessing of input data is necessary for neuroevolution synthesis of analytical models based on ANN and what such processes are reduced to <ref type="bibr" target="#b8">[9]</ref>. Normalization and scaling. Preprocessing techniques such as normalization and scaling ensure that the input characteristics are at the same scale. This is important because neural networks can be sensitive to the magnitude of input values. Data normalization prevents certain functions from dominating the training process solely because of their scale, resulting in more stable and effective learning <ref type="bibr" target="#b8">[9]</ref>.</p><p>Filling in missing values and monitoring emissions. Preprocessing allows you to process missing values and outliers in input data. Missing values can be calculated using methods such as calculating the Mean, median calculation, or interpolation. Outliers, if left untreated, can negatively affect the learning process, distorting the studied model parameters. Preprocessing techniques such as outlier detection and removal ensure that the neural network learns from clean and reliable data, resulting in improved performance <ref type="bibr" target="#b8">[9]</ref>.</p><p>Data reduction and feature selection. Preprocessing allows you to develop and select objects, while removing irrelevant or redundant objects and creating new informative objects. This process helps reduce the size of the input space by focusing the neural network's attention on the most relevant functions and improving its ability to generalize invisible data <ref type="bibr" target="#b9">[10]</ref>.</p><p>Data balancing. In classification problems with unbalanced class distribution, preprocessing methods can balance a data set by over-sampling minority classes, under-sampling majority classes, or using more advanced methods such as the Synthetic Minority over-sampling Technique (SMOTE). 
Data set balancing ensures that the neural network is trained on a representative set of samples from each class, preventing it from being biased toward the majority class <ref type="bibr" target="#b9">[10]</ref>.</p><p>Noise reduction and improved data quality. Preprocessing techniques can help filter out noise and improve the overall quality of the input data. This is especially important for real-world data sets, which may contain irrelevant or erroneous information. Noise removal ensures that the neural network focuses on the underlying data patterns, resulting in more accurate and reliable predictions <ref type="bibr" target="#b9">[10]</ref>.</p><p>Overall, input preprocessing plays a crucial role in neuroevolution synthesis, ensuring that the neural network learns from high-quality, well-structured data. This contributes to more efficient learning, faster convergence, and better generalization performance, which ultimately leads to more reliable and accurate neural network models <ref type="bibr" target="#b10">[11]</ref>.</p><p>Considering the practical implementation of input preprocessing, it should also be noted that such processing can save time during neurosynthesis in the following ways.</p><p>Preprocessing techniques such as feature selection or extraction can reduce the dimensionality of the input data by eliminating non-essential or redundant features. With fewer input parameters, the neuroevolution process requires less computational resources and time to explore the reduced feature space, which leads to faster processing <ref type="bibr" target="#b10">[11]</ref>.</p><p>Normalizing or scaling the input data ensures that all features are on the same scale, preventing certain features from dominating the learning process based solely on their magnitude. 
Normalized data contribute to more efficient learning convergence, reducing the time required for the neuroevolution synthesis process <ref type="bibr" target="#b10">[11]</ref>.</p><p>Preprocessing methods for handling missing values and outliers ensure that the input data are clean and reliable. By removing or imputing missing values and outliers, the neuroevolution process can focus on learning from high-quality data, resulting in faster convergence and more efficient use of computational resources.</p><p>In general, preprocessing of input data before neuroevolution synthesis helps optimize the learning process, increases the efficiency of model training, and reduces the overall processing time required to achieve the desired results <ref type="bibr" target="#b11">[12]</ref>.</p><p>Let us take a look at some of the most popular swarm intelligence methods. PSO method:</p><p>• PSO is inspired by the social behavior of flocks of birds or schools of fish; • each candidate solution (particle) in the search space adjusts its position based on its own experience and the best-known position of its neighbors; • PSO is relatively easy to implement and can efficiently search high-dimensional spaces; • the method tends to approach local optima quickly, but may have difficulty escaping them in multi-modal or deceptive landscapes. 
ACO method <ref type="bibr" target="#b12">[13]</ref>, <ref type="bibr" target="#b13">[14]</ref>:</p><p>• ACO is inspired by the foraging behavior of ants; • artificial ants leave pheromone trails along the edges of the graph representing the problem space, and the intensity of the pheromone trails reflects the desirability of a path; • ACO excels at combinatorial optimization problems and is particularly effective for problems with discrete and non-differentiable objective functions; • the method requires careful parameter tuning and may suffer from slow convergence, especially in large and complex problem spaces.</p><p>Firefly algorithm (FA) <ref type="bibr" target="#b12">[13]</ref>, <ref type="bibr" target="#b13">[14]</ref>:</p><p>• FA is inspired by the flashing behavior of fireflies, in which brighter fireflies attract others; • the method optimizes a set of solutions by iteratively moving less bright solutions toward brighter ones in the search space; • FA is easy to implement and requires no gradient information, making it suitable for optimization tasks with complex and multi-modal landscapes; • however, in some cases FA can suffer from slow and premature convergence, especially in multidimensional and nonlinear optimization problems.</p><p>Each of these swarm methods has its own strengths and weaknesses, which makes them suitable for different types of optimization problems and subject areas. The choice of method depends on factors such as problem complexity, search space characteristics, computational resources, and desired convergence properties <ref type="bibr" target="#b12">[13]</ref>, <ref type="bibr" target="#b13">[14]</ref>.</p><p>Let us compare the most well-known swarm intelligence methods according to the following criteria:</p><p>• basic idea: this criterion describes the natural phenomenon or behavior that inspired the development of each method. 
Understanding the underlying biological or natural process can provide insight into how a method works and its suitability for various optimization tasks <ref type="bibr" target="#b14">[15]</ref>, <ref type="bibr" target="#b15">[16]</ref>;</p><p>• objective function: refers to the type of optimization problems that each method is primarily designed to solve. Some methods are better suited to continuous optimization problems, while others excel at combinatorial or discrete optimization problems <ref type="bibr" target="#b14">[15]</ref>, <ref type="bibr" target="#b15">[16]</ref>; • population initialization: this criterion describes how the initial set of candidate solutions is generated in each method. Random initialization is common, but some methods may have specific initialization strategies <ref type="bibr" target="#b14">[15]</ref>, <ref type="bibr" target="#b15">[16]</ref>; • exploration vs. exploitation: reflects the balance between exploration (finding new areas of the solution space) and exploitation (refining known promising solutions). Different methods may lean toward exploration or exploitation to different degrees <ref type="bibr" target="#b16">[17]</ref>;</p><p>• communication mechanism: this criterion refers to how information is exchanged between individuals in a swarm. This may include mechanisms such as pheromone trails in ACO or brightness-based attraction in FA <ref type="bibr" target="#b16">[17]</ref>; • convergence properties: describes the speed and reliability with which a method arrives at a solution. For example, some methods converge quickly but risk getting stuck in local optima, while others converge more slowly but with better global optimization properties <ref type="bibr" target="#b16">[17]</ref>; • robustness to parameter settings: this criterion evaluates how sensitive the method is to the choice of metaparameters. 
Methods that are less sensitive can be easier to configure and more reliable across problem areas;</p><p>• implementation complexity: reflects the ease of implementation and computational complexity of each method. Methods with simpler implementations may require fewer computing resources;</p><p>• areas of application: this criterion identifies the types of optimization problems to which each method is usually applied. Understanding the typical application areas can help choose the most appropriate method for a particular task <ref type="bibr" target="#b17">[18]</ref>- <ref type="bibr" target="#b20">[21]</ref>. Table 1 shows the results of the comparison by these criteria. </p></div>
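The normalization and imputation steps discussed above can be made concrete with a small plain-Python sketch. This is illustrative only; production pipelines would normally rely on a library such as scikit-learn:

```python
def min_max_scale(column):
    """Rescale a numeric column to the [0, 1] range."""
    lo, hi = min(column), max(column)
    span = (hi - lo) or 1.0          # avoid division by zero for constant columns
    return [(v - lo) / span for v in column]

def z_score(column):
    """Standardize a column to zero mean and unit (population) standard deviation."""
    n = len(column)
    mean = sum(column) / n
    std = (sum((v - mean) ** 2 for v in column) / n) ** 0.5 or 1.0
    return [(v - mean) / std for v in column]

def impute_mean(column):
    """Replace None (missing) entries with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

raw = [3.0, None, 7.0, 5.0]
clean = impute_mean(raw)             # [3.0, 5.0, 7.0, 5.0]
scaled = min_max_scale(clean)        # [0.0, 0.5, 1.0, 0.5]
```

Either scaling keeps any single feature from dominating the training process purely because of its magnitude, which is the point made in the comparison above.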
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Continuous optimization problems</head><p>From the results of the comparison, we can conclude that the FAmethod is the most universal in order to use it for preprocessing input data in the future.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Main features of applying the FA method</head><p>The FA method is a nature-inspired optimization method developed by Xin-she Yang in 2008. It is inspired by the blinking behavior of fireflies, where fireflies use bioluminescence to attract partners or prey. FA is a metaheuristic algorithm used to solve optimization problems in various fields <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b19">[20]</ref>.</p><p>The Firefly method was proposed by Xin-she Yang in 2008.</p><p>The method was developed as a population-based optimization method inspired by the blinking behavior of fireflies to solve optimization problems more efficiently.</p><p>Since its introduction, the FA method has gained popularity and has been applied to various optimization problems in Engineering, Computer Science, and other fields.</p><p>The main idea of the Firefly algorithm is to simulate the blinking behavior of fireflies to find optimal solutions to the optimization problem. The method works on the basis of the following principles <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b19">[20]</ref>: attraction: fireflies are attracted to each other depending on the brightness of their flashes. Similarly, in FA, the attractiveness of a firefly (solution) is determined by its suitability, with brighter solutions representing better solutions; moving towards brighter solutions: Fireflies tend to move towards brighter fireflies nearby. 
In FA, each firefly (solution) adjusts its position in the search space by moving towards brighter solutions, and the intensity of attraction is determined by the difference in brightness and the distance between the fireflies <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b19">[20]</ref>; randomization and exploration: to help explore the search space, FA includes randomization by adding a random component to firefly movement. This ensures that the method does not get stuck in local optima and can explore different areas of the solution space; global optimization: FA aims to find a globally optimal solution by iteratively updating firefly positions based on their attractiveness and the distances between them. The method converges when the firefly positions no longer change significantly or a predefined termination criterion is met.</p><p>Overall, the FA method is a simple but effective optimization technique inspired by firefly behavior. It is able to solve a wide range of optimization problems effectively and is successfully used in various fields due to its simplicity and efficiency <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b19">[20]</ref>.</p><p>Now we can examine the details of firefly optimization more closely. The essence of the method is shown in Fig. <ref type="figure" target="#fig_0">1</ref>. </p></div>
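The attraction, brightness, and randomization principles above correspond to Yang's standard position update. A minimal sketch for a continuous minimisation problem (parameter values are our own arbitrary choices):

```python
import math, random

def firefly(objective, dim, n=15, iters=60, beta0=1.0, gamma=0.1, alpha=0.2, seed=3):
    """Minimal continuous Firefly Algorithm (minimisation)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    f = [objective(x) for x in X]                 # lower cost = "brighter"
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[i] > f[j]:                   # j is brighter, so i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
                    for d in range(dim):
                        # attraction toward j plus a small random exploration term
                        X[i][d] += beta * (X[j][d] - X[i][d]) + alpha * (rng.random() - 0.5)
                    f[i] = objective(X[i])
    best = min(range(n), key=lambda k: f[k])
    return X[best], f[best]

# Minimise the sphere function in two dimensions.
x_best, f_best = firefly(lambda x: sum(v * v for v in x), dim=2)
```

The exponential decay of attractiveness with squared distance is what makes visibility fall off with distance, as Fig. 1 illustrates.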
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Proposed method</head><p>Levy's flight function can be incorporated into the Firefly Algorithm (FA) to enhance its exploration capability and improve its convergence speed. Levy's flight is a random walk process described by Levy distribution, which exhibits heavy-tailed behavior and allows for long-distance jumps in the search space. By incorporating Levy's flight function, the fireflies in the FA can explore the search space more effectively, leading to better global optimization performance <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b19">[20]</ref>.</p><p>Here's how Levy's flight function can be integrated into the Firefly Algorithm The results of the work present at Table <ref type="table" target="#tab_4">4</ref> </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Discussions of results</head><p>Analyzing the results obtained, it is worth noting that using any method for data preprocessing significantly reduces the time of subsequent synthesis. However, it is the use of FA that demonstrates the greatest optimization of synthesis time. This can be explained precisely by better selection of informative features. After all, if features can be shortened differently during preprocessing by different methods, the FA method helps to better track hidden relationships between data, and therefore significantly simplify the process of forming structural relationships between really dependent features.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>At the paper examines the using swarm intelligence techniques for data preprocessing over neuroevolution synthesis of ANN can provide a number of advantages. Therefore, swarm analysis techniques such as particle swarm optimization or ant colony optimization can effectively identify the most relevant traits from multidimensional data. This reduces the dimension of the input space, mitigating the Curse of dimension and improving the effectiveness of ANNs training. In addition, swarm analysis methods can filter out noisy or mismatched data points and detect outliers, increasing ANNs resistance to noisy input data and expanding generalization capabilities.</p><p>The results of experiments prove that the new concept has quite acceptable (certifying) indicators of time use during the synthesis of neural networks.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Fireflies in the search space. Visibility decreases with increasing distance</figDesc><graphic coords="7,217.60,211.20,159.80,159.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>[22]-[25]: 1. Step 1: Levy Flight Generation: • at each iteration of the algorithm, Levy flights are generated for each firefly to determine their movement direction and distance; • Levy flights are generated using Levy's flight function, which produces step sizes according to the Levy distribution. The step sizes are typically drawn from a Levy distribution with a specified scale parameter. 2. Movement of Fireflies: • the fireflies adjust their positions based on the generated Levy flights. Each firefly moves a distance determined by Levy's flight function in a random direction; • the movement of fireflies is guided by their attractiveness, with brighter fireflies attracting others more strongly. Fireflies move towards brighter individuals while incorporating Levy flights for exploration. 3. Exploration and Exploitation: • Levy flights facilitate exploration by allowing fireflies to make long-distance jumps in the search space, enabling the algorithm to escape local optima and explore new regions; • at the same time, the attractiveness mechanism of the Firefly Algorithm ensures that fireflies tend to converge towards brighter solutions, promoting exploitation of promising regions of the search space. 4. Update Positions and Iteration:</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>. Drug discovery and development: Anns are used in drug discovery and development processes, including virtual screening, molecular modeling, and drug toxicity prediction. They are used to analyze chemical structures, predict drug-target interactions, and develop new compounds with desired pharmacological properties. ANN-based models help speed up the drug search process and reduce the time and cost associated with developing new drugs<ref type="bibr" target="#b1">[2]</ref>. 5. Predictive and predictive modeling: ANNs are used to build predictive and predictive models in healthcare that help clinicians make informed decisions about patient management and treatment strategies. These models predict outcomes such as disease progression, treatment response, and patient survival based on clinical data, biomarkers, and other relevant factors<ref type="bibr" target="#b2">[3]</ref>. Overall, ANNs are versatile diagnostic tools that offer the ability to analyze different types of data, extract meaningful patterns, and make accurate predictions, thereby improving medical decisionmaking and patient care.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head></head><label></label><figDesc>Adaptation and reliability: swarm intelligent systems are often adaptive and resistant to environmental changes or disturbances. They can self-adjust environments and reorganize in response to dynamic conditions, providing stability and flexibility. 4. Research and Operation: Swarm Intelligence techniques combine exploration of the search space to identify new solutions and use known solutions to effectively optimize performance.</figDesc><table><row><cell>1. Self-organization: swarm-based intelligence systems exhibit emerging behavior where global</cell></row><row><cell>patterns and solutions arise from local interactions between individual agents without explicit</cell></row><row><cell>coordination.</cell></row><row><cell>2. Swarm intelligence relies on distributed decision-making, where each agent makes decisions</cell></row><row><cell>based on local information and interaction with nearby neighbors, rather than relying on</cell></row><row><cell>centralized control or external leadership.</cell></row><row><cell>3.</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 1</head><label>1</label><figDesc>Results of comparing PSO, ACO and FA</figDesc><table><row><cell>Criteria</cell><cell>PSO</cell><cell>ACO</cell><cell>FA</cell></row><row><cell>Nature Inspiration</cell><cell>Social behavior of bird flocks or fish schools</cell><cell>Foraging behavior of ants</cell><cell>Flashing behavior of fireflies</cell></row><row><cell>Objective Function</cell><cell>Continuous optimization problems</cell><cell>Combinatorial optimization problems</cell><cell>Continuous optimization problems</cell></row><row><cell>Population Initialization</cell><cell>Random initialization</cell><cell>Random initialization</cell><cell>Random initialization</cell></row><row><cell>Exploration vs. Exploitation</cell><cell>Balanced exploration and exploitation</cell><cell>Exploration focused</cell><cell>Exploration focused</cell></row><row><cell>Communication</cell><cell>No direct</cell><cell>Pheromone-based</cell><cell>Attraction based on</cell></row><row><cell>Mechanism</cell><cell>communication</cell><cell>communication</cell><cell>brightness</cell></row><row><cell>Convergence Properties</cell><cell>May converge quickly to local optima</cell><cell>Slow convergence</cell><cell>May suffer from slow convergence</cell></row><row><cell>Robustness to Parameter Settings</cell><cell>Moderately sensitive</cell><cell>Sensitive</cell><cell>Moderately sensitive</cell></row><row><cell>Implementation Complexity</cell><cell>Relatively simple to implement</cell><cell>Moderate complexity</cell><cell>Relatively simple to implement</cell></row><row><cell>Application Domains</cell><cell>Continuous optimization problems</cell><cell>Combinatorial optimization problems</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 4</head><label>4</label><figDesc>Results of the proposed method</figDesc><table><row><cell>Datasets</cell><cell>ANN time synthesis (without feature selection)</cell><cell>ANN time synthesis (using PSO)</cell><cell>ANN time synthesis (using ACO)</cell><cell>ANN time synthesis (using FA)</cell><cell>Error on test sample</cell></row><row><cell>RT-IoT2022</cell><cell>8456 s</cell><cell>7225 s</cell><cell>6935 s</cell><cell>5689 s</cell><cell>0.91</cell></row><row><cell>VGCD</cell><cell>9555 s</cell><cell>8123 s</cell><cell>7456 s</cell><cell>6126 s</cell><cell>0.924</cell></row><row><cell>IOEP</cell><cell>9832 s</cell><cell>8526 s</cell><cell>7626 s</cell><cell>6751 s</cell><cell>0.942</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Acknowledgements</head><p>The work was carried out with the support of the state budget research project of the National University "Zaporizhzhia Polytechnic" "Intelligent information processing methods and tools for decision-making in the military and civilian industries" (state registration number 0124U000250).</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>• after adjusting their positions based on Lévy flights and attractiveness, the positions of the fireflies are updated, and the algorithm proceeds to the next iteration; • the process continues until a termination criterion is met, such as a maximum number of iterations or convergence of the solutions. By incorporating the Lévy flight function into the Firefly Algorithm, the algorithm gains enhanced exploration capabilities, enabling it to search efficiently for global optima in complex optimization problems. This modification can lead to improved convergence speed and better overall performance of the algorithm across various domains <ref type="bibr" target="#b21">[22]</ref>-[25].</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experimental research</head><p>For the experimental research and for comparing the proposed method with other approaches, the following datasets were used as training and testing data:</p><p>• RT-IoT2022 <ref type="bibr" target="#b24">[26]</ref>, <ref type="bibr" target="#b15">[16]</ref>. RT-IoT2022 is a dataset built on a real-time Internet of Things infrastructure, combining a wide range of IoT devices with a range of network attacks under real-time conditions. The sample covers both normal and hostile network behaviors, modeling real-world scenarios; • Visegrad Group companies data (VGCD) [27], <ref type="bibr" target="#b15">[16]</ref>. This dataset contains information about companies from the Visegrad Group countries: the Czech Republic, Hungary, Poland, and Slovakia. The Visegrad Group is a cultural and political alliance of these four Central European countries; • Influenza Outbreak Event Prediction via Twitter (IOEP) <ref type="bibr">[28]</ref>. This dataset aims to predict influenza outbreaks using Twitter data. General information about the datasets is presented in Table <ref type="table">2</ref>. The meta-parameters for neuroevolution synthesis of the models are given in Table <ref type="table">3</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 3</head><p>The </p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">X.-S</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Karamanoglu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Cui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">H</forename><surname>Gandomi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Xiao</surname></persName>
		</author>
		<title level="m">Swarm Intelligence and Bio-Inspired Computation: Theory and Applications</title>
				<imprint>
			<publisher>Elsevier</publisher>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Particle Swarm Optimization: A Comprehensive Survey</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">M</forename><surname>Shami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>El-Saleh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Alswaitti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Al-Tashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Summakieh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mirjalili</surname></persName>
		</author>
		<idno type="DOI">10.1109/access.2022.3142859</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="10031" to="10061" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">F</forename><surname>Marini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Walczak</surname></persName>
		</author>
		<idno type="DOI">10.1016/b978-0-12-409547-2.14581-0</idno>
		<title level="m">Particle Swarm Optimization, Comprehensive Chemometrics</title>
				<imprint>
			<publisher>Elsevier</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="649" to="666" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Application of PSO in Distribution Power Systems: Operation and Planning Optimization, Applying Particle Swarm Optimization</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Gkaidatzis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Bouhouras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Labridis</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-70281-6_17</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>Springer International Publishing</publisher>
			<biblScope unit="page" from="321" to="351" />
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Ant Colony Optimization, Nature-Inspired Optimization Algorithms</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vasuki</surname></persName>
		</author>
		<idno type="DOI">10.1201/9780429289071-8</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
			<publisher>Chapman and Hall/CRC</publisher>
			<biblScope unit="page" from="99" to="114" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Why the Firefly Algorithm Works?</title>
		<author>
			<persName><forename type="first">X.-S</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X.-S</forename><surname>He</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-67669-2_11</idno>
	</analytic>
	<monogr>
		<title level="m">Nature-Inspired Algorithms and Applied Optimization</title>
				<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="245" to="259" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<ptr target="https://towardsdatascience.com/ant-colony-optimization-intuition-code-visualization-9412c369be81" />
		<title level="m">Ant Colony Optimization – Intuition, Code &amp; Visualization</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<ptr target="https://medium.com/@melika.akz97/growth-optimizer-go-dd6da47a4e93" />
		<title level="m">Growth Optimizer (GO)</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<ptr target="https://python.plainenglish.io/wolf-search-algorithm-wsa-harnessing-the-social-and-hunting-strategies-of-wolves-for-9de4d1fc19c6" />
		<title level="m">Wolf Search Algorithm (WSA): Harnessing the Social and Hunting Strategies of Wolves for Optimization</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Machine Learning with Particle Swarm Optimization by Sebastian Raschka</title>
		<ptr target="https://sebastianraschka.com/" />
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Differential evolution: A recent review based on state-of-the-art works</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">F</forename><surname>Ahmad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">A M</forename><surname>Isa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">H</forename><surname>Lim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Ang</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.aej.2021.09.013</idno>
	</analytic>
	<monogr>
		<title level="j">Alex. Eng. J</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">K</forename><surname>Pandey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ram</surname></persName>
		</author>
		<idno type="DOI">10.1201/9781003090038-5</idno>
		<title level="m">Swarm Intelligence Applications in Artificial Neural Networks, Swarm Intelligence</title>
				<meeting><address><addrLine>Boca Raton</addrLine></address></meeting>
		<imprint>
			<publisher>CRC Press</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="99" to="122" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Swarm Intelligence-based Framework for Image Segmentation of Knee MRI Images for Detection of Bone Cancer, Swarm Intelligence and Machine Learning</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Anand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Tamilselvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Kamatchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Dey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Sujatha</surname></persName>
		</author>
		<idno type="DOI">10.1201/9781003240037-6</idno>
		<imprint>
			<date type="published" when="2022">2022</date>
			<publisher>CRC Press</publisher>
			<biblScope unit="page" from="95" to="115" />
			<pubPlace>Boca Raton</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Structure of Particle Swarm Optimization (PSO)</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ehteram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Seifi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">B</forename><surname>Banadkooki</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-981-19-9733-4_2</idno>
	</analytic>
	<monogr>
		<title level="m">Application of Machine Learning Models in Agricultural and Meteorological Sciences</title>
				<meeting><address><addrLine>Singapore</addrLine></address></meeting>
		<imprint>
			<publisher>Springer Nature</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="23" to="32" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Ant Colony Optimization</title>
		<idno type="DOI">10.1142/9789811205675_0007</idno>
	</analytic>
	<monogr>
		<title level="m">Handbook of Machine Learning</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="123" to="141" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">K</forename><surname>Pandey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ram</surname></persName>
		</author>
		<idno type="DOI">10.1201/9781003090038-5</idno>
		<title level="m">Swarm Intelligence Applications in Artificial Neural Networks, Swarm Intelligence</title>
				<meeting><address><addrLine>Boca Raton</addrLine></address></meeting>
		<imprint>
			<publisher>CRC Press</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="99" to="122" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Comparative Study of Image Resolution Techniques in the Detection of Cancer Using Neural Networks</title>
		<author>
			<persName><forename type="first">O</forename><surname>Nagaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">W</forename><surname>Pillay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Jembere</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-49002-6_13</idno>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence Research</title>
		<imprint>
			<biblScope unit="page" from="187" to="202" />
			<date type="published" when="2023">2023</date>
			<publisher>Springer Nature</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Firefly Algorithm and Its Applications in Engineering Optimization</title>
		<author>
			<persName><forename type="first">D</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">G R</forename><surname>Gandhi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">K</forename><surname>Bhattacharjya</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-26458-1_6</idno>
	</analytic>
	<monogr>
		<title level="m">Nature-Inspired Methods for Metaheuristics Optimization</title>
				<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="93" to="103" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Crowding Differential Evolution for Protein Structure Prediction, From Bioinspired Systems and Biomedical Applications to Machine Learning</title>
		<author>
			<persName><forename type="first">D</forename><surname>Varela</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Santos</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-19651-6_19</idno>
		<imprint>
			<date type="published" when="2019">2019</date>
			<publisher>Springer International Publishing</publisher>
			<biblScope unit="page" from="193" to="203" />
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">An Improved Firefly Algorithm for Feature Selection in Classification</title>
		<author>
			<persName><forename type="first">H</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zuo</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11277-018-5309-1</idno>
	</analytic>
	<monogr>
		<title level="j">Wirel. Pers. Commun</title>
		<imprint>
			<biblScope unit="volume">102</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="2823" to="2834" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Quantum Neural Network Classifiers: A Tutorial</title>
		<author>
			<persName><forename type="first">W</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z.-D</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D.-L</forename><surname>Deng</surname></persName>
		</author>
		<idno type="DOI">10.21468/scipostphyslectnotes.61</idno>
	</analytic>
	<monogr>
		<title level="s">SciPost Phys. Lect. Notes</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">The PSO Family: Application to the Portfolio Optimization Problem, Applying Particle Swarm Optimization</title>
		<author>
			<persName><forename type="first">L</forename><surname>Fernández-Brillet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Álvarez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Fernández-Martínez</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-70281-6_7</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>Springer International Publishing</publisher>
			<biblScope unit="page" from="111" to="132" />
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Training RBF Neural Network with Hybrid Particle Swarm Optimization</title>
		<author>
			<persName><forename type="first">H</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Feng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhu</surname></persName>
		</author>
		<idno type="DOI">10.1007/11759966_86</idno>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Networks -ISNN 2006</title>
				<meeting><address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="577" to="583" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Research on Ant Colony Algorithm Optimization Neural Network Weights Blind Equalization Algorithm</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Geng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
		<idno type="DOI">10.14257/ijsia.2016.10.2.09</idno>
	</analytic>
	<monogr>
		<title level="j">Int. J. Secur. Its Appl</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="95" to="104" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<title level="m" type="main">Visegrad Group companies data</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<title level="m" type="main">Influenza Outbreak Event Prediction via Twitter</title>
		<ptr target="https://archive.ics.uci.edu/dataset/861/influenza+outbreak+event+prediction+via+twitter" />
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
