Applying the Neural Network Technologies for Cyclohexane Industrial Oxidation Data Analysis

Sergiy Zagorodnyuk a, Bohdan Sus a, Oleksandr Bauzha a and Taras Chaikivskyi b
a Taras Shevchenko National University of Kyiv, Kyiv 01033, Ukraine
b Lviv Polytechnic National University, Bandera Str. 12, Lviv, 79013, Ukraine

Abstract
An algorithm for the digital processing of data from chemical reactions of liquid-phase oxidation of substances is developed. Such reactions make it possible to obtain valuable oxygen-containing compounds on an industrial scale. The intelligent scheme of the algorithm is based on a multilayer neural network. Qualitative modeling of the influence of the initial concentrations of the reagents on the dynamics and the outcome of the catalytic reaction is carried out. The simulation results were analyzed with an artificial neural network for various catalysts and reagents, including cyclohexane hydroperoxide, cyclohexanol, cyclohexanone, acids and esters. After training, the neural network reproduces with high accuracy the data from the sample of experiments used during training. It also predicts the results of the study over extended time ranges, as well as at higher concentrations of catalytic additives. The formulated and substantiated predictions allow experimenters to choose the most promising set of concentrations of active catalytic substances.

Keywords
Neural Network, Hydroperoxide, Activation Functions, Catalyst

1. Introduction

The study of oxidation processes involving organic compounds is a complex and multifaceted scientific task. The oxidation reaction produces many chemical compounds, so determining their chemical characteristics requires an analysis of the effect of different catalysts on the rate of formation of the reaction products and on the degree of utilization of the starting reagents. Since compounds that have already formed during the oxidation reaction can themselves act as catalysts, the reaction time, the oxidation temperature and the composition of the reaction products are often difficult to predict. Various mathematical and physical methods are used to process and structure the original data and to predict the result. In this article, artificial neural networks are proposed for solving this problem. At present, the scope of neural networks covers many areas of science and technology. Widespread use of neural networks has been demonstrated in medicine [1-4], biology [5, 6], chemistry [7], physics [8], energy [9], engineering [10, 11] and environmental protection [12, 13]. Machine learning is a powerful tool for solving complex multidimensional problems where the answers are not obvious and cannot be obtained by simple mathematical algorithms. Powerful artificial intelligence techniques are increasingly used to develop forecasting models in financial markets [14, 15], in agro-industrial forecasting [16] and in the oil industry [17]. Neural networks are also used in metrology, production control and logistics, and quality management, ensuring a high level of safety and efficiency [18]. Most processes of liquid-phase oxidation of hydrocarbons occur in the presence of homogeneous catalysts, which can be salts of metals with variable valence.
The influence of homogeneous catalysts on the liquid-phase oxidation of organic substances is connected both with the selective acceleration or deceleration of individual elementary reactions and with the generation of new sequences of chemical transformations. Variable-valence metals are involved in almost all elementary stages of the process - nucleation, propagation, degenerate branching and chain termination [19, 20] - which determines their influence on the rate and selectivity of the oxidation process and characterizes such metals as selective catalysts of the chainless process. The chemical reaction of liquid-phase homogeneous-catalytic oxidation of cyclohexane with molecular oxygen is used as an industrial method for obtaining oxygen-containing compounds with important specific properties. Such compounds include, in particular, cyclohexanol and cyclohexanone, which are intermediates in the production of synthetic fibers - nylon and nylon 6 [21]. Figure 1 shows the chemical scheme of the liquid-phase oxidation of cyclohexane to cyclohexanol and cyclohexanone. It can be seen from the scheme that cyclohexane, under the action of catalysts and oxygen, is converted into cyclohexyl hydroperoxide. The subsequent reaction of the mixture and the introduction of hydrogen ions lead to the formation of cyclohexanol (CHl) and cyclohexanone (CHn).

Figure 1: Chemical scheme of liquid-phase oxidation of cyclohexane to cyclohexanol and cyclohexanone.

The main oxidation products of cyclohexane are cyclohexyl hydroperoxide (HPCH) and adipic acid. At low conversions of cyclohexane, cyclohexane hydroperoxide decomposes by a monomolecular reaction or by interaction with the original hydrocarbon. Studies have shown, however, that with increasing conversion of the starting hydrocarbon and the appearance of a significant amount of oxygen-containing compounds, alternative ways of consuming cyclohexane hydroperoxide appear. Degenerate branching occurs due to the interaction of cyclohexane hydroperoxide with ketones, alcohols or acids [22]. Current catalyst systems demonstrate low efficiency and do not allow the oxidation of cyclohexane to proceed at high conversion of the raw material. Low conversion entails significant energy losses associated with processing the excess of unreacted raw material. Increasing the selectivity and conversion of the process by at least 1% (abs.) can significantly reduce the raw-material and energy costs. Therefore, the formulation of efficient catalytic systems for the homogeneous catalytic oxidation of cyclohexane is a topical issue. Hydrocarbon oxidation reactions proceed with greater selectivity by a chainless mechanism in the coordination sphere of a homogeneous catalyst.
The use of additives of different nature (electron-donor and electron-acceptor) to the cobalt naphthenate (NC) salt makes it possible to increase the selectivity of the process with respect to the target products and to change the ratio between them in the desired direction [19, 22, 23], which determines the direction of further use of such additives. The article analyzes the effect of crown ethers, which, according to literature sources, form ionic associations with metal ions and can therefore influence the dynamics of the oxidation reaction.

The scientific novelty of the research lies in the fact that, for the first time, a dedicated neural network was used to analyze these experimental data. This fully connected multilayer neural network has three hidden layers with 20 neurons each. As a result, the efficiency of determining the effect of crown-ether additives has increased significantly, owing to mathematical predictions of the ratio of oxidation products depending on the conversion of cyclohexane, the temperature and the ratio of cobalt naphthenate to the additives. It was demonstrated that the value of the dependent variable can be predicted more accurately if the forecast takes into account only the important features and basic characteristics that describe the chemical analysis.

2. Experimental data and neural network model

As a result of several experiments, a large amount of data related to the oxidation process of cyclohexane was obtained. Table 1 shows the experimental data on the oxidation of cyclohexane at T = 423 K using the catalyst NC and the catalytic systems NC - 15-KR-5 and NC - DBKR, where K is the percentage of cyclohexane that reacted.

Table 1
The composition of the products of catalytic oxidation of cyclohexane. [NC] = 5×10⁻⁴ mol/l, [NC]/[additive] = 5/1

Additive    t, min   K, %    Selectivity, %
                             HPCH    CHl     CHn     Acids   Esters
-           15       3.0     16.8    31.4    16.8    16.4    18.6
15-KR-5     10       2.8     12.9    27.8    18.6    17.9    22.8
DBKR        15       3.5     13.3    28.8    14.6    8.9     34.4
-           20       6.7     6.1     36.5    21.7    15.8    19.9
15-KR-5     20       7.9     4.3     35.7    23.0    17.2    19.8
DBKR        25       8.0     3.8     33.4    23.8    15.5    23.5
-           35       12.9    0.8     27.3    26.5    22.6    22.8
15-KR-5     35       12.9    1.6     26.5    25.5    20.9    25.5
DBKR        40       12.5    0.6     21.6    26.8    26.7    24.3

As can be seen from Table 1, the change in the concentration of the substances formed (as a result of the complex oxidation reactions of cyclohexane) does not follow a simple pattern of growth over time. Complex changes in the concentrations of the obtained substances occur when other parameters of the experiment change (catalyst concentration, additive concentration, temperature). Due to the complexity of the problem, it was decided to use artificial neural networks. The neural network was trained on a fragment of the obtained experimental data set. The inputs of the neural network include the initial concentrations of HPCH, CHl, CHn, acids and esters (inputs 1-5).
These inputs are fed the concentrations of the substances present at the start of the reaction; the values are conveniently normalized to the maximum concentration observed at any time in a real experiment. The sixth input of the neural network is the concentration of cyclohexane (the percentage of cyclohexane involved at the initial moment of the reaction). Inputs 7 and 8 are, respectively, the normalized concentrations of the additives 15-KR-5 and DBKR introduced into the industrial cobalt naphthenate (NC) catalyst. The maximum value of the additives at inputs 7-8 is 5×10⁻⁴ mol/l, which corresponds to a value of 1 at the respective input. Input 9 is the reaction time normalized to a maximum time of 100 minutes, which was not exceeded by the sampling time of the experimental data. Finally, input 10 is the temperature at which the reaction takes place. The experimental data do not have a large temperature variation, so most experiments were performed at an average temperature of T = 413 K. This value is encoded as 0.5, so that a change in temperature leads to small deviations from this value. In particular, the temperature T = 403 K is assigned a value of 0.4 at input 10, and the temperature T = 423 K a value of 0.6, respectively.

The concentration values of HPCH, CHl, CHn, acids and esters are the outputs of the system (outputs 1-5). These values correspond to concentrations measured during or at the end of the experiment. Output 6 reflects the concentration of cyclohexane, i.e. the percentage of unreacted cyclohexane.

Single-layer neural networks are unable to solve this problem because of the linear-separability limitation. Therefore, the natural solution was to use a multilayer, fully connected artificial neural network. A feed-forward artificial neural network with several hidden layers is capable of recognizing dependencies of arbitrary shape. However, if the activation functions are linear, a multilayer neural network can be reduced to an equivalent single-layer one; the formation of such structures therefore makes sense only when nonlinear activation functions are used in the neurons. The sigmoid (logistic) function [24] was chosen as the mathematically simplest of the nonlinear activation functions:

f(x) = \frac{1}{1 + e^{-x}}     (1)

Fully connected multilayer networks allow information to be transferred from each neuron of the previous layer to any neuron of the next one. The unidirectionality of the connections thus leads to exclusively hierarchical structures in which information processing is distributed over levels, and each level of hierarchical information processing corresponds to its own layer of neurons. The choice of the number of hidden layers and of the number of neurons in them is a matter of balance between the training speed of the neural network and the complexity of the effects that this network can describe (learn). In this paper we did not study the balance between training speed and sufficient model complexity; we took three hidden layers with 20 neurons each. Such a network is prone to overfitting; however, our task is to process the experimental data rather than to optimize the neural network for that task, and the over-parameterized model is used to extend the experimental sample.
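To make the architecture concrete, the following minimal sketch in Python/NumPy implements a forward pass through such a network. The layer sizes, the sigmoid of Eq. (1) and the input ordering follow the text; the function names, the weight-initialization range and the example input frame are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    # Logistic activation from Eq. (1): f(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

# Layer sizes taken from the text: 10 inputs, three hidden layers
# of 20 sigmoid neurons each, 6 outputs (normalized concentrations).
LAYER_SIZES = [10, 20, 20, 20, 6]

def init_weights(rng, sizes=LAYER_SIZES):
    # Random starting weights (illustrative range), one matrix per layer,
    # with an extra row per matrix for the bias input.
    return [rng.uniform(-0.5, 0.5, size=(n_in + 1, n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(weights, x):
    # Propagate one normalized 10-component input vector through the network.
    a = np.asarray(x, dtype=float)
    for W in weights:
        a = sigmoid(np.append(a, 1.0) @ W)  # append a constant bias input of 1
    return a  # 6 normalized output concentrations

# Hypothetical input frame: [HPCH, CHl, CHn, acids, esters, cyclohexane,
#                            15-KR-5, DBKR, time/100 min, temperature code]
x_example = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.2, 0.0, 0.15, 0.6]
rng = np.random.default_rng(0)
print(forward(init_weights(rng), x_example))
```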
Training from precedents is based on a sequence of samples that specify the desired values of the output vectors of the neural network for the corresponding input vectors. Supervised learning consists in modifying the weights of the neurons. The main criterion for evaluating the effectiveness of training is the objective function, i.e. the error of the output vector, which quantifies how closely the model output matches the ideal output for a given input sample. In particular, the article uses the following quadratic error function:

E(W) = \frac{1}{2N} \sum_{i=1}^{N} \left( W(x^{(i)}) - y^{(i)} \right)^{2},     (2)

where N is the number of "input vector / output vector" pairs in the training set, x^{(i)}, y^{(i)} are the values of such a pair, and W is the approximating function.

The paper uses the gradient descent method to find the minimum of the objective function. The idea of gradient descent is to sequentially change the parameters of the artificial neural network in the direction that reduces the objective function E. Since E is differentiable with respect to each of the parameters, the gradient vector can be calculated; moving in the direction of the negative gradient for each of the parameters, we find a local minimum of the objective function. Fig. 2 presents a diagram of the training algorithm for artificial neural networks. In this scheme, W is the weight matrix of the artificial neural network, t is the iteration number, η is the learning factor, and ε is the threshold on the change of the objective function at which the learning process is stopped. The gradient descent method is highly stable, producing only slight fluctuations of the objective function when the nature of the input data changes, but its low convergence rate requires a significant amount of time to train the artificial neural network. The main parameter that affects the convergence rate of the gradient descent algorithm is the learning factor η.

As described above, the input layer of the neural network contains 10 input neurons, and the values of the input signals are normalized to 1. The neural network contains three hidden layers of 20 neurons each, and the output layer consists of 6 neurons. Fig. 3 presents the structure of the neural network; the number of neurons in its layers can be changed. Training of the neural network is carried out for 1000 epochs.

Fig. 4 shows the user interface of the neural network calculation program. The "New system" button starts the process of setting up a new neural network. During setup, the user can set the number of input neurons, output neurons, and the number of neurons in each of the three intermediate layers. The "Set I/O", "Save I/O", "Set Value" and "File I/O" buttons define the frames - the vectors of input and output neurons. The "Train" button starts the neural network training process. The "Empty Weights" button initializes the training weights Wij to random starting values. The "Initialize Weights" button changes the scale of the training weights and forms a point of local minimum of the error in the system. The values of the input and output neurons of the current frame are shown in the upper left corner.

Figure 2: Gradient descent algorithm
Figure 3: Multilayer neural network
Figure 4: Graphical user interface of the neural network
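For readers who want to reproduce this training setup outside the GUI program, the sketch below uses scikit-learn's MLPRegressor as a stand-in for the custom trainer described above (the paper uses its own implementation). The layer sizes, logistic activation, gradient-descent solver, 1000 epochs and the stopping tolerance mirror the text; the placeholder training arrays, the learning-rate value and all variable names are assumptions made for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder arrays standing in for the normalized experimental frames:
# each row of X is the 10-component input vector described above,
# each row of Y the 6 normalized output concentrations (illustrative only).
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(40, 10))
Y_train = rng.uniform(0.0, 1.0, size=(40, 6))

model = MLPRegressor(
    hidden_layer_sizes=(20, 20, 20),  # three hidden layers of 20 neurons
    activation="logistic",            # sigmoid activation of Eq. (1)
    solver="sgd",                     # gradient-descent training
    learning_rate_init=0.1,           # learning factor eta (assumed value)
    momentum=0.0,                     # keep the update a plain gradient step
    max_iter=1000,                    # 1000 training epochs
    tol=1e-6,                         # epsilon: stop when E barely changes
    random_state=0,
)
model.fit(X_train, Y_train)           # minimizes the quadratic error of Eq. (2)
print("final training loss:", model.loss_)
```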
3. Measured data forecast

The complete training process of the neural network is shown in Fig. 4, with the graph of the error changes presented below. The dependence of the concentration of cyclohexanol formed as a result of the experiment is shown in Fig. 5.

Figure 5: Dependence of the cyclohexanol density (normalized value) formed during the reaction on the time (minutes) and temperature (K) of the reaction (with no additives)

The predicted dependence is formed as a result of training on a sample set of experimental data. As can be observed from Figure 5, temperature significantly affects the rate of formation of cyclohexanol. At T = 423 K the amount of cyclohexanol grows rapidly, and after 20 minutes of reaction the dependence reaches saturation; decreasing the temperature significantly slows down the reaction rate. The amount of experimental data is modest, but it is sufficient for the neural network to predict these dependencies. With such predictions, experimenters can focus their efforts on the most promising and effective control parameters. The data obtained after each new experiment should be used for additional training of the artificial neural network. It is clear from Fig. 6 that, as a result of training, the neural network reproduces the data from the sample set of experiments with high accuracy.

Figure 6: Comparison of the dependences of cyclohexanol density on time: experimental data (red lines) and predicted by the neural network (blue lines), in the absence of additives. a - reaction temperature 403 K, b - reaction temperature 413 K, c - reaction temperature 423 K

The network also predicts the result of the experiment in time ranges where measurements have not yet been carried out. Figures 5 and 7 show that the use of the additive 15-KR-5 (at a concentration of 1×10⁻⁴ mol/l) leads to a more rapid formation of cyclohexanol at high temperatures, whereas the additive DBKR (at the same concentration of 1×10⁻⁴ mol/l) does not lead to a more rapid formation of cyclohexanol.

Figure 7: Dependence of the cyclohexanol density (normalized value) formed during the reaction on the time (minutes) and temperature (K) of the reaction. a - concentration of the additive 15-KR-5 of 1×10⁻⁴ mol/l, b - concentration of the additive DBKR of 1×10⁻⁴ mol/l

Figure 8: Comparison of the dependences of cyclohexanol density on time: experimental data (red lines) and predicted by the neural network (blue lines). a, c, e - concentration of the additive 15-KR-5 of 1×10⁻⁴ mol/l; b, d, f - concentration of the additive DBKR of 1×10⁻⁴ mol/l; a, b - reaction temperature 403 K; c, d - reaction temperature 413 K; e, f - reaction temperature 423 K
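As a sketch of how such prediction surfaces can be generated from a trained network, the snippet below (reusing the illustrative `model` fitted in the previous sketch) evaluates the predictions on a grid of reaction times and temperature codes. The fixed "no additives" input frame, the grid ranges and the variable names are assumptions for illustration; the temperature encoding and the output ordering follow the text.

```python
import numpy as np

# Sweep reaction time and temperature with the other inputs fixed, to build
# a prediction surface of the kind shown in Figs. 5, 7 and 9.
times = np.linspace(0, 60, 13) / 100.0           # minutes, normalized to 100 min
temp_codes = {403: 0.4, 413: 0.5, 423: 0.6}      # encoding of input 10 from the text

base = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0]  # initial concentrations, no additives

grid = np.array([base + [t, code]
                 for code in temp_codes.values()
                 for t in times])
pred = model.predict(grid)                        # rows of 6 predicted concentrations
chl = pred[:, 1].reshape(len(temp_codes), len(times))  # output 2 = CHl per the text
print(np.round(chl, 3))
```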
The increase in the concentration of the formed cyclohexanol does not stop with time and continues to grow. As follows from Fig. 8, the introduction of different additives affects in different ways the rate and concentration of cyclohexanol formation in the chemical reaction. The trained neural network thus accurately reproduces the sample data from the experiment and predicts the result of the measurement in new time ranges and at new values of the additive concentrations. The dependence of the concentration of cyclohexanone obtained as a result of the experiment is presented in Fig. 9.

Figure 9: Dependence of the cyclohexanone density (normalized value) formed during the reaction on the time (minutes) and temperature (K) of the reaction. a - in the absence of additives, b - concentration of the additive 15-KR-5 of 1×10⁻⁴ mol/l, c - concentration of the additive DBKR of 1×10⁻⁴ mol/l

The predicted dependences are again formed as a result of training on a sample set of experimental data. The additives used in the oxidation of cyclohexane significantly affect the course of the reaction. As can be observed from Figure 9, the addition of 15-KR-5 and DBKR leads to an increase in the rate of formation and in the concentration of cyclohexanone resulting from the oxidation of cyclohexane. However, while the addition of 15-KR-5 increases the rate of growth of the cyclohexanone concentration at short reaction times, the addition of DBKR changes the reaction rate at later times. As in the case of the formation of cyclohexanol (Fig. 5, 7), the temperature significantly affects the reaction rate. Obviously, a neural network trained on a limited set of experimental data can expand the range of data available to the experimenter. It should, however, be taken into account that the data predicted by the neural network cannot be considered 100% reliable.

4. Conclusions

The neural network can predict data that help to identify areas for improvement of the catalysis process; in particular, it can enable a more dynamic cyclohexane oxidation process at higher feed rates. The proposed approach creates additional data for the preliminary analysis of the catalysis process and the identification of its components, and also reduces the cost of raw materials and energy. It is shown that when only the important features are taken into account during forecasting, the dependence of the variable of interest is predicted well, which reduces the time and resources required to prepare experiments. It is advisable to use the neural network to expand the range of available calculated data based on an experiment with a limited set of data. Each subsequent experiment increases the amount of data available for training the neural network, which can further increase the efficiency of the oxidation of cyclohexane.

5. References

[1] D. Clymer et al., "Decidual Vasculopathy Identification in Whole Slide Images Using Multiresolution Hierarchical Convolutional Neural Networks," The American Journal of Pathology, vol. 190, no. 10, pp. 2111–2122, Oct. 2020, doi: 10.1016/j.ajpath.2020.06.014.
[2] R. C. Deo, "Machine Learning in Medicine," Circulation, vol. 132, no. 20, pp. 1920–1930, Nov. 2015, doi: 10.1161/CIRCULATIONAHA.115.001593.
[3] M. Schwyzer et al., "Automated detection of lung cancer at ultralow dose PET/CT by deep neural networks – Initial results," Lung Cancer, vol.
126, pp. 170–173, Dec. 2018, doi: 10.1016/j.lungcan.2018.11.001.
[4] T. J. Brinker et al., "Deep neural networks are superior to dermatologists in melanoma image classification," European Journal of Cancer, vol. 119, pp. 11–17, Sep. 2019, doi: 10.1016/j.ejca.2019.05.023.
[5] F. Chen, H. Li, Z. Xu, S. Hou, and D. Yang, "User-friendly optimization approach of fed-batch fermentation conditions for the production of iturin A using artificial neural networks and support vector machine," Electronic Journal of Biotechnology, vol. 18, no. 4, pp. 273–280, Jul. 2015, doi: 10.1016/j.ejbt.2015.05.001.
[6] M. Haesemeyer, A. F. Schier, and F. Engert, "Convergent Temperature Representations in Artificial and Biological Neural Networks," Neuron, vol. 103, no. 6, pp. 1123-1134.e6, Sep. 2019, doi: 10.1016/j.neuron.2019.07.003.
[7] H. Li, Z. Zhang, and Z. Liu, "Application of Artificial Neural Networks for Catalysis: A Review," Catalysts, vol. 7, no. 10, p. 306, Oct. 2017, doi: 10.3390/catal7100306.
[8] Y.-F. Shen, R. Pokharel, T. J. Nizolek, A. Kumar, and T. Lookman, "Convolutional neural network-based method for real-time orientation indexing of measured electron backscatter diffraction patterns," Acta Materialia, vol. 170, pp. 118–131, May 2019, doi: 10.1016/j.actamat.2019.03.026.
[9] Z. Liu, H. Li, X. Tang, X. Zhang, F. Lin, and K. Cheng, "Extreme learning machine: a new alternative for measuring heat collection rate and heat loss coefficient of water-in-glass evacuated tube solar water heaters," SpringerPlus, vol. 5, no. 1, p. 626, Dec. 2016, doi: 10.1186/s40064-016-2242-1.
[10] J. Acquarelli, T. van Laarhoven, J. Gerretzen, T. N. Tran, L. M. C. Buydens, and E. Marchiori, "Convolutional neural networks for vibrational spectroscopic data analysis," Analytica Chimica Acta, vol. 954, pp. 22–31, Feb. 2017, doi: 10.1016/j.aca.2016.12.010.
[11] T. Chaikivskyi, B. Sus, O. Bauzha, S. Zagorodnyuk, "Multicomponent analyzer of volatile compounds characterization based on artificial neural networks," CMIS-2020, CEUR Workshop Proceedings, vol. 2608, pp. 819–831.
[12] Z. Liu, H. Li, and G. Cao, "Quick Estimation Model for the Concentration of Indoor Airborne Culturable Bacteria: An Application of Machine Learning," IJERPH, vol. 14, no. 8, p. 857, Jul. 2017, doi: 10.3390/ijerph14080857.
[13] K. Zeng and Y. Wang, "A Deep Convolutional Neural Network for Oil Spill Detection from Spaceborne SAR Images," Remote Sensing, vol. 12, no. 6, p. 1015, Mar. 2020, doi: 10.3390/rs12061015.
[14] M. Tkáč and R. Verner, "Artificial neural networks in business: Two decades of research," Applied Soft Computing, vol. 38, pp. 788–804, Jan. 2016, doi: 10.1016/j.asoc.2015.09.040.
[15] X. Zhong and D. Enke, "Predicting the daily return direction of the stock market using hybrid machine learning algorithms," Financ Innov, vol. 5, no. 1, p. 24, Dec. 2019, doi: 10.1186/s40854-019-0138-0.
[16] S. Haider et al., "LSTM Neural Network Based Forecasting Model for Wheat Production in Pakistan," Agronomy, vol. 9, no. 2, p. 72, Feb. 2019, doi: 10.3390/agronomy9020072.
[17] Ahmed, Ali, Elkatatny, and Abdulraheem, "New Artificial Neural Networks Model for Predicting Rate of Penetration in Deep Shale Formation," Sustainability, vol. 11, no. 22, p. 6527, Nov. 2019, doi: 10.3390/su11226527.
[18] A. Rácz-Szabó, T. Ruppert, L. Bántay, A. Löcklin, L. Jakab, and J. Abonyi, "Real-Time Locating System in Production Management," Sensors, vol. 20, no. 23, p. 6766, Nov. 2020, doi: 10.3390/s20236766.
[19] S. Mudryy, V. Reutskyy, O. Ivashchuk, O. Suprun, and V.
Ivasiv, "Influence of Organic Additives on Catalysts of Liquid-Phase Cyclohexane Oxidation," ChChT, vol. 9, no. 1, pp. 37–42, Mar. 2015, doi: 10.23939/chcht09.01.037.
[20] I. Graça and D. Chadwick, "NH4-exchanged zeolites: Unexpected catalysts for cyclohexane selective oxidation," Microporous and Mesoporous Materials, vol. 294, p. 109873, Mar. 2020, doi: 10.1016/j.micromeso.2019.109873.
[21] R. Jevtic, P. A. Ramachandran, and M. P. Dudukovic, "Effect of Oxygen on Cyclohexane Oxidation: A Stirred Tank Study," Ind. Eng. Chem. Res., vol. 48, no. 17, pp. 7986–7993, Sep. 2009, doi: 10.1021/ie900093q.
[22] Y. Melnyk, V. Reutskyy, and S. Melnyk, "Catalytic oxidation of cyclohexane in the presence of substances that affect on the surface tension," CTAS, vol. 1, no. 2, pp. 57–62, Nov. 2018, doi: 10.23939/ctas2018.02.057.
[23] O. Ivashchuk, V. Reutskyy, S. Mudryy, O. Zaichenko, and N. Mitina, "Cyclohexane Oxidation in the Presence of Variable Valency Metals Chelates," ChChT, vol. 6, no. 3, pp. 339–343, Sep. 2012, doi: 10.23939/chcht06.03.339.
[24] L. Tarassenko, "Mathematical background for neural computing," in Guide to Neural Computing Applications, Elsevier, 1998, pp. 5–35, doi: 10.1016/B978-034070589-6/50002-6.
[25] P. J. Werbos, "Beyond regression: New tools for prediction and analysis in the behavioral sciences," Ph.D. dissertation, Harvard University, 1974.