Information Technology for Early Diagnosis of Pneumonia on Individual Radiographs

Iurii Krak (a, b), Oleksander Barmak (c) and Pavlo Radiuk (c)

(a) Taras Shevchenko National University of Kyiv, 64/13, Volodymyrska str., Kyiv, 01601, Ukraine
(b) Glushkov Cybernetics Institute, 40, Glushkov ave., Kyiv, 03187, Ukraine
(c) Khmelnytskyi National University, 11, Institutes str., Khmelnytskyi, 29016, Ukraine

Abstract
Nowadays, pneumonia remains a disease with one of the highest death rates worldwide. The ailment's pathogen quickly causes a large amount of fluid to accumulate in the lungs, leading to acute exacerbation. Without preliminary examination and timely treatment, pneumonia can result in severe pulmonary complications. Consequently, early diagnosis of pneumonia becomes a decisive factor in treating and monitoring the disease, and information systems that can identify early pneumonia on chest X-ray images are therefore in increasing demand. An individual approach to each person might be a promising way toward early diagnosis. The presented study considers an approach to extracting features of the early stage of pneumonia and identifying the disease using a relatively simple convolutional neural network. With only three convolutional and two linearization layers, the proposed architecture classifies radiographs with 90.87% accuracy, approaching deep multilayer and resource-intensive architectures in classification accuracy and exceeding them in time efficiency. Our approach requires comparatively few computing resources, confirming its efficiency in solving practical tasks on commonly available computing devices.

Keywords
Convolutional neural network, pneumonia, early diagnosis, chest X-ray, radiograph, feature extraction, individual approach

1. Introduction

For the past few years, pneumonia has been recognized as one of the most dangerous human diseases. Pneumonia, along with other lower respiratory infections, is the fourth leading cause of death worldwide.
In 2017, roughly 2.17 million people died due to lower respiratory tract infections [1]. Furthermore, the sudden COVID-19 pandemic that erupted worldwide in 2020 further increased the fatal outcomes of lung diseases. Many clinical studies have confirmed that the COVID-19 virus causes severe pneumonia in numerous people [2, 3, 4]. Besides, the quantitative difference between infection and death rates [5] confirms the vital importance of early lung disease diagnosis.

Being a rapid inflammatory disease, pneumonia initially damages the alveoli in the lungs [6]. The disease's early symptoms include a combination of dry cough, difficulty breathing, chest pain, and fever. Various viruses and bacteria commonly cause pneumonia; sometimes, microorganisms may lead to lung complications. As the pathogen reaches the lungs, white blood cells counteract it, resulting in inflammation of the lungs. As a result, pneumonic fluid fills the alveoli, causing coughing, breathing problems, and fever.

IDDM’2020: 3rd International Conference on Informatics & Data-Driven Medicine, November 19–21, 2020, Växjö, Sweden
EMAIL: yuri.krak@gmail.com (I. Krak); аlexander.barmak@gmail.com (O. Barmak); radiukpavlo@gmail.com (P. Radiuk)
ORCID: 0000-0002-8043-0785 (I. Krak); 0000-0003-0739-9678 (O. Barmak); 0000-0003-3609-112X (P. Radiuk)
© 2020 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)

One of the most widely spread lung disease diagnostic methods is chest X-ray processing [7, 8]. A concentrated beam of X-ray photons passes through body tissues and creates an image on a metal surface (photographic film). While performing a diagnosis, healthcare professionals compare white infiltrates on the image, indicating the infection, with white pneumonic fluid areas in the lungs. Fig.
1 illustrates the radiographs of a healthy person and a person suffering from severe pneumonia.

Figure 1: Radiographs of lungs without (left) and with (right) detected pneumonia [8]

It should be considered that chest X-ray images have a limited color scheme consisting of different shades of gray. Therefore, healthcare professionals regularly encounter severe difficulties identifying infected areas on X-rays [8, 9]. The issue arises from the high intensity of the film's white wavelength: fluid in the lungs is difficult to distinguish from dense, hard tissue. Specifically, the features of pneumonia on an X-ray can be clearly observed only when the lung tissue becomes dense as the lungs fill with a large amount of fluid. Under those circumstances, the color spectrum of the image shifts from dark shades corresponding to the air in the lungs to light shades signaling the pulmonary disease's presence. Thus, early detection of pneumonia is complicated by the limited color spectrum of the X-ray image and, as a consequence, the low severity of the disease's features in the image.

Another problem with early diagnosis concerns the human factor. Radiologists must have a well-trained eye to distinguish the various gray-scale shades of air from the so-called ground-glass opacity representing the disease. Such a pattern can be depicted on an X-ray image in diverse shades, yet not be pneumonic fluid itself. So, a healthcare professional should be able to determine whether the white blurs on the radiograph correspond to the fluid. There are known cases when radiologists made a wrong diagnosis owing to a mistake of the human eye [7, 10]. Both false positive and false negative diagnoses can have a significant negative impact on the human body [4, 6, 11]. Hence, computational methods can considerably facilitate correct diagnosis and increase the reliability of preventive measures or subsequent treatment.
The presented work is devoted to analyzing individual features of the early stage of pneumonia on radiographs and developing an information system to classify the disease with a well-fitted model.

2. Related work

The scientific community has devoted numerous studies to identifying pneumonia on radiographs using machine learning (ML). At Stanford University [11], the chest X-ray image was broken down into thermal maps, each section of which displayed different infrared radiation. Thus, lung pathology was differentiated, and pneumonic areas were detected. In another study [12], the approach with thermal maps was used to recognize bacterial and viral pneumonia in pediatrics. The application of imaging strategies to detect pneumonia and explain the choice of signs proved to be a successful disease diagnostics method.

Over the past few years, different types of artificial neural networks (ANN) have been widely utilized in medical screening. For example, in work [13], the authors applied a long short-term memory (LSTM) architecture to extract information about 14 different diseases. Using only one model, they could achieve 71.3% classification accuracy. However, to classify a single image, the LSTM model requires several additional models, which is a significant disadvantage in the limited medical data environment. In addition to using thermal maps in [11], the authors also presented an approach based on deep learning (DL). They built a 121-layer convolutional neural network (CNN) to extract characteristic maps and applied statistical methods (standard deviation and average calculation) to preprocess images. In work [14], the authors applied different machine learning techniques to segment regions of interest of pneumonia on X-rays. Another recent study [15] aims at the early detection of COVID-19 in X-ray and CT scans.
That work presented a method to adjust the learning control parameters of a CNN, called hyperparameters, and was devoted to optimizing the convolutional kernels, convolution layers, and a fully connected layer. In [16], the authors investigated the application of DL models in the early diagnosis of diseases caused by COVID-19. The study showed the practical benefits of state-of-the-art architectures, namely VGG16 [17] and Inception v4 [18], in classifying poorly presented lung diseases on chest X-ray images.

The presented paper aims to address the issue of efficient early detection of pneumonia on radiographs. In addition to the various methods used in other studies, our study proposes a novel approach that allows designing an efficient CNN model individually for every person. The study considers an approach to selecting pneumonia symptoms in an X-ray image, which allows the early diagnosis of pneumonia.

3. Implementation

3.1. Model

We propose a CNN topology tailored to pneumonia detection at the early stages. Each feature-extraction layer receives the result of the layer immediately preceding it, and its output is passed to the input of the subsequent layer. The architecture employed in this study comprises convolutional layers (ConvLs), an activation function (ReLU), MaxPool layers (MPLs), two linearization layers (Dense), a flattening layer (Flatten), and a classification layer (SML). Fig. 2 shows the proposed CNN.

Figure 2: The topology of a CNN designed for early diagnosis of pneumonia

The feature-extraction part contains: ConvL – 3 × 3, 16 filters; ConvL – 3 × 3, 32 filters; ConvL – 3 × 3, 64 filters; MPL – 2 × 2. The outputs of the convolution operations are arranged in two-dimensional planes called feature maps. It should be noted that each layer plane in the network is obtained by combining one or more planes of the previous layers. The proposed classifier imposes particular requirements on how the calculations are performed.
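The topology described in this subsection can be sketched with the Keras API of TensorFlow. This is a minimal illustration, not the released code [31]: the input size (320 × 320 grayscale) and the width of the first Dense layer (128) are assumptions made for the sketch.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(320, 320, 1), num_classes=2):
    """Three ConvLs (3x3, with 16/32/64 filters) and ReLU, a 2x2 MaxPool,
    Flatten, two Dense (linearization) layers with 0.5 dropout in between,
    and a Soft-Max classification output."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),   # width 128 is an assumption
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_model()
# A dummy forward pass: one grayscale radiograph -> two class probabilities.
probs = model(np.zeros((1, 320, 320, 1), dtype="float32")).numpy()
```

The softmax head returns a probability distribution over the two classes (normal vs. pneumonia) for each input image.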
Therefore, the output of the feature-extraction part (the result of the convolution operations) is flattened into a one-dimensional feature vector for further classification. The classification part contains a flattening layer, a dropout layer with a rate of 0.5, and a Soft-Max activation function that performs the classification task.

3.2. Feature extraction and data preprocessing

The approach presented in this paper assumes a texture classification between conventional lung radiographs and pneumonia-infected lung radiographs. Fig. 3 demonstrates the sequence of steps to identify the infected lungs.

Figure 3: Diagram of information system execution for identification of pneumonia

First, the images prepared in the training dataset were resized to 512 × 512 in order to locate an appropriate region of interest (ROI). For the pneumonia classification task, the ROI is represented by the pale mass of fluid in the infected lung. This pale area on a radiograph can be extracted through the co-occurrence distribution, i.e., with a square gray-level co-occurrence matrix (GLCM) [18]. The co-occurrence matrix is defined as

CM(k, k) = \sum_{i=1}^{N} \sum_{j=1}^{N} \begin{cases} 1, & \text{if } I(i, j) = k \text{ and } I(i + D_x, j + D_y) = k, \\ 0, & \text{otherwise}, \end{cases}   (1)

where I(i, j) stands for the gray-scale value of the pixel at (i, j), k is the gray level of the central pixel (i_c, j_c), D_x = D \cdot \cos(\theta) and D_y = D \cdot \sin(\theta), \theta represents the direction of the GLCM from (i_c, j_c), and D is the distance from the current pixel to (i_c, j_c). Using GLCMs, we could measure the texture based on the gray-scale values, i.e., the image's intensity. GLCMs were computed throughout the image and, following the advice from [20], two identical matrices were rotated at angles of 0°, 45°, 90°, and 135° to ensure a critical evaluation of different features. In this study, the considered pulmonary features were determined by the Haralick texture features and extracted from the resized target image.
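A direct (unoptimized) sketch of the co-occurrence counting in Eq. (1) is given below, assuming a quantized gray-scale image; a single offset (D_x, D_y) is shown, and the other three angles only change the offset.

```python
import numpy as np

def glcm(image, dx, dy, levels):
    """Gray-level co-occurrence matrix: count pixel pairs at offset (dx, dy)
    whose gray levels are (k1, k2), as in Eq. (1)."""
    cm = np.zeros((levels, levels), dtype=np.int64)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            i2, j2 = i + dy, j + dx          # dy shifts rows, dx shifts columns
            if 0 <= i2 < h and 0 <= j2 < w:  # skip pairs that fall off the image
                cm[image[i, j], image[i2, j2]] += 1
    return cm

# Toy 3x3 image with 3 gray levels; direction theta = 0, distance D = 1.
img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 2]])
cm = glcm(img, dx=1, dy=0, levels=3)
```

For the toy image, the horizontal pairs are (0,0), (0,1), (1,2), and (2,2) three times, so the matrix counts reflect how often bright pixels neighbor bright pixels, which is the texture signal used below.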
Haralick features represent the relationship between neighboring pixels in terms of their intensity, which is essential for the detection of early pneumonia since the targeted radiographs have a limited color spectrum. The Haralick texture features [20] employed in this study are presented below.

1. Image homogeneity is the similarity between pixels and is determined by the equation

F_1 = \sum_{i=0}^{N} \sum_{j=0}^{N} \frac{p(i, j)}{1 + (i - j)^2},   (2)

where N is the dimension of the GLCM and p(i, j) is the value of the matrix element (i, j), i.e., the probability that a pixel of gray level i is adjacent to a pixel of gray level j.

2. Contrast is the difference between the maximum and minimum pixel values

F_2 = \sum_{m=0}^{N-1} m^2 \sum_{i=0}^{N} \sum_{j=0}^{N} p(i, j),   (3)

where m = |i - j|.

3. Correlation captures the dependencies of gray levels of a matrix element (i, j)

F_3 = \sum_{i=1}^{N} \sum_{j=1}^{N} \frac{p(i, j)(i - \mu_i)(j - \mu_j)}{\sigma_i \sigma_j},   (4)

where \mu_i, \mu_j, \sigma_i, and \sigma_j represent the means and standard deviations of the marginal probability density functions.

4. The square deviation from the mean of an image is defined as

F_4 = \sum_{i=1}^{N} \sum_{j=1}^{N} (i - \mu)^2 p(i, j).   (5)

5. The inverse difference moment can be presented as follows

F_5 = \sum_{i} \sum_{j} \frac{p(i, j)}{1 + (i - j)^2}.   (6)

6. The sum of all average values in the image is set as

F_6 = \sum_{i=2}^{2N} i \, p_{a+b}(i),   (7)

where a and b represent the rows and columns of the matching matrix, summed so that a + b = i.

7. The summarized variance is

F_7 = \sum_{i=2}^{2N} (i - F_8)^2 p_{a+b}(i).   (8)

8. Sum entropy is the overall amount of information encoded in the image

F_8 = -\sum_{i=2}^{2N} p_{a+b}(i) \log(p_{a+b}(i)).   (9)

9. Regular entropy can be represented as follows

F_9 = -\sum_{k=1}^{N} \sum_{l=1}^{N} p(k, l) \log(p(k, l)).   (10)

10. The difference variance of an image is as follows

F_{10} = \sum_{k=0}^{N} (k - \mu)^2 p_{a-b}(k).   (11)

11. The difference entropy based on (11) is

F_{11} = -\sum_{k=0}^{N} p_{a-b}(k) \log(p_{a-b}(k)).   (12)

After that, the obtained values of the Haralick texture features were compared between ordinary images and images of pneumonia, and their differences were evaluated. Three texture features, namely the dispersion feature F_4 (5), the sum of mean values F_6 (7), and the sum of dispersions F_7 (8), showed the most considerable difference in values between images of a healthy person and images with lung diseases [21]. Thus, these three measures were chosen to identify early pneumonia.

3.3. Evaluation criteria and experiment setup

For our experiments, we employed the small CheXpert benchmark dataset [22]. The whole dataset comprised 5,856 chest radiographs with a size of 320 × 320 pixels taken from 524 patients. In this dataset, all images are labeled with one of two classes: normal and pneumonia. The whole dataset consists of training, test, and validation subsets, which comprise 70%, 20%, and 10% of all images, respectively. In the training subset, normal images occupy only a quarter of all data.

This study estimates the proposed architecture by two well-known statistical indicators, namely, accuracy (ACC) and area under the curve (AUC). According to recent studies [23, 24, 25], the classification accuracy can be set as

ACC = \frac{TP + TN}{TP + TN + FP + FN},   (13)

where TP stands for true positive cases in the initial dataset, TN – true negative cases, FP – false positive, and FN – false negative cases. The AUC for binary classification at a single decision threshold can be estimated as the mean of sensitivity and specificity:

AUC = \frac{1}{2} \left( \frac{TP}{TP + FN} + \frac{TN}{TN + FP} \right).   (14)

For training, we utilized the Adam optimization method with the following set of training hyperparameters: a learning rate of 10^{-3}, weight decay of 0.5 × 10^{-3}, momentum of 0.9, and batch size of 512. Based on experimental results from [26], these values can provide excellent prediction and approximation of a model based on the feature ranking during training [27, 28, 29].
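As an illustration of the selected features, F_4 (5), F_6 (7), and F_7 (8) can be computed from a normalized GLCM as sketched below. The sketch uses 0-based indices rather than the 1- and 2-based sums of the formulas above, which shifts F_6 by a constant offset but preserves the between-image differences the classifier relies on.

```python
import numpy as np

def haralick_f4_f6_f7(cm):
    """Variance (F4), sum average (F6), and sum variance (F7) of a GLCM."""
    p = cm / cm.sum()                      # normalized co-occurrence probabilities
    n = p.shape[0]
    i_idx, j_idx = np.indices(p.shape)
    mu = (i_idx * p).sum()                 # mean gray level weighted by p(i, j)
    f4 = ((i_idx - mu) ** 2 * p).sum()     # Eq. (5): square deviation from the mean
    # p_{a+b}(k): total probability of entries whose row + column indices sum to k
    p_sum = np.array([p[i_idx + j_idx == k].sum() for k in range(2 * n - 1)])
    k_idx = np.arange(2 * n - 1)
    f6 = (k_idx * p_sum).sum()             # Eq. (7): sum average
    eps = 1e-12                            # guard against log(0)
    f8 = -(p_sum * np.log(p_sum + eps)).sum()   # Eq. (9): sum entropy
    f7 = ((k_idx - f8) ** 2 * p_sum).sum()      # Eq. (8): sum variance
    return f4, f6, f7

# Tiny 2x2 co-occurrence matrix for demonstration.
cm = np.array([[2.0, 1.0], [1.0, 4.0]])
f4, f6, f7 = haralick_f4_f6_f7(cm)
```

On a real radiograph, `cm` would be the GLCM of the resized image, and the three returned values form the texture descriptor compared between healthy and infected lungs.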
All experiments were performed in Python v3.7 using TensorFlow v1.15 [30] as the back-end. The computational experiments were executed on an 8-core Ryzen 2700 CPU and a single GPU with 8 GB of memory. We open-sourced the working code on GitHub at the following link [31].

4. Results and discussion

Computational experiments were conducted to evaluate and compare three state-of-the-art architectures, namely CheXNet [11], VGG16 [17], and Inception v4 [18], with the proposed one in Fig. 2. According to the preliminary evaluation, all models converged at 100 epochs and started overfitting at >100 epochs. Therefore, the final training was performed for precisely 100 epochs for each model. Fig. 4 presents the training and testing results of the CheXNet model.

Figure 4: Loss function (a), AUC score (b), confusion matrix (c), and ROC curve (d) obtained by CheXNet

Although CheXNet achieved a high AUC score (96.32%) and good convergence on the training and testing sets (Fig. 4b – 4d), there is significant noise in the training loss function (Fig. 4a). Such an outcome may indicate substantive differences between the training and testing sets, leading to overfitting. Fig. 5 demonstrates the training and testing of the VGG16 model.

Figure 5: Loss function (a), AUC score (b), confusion matrix (c), and ROC curve (d) obtained by VGG16

The VGG16 model achieved a relatively low false negative rate and a high AUC score (Fig. 5c – 5d). However, VGG16 showed too high a false positive rate (10.26%), which is an unsatisfactory indicator for the model's practical employment. Fig. 6 presents the training and testing of the Inception v4 model.

Figure 6: Loss function (a), AUC score (b), confusion matrix (c), and ROC curve (d) obtained by Inception v4

From Fig. 6a – 6b, it seems that the Inception model suffered from overfitting on the employed dataset. Fig. 7 shows the final training and testing results of our architecture.
Figure 7: Training and validation loss curves (a), the convergence of training and validation AUC curves (b), confusion matrix (c), and the ROC curve (d) obtained by our architecture (Fig. 2)

In Fig. 7a, as the number of epochs increases, the validation and training loss curves approach each other, indicating that the model suffers from neither overfitting nor underfitting. In Fig. 7b, the lines of the training and validation AUC scores converge to an almost equal point. The confusion matrix points out false positive cases of 8.85% and false negative cases of 1.28% (Fig. 7c). The ROC curve in Fig. 7d represents the true (ordinate) and false (abscissa) positive rates. Our architecture achieves an AUC score of 96.82% on the validation dataset, indicating that the model shows satisfactory classification results at the validation stage. Optimizing the AUC score is also a sound choice under class imbalance, as it helps avoid overfitting to a single class.

Overall, Table 1 summarizes our architecture's performance compared to the state-of-the-art models in medical image analysis by evaluation criteria (13) and (14). The best scores for each criterion are highlighted in bold.

Table 1
A formal comparison of the proposed architecture with state-of-the-art models

Architecture          | FP, %  | FN, % | ACC, % | AUC, % | Time, h
CheXNet [11]          | 8.49   | 1.76  | 89.74  | 96.32  | 4.92
VGG16 [17]            | 10.26  | 0.64  | 89.10  | 97.29  | 2.40
Inception v4 [18]     | 9.62   | 0.96  | 89.42  | 96.29  | 3.89
Proposed architecture | 7.85   | 1.28  | 90.87  | 96.82  | 1.18

As seen in Table 1, our architecture exceeds the recognized architectures in classification accuracy (90.87%) and type I error (false positive rate, 7.85%). The VGG16 architecture showed the best results in type II error (false negative rate, 0.64%) and the area under the ROC curve (97.29%). In general, the proposed architecture scored relatively small false positive and false negative rates, 7.85% and 1.28%, respectively.
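For reference, criteria (13) and (14) amount to a few lines of arithmetic over the confusion counts. The counts below are hypothetical, chosen only to demonstrate the calculation; the single-threshold AUC here is the mean of sensitivity and specificity.

```python
def accuracy(tp, tn, fp, fn):
    # Eq. (13): share of correctly classified samples
    return (tp + tn) / (tp + tn + fp + fn)

def auc_single_point(tp, tn, fp, fn):
    # Single-threshold estimate of the AUC for a binary classifier:
    # the mean of sensitivity (true positive rate) and specificity
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return 0.5 * (sensitivity + specificity)

# Hypothetical confusion counts for illustration only.
tp, tn, fp, fn = 80, 15, 3, 2
acc = accuracy(tp, tn, fp, fn)            # 0.95
auc = auc_single_point(tp, tn, fp, fn)
```

Computing both criteria from the same confusion matrix makes the comparison in Table 1 reproducible from the per-model FP and FN rates.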
The classification issue was addressed with only three convolutional layers and a new approach to feature extraction. The small number of layers provided a shorter training time (1.18 hours) compared to the other considered models. In contrast, the recognized CheXNet architecture achieved approximately the same accuracy with a 121-layer CNN, yet took more than three times longer. It should be noticed that the use of 121 layers may lead to excessive training while the values of the weights remain almost unchanged; the testing time, in turn, varies with the available processing power. Overall, the presented approach to early diagnosis of pneumonia demonstrates competitive performance at lower computational costs and can be employed for further investigation.

5. Conclusion

The current work proposes a novel approach to detecting early pneumonia on chest X-ray images, considering individuals' particular features. According to our study, three Haralick features, namely the dispersion feature, the sum of mean values, and the sum of dispersions, indicate the most notable peculiarities of the early stages of pneumonia. Based on our findings, we designed a new CNN architecture to obtain individual characteristics from the preprocessed chest X-ray images. The model built on this architecture could classify the disease on radiographs with 90.87% accuracy, surpassing multilayer and resource-intensive models like CheXNet. Such an approach allows adding elements to the architecture while avoiding its over-complication in the future as the range of nosologies expands. Nevertheless, despite the high classification rate and reduced computational complexity, our work's main contribution is applying information systems tailored to each person's characteristics to the early diagnosis of pneumonia. Our further investigation will focus on improving the algorithms of region localization on radiographs with infected lungs.

6. References

[1] G. A.
Roth et al.: Global, regional, and national age-sex-specific mortality for 282 causes of death in 195 countries and territories, 1980-2017: A systematic analysis for the Global Burden of Disease Study 2017. Lancet. 392, 1736–1788 (2018). doi:10.1016/S0140-6736(18)32203-7
[2] G. Raghu, K.C. Wilson: COVID-19 interstitial pneumonia: Monitoring the clinical course in survivors. Lancet Respir. Med. 8, 839–842 (2020). doi:10.1016/S2213-2600(20)30349-0
[3] S. J. Shah et al.: Clinical features, diagnostics, and outcomes of patients presenting with acute respiratory illness: A retrospective cohort study of patients with and without COVID-19. EClinicalMedicine (2020). doi:10.1016/j.eclinm.2020.100518
[4] H. Ko et al.: COVID-19 pneumonia diagnosis using a simple 2D deep learning framework with a single chest CT image: Model development and validation. J Med Internet Res. 22(6), e19569 (2020). doi:10.2196/19569
[5] R. G. Wunderink, C. Feldman: Community-acquired pneumonia: A global perspective. Semin. Respir. Crit. Care Med. 41, 453–454 (2020). doi:10.1055/s-0040-1713003
[6] W. S. Lim: Pneumonia–overview. In: Reference Module in Biomedical Sciences. pp. 1–12. Elsevier Inc., Nottingham University Hospitals NHS Trust and University of Nottingham, Nottingham, United Kingdom (2020). doi:10.1016/B978-0-12-801238-3.11636-8
[7] J. Shen et al.: Artificial intelligence versus clinicians in disease diagnosis: Systematic review. JMIR Med Inf. 7(3), e10010 (2019). doi:10.2196/10010
[8] L. Faes et al.: Automated deep learning design for medical image classification by healthcare professionals with no coding experience: A feasibility study. Lancet Digit. Heal. 1, e232–e242 (2019). doi:10.1016/S2589-7500(19)30108-6
[9] A. C. Morris: Management of pneumonia in intensive care. J. Emerg. Crit. Care Med. 2, 101 (2018). doi:10.21037/jeccm.2018.11.06
[10] S. Waite et al.: Analysis of perceptual expertise in radiology – current knowledge and a new perspective. Front. Hum. Neurosci.
13(213) (2019). doi:10.3389/fnhum.2019.00213
[11] P. Rajpurkar et al.: Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLOS Med. 15, e1002686 (2018). doi:10.1371/journal.pmed.1002686
[12] S. Rajaraman et al.: Visualization and interpretation of convolutional neural network predictions in detecting pneumonia in pediatric chest radiographs. Appl. Sci. 8, 1715 (2018). doi:10.3390/app8101715
[13] A. Del-Río et al.: Time-frequency parametrization of multichannel pulmonary acoustic information in healthy subjects and patients with diffuse interstitial pneumonia. In: Proceedings of the 2018 IEEE International Autumn Meeting on Power, Electronics, and Computing (ROPEC 2018), Ixtapa, Mexico, 14–16 November 2018. pp. 1–4. IEEE Inc. (2019). doi:10.1109/ROPEC.2018.8661356
[14] T. B. Chandra, K. Verma: Pneumonia detection on chest X-ray using a machine learning paradigm. In: Proceedings of the 3rd International Conference on Computer Vision and Image Processing (CVIP-2020), Singapore, 01 November 2019, vol. 1022. pp. 21–33. Springer (2020). doi:10.5555/3045390.3045451
[15] Z. Lu et al.: MUXConv: Information multiplexing in convolutional neural networks. In: Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR-2020), Seattle, WA, USA, 13–19 June 2020. pp. 12041–12050. IEEE Inc. (2020). doi:10.1109/CVPR42600.2020.01206
[16] D. Dansana et al.: Early diagnosis of COVID-19-affected patients based on X-ray and computed tomography images using deep learning algorithm. Soft Comput. (2020). doi:10.1007/s00500-020-05275-y
[17] K. Simonyan, A. Zisserman: Very deep convolutional networks for large-scale image recognition. Paper presented at the 3rd International Conference on Learning Representations (ICLR-2015), San Diego, CA, USA, 7–9 May 2015
[18] L. S. Athanasiou et al.: Plaque characterization methods using intravascular ultrasound imaging.
In: Athanasiou, L.S., Fotiadis, D.I., and Michalis, L.K. (eds.) Atherosclerotic Plaque Characterization Methods Based on Coronary Imaging. pp. 71–94. Academic Press, Oxford (2017). doi:10.1016/B978-0-12-804734-7.00004-X
[19] C. Szegedy et al.: Inception-v4, Inception-ResNet, and the impact of residual connections on learning. In: Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI-2017), San Francisco, California, USA, 4–10 February 2017. pp. 4278–4284. AAAI Press (2017). doi:10.5555/3298023.3298188
[20] T. Löfstedt et al.: Gray-level invariant Haralick texture features. PLoS One. 14(2), e0212110 (2019). doi:10.1371/journal.pone.0212110
[21] N. Vamsha Deepa et al.: Feature extraction and classification of X-ray lung images using Haralick texture features. In: Proceedings of Smart and Innovative Trends in Next Generation Computing Technologies (NGCT-2017), Singapore, 09 June 2018, vol. 828. pp. 899–907. Springer Singapore (2018). doi:10.1007/978-981-10-8660-1_68
[22] J. Irvin et al.: CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI-2019), Honolulu, Hawaii, USA, 27 January – 1 February 2019. pp. 590–597. Association for the Advancement of Artificial Intelligence (AAAI) (2019). doi:10.1609/aaai.v33i01.3301590
[23] I. V. Krak et al.: An approach to the determination of efficient features and synthesis of an optimal band-separating classifier of dactyl elements of sign language. Cybern. Syst. Anal. 52(2), 173–180 (2016). doi:10.1007/s10559-016-9812-7
[24] I. G. Kryvonos et al.: Methods to create systems for the analysis and synthesis of communicative information. Cybern. Syst. Anal. 53(6), 847–856 (2017). doi:10.1007/s10559-017-9986-7
[25] E. A. Manziuk et al.: Definition of information core for document classification. J. Autom. Inf. Sci. 50(4), 25–34 (2018). doi:10.1615/JAutomatInfScien.v50.i4.30
[26] P. M.
Radiuk: Impact of training set batch size on the performance of convolutional neural networks for diverse datasets. Inf. Technol. Manag. Sci. 20(1), (2017). doi:10.1515/itms-2017-0003 [27] I. G. Kryvonos et al.: Predictive text typing system for the Ukrainian language. Cybern. Syst. Anal. 53(4), 495–502 (2017). doi:10.1007/s10559-017-9951-5 [28] I. V. Krak et al.: Multidimensional scaling by means of pseudoinverse operations. Cybern. Syst. Anal. 55(1), 22–29 (2019). doi:10.1007/s10559-019-00108-9 [29] A. V. Barmak et al.: Information technology of separating hyperplanes synthesis for linear classifiers. J. Autom. Inf. Sci. 51(5), 54–64 (2019). doi:10.1615/JAutomatInfScien.v51.i5.50 [30] M. Abadi et al.: TensorFlow: A system for large-scale machine learning. In: Proceedings of 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI-2016), Savannah, GA, USA, 2–4 November 2016. pp. 265–283. USENIX Association (2016) [31] Detecting pneumonia using Convolutional Neural Network. GitHub, Inc. https://github.com/soolstafir/Detect-Pneumonia-Using-CNN (2020). Accessed 25 Sep 2020