Overview of ImageCLEFtuberculosis 2022 – CT-based Cavern Detection and Report

Serge Kozlovski1, Yashin Dicente Cid2, Vassili Kovalev1 and Henning Müller3,4
1 United Institute of Informatics Problems, Minsk, Belarus
2 Roche Diagnostics, Sant Cugat, Spain
3 University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
4 University of Geneva, Switzerland

Abstract
ImageCLEF is part of the Conference and Labs of the Evaluation Forum (CLEF) initiative and includes a variety of tasks dedicated to multimodal image information retrieval, including image classification and annotation. The tuberculosis (TB) task is one of the ImageCLEF tasks; it started in 2017 and has evolved from year to year towards more complex challenges. The 2022 edition was dedicated to the automatic analysis of caverns and included two subtasks: cavern region detection and cavern reporting. In 2022, 6 groups from 5 countries submitted at least one successful run for at least one subtask. This paper describes the TB task setup, the data and the approaches of the participants.

Keywords
Tuberculosis, Computed Tomography, Image Classification, Detection, Caverns, 3D Data Analysis

1. Introduction

ImageCLEF (http://www.imageclef.org/) is part of the CLEF (http://www.clef-initiative.eu/) initiative and presents a set of image information retrieval tasks. Medical tasks were included in the 2nd edition of ImageCLEF in 2004 and have been held every year since then [1, 2, 3, 4, 5]. The tuberculosis task is one of the medical tasks in 2022. More information on the other ImageCLEF tasks organized in 2022 can be found in [6], and the past editions of ImageCLEF are described in [7, 8, 9, 10, 11, 12, 13, 14].

Tuberculosis (TB) is a bacterial infection caused by the germ Mycobacterium tuberculosis. About 130 years after its discovery, the disease remains a persistent threat and one of the top 10 causes of death worldwide according to the WHO [15]. The bacterium usually attacks the lungs, and TB can generally be cured with antibiotics.
However, different types of TB require different treatments, and the detection of the specific case characteristics is therefore an important real-world task that is usually performed on imaging data.

CLEF 2022: Conference and Labs of the Evaluation Forum, September 5–8, 2022, Bologna, Italy
kozlovski.serge@gmail.com (S. Kozlovski)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)
1 http://www.imageclef.org/
2 http://www.clef-initiative.eu/

The setup of this task evolved from year to year. In the first two editions [16, 17], participants had to detect multi-drug resistant patients (MDR subtask) and to classify the TB type (TBT subtask), both based only on the computed tomography (CT) image. After two editions, the MDR subtask was dropped because it seemed impossible to solve based only on the image data, and the TBT subtask was also stopped because of only minor improvements in the results between the 1st and the 2nd editions. At the same time, most of the participants obtained good results in the severity scoring (SVR) subtask introduced in 2018. In the 3rd edition, the TB task [18] was restructured to allow the use of a uniform data set. It included two subtasks: a continued severity score (SVR) prediction subtask and a new subtask asking for an automatic CT report on the TB case (CTR subtask). In the 4th edition [19], the SVR subtask was dropped and the automated CT report generation task was modified to be lung-based rather than CT-based. In the 5th edition [20], the task organizers decided to discontinue the CTR task and brought back the tuberculosis type classification task from the 1st and 2nd ImageCLEFmed TB editions, to check whether recent machine learning and deep learning methods allow improving the previously rather low results.
In the 2022 edition, the task was dedicated to cavern detection and reporting, split into two subtasks. The first subtask (Caverns Detection) addressed detection itself: participants had to detect lung cavern regions in lung CT images associated with lung tuberculosis. The problem is important because even after successful treatment that fulfills the existing criteria of recovery, the caverns may still contain colonies of Mycobacterium tuberculosis that can lead to unpredictable disease relapse. The second subtask (Caverns Report) is a cavern classification problem: participants had to predict three binary features of caverns suggested by experienced radiologists.

This article first describes the task proposed for TB in 2022. Then, details on the data sets, evaluation methodology and participation are given. The results section describes the submitted runs and the results obtained. A discussion and conclusion section ends the paper.

2. Task, Data Set, Evaluation, Participation

2.1. The Task in 2022

In this task, participants had to automatically detect lung cavern regions in lung CT images associated with lung tuberculosis in the first subtask, and to predict three binary features of caverns suggested by experienced radiologists in the second. The first subtask was thus a 3D object detection task, and the second one a multi-label classification problem.

2.2. Data Set

In this edition, separate data sets were provided for each subtask. The Caverns Detection data set contained 559 training and 140 test cases, while the Caverns Report data set included only 60 training and 16 test cases due to the scarcity of labelled data. Each CT image corresponds to one unique patient. For all patients, we provided 3D CT images with a slice size of 512 × 512 pixels and a variable number of slices (the median number was 128). All training CTs for both subtasks were accompanied by cavern area bounding boxes (if any), and labelling of caverns was provided for the Caverns Report subtask.
Since bounding boxes were provided for all CTs, participants were welcome to use data from one subtask in the other. All the CT images were stored in the NIfTI file format with the .nii.gz file extension (g-zipped .nii files). This file format stores raw voxel intensities in Hounsfield units (HU) as well as the corresponding image meta-data, such as image dimensions, voxel size in physical units, slice thickness, etc. As in the previous year, we provided two versions of automatically extracted lung masks for all patients, obtained using the methods described in [21, 22, 19]. Typical examples of caverns are shown in Fig. 1.

Figure 1: Slices of typical CT images with cavern regions.

Table 1 details the distribution of patients within each cavern characteristic for the Caverns Report subtask. One can note an important imbalance in the label numbers for thick wall presence, which reflects the natural prevalence of this characteristic. During the data split, we tried to achieve a similar distribution between the training and the test data.

Table 1
Distribution of CT images within each cavern characteristic.

Set   | Thick walls | Calcifications | Foci
Train | 49 (82%)    | 34 (57%)       | 30 (50%)
Test  | 13 (81%)    | 9 (56%)        | 9 (56%)

2.3. Evaluation Measures and Scenario

The Caverns Detection subtask is a detection problem that was evaluated by the mean average precision (mAP) at different intersection over union (IoU) thresholds. The IoU of a predicted bounding box (PredBB) and a ground truth bounding box (GTBB) is calculated as:

IoU = |PredBB ∩ GTBB| / |PredBB ∪ GTBB|

The metric sweeps over a range of IoU thresholds t, calculating an average precision (AP) value for each t. At a threshold of t, a predicted object is considered a "true positive" if its intersection over union with a ground truth object is greater than t.
At each threshold t, a precision value is calculated based on the number of true positives (TP), false negatives (FN) and false positives (FP) resulting from comparing the predicted bounding boxes to all ground truth bounding boxes:

AP(t) = TP(t) / (TP(t) + FP(t) + FN(t))

A true positive is counted when a single predicted bounding box matches a ground truth bounding box with an IoU above the threshold. A false positive is counted when a predicted bounding box has no associated ground truth bounding box with an IoU above the threshold. A false negative is counted when a ground truth bounding box has no associated predicted bounding box with an IoU above the threshold. If there are no ground truth bounding boxes for a given CT image, any number of predictions (all false positives) results in the image receiving a score of zero, which is included in the mean average precision. The average precision of a single case was calculated as the mean of the above AP(t) values at the IoU thresholds t = (0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75).

The Caverns Report subtask is considered a multi-label binary classification problem. The ranking of this subtask was done first by mean ROC-AUC and then by minimum ROC-AUC over the 3 target labels.

2.4. Participation

In 2022, 6 groups from 5 countries submitted at least one run. Four groups participated in each subtask, and 2 of them participated in both. As in the previous editions, each group could submit up to 10 runs. In total, 43 scored runs were submitted (17 for Caverns Detection and 26 for Caverns Report).

3. Results

The Caverns Detection subtask was scored using the mean average precision at the different intersection over union (IoU) thresholds. The Caverns Report subtask was evaluated as a multi-label classification problem and scored using mean ROC-AUC as the primary score and minimum ROC-AUC as the secondary score. Tables 2 and 3 show the final results for each group's best run in each subtask.
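To make the detection measure described in Section 2.3 concrete, the following sketch computes the per-image score for axis-aligned 3D boxes. The greedy one-to-one matching, the function names and the convention that an image with no ground truth and no predictions scores 1 are our own illustrative assumptions, not the official evaluation code.

```python
def iou_3d(a, b):
    """IoU of two axis-aligned 3D boxes given as (z1, y1, x1, z2, y2, x2)."""
    inter = 1.0
    for i in range(3):
        inter *= max(0.0, min(a[i + 3], b[i + 3]) - max(a[i], b[i]))
    vol = lambda c: (c[3] - c[0]) * (c[4] - c[1]) * (c[5] - c[2])
    union = vol(a) + vol(b) - inter
    return inter / union if union > 0 else 0.0

def image_ap(preds, gts,
             thresholds=(0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75)):
    """Per-image score: mean over IoU thresholds t of TP/(TP + FP + FN),
    greedily matching each prediction to at most one ground truth box."""
    if not gts:
        # No ground truth caverns: any prediction is a false positive and
        # the image scores 0; an empty prediction is assumed to score 1.
        return 0.0 if preds else 1.0
    scores = []
    for t in thresholds:
        unmatched, tp = list(gts), 0
        for p in preds:
            ious = [iou_3d(p, g) for g in unmatched]
            if ious and max(ious) > t:
                tp += 1
                unmatched.pop(ious.index(max(ious)))  # one match per GT box
        fp, fn = len(preds) - tp, len(gts) - tp
        scores.append(tp / (tp + fp + fn))
    return sum(scores) / len(scores)

# A prediction covering half of a 10x10x10 ground truth box has IoU 0.5,
# so it counts as a true positive only at the thresholds below 0.5.
gt = (0, 0, 0, 10, 10, 10)
pred = (0, 0, 0, 10, 10, 5)
score = image_ap([pred], [gt])   # matched at t = 0.4 and 0.45 only -> 2/8 = 0.25
```

The final leaderboard value is then the mean of this per-image score over all test CTs, which is how a handful of empty-but-overpredicted images can noticeably lower a run's mAP.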
Caverns Detection subtask: The detailed statistics of prediction quality are presented in Table 4.

Table 2
Results obtained by the participants of the Caverns Detection subtask. Only the best run of each participant is reported here.

Group name     | Institution                                                   | mAP (IoU)
CSIRO          | Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Herston, Queensland, Australia; CSIRO Data61, Imaging and Computer Vision Group, Pullenvale, Queensland, Australia; Queensland University of Technology, Brisbane, Queensland, Australia | 0.504
SenticLab.UAIC | Alexandru Ioan Cuza University of Iasi, Romania               | 0.295
KDE-lab        | KDE Laboratory, Department of Computer Science and Engineering, Toyohashi University of Technology, Aichi, Japan | 0.185
SDVA-UCSD      | San Diego VA HCS, San Diego, CA, USA                          | 0.000

Table 3
Results obtained by the participants of the Caverns Report subtask. Only the best run of each participant is reported here.

Group name          | Institution                                              | Mean ROC-AUC | Min ROC-AUC
SDVA-UCSD           | San Diego VA HCS, San Diego, CA, USA                     | 0.687        | 0.513
KDE-lab             | KDE Laboratory, Department of Computer Science and Engineering, Toyohashi University of Technology, Aichi, Japan | 0.658 | 0.317
KL_BP_SSN           | Sri Sivasubramaniya Nadar College of Engineering, Chennai, India | 0.536 | 0.413
SSN_Dheepak_Kavitha | SSN College of Engineering, Chennai, India               | 0.461        | 0.256

CSIRO [23] is the winner of the subtask with an mAP score of 0.504. In their experiments, the CSIRO team compared several approaches based on 2D and 3D CNNs. In the 2D case, YOLO v5 was applied to axial and coronal slices of the CT; in the 3D case, a custom Retina-U-Net-like model was used. The winning method was based on a 3D Retina-U-Net-based model followed by a custom routine for merging the predicted 3D boxes. The SenticLab.UAIC team ranked 2nd with a good true negative rate, but a rather poor true positive prediction ratio. The team's approach was based on a publicly available pretrained 3D lung nodule detection model. The KDE-lab [24] team ranked 3rd.
The team's best solution was obtained using slice-wise analysis of masked CT data with a YOLO v3 CNN.

Caverns Report subtask: The detailed label-wise scores are presented in Table 5. SDVA-UCSD [25] is the winner of the subtask with a mean ROC-AUC score of 0.687 and a minimum ROC-AUC score of 0.513. In their winning approach, the SDVA-UCSD group applied a 3D ResNet model with a convolutional block attention mechanism and a semi-supervised training strategy that allowed them to use the data set provided in the detection subtask. The KDE-lab [24] team ranked 2nd with a mean ROC-AUC score of 0.658, but at the same time had the best scores for the "Thick walls" and "Foci" labels. The group reported slice-wise analysis using pre-trained 2D CNNs (EfficientNet, DenseNet) and also applied a resolution increase technique using SRGAN as a preprocessing step. The KL_BP_SSN [26] team ranked 3rd using a simple custom 3D CNN with four blocks of (Convolution, MaxPooling, BatchNorm) as feature extractor in their experiments. The SSN_Dheepak_Kavitha [27] team ranked 4th, also using a simple custom 3D CNN.

Table 4
Extended prediction statistics for the Caverns Detection subtask. TP (t = x): true positives (correctly predicted bounding boxes at IoU threshold x); TN: true negatives (no ground truth bounding box and no predicted one).

               | TP (t = 0.4) | TP (t = 0.50) | TP (t = 0.60) | TP (t = 0.75) | TN
Ground Truth   | 318          | 318           | 318           | 318           | 40
CSIRO          | 114 (36%)    | 96 (30%)      | 80 (25%)      | 39 (12%)      | 38 (95%)
SenticLab.UAIC | 13 (4%)      | 8 (4%)        | 6 (3%)        | 0             | 39 (98%)
KDE-lab        | 18 (6%)      | 11 (3%)       | 2 (1%)        | 0             | 23 (58%)

Table 5
ROC-AUC scores for each label in the Caverns Report subtask.

Group name          | Thick walls | Calcifications | Foci
SDVA-UCSD           | 0.513       | 0.889          | 0.659
KDE-lab             | 0.910       | 0.317          | 0.746
KL_BP_SSN           | 0.718       | 0.413          | 0.476
SSN_Dheepak_Kavitha | 0.256       | 0.492          | 0.635

4.
Discussion and Conclusions

The results obtained in the task cannot be compared to those of previous editions, since this is the first edition of the cavern-dedicated task; furthermore, this is the first time that the TB task switched from classification problems to a detection problem. However, we can compare the approaches of the participants. Similar to previous years, all groups used 2D and/or 3D CNNs in both subtasks. Based on the analysis of their working notes, we can conclude that both best-scoring solutions took advantage of volumetric analysis, in contrast to previous task editions, where projection-based approaches were more effective. The majority of the participants used transfer learning techniques wherever possible and executed pre-processing steps such as resizing, grouping, normalization, slice filtering, etc. At the same time, we observed obvious drawbacks in the approaches: similar to previous years, the participants tended to ignore part of the available information. For example, two groups did not use the provided masks at all, none of the groups reported utilizing both masks, and only two groups took advantage of sharing data between subtasks.

The analysis of the results shows that the best scores are reasonably high for both subtasks. The top score for Caverns Detection is better than we expected, taking into account the complexity of the 3D detection problem: the winning team was able to predict more than a third of the cavern regions at an IoU threshold of 0.4. The top score for the Caverns Report subtask is not as high, but we can note that combining the best scores for each label from the top-2 teams would give a rather high mean ROC-AUC score of 0.85 (although we must keep in mind the limited reliability of these estimates, caused by the data scarcity in this subtask). As a result, we can conclude that despite the rather low number of participants in 2022, we saw interesting and effective approaches.
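The ranking rule of the Caverns Report subtask and the combined-best estimate quoted above can be reproduced with a short sketch. The pair-counting ROC-AUC implementation and the function names are our own illustrative choices, not the official scoring code; the per-label values are taken from Tables 3 and 5.

```python
def roc_auc(labels, scores):
    """ROC-AUC as the Mann-Whitney pair statistic: the fraction of
    (positive, negative) pairs ranked correctly, counting ties as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ranking_key(per_label_aucs):
    """Subtask ranking: mean ROC-AUC first, minimum ROC-AUC as tie-breaker."""
    return (sum(per_label_aucs) / len(per_label_aucs), min(per_label_aucs))

# SDVA-UCSD per-label scores from Table 5 reproduce its Table 3 entry.
mean_auc, min_auc = ranking_key([0.513, 0.889, 0.659])   # ~ (0.687, 0.513)

# Combining the best score for each label across the top-2 teams:
best_per_label = [0.910, 0.889, 0.746]   # Thick walls, Calcifications, Foci
combined = sum(best_per_label) / 3       # ~ 0.848, the "0.85" quoted above
```

With only 16 test cases per label, each of these AUC values rests on very few (positive, negative) pairs, which is why the text above stresses the limited reliability of the estimates.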
In general, the task was successful and its outcome is informative and useful. Possible updates for future editions of the cavern-related TB task should consider: (i) extending the data set size and the label count for the cavern report; (ii) switching from detection to a segmentation problem.

Acknowledgements

All the data for the tuberculosis task were provided by the Republican Research and Practical Center for Pulmonology and Tuberculosis located in Minsk, Belarus. The data were collected and labeled in the framework of several projects that aim at the creation of information resources on lung TB and drug resistance challenges. The projects were conducted by a multi-disciplinary team and funded by the National Institute of Allergy and Infectious Diseases, National Institutes of Health (NIH), U.S. Department of Health and Human Services, USA, through the Civilian Research and Development Foundation (CRDF). The dedicated web portal (http://tbportals.niaid.nih.gov/) developed in the framework of the projects stores information on almost 5,000 TB patients from 16 countries. The information includes CT scans, X-ray images, genome data, clinical and social data. Data collection was supported by the National Institute of Allergy and Infectious Diseases, National Institutes of Health, US Department of Health and Human Services, CRDF project RDAA9-20-67103-1 "Year 9: Belarus TB Database and TB Portals".

References

[1] J. Kalpathy-Cramer, A. García Seco de Herrera, D. Demner-Fushman, S. Antani, S. Bedrick, H. Müller, Evaluating performance of biomedical image retrieval systems: Overview of the medical image retrieval task at ImageCLEF 2004–2014, Computerized Medical Imaging and Graphics 39 (2015) 55–61.
[2] H. Müller, P. Clough, T. Deselaers, B. Caputo (Eds.), ImageCLEF – Experimental Evaluation in Visual Information Retrieval, volume 32 of The Springer International Series On Information Retrieval, Springer, Berlin Heidelberg, 2010.
[3] A.
García Seco de Herrera, R. Schaer, S. Bromuri, H. Müller, Overview of the ImageCLEF 2016 medical task, in: Working Notes of CLEF 2016 (Cross Language Evaluation Forum), 2016.
[4] H. Müller, P. Clough, W. Hersh, A. Geissbuhler, ImageCLEF 2004–2005: Results, experiences and new ideas for image retrieval evaluation, in: International Conference on Content-Based Multimedia Indexing (CBMI 2005), IEEE, Riga, Latvia, 2005.
[5] T. Deselaers, T. M. Deserno, H. Müller, Automatic medical image annotation in ImageCLEF 2007: Overview, results, and discussion, Pattern Recognition Letters 29 (2008) 1988–1995.
[6] B. Ionescu, H. Müller, R. Peteri, J. Rückert, A. Ben Abacha, A. G. S. de Herrera, C. M. Friedrich, L. Bloch, R. Brüngel, A. Idrissi-Yaghir, H. Schäfer, S. Kozlovski, Y. D. Cid, V. Kovalev, L.-D. Ştefan, M. G. Constantin, M. Dogariu, A. Popescu, J. Deshayes-Chossart, H. Schindler, J. Chamberlain, A. Campello, A. Clark, Overview of the ImageCLEF 2022: Multimedia retrieval in medical, social media and nature applications, in: Experimental IR Meets Multilinguality, Multimodality, and Interaction, Proceedings of the 13th International Conference of the CLEF Association (CLEF 2022), LNCS Lecture Notes in Computer Science, Springer, Bologna, Italy, 2022.
[7] B. Ionescu, H. Müller, R. Peteri, A. Ben Abacha, M. Sarrouti, D. Demner-Fushman, S. A. Hasan, S. Kozlovski, V. Liauchuk, Y. Dicente, V. Kovalev, O. Pelka, A. G. S. de Herrera, J. Jacutprakart, C. M. Friedrich, R. Berari, A. Tauteanu, D. Fichou, P. Brie, M. Dogariu, L. D. Ştefan, M. G. Constantin, J. Chamberlain, A. Campello, A. Clark, T. A. Oliver, H. Moustahfid, A. Popescu, J.
Deshayes-Chossart, Overview of the ImageCLEF 2021: Multimedia retrieval in medical, nature, internet and social media applications, in: Experimental IR Meets Multilinguality, Multimodality, and Interaction, Proceedings of the 12th International Conference of the CLEF Association (CLEF 2021), LNCS Lecture Notes in Computer Science, Springer, Bucharest, Romania, 2021.
[8] B. Ionescu, H. Müller, R. Péteri, A. B. Abacha, V. Datla, S. A. Hasan, D. Demner-Fushman, S. Kozlovski, V. Liauchuk, Y. D. Cid, V. Kovalev, O. Pelka, C. M. Friedrich, A. G. S. de Herrera, V.-T. Ninh, T.-K. Le, L. Zhou, L. Piras, M. Riegler, P. Halvorsen, M.-T. Tran, M. Lux, C. Gurrin, D.-T. Dang-Nguyen, J. Chamberlain, A. Clark, A. Campello, D. Fichou, R. Berari, P. Brie, M. Dogariu, L. D. Ştefan, M. G. Constantin, Overview of the ImageCLEF 2020: Multimedia retrieval in medical, lifelogging, nature, and internet applications, in: Experimental IR Meets Multilinguality, Multimodality, and Interaction, volume 12260 of Proceedings of the 11th International Conference of the CLEF Association (CLEF 2020), LNCS Lecture Notes in Computer Science, Springer, Thessaloniki, Greece, 2020.
[9] B. Ionescu, H. Müller, R. Péteri, Y. Dicente Cid, V. Liauchuk, V. Kovalev, D. Klimuk, A. Tarasau, A. B. Abacha, S. A. Hasan, V. Datla, J. Liu, D. Demner-Fushman, D.-T. Dang-Nguyen, L. Piras, M. Riegler, M.-T. Tran, M. Lux, C. Gurrin, O. Pelka, C. M. Friedrich, A. G. S. de Herrera, N. Garcia, E. Kavallieratou, C. R. del Blanco, C. C. Rodríguez, N. Vasillopoulos, K. Karampidis, J. Chamberlain, A. Clark, A. Campello, ImageCLEF 2019: Multimedia retrieval in medicine, lifelogging, security and nature, in: Experimental IR Meets Multilinguality, Multimodality, and Interaction, volume 2380 of Proceedings of the 10th International Conference of the CLEF Association (CLEF 2019), LNCS Lecture Notes in Computer Science, Springer, Lugano, Switzerland, 2019.
[10] B. Ionescu, H. Müller, M. Villegas, A. G. S. de Herrera, C.
Eickhoff, V. Andrearczyk, Y. Dicente Cid, V. Liauchuk, V. Kovalev, S. A. Hasan, Y. Ling, O. Farri, J. Liu, M. Lungren, D.-T. Dang-Nguyen, L. Piras, M. Riegler, L. Zhou, M. Lux, C. Gurrin, Overview of ImageCLEF 2018: Challenges, datasets and evaluation, in: Experimental IR Meets Multilinguality, Multimodality, and Interaction, Proceedings of the Ninth International Conference of the CLEF Association (CLEF 2018), LNCS Lecture Notes in Computer Science, Springer, Avignon, France, 2018.
[11] B. Ionescu, H. Müller, M. Villegas, H. Arenas, G. Boato, D.-T. Dang-Nguyen, Y. Dicente Cid, C. Eickhoff, A. Garcia Seco de Herrera, C. Gurrin, B. Islam, V. Kovalev, V. Liauchuk, J. Mothe, L. Piras, M. Riegler, I. Schwall, Overview of ImageCLEF 2017: Information extraction from images, in: Experimental IR Meets Multilinguality, Multimodality, and Interaction – 8th International Conference of the CLEF Association, CLEF 2017, volume 10456 of Lecture Notes in Computer Science, Springer, Dublin, Ireland, 2017.
[12] M. Villegas, H. Müller, A. Garcia Seco de Herrera, R. Schaer, S. Bromuri, A. Gilbert, L. Piras, J. Wang, F. Yan, A. Ramisa, A. Dellandrea, R. Gaizauskas, K. Mikolajczyk, J. Puigcerver, A. H. Toselli, J.-A. Sanchez, E. Vidal, General overview of ImageCLEF at the CLEF 2016 labs, in: CLEF 2016 Proceedings, Lecture Notes in Computer Science, Springer, Evora, Portugal, 2016.
[13] M. Villegas, H. Müller, A. Gilbert, L. Piras, J. Wang, K. Mikolajczyk, A. García Seco de Herrera, S. Bromuri, M. A. Amin, M. Kazi Mohammed, B. Acar, S. Uskudarli, N. B. Marvasti, J. F. Aldana, M. d. M. Roldán García, General overview of ImageCLEF at the CLEF 2015 labs, in: Working Notes of CLEF 2015, Lecture Notes in Computer Science, Springer International Publishing, 2015.
[14] B. Caputo, H. Müller, B. Thomee, M. Villegas, R. Paredes, D. Zellhofer, H. Goeau, A. Joly, P. Bonnet, J. Martinez Gomez, I. Garcia Varea, C.
Cazorla, ImageCLEF 2013: the vision, the data and the open challenges, in: Working Notes of CLEF 2013 (Cross Language Evaluation Forum), 2013.
[15] World Health Organization, et al., Global tuberculosis report 2019 (2019).
[16] Y. Dicente Cid, A. Kalinovsky, V. Liauchuk, V. Kovalev, H. Müller, Overview of ImageCLEFtuberculosis 2017 - predicting tuberculosis type and drug resistances, in: CLEF2017 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Dublin, Ireland, 2017.
[17] Y. Dicente Cid, V. Liauchuk, V. Kovalev, H. Müller, Overview of ImageCLEFtuberculosis 2018 - detecting multi-drug resistance, classifying tuberculosis type, and assessing severity score, in: CLEF2018 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Avignon, France, 2018.
[18] Y. Dicente Cid, V. Liauchuk, D. Klimuk, A. Tarasau, V. Kovalev, H. Müller, Overview of ImageCLEFtuberculosis 2019 - Automatic CT-based Report Generation and Tuberculosis Severity Assessment, in: CLEF2019 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Lugano, Switzerland, 2019.
[19] S. Kozlovski, V. Liauchuk, Y. Dicente Cid, A. Tarasau, V. Kovalev, H. Müller, Overview of ImageCLEFtuberculosis 2020 - automatic CT-based report generation, in: CLEF2020 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Thessaloniki, Greece, 2020.
[20] S. Kozlovski, V. Liauchuk, Y. Dicente Cid, V. Kovalev, H. Müller, Overview of ImageCLEFtuberculosis 2021 - CT-based tuberculosis type classification, in: CLEF 2021 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Bucharest, Romania, 2021.
[21] Y. Dicente Cid, O. Jimenez-del-Toro, A. Depeursinge, H. Müller, Efficient and fully automatic segmentation of the lungs in CT volumes, in: O. Goksel, O. Jimenez-del-Toro, A. Foncubierta-Rodriguez, H. Müller (Eds.), Proceedings of the VISCERAL Challenge at ISBI, number 1390 in CEUR Workshop Proceedings, 2015, pp. 31–35.
[22] V. Liauchuk, V.
Kovalev, ImageCLEF 2017: Supervoxels and co-occurrence for tuberculosis CT image classification, in: CLEF2017 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Dublin, Ireland, 2017.
[23] B. Xin, H. Min, A. Gillman, K. Bevan, D. Jason, N. Aaron, CSIRO at the ImageCLEFmed 2022 Tuberculosis Caverns Detection Challenge: A 2D and 3D deep learning detection network approach, in: CLEF2022 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Bologna, Italy, 2022.
[24] T. Asakawa, R. Tsuneda, K. Shimizu, M. Aono, Caverns Detection and Caverns Report in Tuberculosis: lesion detection based on image using YOLO-V3 and median based multi-label multi-class classification using SRGAN, in: CLEF2022 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Bologna, Italy, 2022.
[25] X. Lu, A. Yan, E. Y. Chang, C.-N. Hsu, J. McAuley, J. Du, A. Gentili, Semi-supervised Multi-Label Classification with 3D CBAM Resnet for Tuberculosis Cavern Report, in: CLEF2022 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Bologna, Italy, 2022.
[26] S. Srinivasan, S. Shankar, L. Kalinathan, P. Balasundaram, N. K. N, S. Velayutham, T. N, V. A. N, Automated Classification of Lung Tuberculosis using 3D Deep Convolutional Neural Networks, in: CLEF2022 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Bologna, Italy, 2022.
[27] S. Dheepak, S. Kavitha, G. Raghuraman, SSN MLRG at ImageCLEF 2022 Tuberculosis: Caverns Report using 3D CNN and Uniformizing Techniques, in: CLEF2022 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Bologna, Italy, 2022.