=Paper= {{Paper |id=Vol-3649/Paper21 |storemode=property |title=PMC: Paired Multi-Contrast MRI Dataset at 1.5T and 3T for Supervised Image2Image Translation (short paper) |pdfUrl=https://ceur-ws.org/Vol-3649/Paper21.pdf |volume=Vol-3649 |authors=Fatemeh Bagheri,Kamil Uludag |dblpUrl=https://dblp.org/rec/conf/aaai/BagheriU24 }} ==PMC: Paired Multi-Contrast MRI Dataset at 1.5T and 3T for Supervised Image2Image Translation (short paper)== https://ceur-ws.org/Vol-3649/Paper21.pdf
                                PMC: Paired Multi-Contrast MRI Dataset at 1.5T and 3T for
                                Supervised Image2Image Translation
Fatemeh Bagheri¹,²,∗, Kamil Uludag¹,²,³
¹ Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
² Krembil Brain Institute, University Health Network, Toronto, ON, Canada
³ Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON, Canada


                                               Abstract
                                               Access to magnetic resonance imaging (MRI) scans on the same subjects, encompassing various contrasts and field strengths,
                                               is crucial for brain studies involving supervised image translation for predicting missing or unavailable MRI data. However,
                                               there is a scarcity of such datasets covering both low and high fields. To bridge this gap, we propose a semi-synthesized
                                               dataset including Paired Multi-Contrast magnetic resonance (MR) images in T1, T2, and PD contrasts at both 1.5T and 3T
                                               for the same subjects. We also present it in both 2- and 3-dimensional formats, making it compatible with a wide range of
models. We evaluate our proposed dataset using image-quality metrics along with morphology-based methods, and showcase the performance of a U-Net-based architecture in different applications using our dataset. Finally, we release our dataset to facilitate future research involving multi-contrast MR image translation.

                                               Keywords
                                               Magnetic resonance imaging, supervised image translation, paired MRI dataset, multi-contrast MRI dataset



1. Introduction

Within the domain of brain studies, magnetic resonance imaging (MRI) provides unrivaled soft tissue contrast and is now the leading imaging modality for clinical research and care. It serves as a cornerstone for disease detection, precise diagnostics, and vigilant treatment monitoring across diverse age groups [1]. The distinctive feature of MRI lies in its remarkable capability to generate highly detailed 3-dimensional (3D) images, with a particular focus on capturing the intricacies of soft tissues, such as gray and white matter. This unique attribute positions MRI as an invaluable tool for delving into the complexities of the brain's internal structure and function [2]. Magnetic resonance (MR) images are acquired across diverse biophysical contrasts (e.g., T1, T2, and PD) and at different magnetic field strengths (i.e., 0.2 to 7T), each capturing specific characteristics of the underlying anatomy [3, 4]. Consequently, higher field strengths, along with higher spatial resolution, can reveal richer information and superior image quality of the brain tissue relative to images acquired at lower field strength and resolution.

Image-to-image (I2I) translation is a computer vision technique employed to enhance image quality and content. Within the field of MRI, it includes translation tasks such as from one contrast to another within the same field strength (i.e., cross-modality) and from low- to high-field MR images for the same contrast. Although this technique can be applied using both supervised and unsupervised approaches, supervised learning has shown higher performance, as it enables the generation of high-quality images with sharp details and robust quantitative performance [5, 6]. However, the requirement for paired datasets imposes a significant challenge, as there is almost no accessible dataset available that includes paired MR images at both low and high field strengths for the same subjects and in multiple contrasts. For instance, the most widely used datasets in previous work in the field of MRI include the Alzheimer's Disease Neuroimaging Initiative (ADNI)¹ [7], Information eXtraction from Images (IXI)², and datasets sourced from the Human Connectome Project (HCP)³, each of which has limitations. For example, in all mentioned datasets, only raw 3D MR images are presented, which necessitates intricate pre-processing steps including registration and brain extraction. Moreover, they include either MR images of paired subjects limited to a single contrast, or multiple contrasts but limited to one field strength.

To address this gap, we leverage the IXI dataset, which includes unpaired 3D MRI scans in T1, T2, and PD for different subjects at 1.5T and 3T. We propose a semi-synthesized dataset, PMC, which includes Paired Multi-Contrast MR images at 1.5T and 3T for the same subjects.

Machine Learning for Cognitive and Mental Health Workshop (ML4CMH), AAAI 2024, Vancouver, BC, Canada
∗ Corresponding author.
Email: fatemeh.bagheri@mail.utoronto.ca (F. Bagheri); kamil.uludag@uhn.ca (K. Uludag)
ORCID: 0009-0001-7860-0992 (F. Bagheri)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
¹ https://adni.loni.usc.edu/data-samples/access-data/
² https://brain-development.org/ixi-dataset/
³ https://www.humanconnectome.org




CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073
2. PMC Dataset

The PMC dataset is pre-processed and ready to use for supervised and semi-supervised learning methods in tasks such as cross-modality translation, high-field MR image prediction, super-resolution, and multi-contrast MR image translation. This comprehensive dataset comprises MR images from 181 subjects, precisely crafted in both 2-dimensional (2D) and 3D formats to accommodate a diverse range of models compatible with each of these formats. As Figure 1 shows, the dataset includes paired images in T1, T2, and PD contrasts at both 1.5T and 3T for each subject, generated from the IXI dataset.

Figure 1: Examples of T1-, T2-, and PD-weighted MR images at 1.5T and 3T for the same pseudo-subjects in the PMC dataset.

In the 3D format, the total number of images in each contrast at each field strength is 181. All MR images across contrasts and field strengths for each subject are registered and have the same orientation. Additionally, the brain is extracted and the skull is removed.

In the 2D format, there are a total of 6576 images in each contrast at each field strength. These images are pre-processed and have the same size of 256×150. Similar to their 3D counterparts, they have undergone registration for a consistent orientation, brain extraction with skull removal, and augmentation using techniques such as flipping, rotation, scaling, and noise addition.

Furthermore, we provide a split version of the dataset for the 2D format. The entire dataset is divided into three subsets: the training set, the validation set, and the test set, with an as-close-as-possible ratio of 80%-10%-10%. Consequently, the data size for each contrast at each field strength is 5268, 648, and 660 for the training, validation, and test sets, respectively. To prevent models from exploiting subject-specific patterns in predictions, we ensure that no image from the same subject (including its augmentations) is distributed across different subsets. All versions of our proposed dataset will be released through our GitHub repository⁴.

⁴ https://github.com/FaatemehBaagheri/PMC-Paired-Multi-Contrast-MRI-Dataset-at-1.5T-and-3T-for-Supervised-Image2Image-Translation

2.1. Data Synthesis Pipeline

To create a dataset consisting of MR images in multiple contrasts at both 1.5T and 3T for the same pseudo-subjects, a series of processing steps is undertaken, as illustrated in Figure 2.

Figure 2: Pipeline for data synthesization. [Stages shown: pairing subjects at 1.5T with 3T based on sex, age, and ethnicity; reorienting to the standard orientation and cropping to the size of 256×150; extracting the brain and removing the skull (using FSL); applying rigid registration for images within the same field strength, then non-linear registration within the same contrast for the obtained images of paired subjects (using ANTs); making 2D images by selecting the best slices and augmenting the data (original, ±5° rotated, zoomed, flipped, and noisy variants).]

Firstly, by leveraging demographic information from the IXI dataset, we meticulously select 181 subjects from the 1.5T set and 181 subjects from the 3T set, aiming for the closest possible match in terms of demographic details. Subsequently, subjects at 1.5T and 3T are paired based on matching sex, age, and ethnicity as closely as possible. The reason for pairing based on this demographic information is that aging, sex, and ethnicity affect the overall brain structure and the gray and white matter contributions [8].

Following this, MR images are reoriented to the standard orientation and cropped to dimensions of 256×150, both to ensure uniform size and to reduce neck parts in the image, with the aim of improving the brain extraction step. Subsequently, the brain is extracted and the skull is removed. We employ the FMRIB Software Library (FSL)⁵ for these tasks, as it provides a comprehensive set of tools for image analysis and statistical analysis of functional, structural, and diffusion MRI brain imaging data [9].

⁵ https://fsl.fmrib.ox.ac.uk/fsl/fslwiki

Next, to generate MR images for the same subjects, we follow two main steps. Firstly, T2- and PD-weighted MR images of each subject are registered to the corresponding T1-weighted MR images at each field strength using rigid registration. It is worth mentioning that rigid registration is necessary for MR images of the same subject, due
to the difference in the angle and position of the head during acquisition of the data. Secondly, 1.5T MR images are taken as the reference, and 3T MR images at each contrast are registered to their respective contrast at 1.5T using non-linear registration. For the registration steps, we utilize the Advanced Normalization Tools (ANTs)⁶ software, as it is widely recognized as an advanced medical image registration and segmentation toolkit that effectively manages, interprets, and visualizes multidimensional data [10]. Also, it should be noted that all aforementioned processing steps are applied to the 3D MR images, resulting in the PMC dataset in 3D format.

Moreover, to extend the data generalizability to networks solely employing 2D data and to increase the number of samples, the 3D MR images are transformed to 2D. Specifically, we select slices that predominantly contain the brain (i.e., 10 slices per 3D MR image) while avoiding slices with minimal or no brain content. Additionally, to increase the size and generalizability of the dataset, data augmentation techniques are applied, including flipping, rotation (with an angle of ±5 degrees), noise addition (e.g., Gaussian with a random standard deviation in the range [5,10] and salt-and-pepper with a probability uniformly sampled from the interval [0.05,0.1]), and scaling (with a factor of 1.2). As a result, the data size for each contrast at each field strength increased to 6576.

2.2. Data quality assessment

To assess the quality of the synthesized MR images at 3T compared to the reference images at 1.5T, we first employ evaluation metrics including mean squared error (MSE), peak signal-to-noise ratio (PSNR), Pearson correlation (CORR), and mutual information (MI) [11]. We compare the synthesized 3T images with the corresponding reference images at 1.5T, as there are no labels available at 3T for checking the synthesis quality. Thus, utilizing these metrics, we assess how closely the 3T images are synthesized compared to the 1.5T ones in terms of contrast and overall structure, as reported in Table 1.

Table 1
Synthesized MR images at 3T compared with the reference images at 1.5T, evaluated using the MSE, PSNR, CORR, and MI metrics (the directions of the vertical arrows indicate higher image quality; results are reported as the mean±standard deviation).

    Contrast    MSE↓           PSNR↑        CORR↑        MI↑
    T1          0.014±0.006    20.3±1.02    0.97±0.005   0.88±0.035
    T2          0.015±0.006    21.3±1.77    0.90±0.020   0.77±0.032
    PD          0.012±0.004    20.5±1.57    0.96±0.008   0.80±0.034

However, it should be noted that in MR images acquired at 1.5T and 3T, even for the same contrast, there are differences in the relative signal intensities in gray and white matter and, accordingly, in the resulting output contrast [12]. Consequently, to investigate the quality of the synthesized images and minimize the impact of contrast differences during evaluation, we conduct morphology-based comparative analyses, which have proven to be reliable in state-of-the-art studies in related fields [13]. We extract the morphological patterns of images (using edge detection techniques) at both 1.5T and 3T for each contrast, as shown in Figure 3, to assess whether the patterns and morphology of the synthesized data at 3T align with the reference data at 1.5T. Next, we evaluate the extracted patterns using MSE and the structural similarity index measure (SSIM) [14], as reported in Table 2. Also, to compare the synthesized images with the references within different spatial frequency ranges and, accordingly, different levels of detail, we perform a 2D wavelet analysis on the synthesized images and corresponding references to decompose them into four different frequency components and select the three highest-frequency ones, named Subband 1, 2, and 3, respectively [15], as Figure 4 illustrates. Table 3 displays the subband-wise comparative results.

Figure 3: Example of extracted patterns from a reference MR image at 1.5T and its corresponding synthesized MR image at 3T for the T2 contrast. (Columns: image, extracted pattern; rows: 1.5T, 3T.)
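The four similarity metrics used in Section 2.2 (MSE, PSNR, CORR, and MI) can be written in a few lines of NumPy. The following is a minimal sketch assuming images normalized to [0,1]; the 64-bin joint histogram used for mutual information is an assumption of this illustration, not a parameter reported here.

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((x - y) ** 2))

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB, relative to the given intensity range."""
    return float(10 * np.log10(data_range ** 2 / mse(x, y)))

def corr(x, y):
    """Pearson correlation of the flattened images."""
    return float(np.corrcoef(x.ravel(), y.ravel())[0, 1])

def mutual_information(x, y, bins=64):
    """Mutual information (in nats) from the joint intensity histogram.

    The bin count is an assumption of this sketch.
    """
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)            # marginal of x
    py = pxy.sum(axis=0, keepdims=True)            # marginal of y
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Demonstration on a random image and a slightly shifted copy.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
print(psnr(a, a + 0.01), corr(a, a + 0.01))
```

With image slices stored as NumPy arrays (e.g., loaded from NIfTI files), these functions reproduce the kind of comparison summarized in Table 1.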
⁶ http://stnava.github.io/ANTs/
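The subband analysis of Section 2.2 can be reproduced with a single-level 2D discrete wavelet transform. The sketch below implements a Haar DWT in plain NumPy; the wavelet family is not specified above, so Haar is an assumption of this illustration, and the three detail subbands play the role of Subbands 1-3.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT: returns (LL, (LH, HL, HH)).

    Haar is an assumed wavelet family for this sketch. LL is the low-pass
    approximation; the three detail subbands carry the high-frequency content
    compared subband-wise in the analysis above.
    """
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2]  # even dims
    a = img[0::2, 0::2]  # top-left of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0   # approximation
    lh = (a - b + c - d) / 2.0   # details along rows
    hl = (a + b - c - d) / 2.0   # details along columns
    hh = (a - b - c + d) / 2.0   # diagonal details
    return ll, (lh, hl, hh)

img = np.arange(16.0).reshape(4, 4)
ll, (lh, hl, hh) = haar_dwt2(img)
print(ll.shape)  # half the resolution in each dimension
```

Because the /2 normalization makes the four filters orthonormal, the total energy of the image is preserved across the four subbands, so per-subband MSE and SSIM comparisons remain on a consistent scale.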
Figure 4: Example of the selected subbands for a reference MR image at 1.5T and its corresponding synthesized MR image at 3T for the PD contrast. (Columns: image, Subband 1, Subband 2, Subband 3; rows: 1.5T, 3T.)

Table 2
Patterns extracted from synthesized MR images at 3T compared with the ones extracted from the reference images at 1.5T, evaluated using the MSE and SSIM metrics (the directions of the vertical arrows indicate higher image quality; results are reported as the mean±standard deviation).

    Contrast    MSE↓          SSIM↑
    T1          0.12±0.012    0.62±0.033
    T2          0.11±0.033    0.60±0.037
    PD          0.12±0.013    0.60±0.036

Table 3
Subbands of synthesized MR images at 3T compared with the reference images at 1.5T, evaluated using the MSE and SSIM metrics (the directions of the vertical arrows indicate higher image quality; results are reported as the mean±standard deviation).

    Contrast    Metric    Subband 1      Subband 2     Subband 3
    T1          MSE↓      0.005±0.004    0.01±0.010    0.009±0.010
                SSIM↑     0.74±0.028     0.70±0.032    0.62±0.033
    T2          MSE↓      0.005±0.003    0.007±0.006   0.007±0.007
                SSIM↑     0.70±0.034     0.66±0.034    0.62±0.037
    PD          MSE↓      0.004±0.004    0.008±0.009   0.008±0.009
                SSIM↑     0.74±0.035     0.70±0.037    0.65±0.037

3. Application

The PMC dataset can be applied in a wide range of tasks involving MR image translation, in particular image generation, different stages of model development, and pre-training models for small target dataset sizes. In the following, we investigate the capability of our dataset in supervised methods for the aforementioned tasks.

U-Net is one of the most commonly used neural networks for tasks such as cross-modality translation, super-resolution, and multi-contrast MR image translation [16, 13, 17, 18]. Thus, to further investigate the application of the proposed dataset, a U-Net-based architecture, which was previously proposed in [17] and has shown high performance in the mentioned applications, is implemented in this paper for the following tasks:

    1. Cross-modality MR image translation
    2. 3T MR image prediction from the same contrast at 1.5T
    3. 3T MR image prediction using 1.5T multi-contrast MR images

Table 4 displays the results for image generation in each task using the PMC dataset, indicating the highest performance in Task 1, for the 1.5T T1 to 1.5T T2 translation.

Table 4
Quantitative results of generated MR images using the U-Net compared with the ground-truth images, using the PMC dataset (the directions of the vertical arrows indicate higher image quality; results are reported as the mean±standard deviation).

    Task    Translation                  MSE↓           PSNR↑
    1       1.5T T2 → 1.5T T1            0.0022±0.001   26.97±1.89
            1.5T T1 → 1.5T T2            0.0019±0.001   27.93±2.38
    2       1.5T T1 → 3T T1              0.0028±0.002   25.83±1.71
            1.5T T2 → 3T T2              0.0046±0.002   23.78±1.95
            1.5T PD → 3T PD              0.0047±0.002   23.55±1.76
    3       1.5T T1, T2, PD → 3T T1      0.0033±0.002   25.16±1.87
            1.5T T1, T2, PD → 3T T2      0.0043±0.002   23.97±1.72
            1.5T T1, T2, PD → 3T PD      0.0047±0.002   23.49±1.73

Moreover, to investigate the effectiveness of the PMC dataset in developing models based on cross-dataset evaluation scenarios, we utilize the latest release of the Open Access Series of Imaging Studies (OASIS)⁷, known as the OASIS3 dataset [19], which includes MR images at 1.5T and 3T in T2, for Task 2 (3T MR image prediction from the same contrast at 1.5T). First, we train and test the model on the OASIS3 dataset. Then, to compare the effectiveness of using the PMC dataset, we use it to train the model and test the model on the OASIS3 dataset. The results for both approaches, shown in Table 5, suggest that our dataset demonstrates acceptable performance. Specifically, the U-Net demonstrates higher efficacy when trained on PMC for 1.5T T2 to 3T T2 MR image translation.

⁷ https://www.oasis-brains.org/#data

4. Conclusion

In this study, we introduced the PMC dataset, which consists of paired MR images in multiple contrasts of T1, T2, and PD and at both 1.5T and 3T field strengths for
the same subjects. The dataset is pre-processed and presented in 3D, 2D, and a split version of 2D, ensuring compatibility with a wide range of models and applications in image translation tasks within MRI. Quality evaluation of the proposed dataset involved the use of the MSE, PSNR, CORR, SSIM, and MI evaluation metrics, along with morphology-based methods. We also demonstrated the applicability of the data for supervised methods, particularly in cross-modality MR image translation, 3T MR image prediction from the same contrast at 1.5T, and 3T MR image prediction using 1.5T multi-contrast MR images. Moreover, we highlighted its extendability to cross-dataset evaluation scenarios.

Table 5
Quantitative results on the OASIS3 dataset, using a U-Net model trained on the OASIS3 vs. the PMC dataset (the directions of the vertical arrows indicate higher image quality; results are reported as the mean±standard deviation).

    Trained on    Translation        MSE↓          PSNR↑
    OASIS3        1.5T T1 → 3T T1    0.007±0.002   21.73±1.47
                  1.5T T2 → 3T T2    0.009±0.003   20.93±1.33
    PMC           1.5T T1 → 3T T1    0.011±0.004   19.73±1.31
                  1.5T T2 → 3T T2    0.007±0.002   21.3±1.44

References

[1] T. C. Arnold, C. W. Freeman, B. Litt, J. M. Stein, Low-field MRI: Clinical promise and challenges, Journal of Magnetic Resonance Imaging 57 (2023) 25–44.
[2] T. Sindhu, N. Kumaratharan, P. Anandan, A review of magnetic resonance imaging and its clinical applications, in: 2022 6th International Conference on Devices, Circuits and Systems (ICDCS), IEEE, 2022, pp. 38–42.
[3] S. D. Waldman, R. S. Campbell (Eds.), Chapter 6 - Magnetic Resonance Imaging, W.B. Saunders, Philadelphia, 2011. URL: https://www.sciencedirect.com/science/article/pii/B9781437709063000067. doi:10.1016/B978-1-4377-0906-3.00006-7.
[4] T. Magee, M. Shapiro, D. Williams, Comparison of high-field-strength versus low-field-strength MRI of the shoulder, American Journal of Roentgenology 181 (2003) 1211–1215.
[5] H. Hoyez, C. Schockaert, J. Rambach, B. Mirbach, D. Stricker, Unsupervised image-to-image translation: A review, Sensors 22 (2022). URL: https://www.mdpi.com/1424-8220/22/21/8540. doi:10.3390/s22218540.
[6] M. Okada, H. Nakano, A. Miyauchi, CycleGAN using semi-supervised learning, Aust. J. Intell. Inf. Process. Syst. 15 (2019) 10–19. URL: https://api.semanticscholar.org/CorpusID:214764843.
[7] C. R. Jack Jr, M. A. Bernstein, N. C. Fox, P. Thompson, G. Alexander, D. Harvey, B. Borowski, P. J. Britson, J. L. Whitwell, C. Ward, et al., The Alzheimer's Disease Neuroimaging Initiative (ADNI): MRI methods, Journal of Magnetic Resonance Imaging 27 (2008) 685–691.
[8] Y. Y. Choi, J. J. Lee, K. Y. Choi, E. H. Seo, I. H. Choo, H. Kim, M.-K. Song, S.-M. Choi, S. H. Cho, B. C. Kim, K. H. Lee, et al., The aging slopes of brain structures vary by ethnicity and sex: Evidence from a large magnetic resonance imaging dataset from a single scanner of cognitively healthy elderly people in Korea, Frontiers in Aging Neuroscience 12 (2020). URL: https://www.frontiersin.org/articles/10.3389/fnagi.2020.00233. doi:10.3389/fnagi.2020.00233.
[9] C. Jack, V. Lowe, M. Senjem, S. Weigand, B. Kemp, M. Shiung, R. Petersen, Pre-dementia memory impairment is associated with white matter tract affection, The American Journal of Geriatric Psychiatry 17 (2009) 368–375.
[10] B. B. Avants, N. Tustison, G. Song, et al., Advanced Normalization Tools (ANTs), Insight Journal 2 (2009) 1–35.
[11] D. Kawahara, Y. Nagata, T1-weighted and T2-weighted MRI image synthesis with convolutional generative adversarial networks, Reports of Practical Oncology and Radiotherapy 26 (2021) 35–42.
[12] M. Hori, A. Hagiwara, M. Goto, A. Wada, S. Aoki, Low-field magnetic resonance imaging: Its history and renaissance, Investigative Radiology 56 (2021) 669.
[13] J. E. Iglesias, R. Schleicher, S. Laguna, B. Billot, P. Schaefer, B. McKaig, J. N. Goldstein, K. N. Sheth, M. S. Rosen, W. T. Kimberly, Quantitative brain morphometry of portable low-field-strength MRI using super-resolution machine learning, Radiology 306 (2022) e220522.
[14] J. Zujovic, T. N. Pappas, D. L. Neuhoff, Structural texture similarity metrics for image analysis and retrieval, IEEE Transactions on Image Processing 22 (2013) 2545–2558.
[15] D. Zhang, Wavelet Transform, Springer International Publishing, Cham, 2019. URL: https://doi.org/10.1007/978-3-030-17989-2_3. doi:10.1007/978-3-030-17989-2_3.
[16] O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in: N. Navab, J. Hornegger, W. M. Wells, A. F. Frangi (Eds.), Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Springer International Publishing, Cham, 2015, pp. 234–241.
[17] F. Bagheri, K. Uludag, MR image prediction at high field strength from MR images taken at low field strength using multi-to-one translation, CMBES Proceedings 45 (2023).
[18] N. Siddique, S. Paheding, C. P. Elkin, V. Devabhaktuni, U-Net and its variants for medical image segmentation: A review of theory and applications, IEEE Access 9 (2021) 82031–82057. doi:10.1109/ACCESS.2021.3086020.
[19] P. J. LaMontagne, T. L. Benzinger, J. C. Morris, S. Keefe, R. Hornbeck, C. Xiong, E. Grant, J. Hassenstab, K. Moulder, A. G. Vlassenko, et al., OASIS-3: Longitudinal neuroimaging, clinical, and cognitive dataset for normal aging and Alzheimer disease, medRxiv (2019).