<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>Machine Learning for Cognitive and Mental Health Workshop, CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>MRI Dataset at 1.5T and 3T for Supervised Image2Image Translation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <contrib-id contrib-id-type="orcid">https://orcid.org/0009-0001-7860-0992</contrib-id>
          <string-name>Fatemeh Bagheri</string-name>
          <email>fatemeh.bagheri@mail.utoronto.ca</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kamil Uludag</string-name>
          <email>kamil.uludag@uhn.ca</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Medical Biophysics, University of Toronto</institution>
          ,
          <addr-line>Toronto, ON</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Krembil Brain Institute, University Health Network</institution>
          ,
          <addr-line>Toronto, ON</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Physical Sciences Platform, Sunnybrook Research Institute</institution>
          ,
          <addr-line>Toronto, ON</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <fpage>368</fpage>
      <lpage>375</lpage>
      <abstract>
        <p>Access to magnetic resonance imaging (MRI) scans of the same subjects, encompassing various contrasts and field strengths, is crucial for brain studies involving supervised image translation for predicting missing or unavailable MRI data. However, there is a scarcity of such datasets covering both low and high fields. To bridge this gap, we propose a semi-synthesized dataset including Paired Multi-Contrast magnetic resonance (MR) images in T1, T2, and PD contrasts at both 1.5T and 3T for the same subjects. We also present it in both 2- and 3-dimensional formats, making it compatible with a wide range of models. We evaluate our proposed dataset using evaluation metrics along with morphology-based methods, and showcase the performance of a U-Net based architecture in different applications using our dataset. Finally, we release our dataset to facilitate future research involving multi-contrast MR image translation.</p>
      </abstract>
      <kwd-group>
        <kwd>Magnetic resonance imaging</kwd>
        <kwd>supervised image translation</kwd>
        <kwd>paired MRI dataset</kwd>
        <kwd>multi-contrast MRI dataset</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Within the domain of brain studies, magnetic resonance imaging (MRI) provides unrivaled soft tissue contrast and is now the leading imaging modality for clinical research, precise diagnostics, and vigilant treatment monitoring across diverse age groups [1]. The distinctive feature of MRI lies in its remarkable capability to generate highly detailed 3-dimensional (3D) images, with a particular focus on capturing the intricacies of soft tissues, such as gray and white matter. This unique attribute positions MRI as an invaluable tool for delving into the complexities of the brain’s internal structure and function [2]. Magnetic resonance (MR) images are acquired across diverse biophysical contrasts (e.g., T1, T2, and PD) and at different magnetic field strengths (i.e., 0.2 to 7T), each capturing specific characteristics of the underlying anatomy [3, 4]. Consequently, higher field strengths, along with higher spatial resolution, can reveal richer information and superior image quality of the brain tissue relative to images acquired at lower field strength and resolution.</p>
      <p>Image-to-image (I2I) translation is a computer vision technique employed to enhance image quality and content. Within the field of MRI, it includes translation tasks such as from one contrast to another within the same field strength (i.e., cross-modality) and from low- to high-field MR images for the same contrast. Although this technique can be applied using both supervised and unsupervised approaches, supervised learning has shown higher performance [5, 6]. However, the requirement for paired datasets imposes a significant challenge, as there is almost no accessible dataset available that includes paired MR images at both low and high field strengths for the same subjects and in multiple contrasts. For instance, the most widely used datasets in the field of MRI include the Alzheimer’s Disease Neuroimaging Initiative (ADNI)<sup>1</sup> [7], Information eXtraction from Images (IXI)<sup>2</sup>, and datasets sourced from the Human Connectome Project (HCP)<sup>3</sup>, each of which has limitations. For example, in all mentioned datasets, only raw 3D MR images are presented, which necessitates intricate pre-processing steps including registration and brain extraction. Moreover, they include either MR images of paired subjects limited to a single contrast, or multiple contrasts but limited to one field strength.</p>
      <p>To address this gap, we leverage the IXI dataset, which includes unpaired 3D MRI scans in T1, T2, and PD for different subjects at 1.5T and 3T. We propose a semi-synthesized dataset, PMC, which includes Paired Multi-Contrast MR images at 1.5T and 3T for the same subjects.</p>
      <p><sup>1</sup>https://adni.loni.usc.edu/data-samples/access-data/ <sup>2</sup>https://brain-development.org/ixi-dataset/</p>
    </sec>
    <sec id="sec-2">
      <title>2. PMC Dataset</title>
      <p>The PMC dataset is pre-processed and ready to use for supervised and semi-supervised learning methods in tasks such as cross-modality translation, high-field MR image prediction, super-resolution, and multi-contrast MR image translation. This comprehensive dataset comprises MR images from 181 subjects, provided in both 2-dimensional (2D) and 3D formats to accommodate the diverse range of models compatible with each of these formats. As Figure 1 represents, the dataset includes paired images in T1, T2, and PD contrasts at both 1.5T and 3T for each subject, generated from the IXI dataset. In the split version of the dataset, each subject (with its augmentations) is distributed across different subsets.</p>
      <p>All versions of our proposed dataset will be released through our GitHub repository<sup>4</sup>.</p>
      <p><sup>4</sup>https://github.com/FaatemehBaagheri/PMC-Paired-Multi-Contrast-MRI-Dataset-at-1.5T-and-3T-for-Supervised-Image2ImageTranslation <sup>5</sup>https://fsl.fmrib.ox.ac.uk/fsl/fslwiki <sup>6</sup>http://stnava.github.io/ANTs/</p>
      <sec id="sec-2-1">
        <title>2.1. Data Synthesis Pipeline</title>
        <p>To create a dataset consisting of MR images in multiple contrasts at both 1.5T and 3T for the same pseudo-subjects, a series of processing steps are undertaken, as illustrated in Figure 2. First, the MR images are aligned using FSL<sup>5</sup>, owing to the difference in the angle and position of the head during acquisition of the data. Secondly, 1.5T MR images are taken as reference, and 3T MR images at each contrast are registered to their respective contrast at 1.5T using non-linear registration. For the registration steps, we utilize the Advanced Normalization Tools (ANTs)<sup>6</sup> software, as it is widely recognized as an advanced medical image registration and segmentation toolkit that effectively manages, interprets, and visualizes multidimensional data [10]. It should be noted that all aforementioned processing steps are applied to the 3D MR images, resulting in the PMC dataset in 3D format.</p>
        <p>Moreover, to extend the data generalizability to networks solely employing 2D data and to increase the number of samples, the 3D MR images are transformed to 2D. Specifically, we select slices that predominantly contain the brain (i.e., 10 slices per 3D MR image) while avoiding slices with minimal or no brain content. Additionally, to increase the size and generalizability of the dataset, data augmentation techniques are applied, including flipping, rotation (with an angle of ±5 degrees), noise addition (e.g., Gaussian with a random standard deviation in the range of [5,10] and salt-and-pepper with a probability uniformly sampled from the interval [0.05,0.1]), and scaling (with a factor of 1.2). As a result, the data size for each contrast at each field strength increased to 6576.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Data Quality Assessment</title>
        <p>To assess the quality of the synthesized MR images at 3T compared to the reference images at 1.5T, we first employ evaluation metrics including mean squared error (MSE), peak signal-to-noise ratio (PSNR), Pearson correlation (CORR), and mutual information (MI) [11]. We compare the synthesized 3T images with the corresponding reference images at 1.5T, as there are no labels available at 3T for checking the synthesis quality. Thus, utilizing these metrics, we assess how closely the 3T images match the 1.5T ones in terms of contrast and overall structure, as reported in Table 1.</p>
        <p>However, it should be noted that in MR images acquired at 1.5T and 3T, even for the same contrast, there are differences in the relative signal intensities in gray and white matter and accordingly in the resulting output contrast [12]. Consequently, to investigate the quality of the synthesized images and minimize the impact of contrast differences during evaluation, we conduct morphology-based comparative analyses, which have been proven to be reliable in state-of-the-art studies in related fields [13]. We extract the morphological patterns of images (using edge detection techniques) at both 1.5T and 3T for each contrast, as shown in Figure 3, to assess whether the patterns and morphology of the synthesized data at 3T align with the reference data at 1.5T. Next, we evaluate the extracted patterns using MSE and the structural similarity index measure (SSIM) [14], as reported in Table 2. Also, to compare the synthesized images with the references within different spatial frequency ranges, and accordingly different levels of detail, we perform 2D wavelet analysis on the synthesized images and the corresponding references to decompose them into four different frequency components and select the three highest-frequency ones, named Subband 1, 2, and 3, respectively [15], as Figure 4 illustrates. Table 3 displays the subband-wise comparative results.</p>
      </sec>
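      <p>As a concrete illustration of the augmentation recipe above (flipping, rotation by ±5 degrees, Gaussian and salt-and-pepper noise, and scaling by 1.2), the following is a minimal NumPy/SciPy sketch written for this description; it is not the authors’ released code, and the function name and the random slice are our own stand-ins.</p>
      <preformat>
```python
import numpy as np
from scipy import ndimage

def augment(img, rng):
    """Return augmented copies of one 2D slice, following the paper's
    recipe: flips, rotation by +/-5 degrees, Gaussian noise with sigma
    drawn from [5, 10], salt-and-pepper noise with probability drawn
    from [0.05, 0.1], and scaling by a factor of 1.2."""
    out = {}
    out["flip_lr"] = np.fliplr(img)
    out["flip_ud"] = np.flipud(img)
    angle = rng.choice([-5.0, 5.0])              # rotation of +/-5 degrees
    out["rotated"] = ndimage.rotate(img, angle, reshape=False, mode="nearest")
    sigma = rng.uniform(5.0, 10.0)               # random Gaussian noise level
    out["gaussian"] = img + rng.normal(0.0, sigma, img.shape)
    p = rng.uniform(0.05, 0.1)                   # salt-and-pepper probability
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[np.less(mask, p / 2)] = img.min()      # pepper pixels
    noisy[np.less(1 - p / 2, mask)] = img.max()  # salt pixels
    out["salt_pepper"] = noisy
    out["scaled"] = ndimage.zoom(img, 1.2, order=1)  # scaling factor 1.2
    return out

rng = np.random.default_rng(0)
slice2d = rng.uniform(0.0, 255.0, size=(64, 64))     # stand-in MR slice
augmented = augment(slice2d, rng)
```
      </preformat>
      <p>Comparable flips, rotations, and noise injections are available in common augmentation libraries; the sketch only mirrors the parameter ranges quoted above.</p>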
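      <p>The four reference-based metrics named above can be sketched in plain NumPy as follows. This is our own illustrative implementation (with a simple histogram-based estimate for mutual information), not the evaluation code behind Tables 1 to 3.</p>
      <preformat>
```python
import numpy as np

def mse(ref, syn):
    # Mean squared error between reference and synthesized images.
    return float(np.mean((ref - syn) ** 2))

def psnr(ref, syn):
    # Peak signal-to-noise ratio, with the peak taken from the reference.
    return float(10 * np.log10(ref.max() ** 2 / mse(ref, syn)))

def corr(ref, syn):
    # Pearson correlation between the flattened images.
    return float(np.corrcoef(ref.ravel(), syn.ravel())[0, 1])

def mutual_info(ref, syn, bins=32):
    # Histogram-based mutual information estimate.
    joint, _, _ = np.histogram2d(ref.ravel(), syn.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = np.greater(pxy, 0)                      # skip empty bins
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
ref = rng.uniform(0.0, 255.0, size=(64, 64))     # stand-in 1.5T reference
syn = ref + rng.normal(0.0, 5.0, size=(64, 64))  # stand-in synthesized 3T image
scores = {"MSE": mse(ref, syn), "PSNR": psnr(ref, syn),
          "CORR": corr(ref, syn), "MI": mutual_info(ref, syn)}
```
      </preformat>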
    </sec>
    <sec id="sec-3">
      <title>3. Application</title>
      <p>U-Net is one of the most commonly used neural networks for tasks such as cross-modality translation, super-resolution, and multi-contrast MR image translation [16, 13, 17, 18]. Thus, to further investigate the application of the proposed dataset, a U-Net based architecture, which was previously proposed in [17] and has shown high performance in the mentioned applications, is implemented in this paper for the following tasks:</p>
      <list list-type="order">
        <list-item><p>Cross-modality MR image translation</p></list-item>
        <list-item><p>3T MR image prediction from the same contrast at 1.5T</p></list-item>
        <list-item><p>3T MR image prediction using 1.5T multi-contrast MR images</p></list-item>
      </list>
      <p>Moreover, to investigate the effectiveness of the PMC dataset in developing models under cross-dataset evaluation scenarios, we utilize the latest release of the Open Access Series of Imaging Studies (OASIS)<sup>7</sup>, known as the OASIS3 dataset [19], which includes MR images at 1.5T and 3T in T2, for Task 2 (3T MR image prediction from the same contrast at 1.5T). First, we train and test the model on the OASIS3 dataset. Then, to compare the effectiveness of using the PMC dataset, we use it to train the model and test the model on the OASIS3 dataset. The results for both approaches, shown in Table 5, suggest that our dataset demonstrates acceptable performance. Specifically, the U-Net demonstrates higher efficacy when trained on PMC for 1.5T T2 to 3T T2 MR image translation.</p>
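      <p>The cross-dataset comparison above reduces to a simple protocol: fit on one dataset, score on a fixed test split of the other, and compare against the within-dataset baseline. The sketch below mirrors only that pairing logic; the U-Net is replaced by a hypothetical least-squares intensity map, and the synthetic arrays are stand-ins for PMC and OASIS3 pairs.</p>
      <preformat>
```python
import numpy as np

def train(pairs_15t, pairs_3t):
    # Hypothetical stand-in for U-Net training: fit one global linear
    # map from 1.5T to 3T intensities via least squares.
    a, b = np.polyfit(pairs_15t.ravel(), pairs_3t.ravel(), 1)
    return a, b

def psnr(ref, pred):
    err = np.mean((ref - pred) ** 2)
    return float(10 * np.log10(ref.max() ** 2 / err))

def evaluate(model, test_15t, test_3t):
    a, b = model
    return psnr(test_3t, a * test_15t + b)

rng = np.random.default_rng(2)
def fake_dataset(n):
    # Synthetic stand-in pairs: "3T" is a brightened, noisy copy of "1.5T".
    low = rng.uniform(0.0, 200.0, size=(n, 32, 32))
    high = 1.2 * low + 10 + rng.normal(0.0, 5.0, size=(n, 32, 32))
    return low, high

pmc_low, pmc_high = fake_dataset(20)        # stand-in for PMC
oasis_low, oasis_high = fake_dataset(20)    # stand-in for OASIS3

# Within-dataset baseline: train and test on "OASIS3".
m_oasis = train(oasis_low[:15], oasis_high[:15])
within = evaluate(m_oasis, oasis_low[15:], oasis_high[15:])

# Cross-dataset: train on "PMC", test on the same "OASIS3" split.
m_pmc = train(pmc_low, pmc_high)
cross = evaluate(m_pmc, oasis_low[15:], oasis_high[15:])
```
      </preformat>
      <p>Keeping the test split identical in both runs is what makes the two PSNR values directly comparable, as in Table 5.</p>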
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>In this study, we introduced the PMC dataset, which consists of paired MR images in multiple contrasts of T1, T2, and PD at both 1.5T and 3T field strengths for the same subjects. The dataset is pre-processed and presented in 3D, 2D, and a split version of 2D, ensuring compatibility with a wide range of models and applications in image translation tasks within MRI. Quality evaluation of the proposed dataset involved the use of the MSE, PSNR, CORR, SSIM, and MI evaluation metrics, along with morphology-based methods. We also demonstrated the applicability of the data for supervised methods, particularly in cross-modality MR image translation, 3T MR image prediction from the same contrast at 1.5T, and 3T MR image prediction using 1.5T multi-contrast MR images. Moreover, we highlighted its extendability to cross-dataset evaluation scenarios. The PMC dataset can thus be applied in a wide range of tasks involving MR image translation, in particular, image generation, different stages of model development, and pre-training models for small target dataset sizes.</p>
      <p><sup>7</sup>https://www.oasis-brains.org/#data</p>
      <p>… strength using multi-to-one translation, CMBES Proceedings 45 (2023).</p>
      <p>[18] N. Siddique, S. Paheding, C. P. Elkin, V. Devabhaktuni, U-Net and its variants for medical image segmentation: A review of theory and applications, IEEE Access 9 (2021) 82031–82057. doi:10.1109/ACCESS.2021.3086020.</p>
      <p>[19] P. J. LaMontagne, T. L. Benzinger, J. C. Morris, S. Keefe, R. Hornbeck, C. Xiong, E. Grant, J. Hassenstab, K. Moulder, A. G. Vlassenko, et al., OASIS-3: longitudinal neuroimaging, clinical, and cognitive dataset for normal aging and Alzheimer disease, medRxiv (2019).</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>