<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">PMC: Paired Multi-Contrast MRI Dataset at 1.5T and 3T for Supervised Image2Image Translation</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Fatemeh</forename><surname>Bagheri</surname></persName>
							<email>fatemeh.bagheri@mail.utoronto.ca</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Medical Biophysics</orgName>
								<orgName type="institution">University of Toronto</orgName>
								<address>
									<settlement>Toronto</settlement>
									<region>ON</region>
									<country key="CA">Canada</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">Krembil Brain Institute</orgName>
								<orgName type="institution">University Health Network</orgName>
								<address>
									<settlement>Toronto</settlement>
									<region>ON</region>
									<country key="CA">Canada</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Kamil</forename><surname>Uludag</surname></persName>
							<email>kamil.uludag@uhn.ca</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Medical Biophysics</orgName>
								<orgName type="institution">University of Toronto</orgName>
								<address>
									<settlement>Toronto</settlement>
									<region>ON</region>
									<country key="CA">Canada</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">Krembil Brain Institute</orgName>
								<orgName type="institution">University Health Network</orgName>
								<address>
									<settlement>Toronto</settlement>
									<region>ON</region>
									<country key="CA">Canada</country>
								</address>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="department">Physical Sciences Platform</orgName>
								<orgName type="institution">Sunnybrook Research Institute</orgName>
								<address>
									<settlement>Toronto</settlement>
									<region>ON</region>
									<country key="CA">Canada</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff3">
								<orgName type="department">Cognitive and Mental Health Workshop (ML4CMH)</orgName>
								<address>
									<addrLine>AAAI 2024</addrLine>
									<settlement>Vancouver</settlement>
									<region>BC</region>
									<country key="CA">Canada</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">PMC: Paired Multi-Contrast MRI Dataset at 1.5T and 3T for Supervised Image2Image Translation</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">A8B21BAEC33224FB0768190DC0A84A1B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:46+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Magnetic resonance imaging</term>
					<term>supervised image translation</term>
					<term>paired MRI dataset</term>
					<term>multi-contrast MRI dataset</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Access to magnetic resonance imaging (MRI) scans of the same subjects, encompassing various contrasts and field strengths, is crucial for brain studies involving supervised image translation for predicting missing or unavailable MRI data. However, such datasets covering both low and high fields are scarce. To bridge this gap, we propose a semi-synthesized dataset comprising Paired Multi-Contrast magnetic resonance (MR) images in T1, T2, and PD contrasts at both 1.5T and 3T for the same subjects. We also present it in both 2- and 3-dimensional formats, making it compatible with a wide range of models. We evaluate our proposed dataset using standard evaluation metrics along with morphology-based methods, and showcase the performance of a U-Net based architecture in different applications using our dataset. Finally, we release our dataset to facilitate future research involving multi-contrast MR image translation.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Within the domain of brain studies, magnetic resonance imaging (MRI) provides unrivaled soft tissue contrast and is now the leading imaging modality for clinical research and care. It serves as a cornerstone for disease detection, precise diagnostics, and vigilant treatment monitoring across diverse age groups <ref type="bibr" target="#b0">[1]</ref>. The distinctive feature of MRI lies in its remarkable capability to generate highly detailed 3-dimensional (3D) images, with a particular focus on capturing the intricacies of soft tissues, such as gray and white matter. This unique attribute positions MRI as an invaluable tool for delving into the complexities of the brain's internal structure and function <ref type="bibr" target="#b1">[2]</ref>. Magnetic resonance (MR) images are acquired across diverse biophysical contrasts (e.g., T1, T2, and PD) and at different magnetic field strengths (e.g., 0.2T to 7T), each capturing specific characteristics of the underlying anatomy <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>. Higher field strengths, along with higher spatial resolution, can reveal richer information and superior image quality of the brain tissue relative to images acquired at lower field strengths and resolutions.</p><p>Image-to-image (I2I) translation is a computer vision technique employed to enhance image quality and content. Within the field of MRI, it includes translation tasks such as from one contrast to another within the same field strength (i.e., cross-modality) and from low- to high-field MR images in the same contrast. Although this technique can be applied using both supervised and unsupervised approaches, supervised learning has shown higher performance, as it enables the generation of high-quality images with sharp details and robust quantitative performance <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b6">6]</ref>. 
However, the requirement for paired datasets poses a significant challenge, as almost no accessible dataset includes paired MR images at both low and high field strengths, in multiple contrasts, for the same subjects. For instance, the most widely used datasets in previous MRI studies include the Alzheimer's Disease Neuroimaging Initiative (ADNI) 1 <ref type="bibr" target="#b7">[7]</ref>, Information eXtraction from Images (IXI) 2 , and datasets sourced from the Human Connectome Project (HCP) 3 , each of which has limitations. For example, all of the mentioned datasets provide only raw 3D MR images, necessitating intricate pre-processing steps including registration and brain extraction. Moreover, they include either paired MR images limited to a single contrast, or multiple contrasts limited to a single field strength.</p><p>To address this gap, we leverage the IXI dataset, which includes unpaired 3D MRI scans in T1, T2, and PD for different subjects at 1.5T and 3T. We propose a semi-synthesized dataset, PMC, which includes Paired Multi-Contrast MR images at 1.5T and 3T for the same subjects.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">PMC Dataset</head><p>The PMC dataset is pre-processed and ready to use for supervised and semi-supervised learning methods in tasks such as cross-modality translation, high-field MR image prediction, super-resolution, and multi-contrast MR image translation. This comprehensive dataset comprises MR images from 181 subjects, meticulously prepared in both 2-dimensional (2D) and 3D formats to accommodate the diverse range of models compatible with each of these formats. As Figure <ref type="figure" target="#fig_0">1</ref> shows, the dataset includes paired images in T1, T2, and PD contrasts at both 1.5T and 3T for each subject, generated from the IXI dataset. In the 3D format, the total number of images in each contrast at each field strength is 181. All MR images across contrasts and field strengths for each subject are registered and share the same orientation. Additionally, the brain is extracted and the skull is removed.</p><p>In the 2D format, there are a total of 6576 images in each contrast at each field strength. These images are pre-processed and share the same size of 256×150. Like their 3D counterparts, they have undergone registration for a consistent orientation, brain extraction with skull removal, and augmentation using techniques such as flipping, rotation, scaling, and noise addition.</p><p>Furthermore, we provide a split version of the dataset for the 2D format. The entire dataset is divided into three subsets: the training set, the validation set, and the test set, with a ratio as close as possible to 80%-10%-10%. Consequently, the data size for each contrast at each field strength is 5268, 648, and 660 for the training, validation, and test sets, respectively. To prevent models from exploiting subject-specific patterns in predictions, we ensure that no image from the same subject (including its augmentations) is distributed across different subsets. 
All versions of our proposed dataset will be released through our GitHub repository<ref type="foot" target="#foot_0">4</ref> .</p></div>
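As an illustration of the subject-wise splitting described above, the following Python sketch groups every image (and its augmentations) by subject before assigning subsets; the function and variable names are illustrative and are not part of the released dataset tooling.

```python
import numpy as np

def subject_wise_split(subject_ids, ratios=(0.8, 0.1, 0.1), seed=0):
    """Split a list of per-image subject IDs into train/val/test index sets
    so that all images (and augmentations) of one subject stay together.
    Illustrative sketch; names and defaults are assumptions."""
    rng = np.random.default_rng(seed)
    subjects = np.unique(subject_ids)
    rng.shuffle(subjects)
    n = len(subjects)
    n_train = int(round(ratios[0] * n))
    n_val = int(round(ratios[1] * n))
    train_s = set(subjects[:n_train])
    val_s = set(subjects[n_train:n_train + n_val])
    splits = {"train": [], "val": [], "test": []}
    # Assign every image index to the subset that owns its subject,
    # so no subject is ever distributed across different subsets.
    for idx, sid in enumerate(subject_ids):
        if sid in train_s:
            splits["train"].append(idx)
        elif sid in val_s:
            splits["val"].append(idx)
        else:
            splits["test"].append(idx)
    return splits
```

Because whole subjects, rather than individual images, are shuffled and assigned, the resulting ratios only approximate 80%-10%-10%, matching the "as close as possible" split described above.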
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Data Synthesis Pipeline</head><p>To create a dataset consisting of MR images in multiple contrasts at both 1.5T and 3T for the same pseudo-subjects, a series of processing steps is undertaken, as illustrated in Figure <ref type="figure" target="#fig_1">2</ref>. First, leveraging demographic information from the IXI dataset, we select 181 subjects from the 1.5T set and 181 subjects from the 3T set, aiming for the closest possible match in demographic details. Subjects at 1.5T and 3T are then paired by matching sex, age, and ethnicity as closely as possible. We pair based on this demographic information because aging, sex, and ethnicity affect the brain's overall structure and the relative contributions of gray and white matter <ref type="bibr" target="#b8">[8]</ref>.</p><p>Following this, MR images are reoriented to the standard orientation and cropped to dimensions of 256×150 to ensure uniform size and to remove neck regions, which improves the brain extraction step. Subsequently, the brain is extracted and the skull is removed. We employ the FMRIB Software Library (FSL) <ref type="foot" target="#foot_1">5</ref> for these tasks, as it provides a comprehensive set of tools for image and statistical analysis of functional, structural, and diffusion MRI brain imaging data <ref type="bibr" target="#b9">[9]</ref>.</p><p>Next, to generate MR images for the same pseudo-subjects, we follow two main steps. First, the T2- and PD-weighted MR images of each subject are registered to the corresponding T1-weighted MR images at each field strength using rigid registration. Rigid registration is necessary for MR images of the same subject because of differences in head angle and position during data acquisition. 
Second, the 1.5T MR images are taken as the reference, and the 3T MR images in each contrast are registered to the respective contrast at 1.5T using non-linear registration. For the registration steps, we utilize the Advanced Normalization Tools (ANTs) <ref type="foot" target="#foot_2">6</ref> software, as it is widely recognized as an advanced medical image registration and segmentation toolkit that effectively manages, interprets, and visualizes multidimensional data <ref type="bibr" target="#b10">[10]</ref>. All of the aforementioned processing steps are applied to the 3D MR images, resulting in the PMC dataset in 3D format.</p><p>Moreover, to extend the dataset to networks that employ only 2D data and to increase the number of samples, the 3D MR images are converted to 2D. Specifically, we select slices that predominantly contain the brain (i.e., 10 slices per 3D MR image) while avoiding slices with minimal or no brain content. Additionally, to increase the size and generalizability of the dataset, data augmentation techniques are applied, including flipping, rotation (with an angle of ±5 degrees), noise addition (e.g., Gaussian noise with a random standard deviation in the range [5, 10] and salt-and-pepper noise with a probability uniformly sampled from the interval [0.05, 0.1]), and scaling (with a factor of 1.2). As a result, the data size for each contrast at each field strength increases to 6576.</p></div>
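The augmentation operations listed above can be sketched for a single 2D slice as follows; this is a minimal illustration using NumPy and SciPy with the parameter values stated in the text, not the exact implementation used to build the dataset.

```python
import numpy as np
from scipy import ndimage

def augment_slice(img, rng):
    """Return augmented variants of one 2D slice (illustrative sketch)."""
    out = {"flip": img[:, ::-1].copy()}              # horizontal flip
    angle = rng.uniform(-5.0, 5.0)                   # rotation within ±5 degrees
    out["rotate"] = ndimage.rotate(img, angle, reshape=False, order=1)
    sigma = rng.uniform(5.0, 10.0)                   # Gaussian noise, std in [5, 10]
    out["gauss"] = img + rng.normal(0.0, sigma, img.shape)
    p = rng.uniform(0.05, 0.10)                      # salt-and-pepper probability
    sp = img.copy()
    u = rng.random(img.shape)
    sp[np.less(u, p / 2)] = img.min()                # pepper pixels
    sp[u > 1 - p / 2] = img.max()                    # salt pixels
    out["saltpepper"] = sp
    zoomed = ndimage.zoom(img, 1.2, order=1)         # scale by 1.2, then center-crop
    r0 = (zoomed.shape[0] - img.shape[0]) // 2
    c0 = (zoomed.shape[1] - img.shape[1]) // 2
    out["scale"] = zoomed[r0:r0 + img.shape[0], c0:c0 + img.shape[1]]
    return out
```

Each variant keeps the 256×150 slice size, so augmented images can be mixed freely with the originals in the 2D dataset.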
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Data quality assessment</head><p>To assess the quality of the synthesized MR images at 3T relative to the reference images at 1.5T, we first employ evaluation metrics including mean squared error (MSE), peak signal-to-noise ratio (PSNR), Pearson correlation (CORR), and mutual information (MI) <ref type="bibr" target="#b11">[11]</ref>. We compare the synthesized 3T images with the corresponding reference images at 1.5T, as no labels are available at 3T for checking the synthesis quality. Using these metrics, we assess how closely the synthesized 3T images match the 1.5T references in terms of contrast and overall structure, as reported in Table <ref type="table">1</ref>.</p></div>
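The four metrics above admit straightforward NumPy implementations; the following sketch is illustrative (for example, the histogram bin count used for mutual information is an assumption, not a value reported here).

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

def psnr(a, b, data_range=1.0):
    """PSNR in dB for images with intensities in [0, data_range]."""
    return float(10.0 * np.log10(data_range ** 2 / mse(a, b)))

def pearson_corr(a, b):
    """Pearson correlation between flattened images."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information in bits (bin count is assumed)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # only nonzero joint-probability cells
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

Lower MSE and higher PSNR, CORR, and MI indicate a closer match between a synthesized 3T image and its 1.5T reference.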
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>Synthesized MR images at 3T compared with the reference images at 1.5T, evaluated using the MSE, PSNR, CORR, and MI metrics (the directions of the vertical arrows indicate higher image quality; results are reported as mean±standard deviation). It should be noted, however, that even for the same contrast, MR images acquired at 1.5T and 3T differ in the relative signal intensities of gray and white matter and, accordingly, in the resulting output contrast <ref type="bibr" target="#b12">[12]</ref>. Consequently, to investigate the quality of the synthesized images while minimizing the impact of contrast differences during evaluation, we conduct morphology-based comparative analyses, which have proven reliable in state-of-the-art studies in related fields <ref type="bibr" target="#b13">[13]</ref>. We extract the morphological patterns of the images (using edge detection techniques) at both 1.5T and 3T for each contrast, as shown in Figure <ref type="figure">3</ref>, to assess whether the patterns and morphology of the synthesized data at 3T align with the reference data at 1.5T. Next, we evaluate the extracted patterns using MSE and the structural similarity index measure (SSIM) <ref type="bibr" target="#b14">[14]</ref>, as reported in Table <ref type="table" target="#tab_0">2</ref>. Also, to compare the synthesized images with the references across different spatial frequency ranges, and accordingly different levels of detail, we perform a 2D wavelet analysis on the synthesized images and the corresponding references to decompose them into four frequency components, and select the three highest-frequency components, named Subband 1, 2, and 3, respectively <ref type="bibr" target="#b15">[15]</ref>, as Figure <ref type="figure">4</ref> illustrates. Table <ref type="table" target="#tab_1">3</ref> displays the subband-wise comparative results.</p></div>
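A single level of a 2D wavelet decomposition, yielding one approximation band and three high-frequency subbands analogous to Subbands 1-3 above, can be sketched as follows; the Haar basis is used here for simplicity, as the specific wavelet used in the analysis is not stated.

```python
import numpy as np

def haar_subbands(img):
    """One level of a 2D Haar wavelet transform: returns the approximation
    band and three high-frequency detail subbands (illustrative sketch)."""
    # Crop to even dimensions so rows and columns can be paired.
    x = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2]
    # Pair rows: averages are the low-pass output, differences the high-pass.
    lo_r = (x[0::2, :] + x[1::2, :]) / 2.0
    hi_r = (x[0::2, :] - x[1::2, :]) / 2.0
    # Repeat along columns to obtain the four subbands.
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0   # approximation
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0   # detail along columns
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0   # detail along rows
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh
```

Comparing a synthesized image and its reference subband by subband (e.g., with MSE and SSIM, as in Table 3) isolates the agreement at different spatial frequencies.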
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Application</head><p>The PMC dataset can be applied to a wide range of tasks involving MR image translation, in particular image generation, different stages of model development, and pretraining models when the target dataset is small. In the following, we investigate the capability of our dataset in supervised methods for the aforementioned tasks.</p><p>U-Net is one of the most commonly used neural networks for tasks such as cross-modality, super-resolution, and multi-contrast MR image translation <ref type="bibr" target="#b16">[16,</ref><ref type="bibr" target="#b13">13,</ref><ref type="bibr" target="#b17">17,</ref><ref type="bibr" target="#b18">18]</ref>. Thus, to further investigate the applications of the proposed dataset, a U-Net based architecture, previously proposed in <ref type="bibr" target="#b17">[17]</ref> and shown to perform well in the mentioned applications, is implemented in this paper for the following tasks: 1. cross-modality MR image translation; 2. 3T MR image prediction from the same contrast at 1.5T; and 3. 3T MR image prediction using 1.5T multi-contrast MR images. Table <ref type="table">4</ref> displays the results for image generation in each task using the PMC dataset, with the highest performance achieved in Task 1 for 1.5T T1 to 1.5T T2 translation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 4</head><p>Quantitative results of MR images generated using the U-Net, compared with the ground-truth images, using the PMC dataset (the directions of the vertical arrows indicate higher image quality; results are reported as mean±standard deviation). Moreover, to investigate the effectiveness of the PMC dataset in cross-dataset evaluation scenarios, we utilize the latest release of the Open Access Series of Imaging Studies (OASIS) <ref type="foot" target="#foot_3">7</ref>, known as the OASIS3 dataset <ref type="bibr" target="#b19">[19]</ref>, which includes T2 MR images at 1.5T and 3T, for Task 2 (3T MR image prediction from the same contrast at 1.5T). First, we train and test the model on the OASIS3 dataset. Then, to assess the effectiveness of the PMC dataset, we train the model on PMC and test it on the OASIS3 dataset. The results for both approaches, shown in Table <ref type="table" target="#tab_3">5</ref>, suggest that our dataset achieves acceptable performance. Specifically, the U-Net demonstrates higher efficacy when trained on PMC for 1.5T T2 to 3T T2 MR image translation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>In this study, we introduced the PMC dataset, which consists of paired MR images in multiple contrasts (T1, T2, and PD) at both 1.5T and 3T field strengths for the same subjects. The dataset is pre-processed and presented in 3D, 2D, and a split version of the 2D format, ensuring compatibility with a wide range of models and applicability to image translation tasks within MRI. Quality evaluation of the proposed dataset involved the MSE, PSNR, CORR, SSIM, and MI evaluation metrics, along with morphology-based methods. We also demonstrated the applicability of the data for supervised methods, particularly in cross-modality MR image translation, 3T MR image prediction from the same contrast at 1.5T, and 3T MR image prediction using 1.5T multi-contrast MR images. Moreover, we highlighted its extensibility to cross-dataset evaluation scenarios.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Examples of T1-, T2-, and PD-weighted MR images at 1.5T and 3T for the same pseudo-subjects in the PMC dataset.</figDesc><graphic coords="2,105.94,327.80,58.31,73.07" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Pipeline for data synthetization.</figDesc><graphic coords="2,231.78,327.80,58.31,73.07" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>.006 20.3±1.02 0.97±0.005 0.88±0.035 T2 0.015±0.006 21.3±1.77 0.90±0.020 0.77±0.032 PD 0.012±0.004 20.5±1.57 0.96±0.008 0.80±0.034</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :Figure 4 :</head><label>34</label><figDesc>Figure 3: Example of extracted patterns from reference MR image at 1.5T and its corresponding synthesized MR image at 3T for the T2 contrast.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head></head><label></label><figDesc>T1, T2, PD→ 3T T1 0.0033±0.002 25.16±1.87 1.5T T1, T2, PD→ 3T T2 0.0043±0.002 23.97±1.72 1.5T T1, T2, PD→ 3T PD 0.0047±0.002 23.49±1.73</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 2</head><label>2</label><figDesc>Patterns extracted from synthesized MR images at 3T compared with the ones extracted from reference images at 1.5T, evaluated using the MSE and SSIM metrics (the directions of the vertical arrows indicate higher image quality; results are reported as mean±standard deviation).</figDesc><table><row><cell>Contrast</cell><cell>MSE↓</cell><cell>SSIM↑</cell></row><row><cell>T1</cell><cell>0.12±0.012</cell><cell>0.62±0.033</cell></row><row><cell>T2</cell><cell>0.11±0.033</cell><cell>0.60±0.037</cell></row><row><cell>PD</cell><cell>0.12±0.013</cell><cell>0.60±0.036</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 3</head><label>3</label><figDesc>Subbands of synthesized MR images at 3T compared with the reference images at 1.5T, evaluated using the MSE and SSIM metrics (the directions of the vertical arrows indicate higher image quality; results are reported as mean±standard deviation).</figDesc><table><row><cell>Contrast</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head></head><label></label><figDesc></figDesc><table><row><cell>Contrast</cell><cell>Metric</cell><cell>Subband 1</cell><cell>Subband 2</cell><cell>Subband 3</cell></row><row><cell>T1</cell><cell>MSE↓</cell><cell>0.005±0.004</cell><cell>0.01±0.010</cell><cell>0.009±0.010</cell></row><row><cell></cell><cell>SSIM↑</cell><cell>0.74±0.028</cell><cell>0.70±0.032</cell><cell>0.62±0.033</cell></row><row><cell>T2</cell><cell>MSE↓</cell><cell>0.005±0.003</cell><cell>0.007±0.006</cell><cell>0.007±0.007</cell></row><row><cell></cell><cell>SSIM↑</cell><cell>0.70±0.034</cell><cell>0.66±0.034</cell><cell>0.62±0.037</cell></row><row><cell>PD</cell><cell>MSE↓</cell><cell>0.004±0.004</cell><cell>0.008±0.009</cell><cell>0.008±0.009</cell></row><row><cell></cell><cell>SSIM↑</cell><cell>0.74±0.035</cell><cell>0.70±0.037</cell><cell>0.65±0.037</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 5</head><label>5</label><figDesc>Quantitative results on the OASIS3 dataset, using the U-Net model trained on OASIS3 vs. the PMC dataset (the directions of the vertical arrows indicate higher image quality; results are reported as mean±standard deviation).</figDesc><table><row><cell>Trained on</cell><cell>Translation</cell><cell>MSE↓</cell><cell>PSNR↑</cell></row><row><cell>OASIS3</cell><cell>1.5T T1 → 3T T1</cell><cell>0.007±0.002</cell><cell>21.73±1.47</cell></row><row><cell></cell><cell>1.5T T2 → 3T T2</cell><cell>0.009±0.003</cell><cell>20.93±1.33</cell></row><row><cell>PMC</cell><cell>1.5T T1 → 3T T1</cell><cell>0.011±0.004</cell><cell>19.73±1.31</cell></row><row><cell></cell><cell>1.5T T2 → 3T T2</cell><cell>0.007±0.002</cell><cell>21.3±1.44</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_0">https://github.com/FaatemehBaagheri/PMC-Paired-Multi-Contrast-MRI-Dataset-at-1.5T-and-3T-for-Supervised-Image2Image-Translation</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_1">https://fsl.fmrib.ox.ac.uk/fsl/fslwiki</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_2">http://stnava.github.io/ANTs/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_3">https://www.oasis-brains.org/#data</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Low-field MRI: Clinical promise and challenges</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">C</forename><surname>Arnold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">W</forename><surname>Freeman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Litt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Stein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Magnetic Resonance Imaging</title>
		<imprint>
			<biblScope unit="volume">57</biblScope>
			<biblScope unit="page" from="25" to="44" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A review of magnetic resonance imaging and its clinical applications</title>
		<author>
			<persName><forename type="first">T</forename><surname>Sindhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kumaratharan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Anandan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2022 6th International Conference on Devices, Circuits and Systems (ICDCS), IEEE</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="38" to="42" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<idno type="DOI">10.1016/B978-1-4377-0906-3.00006-7</idno>
		<ptr target="https://doi.org/10.1016/B978-1-4377-0906-3.00006-7" />
		<title level="m">CHAPTER 6 -Magnetic Resonance Imaging</title>
				<editor>
			<persName><forename type="first">S</forename><forename type="middle">D</forename><surname>Waldman</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Campbell</surname></persName>
		</editor>
		<meeting><address><addrLine>Philadelphia</addrLine></address></meeting>
		<imprint>
			<publisher>W.B. Saunders</publisher>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Comparison of high-field-strength versus low-field-strength mri of the shoulder</title>
		<author>
			<persName><forename type="first">T</forename><surname>Magee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Shapiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Williams</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">American Journal of Roentgenology</title>
		<imprint>
			<biblScope unit="volume">181</biblScope>
			<biblScope unit="page" from="1211" to="1215" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Unsupervised image-to-image translation: A review</title>
		<author>
			<persName><forename type="first">H</forename><surname>Hoyez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Schockaert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Rambach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mirbach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Stricker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title/>
		<idno type="DOI">10.3390/s22218540</idno>
		<ptr target="https://www.mdpi.com/1424-8220/22/21/8540.doi:10.3390/s22218540" />
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Cyclegan using semi-supervised learning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Okada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Nakano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Miyauchi</surname></persName>
		</author>
		<ptr target="https://api.semanticscholar.org/CorpusID:214764843" />
	</analytic>
	<monogr>
		<title level="j">Aust. J. Intell. Inf. Process. Syst</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="10" to="19" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The alzheimer&apos;s disease neuroimaging initiative (adni): Mri methods</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">R</forename><surname>Jack</surname><genName>Jr</genName></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Bernstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">C</forename><surname>Fox</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Thompson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Alexander</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Harvey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Borowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Britson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Whitwell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Ward</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Magnetic Resonance Imaging: An Official Journal of the International Society for Magnetic Resonance in Medicine</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page" from="685" to="691" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">The aging slopes of brain structures vary by ethnicity and sex: Evidence from a large magnetic resonance imaging dataset from a single scanner of cognitively healthy elderly people in korea</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">Y</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">Y</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">H</forename><surname>Seo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">H</forename><surname>Choo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-K</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-M</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Cho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">C</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">H</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<orgName type="collaboration">Alzheimer's Disease Neuroimaging Initiative</orgName>
		</author>
		<idno type="DOI">10.3389/fnagi.2020.00233</idno>
		<ptr target="https://www.frontiersin.org/articles/10.3389/fnagi.2020.00233" />
	</analytic>
	<monogr>
		<title level="j">Frontiers in Aging Neuroscience</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Pre-dementia memory impairment is associated with white matter tract affection</title>
		<author>
			<persName><forename type="first">C</forename><surname>Jack</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lowe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Senjem</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Weigand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Kemp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Shiung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Petersen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The American Journal of Geriatric Psychiatry</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="368" to="375" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Advanced normalization tools (ANTs)</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">B</forename><surname>Avants</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Tustison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Song</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Insight j</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="1" to="35" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">T1-weighted and T2-weighted MRI image synthesis with convolutional generative adversarial networks</title>
		<author>
			<persName><forename type="first">D</forename><surname>Kawahara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Nagata</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Reports of Practical Oncology and Radiotherapy</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="page" from="35" to="42" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Low-field magnetic resonance imaging: its history and renaissance</title>
		<author>
			<persName><forename type="first">M</forename><surname>Hori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hagiwara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Goto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Wada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Aoki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Investigative Radiology</title>
		<imprint>
			<biblScope unit="volume">56</biblScope>
			<biblScope unit="page">669</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Quantitative brain morphometry of portable low-field-strength MRI using super-resolution machine learning</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Iglesias</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Schleicher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Laguna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Billot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Schaefer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mckaig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">N</forename><surname>Goldstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">N</forename><surname>Sheth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Rosen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">T</forename><surname>Kimberly</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Radiology</title>
		<imprint>
			<biblScope unit="volume">306</biblScope>
			<biblScope unit="page">e220522</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Structural texture similarity metrics for image analysis and retrieval</title>
		<author>
			<persName><forename type="first">J</forename><surname>Zujovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">N</forename><surname>Pappas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">L</forename><surname>Neuhoff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="2545" to="2558" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Wavelet Transform</title>
		<author>
			<persName><forename type="first">D</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-17989-2_3</idno>
		<ptr target="https://doi.org/10.1007/978-3-030-17989-2_3" />
		<imprint>
			<date type="published" when="2019">2019</date>
			<publisher>Springer International Publishing</publisher>
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">U-net: Convolutional networks for biomedical image segmentation</title>
		<author>
			<persName><forename type="first">O</forename><surname>Ronneberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fischer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Brox</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Medical Image Computing and Computer-Assisted Intervention -MICCAI 2015</title>
				<editor>
			<persName><forename type="first">N</forename><surname>Navab</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Hornegger</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">W</forename><forename type="middle">M</forename><surname>Wells</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">F</forename><surname>Frangi</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="234" to="241" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">MR image prediction at high field strength from MR images taken at low field strength using multi-to-one translation</title>
		<author>
			<persName><forename type="first">F</forename><surname>Bagheri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Uludag</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">CMBES Proceedings</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">U-net and its variants for medical image segmentation: A review of theory and applications</title>
		<author>
			<persName><forename type="first">N</forename><surname>Siddique</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Paheding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">P</forename><surname>Elkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Devabhaktuni</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2021.3086020</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="82031" to="82057" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">OASIS-3: longitudinal neuroimaging, clinical, and cognitive dataset for normal aging and Alzheimer disease</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Lamontagne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">L</forename><surname>Benzinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Morris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Keefe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hornbeck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Xiong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Grant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hassenstab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Moulder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">G</forename><surname>Vlassenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">medRxiv</title>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
