<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>World Conference on eXplainable Artificial
Intelligence: July</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Patch-based Intuitive Multimodal Prototypes Network (PIMPNet) for Alzheimer's Disease classification</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Lisa Anita De Santi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jörg Schlötterer</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Meike Nauta</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vincenzo Positano</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christin Seifert</string-name>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Datacation</institution>
          ,
          <addr-line>Eindhoven</addr-line>
          ,
          <country country="NL">Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Information Engineering, University of Pisa</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Fondazione Toscana G Monasterio - Bioengineering Unit</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Mannheim</institution>
          ,
          <addr-line>Mannheim</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of Marburg</institution>
          ,
          <addr-line>Marburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>1</volume>
      <fpage>7</fpage>
      <lpage>19</lpage>
      <abstract>
<p>Volumetric neuroimaging examinations like structural Magnetic Resonance Imaging (sMRI) are routinely applied to support the clinical diagnosis of dementias like Alzheimer's Disease (AD). Neuroradiologists examine 3D sMRI to detect and monitor abnormalities in brain morphology due to AD, like global and/or local brain atrophy and shape alterations of characteristic structures. There is a strong research interest in developing diagnostic systems based on Deep Learning (DL) models to analyse sMRI for AD. However, anatomical information extracted from an sMRI examination needs to be interpreted together with the patient's age to distinguish AD patterns from the regular alterations due to the normal ageing process. In this context, part-prototype neural networks integrate the computational advantages of DL in an interpretable-by-design architecture and have shown promising results in medical imaging applications. We present PIMPNet, the first interpretable multimodal model for 3D images and demographics, applied to the binary classification of AD from 3D sMRI and the patient's age. Although age prototypes do not improve predictive performance compared to the single-modality model, this work lays the foundation for future work on the model's design and the multimodal prototype training process.</p>
      </abstract>
      <kwd-group>
<kwd>Interpretability-by-design</kwd>
        <kwd>Prototype</kwd>
        <kwd>Prototype-network</kwd>
        <kwd>Multimodal Deep Learning</kwd>
        <kwd>Alzheimer</kwd>
        <kwd>MRI</kwd>
        <kwd>Age</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        There is a significant research interest in supporting Alzheimer’s Disease (AD) diagnosis with
Deep Learning (DL) models [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Existing diagnostic guidelines often integrate the clinical
evaluation of the patient with structural Magnetic Resonance Imaging (sMRI), to detect pathological
brain patterns like gray matter atrophy.
      </p>
      <p>
Brain alterations in sMRI might support the early and differential diagnosis and the prediction
of disease’s progression. There are sets of common practices used for analysing sMRI acquisition,
but there are still no universally accepted methods [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ]. In addition, information collected
from sMRI should be interpreted together with the patient’s age, as there are anatomical brain
changes due to the physiological ageing process [
        <xref ref-type="bibr" rid="ref5">5, 6</xref>
        ].
      </p>
      <p>DL architectures can facilitate the analysis of neuroimaging data, and might be able to
identify unconventional AD subtypes and extract yet unknown image-based biomarkers [7, 8].
Prototypical-Part (PP) networks combine the advantages of DL models in an
interpretable-by-design architecture, and have shown promising results in medical imaging applications where
the black-box nature of standard DL models poses controversy [9].</p>
      <p>
There are currently different variants of PP networks, including PIPNet [10], originally
applied to 2D images and then extended to handle 3D scans [11]. PIPNet showed appealing
properties in the medical imaging domain [12], including a reduced number of part-prototypes,
the semantic significance of the learned prototypes, and the ability to cope with Out-of-Distribution data
(which might be particularly useful in dementia diagnosis, where unusual neurodegeneration
patterns are reported [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]). However, sMRI data should be interpreted together with patients’
demographics to discern age-related image alterations from pathological ones, and existing
PP models cannot be directly applied to this task. Adding non-image prototypes to the
standard PP architecture is non-trivial, and there is no unique strategy available. Some works
learn prototypes from multiple modalities, based either on concatenation (deterministic prototypes)
or on multimodal feature extraction (shifted prototypes); however, these models cannot be
applied to our task, as they are specifically designed for images and textual data [13].
      </p>
<p>We present the Patch-based Intuitive Multimodal Prototypes Network (PIMPNet), the first
multimodal prototype classifier which learns 3D image part-prototypes together with prototypical values
from structured data, to predict a patient’s cognitive level in AD from sMRI and age values.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Method</title>
<p>This section introduces the architecture (cf. Sect. 2.1 and Fig. 1) and the training process
(Sect. 2.2) of PIMPNet.</p>
      <sec id="sec-2-1">
        <title>2.1. Proposed Model: PIMPNet</title>
<p>We propose an age-prototypes layer integrated into the original PIPNet 3D model [11] to create
our multimodal architecture. In contrast to “ordinary” age binning for the inclusion of age
information, the age-prototypes layer has the advantages of: (i) being able to learn the age
values that are important for the diagnostic task (which might not be equally distributed, and might
not be easily identifiable a priori); (ii) not assigning different age bins to two patients of similar
age who fall close to a bin boundary.</p>
<p>Our PIMPNet has an input layer which takes the 3D image x_img ∈ R^(ch×s×r×c) and the
age x_age ∈ R as input, where ch, s, r and c respectively represent the number of channels,
slices, rows and columns of the input image volume. Image x_img and age x_age are processed
in parallel. A CNN backbone processes x_img, z = f(x_img; w_f), extracting D_img 3-dimensional
(Z × H × W) feature maps, where z_(d,z,h,w) represents the activation of image prototype d
at patch location (z, h, w). Next, a 3D max-pooling applied to every feature map extracts D_img
image-prototype presence scores p_img ∈ [0, 1]^D_img, where p_(img,d) measures the presence of
image prototype d in the input image. This defines the image-prototypes layer.</p>
        <p>In parallel, we have the age-prototypes layer, constituted by D_age trainable 1-dimensional
tensors t_age ∈ R^(1×D_age), which aims to learn prototypical age values for the classification task.
This layer computes age-prototype presence scores p_age ∈ [0, 1]^D_age, a similarity measurement
between the input age and every age prototype, defining a smooth age binning:
p_(age,d) = 1 / sqrt(1 + ((x_age − t_(age,d)) / b)^(2n)),
where t_age are trainable parameters and b and n are hyper-parameters which regulate the band
and the slope of the similarity function. (The similarity function is inspired by the magnitude of a
Butterworth filter [14]. In preliminary experiments, we used an exponential similarity function as in
ProtoTree, p_(age,d) = exp(−||x_age − t_(age,d)||), but as exp(−2) ≈ 0.13, a 2-year age difference
would already result in little similarity, which is not in line with domain knowledge about age
relevance for Alzheimer’s disease.)</p>
        <p>A prototypes layer then concatenates the image and age prototype presence scores,
obtaining a layer of D = D_img + D_age prototypes p ∈ [0, 1]^D, p = (p_img, p_age). The final
classification is performed by a sparse linear layer with non-negative weights, w_c ∈ R^(D×K),
w_c ≥ 0, which connects the image and age prototypes to the K classes, acting
as a scoring-sheet system. The K class output scores are given by the sum of the prototype
presence scores weighted by the contribution w_(c,d,k) of prototype d to class k, i.e., o = p w_c,
where o is 1 × K and o_k = Σ_(d=1..D) p_d w_(c,d,k). PIMPNet computes the output class using only the
most activated age prototype, i.e., the one closest to the patient’s age according to the similarity
metric (this selection is applied only at inference, not during the optimization process).</p>
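<p>To make the smooth age binning concrete, the similarity above can be sketched in plain Python (a minimal sketch; the function name is illustrative, and the band/slope values mirror those reported in Sect. 3):</p>

```python
import math

def age_similarity(x_age, t_age, band=4.0, slope=8):
    """Smooth age binning inspired by the magnitude of a Butterworth filter:
    p_d = 1 / sqrt(1 + ((x_age - t_d) / band)^(2 * slope)).
    Returns one presence score in [0, 1] per age prototype."""
    return [1.0 / math.sqrt(1.0 + ((x_age - t) / band) ** (2 * slope))
            for t in t_age]

# Five prototypical ages evenly spaced between 40 and 90 (cf. Sect. 3).
t_age = [40.0, 52.5, 65.0, 77.5, 90.0]
scores = age_similarity(65.0, t_age)
# An exact match yields similarity 1.0 for the middle prototype, and a
# 2-year difference still scores about 0.99999 -- much flatter than the
# exponential alternative, where exp(-2) is already about 0.13.
```

<p>Unlike hard age bins, two patients aged 64 and 66 receive nearly identical presence scores under this function.</p>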
      </sec>
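<p>The scoring-sheet classification of Sect. 2.1 can likewise be sketched with illustrative (not learned) weights, assuming the non-negative linear layer described there:</p>

```python
def scoring_sheet(p, w_c):
    """Sparse positive linear layer acting as a scoring sheet:
    o_k = sum_d p[d] * w_c[d][k], with every weight w_c[d][k] >= 0."""
    n_classes = len(w_c[0])
    return [sum(p[d] * w_c[d][k] for d in range(len(p)))
            for k in range(n_classes)]

# Toy example: 3 image prototypes plus the single most activated age
# prototype, and 2 classes (CN, AD). All numbers are illustrative.
p = [0.9, 0.0, 0.7, 0.8]   # concatenated prototype presence scores
w_c = [[2.0, 0.0],         # prototype 0 votes for CN
       [0.0, 1.5],         # prototype 1 votes for AD (absent here)
       [0.0, 3.0],         # prototype 2 votes for AD
       [1.0, 0.5]]         # age prototype contributes to both classes
o = scoring_sheet(p, w_c)  # class scores: [2.6, 2.5]
```

<p>Because the weights are non-negative, each class score is a readable tally of evidence found, which is what makes the decision layer act like a scoring sheet.</p>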
      <sec id="sec-2-2">
        <title>2.2. PIMPNet Training</title>
        <p>We optimize PIMPNet’s parameters by integrating the training of age prototypes into the original
PIPNet training process [10]. This includes two main stages: (1) Self-Supervised pre-training of
Image-Prototypes, and (2) PIMPNet training.</p>
<p>As in the original PIPNet [10], the 1st stage generates positive pairs x′, x′′ by applying
data-augmentation transformations to x_img, selected so that humans consider the two views
similar. These are used to minimize the loss function λ_A L_A + λ_T L_T by updating w_f, where
L_A = −(1/(Z·H·W)) Σ_((z,h,w) ∈ Z×H×W) log(z′_(:,z,h,w) · z′′_(:,z,h,w)) is an Alignment Loss which optimizes
positive pairs to activate the same prototype. Together with a softmax over z_(:,z,h,w), the alignment
results in near-binary encodings where an image patch corresponds to exactly one prototype.</p>
        <p>L_T = −(1/D) Σ_(d=1..D) log(tanh(Σ_(b ∈ B) p_(b,d)) + ε) is a Tanh-Loss used to prevent the trivial solution that
one prototype node is activated on all image patches in every image in the dataset; instead, it
encourages multiple distinct prototypes to be activated per batch B. Only during training, output scores are
calculated as o = log((p w_c)² + 1), acting as a regularization for sparsity.</p>
        <p>The 2nd training stage includes the training of the age prototypes, the optimization of classification
performance, and the fine-tuning of the image prototypes for the downstream classification task. The
optimization minimizes λ_A L_A + λ_T L_T + λ_C L_C by updating w_f, t_age and w_c, where L_C is the
negative log-likelihood classification loss.</p>
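<p>The two self-supervised losses can be sketched on toy encodings (a minimal plain-Python sketch; variable names and the epsilon value are illustrative):</p>

```python
import math

def alignment_loss(z1, z2):
    """L_A: mean over patch locations of -log(z' . z''), where each row is
    the softmaxed prototype encoding of one patch in the two views."""
    dots = [sum(a * b for a, b in zip(r1, r2)) for r1, r2 in zip(z1, z2)]
    return -sum(math.log(d) for d in dots) / len(dots)

def tanh_loss(p_batch, eps=1e-8):
    """L_T: -(1/D) * sum_d log(tanh(sum_b p[b][d]) + eps); penalizes
    prototypes that are never activated anywhere in the batch."""
    n_protos = len(p_batch[0])
    return -sum(math.log(math.tanh(sum(row[d] for row in p_batch)) + eps)
                for d in range(n_protos)) / n_protos

# Perfectly aligned near-binary encodings give L_A = 0 ...
z = [[1.0, 0.0], [0.0, 1.0]]   # two patches, two prototypes
la = alignment_loss(z, z)
# ... and a batch in which both prototypes fire keeps L_T small.
lt = tanh_loss([[0.9, 0.1], [0.2, 0.8]])
```

<p>A batch with a never-activated prototype column makes tanh(0) = 0, so the log(ε) term dominates; this is exactly the trivial solution the Tanh-Loss guards against.</p>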
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Evaluation</title>
<p>
        We used the multimodal dataset from the Alzheimer’s Disease Neuroimaging Initiative (ADNI)
database (https://adni.loni.usc.edu). We selected the “ADNI1 Standardized Screening Data Collection for 1.5T scans” processed
with Gradwarp, B1 non-uniformity, and N3 correction, obtaining 307 CN and 243 AD sMRI brain
scans and the corresponding patients’ ages. We report statistics on the patients’ demographics of the
selected ADNI cohort in Table 1. We preprocessed the sMRI data following the pre-processing
pipeline applied in previous works [15]. We transformed all images to the common ICBM152
Non-Linear Symmetric 2009c standard space [16] with affine registration. We selected the grey-matter
structures by applying the ICBM152 Non-Linear Symmetric 2009c brain mask and kept
a margin of 3 slices from its first and last non-empty slices. We downsampled the images by a factor of
2 and scaled all image intensities to the range [0, 1] with min-max normalization. We
implemented PIMPNet using PyTorch and MONAI (https://monai.io), training our models on an Intel Core i7 5.1
GHz PC with 32 GB RAM, equipped with an NVIDIA RTX3090 GPU with 24 GB of embedded RAM.
As CNN backbones we used ResNet-18 3D pretrained on Kinetics400 [17] and ConvNeXt-Tiny
3D pretrained on the STOIC medical dataset (Study of Thoracic CT in COVID-19) [18]. We
fine-tuned PIMPNet with the Adam optimizer using the same hyperparameter settings as the original
PIPNet [10]. We only reduced the batch size to 12 to adapt to our computational capabilities,
and we set the learning rate of the age prototypes to 0.1 (using the same learning rate as for the
image prototypes in the original PIPNet, 0.05, results in irrelevant updates of the age prototypes).
We arbitrarily set the number of age prototypes to 5, evenly spaced between 40 and 90 to cover
the patients’ age range of our dataset. For the age similarity function, we respectively set b = 4
and n = 8; we leave an extensive hyperparameter search for the age prototypes to future work.
We performed 5-fold cross-validation with patient-wise splits; 20% of the training images are used for validation.
      </p>
      <p>We evaluated the models in terms of classification performance and with functionally
grounded metrics of explainability. Results are reported in Tables 2 and 3. We compared
the performance of PIMPNet (sMRI + age) with that of PIPNet-3D (sMRI only) [11], to evaluate whether including
age information improves diagnostic performance. We measured performance using Accuracy
(Acc), Balanced Accuracy (Bal Acc), Sensitivity (SENS, Acc of the Cognitive Normal class), Specificity
(SPEC, Acc of the Alzheimer’s Disease class), and F1 score (F1). We measured the Global size (GS)
of the model as the total number of prototypes, and the Local size (LS) of explanations as the
number of detected prototypes in a single 3D sMRI, averaged over all images in the test set.
Additionally, we report the Sparsity (Sp) of the decision layer as the percentage of zero weights
in the linear classification layer [10], to assess the compactness of the prototypes-classes layer.
We further assessed whether prototypes are consistently located in the same brain region, and
the purity of the prototypes in terms of the anatomical regions included, based on the CerebrA
atlas annotation [19]. More specifically, the Prototypes Localization Consistency (LC) evaluates
the differences in the coordinates of the centre of the prototypical part across input images, while the
Prototype Brain Entropy (H), as a measure of purity, computes the Shannon Entropy of the
brain regions included in the prototypical part [11]. We show the learned age prototypes
from the five different folds (denoted as Mx, where x indicates the fold) in Table 4.</p>
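<p>As an illustration of the purity metric, the Prototype Brain Entropy can be sketched as the Shannon entropy of the atlas labels covered by a prototypical part (a minimal sketch; the region names are illustrative, not actual CerebrA labels):</p>

```python
import math
from collections import Counter

def prototype_brain_entropy(region_labels):
    """H: Shannon entropy (in bits) of the atlas regions covered by the
    voxels of a prototypical part; lower entropy means a purer prototype."""
    counts = Counter(region_labels)
    n = len(region_labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A prototype confined to a single region is perfectly pure (H = 0),
pure = prototype_brain_entropy(["hippocampus"] * 8)
# while one straddling two regions equally has H = 1 bit.
mixed = prototype_brain_entropy(["hippocampus"] * 4 + ["amygdala"] * 4)
```

<p>Note the caveat raised in Sect. 4: a prototype lying entirely in the background also scores H = 0 under such a metric, despite being clinically irrelevant.</p>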
    </sec>
    <sec id="sec-4">
      <title>4. Discussion and Conclusion</title>
<p>Both PIPNet and PIMPNet with the ResNet-18 3D backbone achieve higher classification
performance than with the ConvNeXt-Tiny backbone. Our preliminary results also show that the
proposed age-prototypes layer can learn prototypical age values; however, these do not
improve classification performance compared to the baseline model. Our functionally grounded
evaluation of prototypes shows that all models learn prototypes consistently located in the
same anatomical brain regions (low LC values). We also observe that the models trained with
the ConvNeXt-Tiny 3D backbone have higher compactness. This might partially explain their
lower performance scores (the number of prototypes learned is not enough for performing
the diagnosis), but is an interesting observation for future research, as such a highly compact
model can be considered more interpretable than larger ones and can be easily evaluated by
domain experts. We also observe that the image prototypes of the ConvNeXt-Tiny 3D backbone
are generally purer (purity is measured w.r.t. the annotation provided by the CerebrA atlas).
Although purity is a desirable property for prototypes [20], because of the design of the purity
metric a prototype which only includes the background, i.e., a clinically irrelevant prototype,
will also have high purity: a posterior quantitative evaluation w.r.t. the CerebrA atlas revealed
that the test-set image prototypes (averaged over the 5 folds) obtained with the ConvNeXt-Tiny
backbone include a higher percentage of background voxels than those obtained with ResNet-18
(76.6% vs 59.2%).</p>
      <p>In summary, we proposed PIMPNet, an interpretable multimodal prototype-based classifier.
The proposed architecture is the first prototype-based network which performs an interpretable
classification based on the detection of prototypes learned from different data modalities (3D
images and age information). We applied PIMPNet to the binary classification of Alzheimer’s
Disease from 3D sMRI images together with the patient’s age. Although the use of age prototypes
does not improve predictive performance compared to the model trained with images only, we
identified different potential reasons which define the future directions of our work. First, as
the original PIPNet training paradigm includes a pretraining stage [10] for image prototypes,
we plan to include an age-prototypes pretraining step w.r.t. the log-likelihood classification
loss. Second, we also plan to work on the model’s design. As the simple concatenation of the
prototype presence scores might not be able to properly represent the relationship between age
and image prototypes for the downstream task, we plan to combine image and age prototypes
using a different (but still interpretable) classifier than a scoring-sheet system.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
<p>Data used in the preparation of this article was obtained from the Alzheimer’s Disease
Neuroimaging Initiative (ADNI) database. The ADNI was launched in 2003 as a public-private
partnership with the primary goal of testing whether serial magnetic resonance imaging (MRI),
positron emission tomography (PET), other biological markers, and clinical and
neuropsychological assessment can be combined to measure the progression of mild cognitive impairment
(MCI) and early Alzheimer’s disease (AD).</p>
      <p>[6] R. Sivera, H. Delingette, M. Lorenzi, X. Pennec, N. Ayache, A model of brain morphological changes related to aging and Alzheimer’s disease from cross-sectional assessments, NeuroImage 198 (2019) 255–270. doi:10.1016/j.neuroimage.2019.05.040.
[7] M. Böhle, F. Eitel, M. Weygandt, K. Ritter, Layer-wise relevance propagation for explaining deep neural network decisions in MRI-based Alzheimer’s disease classification, Frontiers in Aging Neuroscience 10 (2019). doi:10.3389/fnagi.2019.00194.
[8] M. Khojaste-Sarakhsi, S. S. Haghighi, S. F. Ghomi, E. Marchiori, Deep learning for Alzheimer’s disease diagnosis: A survey, Artificial Intelligence in Medicine 130 (2022) 102332. doi:10.1016/j.artmed.2022.102332.
[9] L. Longo, M. Brcic, F. Cabitza, J. Choi, R. Confalonieri, J. D. Ser, R. Guidotti, Y. Hayashi, F. Herrera, A. Holzinger, R. Jiang, H. Khosravi, F. Lecue, G. Malgieri, A. Páez, W. Samek, J. Schneider, T. Speith, S. Stumpf, Explainable artificial intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions, Information Fusion 106 (2024) 102301. doi:10.1016/j.inffus.2024.102301.
[10] M. Nauta, J. Schlötterer, M. van Keulen, C. Seifert, PIP-Net: Patch-Based Intuitive Prototypes for Interpretable Image Classification, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. doi:10.1109/CVPR52729.2023.00269.
[11] L. A. De Santi, J. Schlötterer, M. Scheschenja, J. Wessendorf, M. Nauta, V. Positano, C. Seifert, PIPNet3D: Interpretable detection of Alzheimer in MRI scans, 2024. arXiv:2403.18328.
[12] M. Nauta, J. H. Hegeman, J. Geerdink, J. Schlötterer, M. van Keulen, C. Seifert, Interpreting and correcting medical image classification with PIP-Net, in: Artificial Intelligence. ECAI 2023 International Workshops, 2024, pp. 198–215.
[13] Y. Ma, S. Zhao, W. Wang, Y. Li, I. King, Multimodality in meta-learning: A comprehensive survey, Knowledge-Based Systems 250 (2022). doi:10.1016/j.knosys.2022.108976.
[14] S. Butterworth, et al., On the theory of filter amplifiers, Wireless Engineer 7 (1930) 536–541.
[15] A. W. Mulyadi, W. Jung, K. Oh, J. S. Yoon, K. H. Lee, H.-I. Suk, Estimating explainable Alzheimer’s disease likelihood map via clinically-guided prototype learning, NeuroImage 273 (2023). doi:10.1016/j.neuroimage.2023.120073.
[16] V. Fonov, A. Evans, R. McKinstry, C. Almli, D. Collins, Unbiased nonlinear average age-appropriate brain templates from birth to adulthood, NeuroImage 47 (2009) S102. doi:10.1016/S1053-8119(09)70884-5. Organization for Human Brain Mapping 2009 Annual Meeting.
[17] D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, M. Paluri, A closer look at spatiotemporal convolutions for action recognition (2017). URL: http://arxiv.org/abs/1711.11248.
[18] D. Kienzle, J. Lorenz, R. Schön, K. Ludwig, R. Lienhart, Covid detection and severity prediction with 3D-ConvNeXt and custom pretrainings (2022). URL: http://arxiv.org/abs/2206.15073.
[19] A. L. Manera, M. Dadar, V. Fonov, D. L. Collins, CerebrA, registration and manual label correction of Mindboggle-101 atlas for MNI-ICBM152 template, Scientific Data 7 (2020). doi:10.1038/s41597-020-0557-9.
[20] M. Nauta, C. Seifert, The Co-12 recipe for evaluating interpretable part-prototype image classifiers, in: Explainable Artificial Intelligence, 2023, pp. 397–420.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Ebrahimighahnavieh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chiong</surname>
          </string-name>
          ,
          <article-title>Deep learning to detect alzheimer's disease from neuroimaging: A systematic literature review</article-title>
          ,
          <source>Computer Methods and Programs in Biomedicine</source>
          <volume>187</volume>
          (
          <year>2020</year>
          ). doi:10.1016/j.cmpb.2019.105242.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L.</given-names>
            <surname>De Santi</surname>
          </string-name>
          , E. Pasini,
          <string-name>
            <given-names>M.</given-names>
            <surname>Santarelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Genovesi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Positano</surname>
          </string-name>
          ,
          <article-title>An Explainable Convolutional Neural Network for the Early Diagnosis of Alzheimer's Disease from 18F-FDG PET</article-title>
          ,
          <source>Journal of Digital Imaging</source>
          <volume>36</volume>
          (
          <year>2023</year>
          ). doi:10.1007/s10278-022-00719-3.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Chandra</surname>
          </string-name>
          , G. Dervenoulas,
          <string-name>
            <given-names>M.</given-names>
            <surname>Politis</surname>
          </string-name>
          ,
          <article-title>Magnetic resonance imaging in Alzheimer's disease and mild cognitive impairment</article-title>
          ,
          <year>2019</year>
          . doi:10.1007/s00415-018-9016-3.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>P.</given-names>
            <surname>Vemuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Jack</surname>
          </string-name>
          ,
          <article-title>Role of structural MRI in Alzheimer's disease</article-title>
          ,
          <source>Alzheimer's Research and Therapy</source>
          <volume>2</volume>
          (
          <year>2010</year>
          ). doi:10.1186/alzrt47.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Matlof</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. D.</given-names>
            <surname>Dinov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. W.</given-names>
            <surname>Toga</surname>
          </string-name>
          ,
          <article-title>Age-related differences in brain morphology and the modifiers in middle-aged and older adults</article-title>
          ,
          <source>Cerebral Cortex</source>
          <volume>29</volume>
          (
          <year>2019</year>
          )
          <fpage>4169</fpage>
          -
          <lpage>4193</lpage>
          . doi:10.1093/cercor/bhy300.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>