<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Patch-based Intuitive Multimodal Prototypes Network (PIMPNet) for Alzheimer&apos;s Disease classification</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Lisa</forename><forename type="middle">Anita</forename><surname>De Santi</surname></persName>
							<email>lisa.desanti@pdh.unipi.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Information Engineering</orgName>
								<orgName type="institution">University of Pisa</orgName>
								<address>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Fondazione Toscana G Monasterio -Bioengineering Unit</orgName>
								<address>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jörg</forename><surname>Schlötterer</surname></persName>
							<email>joerg.schloetterer@uni-marburg.de</email>
							<affiliation key="aff2">
								<orgName type="institution">University of Marburg</orgName>
								<address>
									<settlement>Marburg</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
							<affiliation key="aff3">
								<orgName type="institution">University of Mannheim</orgName>
								<address>
									<settlement>Mannheim</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Meike</forename><surname>Nauta</surname></persName>
							<email>m.nauta@datacation.nl</email>
							<affiliation key="aff4">
								<orgName type="department">Datacation</orgName>
								<address>
									<settlement>Eindhoven</settlement>
									<country key="NL">Netherlands</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vincenzo</forename><surname>Positano</surname></persName>
							<email>positano@ftgm.it</email>
							<affiliation key="aff1">
								<orgName type="institution">Fondazione Toscana G Monasterio -Bioengineering Unit</orgName>
								<address>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Christin</forename><surname>Seifert</surname></persName>
							<email>christin.seifert@uni-marburg.de</email>
							<affiliation key="aff2">
								<orgName type="institution">University of Marburg</orgName>
								<address>
									<settlement>Marburg</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Patch-based Intuitive Multimodal Prototypes Network (PIMPNet) for Alzheimer&apos;s Disease classification</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">7738451E01B0C5628CFD5EEBEBC57A69</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Interpretability-by-design, Prototype, Prototype-network, Multimodal Deep Learning, Alzheimer, MRI, Age</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Volumetric neuroimaging examinations such as structural Magnetic Resonance Imaging (sMRI) are routinely applied to support the clinical diagnosis of dementias like Alzheimer's Disease (AD). Neuroradiologists examine 3D sMRI to detect and monitor abnormalities in brain morphology due to AD, such as global and/or local brain atrophy and shape alterations of characteristic structures. There is strong research interest in developing diagnostic systems based on Deep Learning (DL) models to analyse sMRI for AD. However, the anatomical information extracted from an sMRI examination needs to be interpreted together with the patient's age to distinguish AD patterns from the regular alterations due to the normal ageing process. In this context, part-prototype neural networks integrate the computational advantages of DL in an interpretable-by-design architecture and have shown promising results in medical imaging applications. We present PIMPNet, the first interpretable multimodal model for 3D images and demographics, applied to the binary classification of AD from 3D sMRI and the patient's age. Although age prototypes do not improve predictive performance compared to the single-modality model, this work lays the foundation for future work on the model's design and the multimodal prototype training process.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>There is significant research interest in supporting Alzheimer's Disease (AD) diagnosis with Deep Learning (DL) models <ref type="bibr" target="#b0">[1]</ref>. Existing diagnostic guidelines often integrate the clinical evaluation of the patient with structural Magnetic Resonance Imaging (sMRI) to detect pathological brain patterns such as gray matter atrophy.</p><p>Brain alterations visible in sMRI can support early and differential diagnosis and the prediction of disease progression. There are sets of common practices for analysing sMRI acquisitions, but still no universally accepted methods <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4]</ref>. In addition, information collected from sMRI should be interpreted together with the patient's age, as there are anatomical brain changes due to the physiological ageing process <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>.</p><p>DL architectures can facilitate the analysis of neuroimaging data, and might be able to identify unconventional AD subtypes and extract as-yet unknown image-based biomarkers <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref>. Prototypical-Part (PP) networks combine the advantages of DL models with an interpretable-by-design architecture, and have shown promising results in medical imaging applications, where the black-box nature of standard DL models raises concerns <ref type="bibr" target="#b8">[9]</ref>.</p><p>There are currently several variants of PP networks, including PIPNet <ref type="bibr" target="#b9">[10]</ref>, originally applied to 2D images and later extended to handle 3D scans <ref type="bibr" target="#b10">[11]</ref>. 
PIPNet showed appealing properties in the medical imaging domain <ref type="bibr" target="#b11">[12]</ref>, including a reduced number of part-prototypes, semantic significance of the learned prototypes, and the ability to cope with out-of-distribution data (which might be particularly useful in dementia diagnosis, where unusual neurodegeneration patterns are reported <ref type="bibr" target="#b3">[4]</ref>). However, sMRI data should be interpreted together with patients' demographics to discern age-related image alterations from pathological ones, and existing PP models cannot be directly applied to this task. Adding non-image prototypes to the standard PP architecture is non-trivial, and no established strategy exists. Some works learn prototypes from multiple modalities, based either on concatenation (deterministic prototypes) or on multimodal feature extraction (shifted prototypes); however, these models cannot be applied to our task, as they are specifically designed for images and textual data <ref type="bibr" target="#b12">[13]</ref>.</p><p>We present the Patch-based Intuitive Multimodal Prototypes Network (PIMPNet), the first multimodal prototype classifier that learns 3D image part-prototypes together with prototypical values from structured data, to predict a patient's cognitive level in AD from sMRI and age.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Method</head><p>This section introduces the architecture (cf. Sect. 2.1 and Fig. <ref type="figure" target="#fig_0">1</ref>) and the training process (Sect. 2.2) of PIMPNet.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Proposed Model: PIMPNet</head><p>We propose an age-prototypes layer integrated into the original PIPNet 3D model <ref type="bibr" target="#b10">[11]</ref> to create our multimodal architecture. In contrast to "ordinary" age binning for the inclusion of age information, the age-prototypes layer has two advantages: (i) it can learn the age values that matter for the diagnostic task (values which might not be evenly distributed and might not be easy to identify a priori); (ii) it avoids assigning different age bins to two patients of similar age who fall close to a bin boundary.</p><p>PIMPNet has an input layer which takes the 3D image x 𝑖𝑚𝑔 ∈ R 𝑐ℎ×𝑆×𝑅×𝐶 and the age x 𝑎𝑔𝑒 ∈ R 1 as input, where 𝑐ℎ, 𝑆, 𝑅, 𝐶 denote, respectively, the number of channels, slices, rows and columns of the input image volume. Image x 𝑖𝑚𝑔 and age x 𝑎𝑔𝑒 are processed in parallel. A CNN backbone processes x 𝑖𝑚𝑔 , z = 𝑓 (x 𝑖𝑚𝑔 ; w 𝑓 ), extracting 𝑀 3-dimensional feature maps from which the image prototype presence scores p 𝑖𝑚𝑔 ∈ [0., 1.] 𝑀 are computed. In parallel, we have the age-prototypes layer, constituted by 𝑁 trainable parameters t 𝑎𝑔𝑒,𝑛 , collected in t 𝑎𝑔𝑒 ∈ R 𝑁 , which aims to learn prototypical age values for the classification task. This layer computes age prototype presence scores p 𝑎𝑔𝑒 ∈ [0., 1.] 𝑁 , a similarity measure between the input age and every age prototype, defining a smooth age binning<ref type="foot" target="#foot_0">1</ref> :</p><formula xml:id="formula_0">p 𝑎𝑔𝑒,𝑛 = 1 √︂ 1 + (︁ x𝑎𝑔𝑒−t𝑎𝑔𝑒,𝑛 𝑡 )︁ 2𝑠 ,<label>(1)</label></formula><p>where t 𝑎𝑔𝑒 are trainable parameters and 𝑡 and 𝑠 are hyper-parameters which regulate the bandwidth and the slope of the similarity function. A prototypes layer then concatenates the image and age prototype presence scores, obtaining a layer of </p><formula xml:id="formula_1">𝐿 = 𝑀 + 𝑁 prototypes p ∈ [0., 1.] 𝐿 p = 𝑐𝑜𝑛𝑐𝑎𝑡(p 𝑖𝑚𝑔 , p 𝑎𝑔𝑒 ).</formula></div>
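The smooth age binning of Eq. (1) can be sketched in plain Python. This is a minimal illustration with hypothetical prototype values; in PIMPNet the prototypes are trainable PyTorch parameters.

```python
import math

def age_presence_score(x_age, t_proto, t=4.0, s=8):
    """Butterworth-inspired similarity between an input age and one age
    prototype, as in Eq. (1): p_n = 1 / sqrt(1 + ((x - t_n) / t)^(2s)).
    t regulates the bandwidth, s the slope of the similarity function."""
    d2 = ((x_age - t_proto) / t) ** 2   # squared, scaled age difference
    return 1.0 / math.sqrt(1.0 + d2 ** s)

# Hypothetical initialization: 5 prototypes evenly spaced in [40, 90].
prototypes = [40.0, 52.5, 65.0, 77.5, 90.0]
# A 76-year-old patient activates the closest prototype (77.5) strongly,
# while distant prototypes receive near-zero presence scores.
p_age = [age_presence_score(76.0, tp) for tp in prototypes]
```

Note how the score decays smoothly rather than with a hard bin boundary: two patients aged 76 and 78 would both activate the 77.5 prototype strongly, instead of being assigned to different bins.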
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">PIMPNet Training</head><p>We optimize PIMPNet's parameters by integrating the training of age prototypes into the original PIPNet training process <ref type="bibr" target="#b9">[10]</ref>. This comprises two main stages: (1) self-supervised pre-training of image prototypes, and (2) PIMPNet training.</p><p>As in the original PIPNet <ref type="bibr" target="#b9">[10]</ref>, the first stage generates positive pairs x ′ 𝑖𝑚𝑔 , x ′′ 𝑖𝑚𝑔 by applying data augmentation transformations to x 𝑖𝑚𝑔 , selected so that humans would consider the two views similar. These are used to minimize the loss function 𝜆 𝐴 ℒ 𝐴 + 𝜆 𝑇 ℒ 𝑇 by updating w 𝑓 , where</p><formula xml:id="formula_2">ℒ 𝐴 = − 1 𝐷𝐻𝑊 ∑︀ (𝑑,ℎ,𝑤)∈𝐷×𝐻×𝑊 𝑙𝑜𝑔(z ′ :,𝑑,ℎ,𝑤 • z ′′ :,𝑑,ℎ,𝑤</formula><p>) is an alignment loss which optimizes positive pairs to activate the same prototype. Together with a softmax over z :,𝑑,ℎ,𝑤 , the alignment results in near-binary encodings where an image patch corresponds to exactly one prototype.</p><formula xml:id="formula_3">ℒ 𝑇 = − 1 𝑀 ∑︀ 𝑙𝑜𝑔(𝑡𝑎𝑛ℎ( ∑︀ p 𝑖𝑚𝑔,𝑏 ) + 𝜖</formula><p>) is a tanh loss which prevents the trivial solution of a single prototype node being activated on all image patches of every image in the dataset, and instead encourages multiple distinct prototypes per batch 𝑏. During training only, output scores are calculated as o = log((pw 𝑐 )² + 1), acting as a regularization for sparsity.</p><p>The second training stage includes the training of age prototypes, the optimization of classification performance, and the fine-tuning of image prototypes for the downstream classification task. The optimization minimizes 𝜆 𝐴 ℒ 𝐴 + 𝜆 𝑇 ℒ 𝑇 + 𝜆 𝐶 ℒ 𝐶 by updating w 𝑓 , t 𝑎𝑔𝑒 , w 𝑐 , where ℒ 𝐶 is the log-likelihood classification loss.</p></div>
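A minimal NumPy sketch of the two self-supervised losses described above. Shapes and the small-constant handling are illustrative assumptions; the actual PIPNet implementation operates on PyTorch tensors with autograd.

```python
import numpy as np

def alignment_loss(z1, z2):
    """L_A (sketch): softmax the prototype scores of two augmented views
    over the prototype axis, then penalize patch locations where the two
    views do not activate the same prototype. z1, z2: shape (M, D, H, W)."""
    def softmax(z):
        e = np.exp(z - z.max(axis=0, keepdims=True))
        return e / e.sum(axis=0, keepdims=True)
    p1, p2 = softmax(z1), softmax(z2)
    dots = (p1 * p2).sum(axis=0)           # dot product per (d, h, w) patch
    return -np.log(dots + 1e-8).mean()     # averaged over the D*H*W patches

def tanh_loss(p_img, eps=1e-8):
    """L_T (sketch): push every prototype to be present somewhere in the
    batch. p_img: shape (B, M), per-image pooled presence scores."""
    present = np.tanh(p_img.sum(axis=0))   # ~1 if the prototype fires in the batch
    return -np.log(present + eps).mean()   # averaged over the M prototypes
```

Identical, strongly peaked views yield a near-zero alignment loss, while views activating different prototypes are penalized; a prototype that never fires anywhere in the batch makes the tanh loss explode, preventing the single-prototype collapse.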
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Evaluation</head><p>We used the multimodal dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database <ref type="foot" target="#foot_3">3</ref> . We selected the "ADNI1 Standardized Screening Data Collection for 1.5T scans" processed with Gradwarp, B1 non-uniformity, and N3 correction, obtaining 307 CN and 243 AD sMRI brain scans together with the corresponding patients' ages. We report statistics on the patients' demographics of the selected ADNI cohort in Table <ref type="table" target="#tab_1">1</ref>. We preprocessed the sMRI data following the pipeline applied in previous work <ref type="bibr" target="#b14">[15]</ref>. We transformed all images to the common ICBM152 Non-Linear Symmetric 2009c standard space <ref type="bibr" target="#b15">[16]</ref> with affine registration. We selected the grey matter structures by applying the ICBM152 Non-Linear Symmetric 2009c brain mask and kept a margin of 3 slices from its first and last non-empty slices. We applied an image downsampling factor of 2 and scaled all image intensities to the range [0, 1] with min-max normalization. We implemented PIMPNet using PyTorch and MONAI <ref type="foot" target="#foot_4">4</ref> , training our models on an Intel Core i7 5.1 GHz PC with 32 GB RAM, equipped with an NVIDIA RTX 3090 GPU with 24 GB of memory. As CNN backbones we used ResNet-18 3D pretrained on Kinetics-400 <ref type="bibr" target="#b16">[17]</ref> and ConvNeXt-Tiny 3D pretrained on the STOIC medical dataset (Study of Thoracic CT in COVID-19) <ref type="bibr" target="#b17">[18]</ref>. We fine-tuned PIMPNet with the Adam optimizer using the same hyperparameter settings as the original PIPNet <ref type="bibr" target="#b9">[10]</ref>. We only reduced the batch size to 12 to fit our computational resources, and we set the learning rate of the age prototypes to 0.1 <ref type="foot" target="#foot_5">5</ref> . 
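The last two preprocessing steps (downsampling by a factor of 2 and min-max intensity normalization) can be sketched as follows. The stride-based downsampling here is a deliberate simplification; registration and brain masking require dedicated neuroimaging tools and are not shown.

```python
import numpy as np

def preprocess_volume(vol, factor=2):
    """Sketch of the final preprocessing steps: downsample the (already
    registered and brain-masked) volume by an integer factor, then rescale
    intensities to [0, 1] via min-max normalization."""
    small = vol[::factor, ::factor, ::factor]  # naive stride-based downsampling
    lo, hi = small.min(), small.max()
    return (small - lo) / (hi - lo + 1e-8)     # min-max normalization to [0, 1]
```

In practice one would use a proper resampling transform (e.g. from MONAI) instead of striding, to avoid aliasing.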
We arbitrarily set the number of age prototypes to 5, initialized evenly spaced between 40 and 90 to cover the patients' age range in our dataset. For the age similarity function, we set 𝑡 = 4 and 𝑠 = 8, respectively. We performed 5-fold cross-validation with patient-wise splits; 20% of the training images were used for validation.</p><p>We evaluated the models in terms of classification performance and with functionally grounded metrics of explainability. Results are reported in Tables <ref type="table" target="#tab_3">2 and 3</ref>. We compared the performance of PIMPNet (sMRI + age) with PIPNet-3D (sMRI only) <ref type="bibr" target="#b10">[11]</ref> to evaluate whether including age information improves diagnostic performance. We measured performance using Accuracy (Acc), Balanced Accuracy (Bal Acc), Sensitivity (SENS, Acc of the Cognitively Normal class), Specificity (SPEC, Acc of the Alzheimer's Disease class), and F1 score (F1). We measured the Global Size (GS) of the model as the total number of prototypes, and the Local Size (LS) of explanations as the number of detected prototypes in a single 3D sMRI, averaged over all images in the test set. Additionally, we report the Sparsity (Sp) of the decision layer as the percentage of zero weights in the linear classification layer <ref type="bibr" target="#b9">[10]</ref>, to assess the compactness of the prototypes-classes layer. We further assessed whether prototypes are consistently located in the same brain region, and the purity of the prototypes in terms of the anatomical regions included, based on the CerebrA atlas annotations <ref type="bibr" target="#b18">[19]</ref>. More specifically, the Prototypes Localization Consistency (LC 𝑝 ) evaluates the differences in the coordinate centre of the prototypical part in the input image, while the Prototype Brain Entropy (H 𝑝 ), as a measure of purity, computes the Shannon entropy of the brain regions included in the prototypical part <ref type="bibr" target="#b10">[11]</ref>. 
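Two of the compactness metrics above, Sparsity (Sp) and Local Size (LS), reduce to simple counts. A minimal sketch, where the 0.1 presence threshold for "detected" prototypes is an assumption for illustration:

```python
import numpy as np

def sparsity(w_c):
    """Sp: fraction of zero weights in the linear prototypes-to-classes
    layer; higher means a more compact scoring sheet."""
    return float((w_c == 0).mean())

def local_size(p, threshold=0.1):
    """LS: average number of prototypes detected per scan, counting the
    presence scores above a threshold (hypothetical value), averaged
    over all scans. p: shape (num_scans, num_prototypes)."""
    return float((p > threshold).sum(axis=1).mean())
```

Global Size (GS) is simply the total number of prototypes kept by the model, i.e. those with at least one nonzero classification weight.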
We show the learned age prototypes t 𝑎𝑔𝑒 from five different folds (denoted as Mx where x indicates the current fold) in Table <ref type="table" target="#tab_4">4</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Discussion and Conclusion</head><p>Both PIPNet and PIMPNet achieve higher classification performance with the ResNet-18 3D backbone than with the ConvNeXt-Tiny backbone. Our preliminary results also show that the proposed age-prototypes layer can learn prototypical age values; however, these do not improve classification performance compared to the baseline model. Our functionally-grounded evaluation of prototypes shows that all models learn prototypes consistently located in the same anatomical brain regions (low LC 𝑝 values). We also observe that the models trained with the ConvNeXt-Tiny 3D backbone are more compact. This might partially explain their lower performance scores (the number of learned prototypes may not be sufficient for the diagnostic task), but it is an interesting observation for future research, as such a highly compact model can be considered more interpretable than larger ones and can be easily evaluated by domain experts. We also observe that the image prototypes of the ConvNeXt-Tiny 3D backbone are generally purer <ref type="foot" target="#foot_6">7</ref> . Although purity is a desirable property for prototypes <ref type="bibr" target="#b19">[20]</ref>, because of the design of the purity metric, a prototype which only includes background, i.e., a clinically irrelevant prototype, will also have high purity <ref type="foot" target="#foot_7">8</ref> . In summary, we proposed PIMPNet, an interpretable multimodal prototype-based classifier. The proposed architecture is the first prototype-based network which performs an interpretable classification based on the detection of prototypes learned from different data modalities (3D images and age information). We applied PIMPNet to the binary classification of Alzheimer's Disease from 3D sMRI images together with the patient's age. 
Although the use of age prototypes does not improve predictive performance compared to the model trained on images only, we identified several potential reasons that define the future directions of our work. First, as the original PIPNet training paradigm includes a pre-training stage <ref type="bibr" target="#b9">[10]</ref> for the image prototypes, we plan to include an age-prototypes pre-training step w.r.t. the log-likelihood classification loss. Second, we also plan to work on the model's design. As the simple concatenation of the prototype presence scores might not properly represent the relationship between age and image prototypes for the downstream task, we plan to combine image and age prototypes using a different (but still interpretable) classifier than a scoring-sheet system.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: PIMPNet architecture</figDesc><graphic coords="3,89.29,84.19,416.69,269.28" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>The final classification is performed by a sparse linear positive layer w c ∈ R 𝐿×𝐾 ≥0 which connects image and age prototypes to the 𝐾 classes, acting as a scoring-sheet system. The 𝐾 class output scores are given by the sum of the prototypes' presence scores weighted by the contribution of prototype 𝑙 to class 𝑘, w c 𝑙,𝑘 , i.e., o = pw c , where o is 1 × 𝐾 and o 𝑘 = ∑︀ 𝐿 𝑙=1 p 𝑙 w c 𝑙,𝑘 . PIMPNet returns the output class using only the most activated age prototype, i.e., the one closest to the patient's age according to the similarity metric 2 .</figDesc><table /></figure>
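The scoring-sheet inference just described can be sketched as follows. This is a minimal NumPy illustration; the shapes and weight values in the example are hypothetical.

```python
import numpy as np

def scoring_sheet_output(p_img, p_age, w_c):
    """Inference sketch: keep only the most activated age prototype, then
    score each class as o_k = sum_l p_l * w_c[l, k], using the sparse,
    non-negative classification weights w_c of shape (M + N, K)."""
    age_mask = np.zeros_like(p_age)
    age_mask[np.argmax(p_age)] = p_age.max()  # closest age prototype only
    p = np.concatenate([p_img, age_mask])     # L = M + N presence scores
    return p @ w_c                            # K class scores

# Hypothetical example: 2 image prototypes, 2 age prototypes, 2 classes.
p_img = np.array([1.0, 0.0])
p_age = np.array([0.2, 0.9])
w_c = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.0, 0.0],
                [2.0, 0.0]])
o = scoring_sheet_output(p_img, p_age, w_c)
```

Each nonzero weight acts like a row in a clinical scoring sheet: "if prototype 𝑙 is present, add w c 𝑙,𝑘 points to class 𝑘".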
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1</head><label>1</label><figDesc>Patients' demographics of the selected ADNI cohort, further divided according to the clinical labels.</figDesc><table><row><cell cols="4">Class N°subjects Mean ± SD Age Age Range</cell></row><row><cell>Both</cell><cell>550</cell><cell>76 ± 6</cell><cell>55-91</cell></row><row><cell>CN</cell><cell>307</cell><cell>76 ± 5</cell><cell>60-90</cell></row><row><cell>AD</cell><cell>243</cell><cell>75 ± 8</cell><cell>55-91</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2</head><label>2</label><figDesc>Performances comparison between PIPNet trained on 3D sMRI and PIMPNet trained on 3D sMRI + Age averaged over 5 folds.</figDesc><table><row><cell>Model</cell><cell>Acc</cell><cell>Bal Acc</cell><cell>SENS</cell><cell>SPEC</cell><cell>F1</cell></row><row><cell>PIPNet</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>ResNet-18 3D</cell><cell cols="5">83 ± 04 83 ± 04 86 ± 06 79 ± 07 81 ± 05</cell></row><row><cell cols="6">ConvNeXt-Tiny 3D 65 ± 12 66 ± 09 56 ± 32 76 ± 15 66 ± 05</cell></row><row><cell>PIMPNet</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>ResNet-18 3D</cell><cell cols="5">84 ± 04 83 ± 04 89 ± 03 77 ± 08 81 ± 05</cell></row><row><cell cols="6">ConvNeXt-Tiny 3D 72 ± 04 70 ± 04 86 ± 10 55 ± 14 63 ± 09</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3</head><label>3</label><figDesc>Functionally-grounded evaluation of PIPNet trained on 3D sMRI and PIMPNet trained on 3D sMRI + Age averaged over 5 folds. ↑ and ↓: tendency for better values.</figDesc><table><row><cell>Model</cell><cell>GS ↓</cell><cell>LS ↓</cell><cell>Sp ↑</cell><cell>LC 𝑝 ↓</cell><cell>H 𝑝 ↓</cell></row><row><cell>ResNet-18 3D</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>PIPNet</cell><cell cols="5">149 ± 18 73 ± 10 0.855 ± 0.018 0.008 ± 0.006 2.474 ± 0.249</cell></row><row><cell>PIMPNet</cell><cell cols="5">143 ± 35 74 ± 20 0.861 ± 0.033 0.006 ± 0.006 2.424 ± 0.162</cell></row><row><cell>ConvNeXt-Tiny 3D</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>PIPNet</cell><cell>4 ±</cell><cell>2 ± 1</cell><cell cols="3">0.997 ± 0.001 0.000 ± 0.000 1.803 ± 0.999</cell></row><row><cell>PIMPNet</cell><cell>10 ± 9</cell><cell>4 ± 4</cell><cell cols="3">0.993 ± 0.002 0.000 ± 0.000 1.543 ± 0.626</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 4</head><label>4</label><figDesc>Prototypical Age Values t 𝐴𝑔𝑒,𝑖 learned for folds M1, ..., M5 trained with different backbones. Fold t 𝐴𝑔𝑒,1 t 𝐴𝑔𝑒,2 t 𝐴𝑔𝑒,3 t 𝐴𝑔𝑒,4 t 𝐴𝑔𝑒,5 t 𝐴𝑔𝑒,1 t 𝐴𝑔𝑒,2 t 𝐴𝑔𝑒,3 t 𝐴𝑔𝑒,4 t 𝐴𝑔𝑒,5</figDesc><table><row><cell></cell><cell></cell><cell></cell><cell>ResNet-18 3D</cell><cell></cell><cell></cell><cell></cell><cell cols="3">ConvNeXt-Tiny 3D</cell><cell></cell></row><row><cell>M1</cell><cell>65.77</cell><cell>65.81</cell><cell>66.14</cell><cell>76.81</cell><cell>80.99</cell><cell>56.81</cell><cell>65.00</cell><cell>64.96</cell><cell>74.13</cell><cell>85.80</cell></row><row><cell>M2</cell><cell>68.46</cell><cell>69.40</cell><cell>70.38</cell><cell>77.04</cell><cell>82.38</cell><cell>55.75</cell><cell>58.39</cell><cell>64.96</cell><cell>74.32</cell><cell>85.59</cell></row><row><cell>M3</cell><cell>66.37</cell><cell>67.27</cell><cell>67.91</cell><cell>75.87</cell><cell>81.96</cell><cell>54.86</cell><cell>56.63</cell><cell>65.21</cell><cell>74.40</cell><cell>85.11</cell></row><row><cell>M4</cell><cell>66.72</cell><cell>66.72</cell><cell>67.07</cell><cell>77.07</cell><cell>79.75</cell><cell>58.22</cell><cell>58.59</cell><cell>66.50</cell><cell>75.88</cell><cell>89.09</cell></row><row><cell>M5</cell><cell>66.51</cell><cell>66.52</cell><cell>67.23</cell><cell>77.37</cell><cell>80.00</cell><cell>57.79</cell><cell>66.94</cell><cell>65.44</cell><cell>72.55</cell><cell>84.58</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">The similarity function employed is inspired by the magnitude of a Butterworth filter<ref type="bibr" target="#b13">[14]</ref>. In preliminary experiments, we used an exponential similarity function as in ProtoTree: p𝑎𝑔𝑒,𝑛 = 𝑒𝑥𝑝(−||x𝑎𝑔𝑒 − t𝑎𝑔𝑒,𝑛||), but as 𝑒𝑥𝑝(−2) ≈ 0.13, a 2-year age difference would already result in little similarity, which is not in line with domain knowledge about age relevance for Alzheimer's disease.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_2">We selected only the most activated age prototype during inference (not during the optimization process).</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_3">https://adni.loni.usc.edu</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_4">https://monai.io</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_5">Using the same learning rate as the original PIPNet used to train the image prototypes (0.05) results in negligible updates of the age prototypes.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_6">Purity is measured w.r.t. the annotations provided by the CerebrA atlas.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_7">Posterior quantitative evaluation performed w.r.t. the CerebrA atlas revealed that the test-set image prototypes (averaged over the 5 folds) obtained with the ConvNeXt-Tiny backbone include a higher percentage of background voxels than those obtained with ResNet-18 (76.6% vs 59.2%).</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>Data used in the preparation of this article was obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The ADNI was launched in 2003 as a public-private partnership with the primary goal to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer's disease (AD).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Deep learning to detect alzheimer&apos;s disease from neuroimaging: A systematic literature review</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Ebrahimighahnavieh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Luo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chiong</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.cmpb.2019.105242</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Methods and Programs in Biomedicine</title>
		<imprint>
			<biblScope unit="volume">187</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">An Explainable Convolutional Neural Network for the Early Diagnosis of Alzheimer&apos;s Disease from 18F-FDG PET</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">De</forename><surname>Santi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Pasini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Santarelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Genovesi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Positano</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10278-022-00719-3</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Digital Imaging</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Magnetic resonance imaging in Alzheimer&apos;s disease and mild cognitive impairment</title>
		<author>
			<persName><forename type="first">A</forename><surname>Chandra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Dervenoulas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Politis</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00415-018-9016-3</idno>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Role of structural MRI in Alzheimer&apos;s disease</title>
		<author>
			<persName><forename type="first">P</forename><surname>Vemuri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Jack</surname></persName>
		</author>
		<idno type="DOI">10.1186/alzrt47</idno>
	</analytic>
	<monogr>
		<title level="j">Alzheimer&apos;s Research and Therapy</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Age-related differences in brain morphology and the modifiers in middle-aged and older adults</title>
		<author>
			<persName><forename type="first">L</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Matloff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">D</forename><surname>Dinov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">W</forename><surname>Toga</surname></persName>
		</author>
		<idno type="DOI">10.1093/cercor/bhy300</idno>
	</analytic>
	<monogr>
		<title level="j">Cerebral Cortex</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="4169" to="4193" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A model of brain morphological changes related to aging and alzheimer&apos;s disease from cross-sectional assessments</title>
		<author>
			<persName><forename type="first">R</forename><surname>Sivera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Delingette</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lorenzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Pennec</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ayache</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neuroimage.2019.05.040</idno>
	</analytic>
	<monogr>
		<title level="j">NeuroImage</title>
		<imprint>
			<biblScope unit="volume">198</biblScope>
			<biblScope unit="page" from="255" to="270" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Layer-wise relevance propagation for explaining deep neural network decisions in MRI-based Alzheimer&apos;s disease classification</title>
		<author>
			<persName><forename type="first">M</forename><surname>Böhle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Eitel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Weygandt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ritter</surname></persName>
		</author>
		<idno type="DOI">10.3389/fnagi.2019.00194</idno>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Aging Neuroscience</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Deep learning for Alzheimer&apos;s disease diagnosis: A survey</title>
		<author>
			<persName><forename type="first">M</forename><surname>Khojaste-Sarakhsi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Haghighi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">F</forename><surname>Ghomi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Marchiori</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.artmed.2022.102332</idno>
		<ptr target="https://doi.org/10.1016/j.artmed.2022.102332" />
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence in Medicine</title>
		<imprint>
			<biblScope unit="volume">130</biblScope>
			<biblScope unit="page">102332</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Explainable artificial intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions</title>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Brcic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Cabitza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Confalonieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Ser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Guidotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hayashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Herrera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Holzinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Khosravi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lecue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Malgieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Páez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Samek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schneider</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Speith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Stumpf</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.inffus.2024.102301</idno>
		<ptr target="https://doi.org/10.1016/j.inffus.2024.102301" />
	</analytic>
	<monogr>
		<title level="j">Information Fusion</title>
		<imprint>
			<biblScope unit="volume">106</biblScope>
			<biblScope unit="page">102301</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">PIP-Net: Patch-Based Intuitive Prototypes for Interpretable Image Classification</title>
		<author>
			<persName><forename type="first">M</forename><surname>Nauta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schlötterer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Van Keulen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Seifert</surname></persName>
		</author>
		<idno type="DOI">10.1109/CVPR52729.2023.00269</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>De Santi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schlötterer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Scheschenja</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wessendorf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Nauta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Positano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Seifert</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2403.18328</idno>
		<title level="m">PIPNet3D: Interpretable detection of Alzheimer in MRI scans</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Interpreting and correcting medical image classification with PIP-Net</title>
		<author>
			<persName><forename type="first">M</forename><surname>Nauta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Hegeman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Geerdink</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schlötterer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Van Keulen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Seifert</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Artificial Intelligence. ECAI 2023 International Workshops</title>
				<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="198" to="215" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Multimodality in meta-learning: A comprehensive survey</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>King</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.knosys.2022.108976</idno>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">250</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">On the theory of filter amplifiers</title>
		<author>
			<persName><forename type="first">S</forename><surname>Butterworth</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Wireless Engineer</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="536" to="541" />
			<date type="published" when="1930">1930</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Estimating explainable Alzheimer&apos;s disease likelihood map via clinically-guided prototype learning</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">W</forename><surname>Mulyadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Jung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Oh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Yoon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">H</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H.-I</forename><surname>Suk</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neuroimage.2023.120073</idno>
	</analytic>
	<monogr>
		<title level="j">NeuroImage</title>
		<imprint>
			<biblScope unit="volume">273</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Unbiased nonlinear average age-appropriate brain templates from birth to adulthood</title>
		<author>
			<persName><forename type="first">V</forename><surname>Fonov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Evans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Mckinstry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Almli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Collins</surname></persName>
		</author>
		<idno type="DOI">10.1016/S1053-8119(09)70884-5</idno>
		<ptr target="https://doi.org/10.1016/S1053-8119(09)70884-5" />
	</analytic>
	<monogr>
		<title level="m">Organization for Human Brain Mapping 2009 Annual Meeting, NeuroImage</title>
				<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="volume">47</biblScope>
			<biblScope unit="page">S102</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">A closer look at spatiotemporal convolutions for action recognition</title>
		<author>
			<persName><forename type="first">D</forename><surname>Tran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Torresani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>LeCun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Paluri</surname></persName>
		</author>
		<ptr target="http://arxiv.org/abs/1711.11248" />
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Kienzle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lorenz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Schön</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ludwig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Lienhart</surname></persName>
		</author>
		<ptr target="http://arxiv.org/abs/2206.15073" />
		<title level="m">COVID detection and severity prediction with 3D-ConvNeXt and custom pretrainings</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">CerebrA, registration and manual label correction of Mindboggle-101 atlas for MNI-ICBM152 template</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L</forename><surname>Manera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Dadar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Fonov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">L</forename><surname>Collins</surname></persName>
		</author>
		<idno type="DOI">10.1038/s41597-020-0557-9</idno>
	</analytic>
	<monogr>
		<title level="j">Scientific Data</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">The Co-12 recipe for evaluating interpretable part-prototype image classifiers</title>
		<author>
			<persName><forename type="first">M</forename><surname>Nauta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Seifert</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Explainable Artificial Intelligence</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="397" to="420" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
