=Paper=
{{Paper
|id=Vol-3715/paper6
|storemode=property
|title=MEDIGUI-ConvNet – Interactive Architecture Combining the Power of Convolutional Neural Networks and Medical Imaging
|pdfUrl=https://ceur-ws.org/Vol-3715/paper6.pdf
|volume=Vol-3715
|authors=Luca Zammataro,Stefano Rovetta,Danilo Greco
|dblpUrl=https://dblp.org/rec/conf/ini-dh/ZammataroRG24
}}
==MEDIGUI-ConvNet – Interactive Architecture Combining the Power of Convolutional Neural Networks and Medical Imaging==
Luca Zammataro 1,*,†, Stefano Rovetta 2,† and Danilo Greco 2,3,†

1 Lunan Foldomics LLC, Houston, Texas, USA
2 DIBRIS, Università degli Studi di Genova, Genoa, Italy
3 DIG, Politecnico di Milano, Milan, Italy
Abstract
Convolutional Neural Networks (CNNs) are the state of the art in domain-specific neural networks for
image data. We describe MEDIGUI-ConvNet, an effective CNN-based system for autonomous analysis
and diagnosis in medical imaging, aimed at Alzheimer’s disease. To bridge the gap between data/image
scientists and domain experts, a graphical user interface (GUI) framework allows end-users to load their
medical image datasets in pickle format and to perform basic operations such as training, testing, and
deployment of models. Users can construct CNN models according to their needs, including defining the
network architecture and hyperparameters. After training, MEDIGUI-ConvNet lets users test models,
comparing accuracy and predictions against ground-truth classifications. MEDIGUI-ConvNet offers a
user-friendly solution for medical professionals and researchers to harness the power of deep learning
for medical image analysis without specialized programming expertise, promising to accelerate research
and clinical applications in areas such as disease diagnosis, prognosis, and treatment planning.
Keywords
Convolutional Neural Networks (CNN), automated detection, graphical user interface, Alzheimer’s
disease, neurodegenerative disease, structural magnetic resonance images, early diagnosis, MR image
classification, accuracy

INI-DH 2024: Workshop on Innovative Interfaces in Digital Healthcare, in conjunction with the International Conference on Advanced Visual Interfaces 2024 (AVI 2024), June 3–7, 2024, Arenzano, Genoa, Italy
* Corresponding author.
† These authors contributed equally.
luca.zammataro@gmail.com (L. Zammataro); stefano.rovetta@unige.it (S. Rovetta); danilo.greco@polimi.it (D. Greco)
https://github.com/lucazammataro (L. Zammataro)
ORCID: 0000-0002-4348-6341 (L. Zammataro); 0000-0003-3865-2613 (S. Rovetta); 0000-0002-0011-7001 (D. Greco)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
1. Introduction
Alzheimer’s disease (AD) is the most common cause of dementia, accounting for an estimated
60-80% of cases worldwide [1]. It is a progressive and irreversible neurodegenerative disorder
leading to loss of cognitive functions including memory, language skills, attention, and reasoning.
Nowadays, over 40 million people worldwide have Alzheimer’s disease or related dementias,
with prevalence projected to triple to 135 million by 2050 as populations age [2]. Pathologically,
Alzheimer’s is characterized by abnormal accumulation of amyloid beta protein fragments into
senile plaques and neurofibrillary tangles of hyperphosphorylated tau protein in the brain,
leading to neuronal dysfunction and loss [3]. No disease-modifying treatments or cures currently
exist, therefore early and accurate diagnosis is critical for prognostic planning and symptom
management to improve the quality of life for patients [4]. However, definitively diagnosing
Alzheimer’s disease, particularly in its early stages, poses major clinical challenges. Symptoms
in the early stages can be subtle, heterogeneous in presentation, and easily confused with normal
ageing or other neurological conditions [5]. In vivo diagnosis relies on skilled interpretation
of clinical assessments, cognitive tests, blood markers, and multi-modal neuroimaging [6].
Misdiagnosis rates for AD have been estimated to be as high as 20% even by expert clinicians
[7]. Distinguishing early AD from normal ageing or mild cognitive impairment (MCI) is
particularly difficult but crucial, as MCI can be a precursor to dementia. Neuropsychological
testing combined with structural magnetic resonance imaging (MRI) is commonly used to
aid diagnosis but lacks objectivity and standardization [8]. Positron emission tomography
(PET) imaging of amyloid burden is highly sensitive but expensive and not widely available [9].
These challenges motivate research into computer-aided diagnosis (CAD) systems to objectively
quantify disease markers from routine clinical data like neuroimaging. By automating and
standardizing parts of the diagnostic workflow, machine learning has the potential to improve
the efficiency, accuracy and reliability of AD diagnosis to ensure patients receive appropriate
early interventions. Early work applied machine learning approaches like support vector
machines (SVMs) and random forests to classify AD using hand-crafted feature representations
of grey matter density, cortical thickness, hippocampal volume and other MRI morphometric
measures [10, 11, 12, 13]. Although promising, these systems are limited by reliance on the
extraction of bespoke feature sets requiring domain expertise. More recently, deep learning
approaches based on convolutional neural networks (CNNs) have emerged as powerful tools
for directly learning discriminative features from 2D neuroimaging data in an end-to-end
manner without extensive feature engineering. Deep learning models have achieved state-
of-the-art performance on benchmark academic datasets like ADNI. However, a key barrier
to real-world clinical deployment is the lack of validation on diverse independent datasets
to demonstrate robust generalization across scanners, populations and demographics. There
is a need for extensively evaluated, reliable automated diagnosis aids to assist clinicians in
accurately detecting early Alzheimer’s disease. In this work, we develop a CNN architecture
optimized for Alzheimer’s classification from structural MRI volumes. We demonstrate state-
of-the-art performance on the large-scale benchmark ADNI dataset. Extensive experiments
validate the model’s ability to generalize, using diverse datasets from multiple sources aggregated to
mimic real-world variability. Our deep learning model efficiently learns whole-brain patterns of
atrophy characteristic of AD from minimally processed scans without requiring expert feature
engineering. Our system shows promise as an automated second opinion to aid clinicians in early
and accurate Alzheimer’s diagnosis to enable timely patient interventions. Concretely, this paper
presents a multi-layer neural network architecture for the automated detection of Alzheimer’s
disease from structural magnetic resonance images: a deep network combining convolutional,
pooling, dense and dropout layers optimized for 2D MR image classification. Our model achieves
96.2% accuracy on the benchmark Alzheimer’s Disease Neuroimaging Initiative dataset, outperforming
state-of-the-art methods; ablation studies validate the importance of key architectural choices, and
extensive evaluation on diverse cohorts demonstrates generalizability across populations and
scanners. The model efficiently learns distinctive whole-brain atrophy patterns from minimally
processed scans without requiring expert feature engineering, providing a widely deployable tool
for assisting clinicians in early and accurate Alzheimer’s diagnosis and enabling timely
interventions.
Convolutional Neural Networks (CNNs) are a type of artificial neural network (ANN) inspired
by the visual system of animals. The first development of CNNs is credited to Fukushima in the
1980s for character recognition [14]. However, their widespread adoption came later in the 1990s
and 2000s with the emergence of more powerful hardware and efficient learning algorithms.
CNNs have achieved remarkable success in various computer vision tasks. In image recognition,
a context in which CNNs are the state-of-the-art method, applications span diverse fields such
as object classification, facial detection, and scene understanding. In Object
Detection, CNNs are used to identify objects in images and videos, finding applications in
security, robotics, and the automotive industry. Finally, in Semantic Segmentation, CNNs
can segment images into regions of interest, with applications in medicine, agriculture, and
industrial inspection [15].
2. Methods
2.1. Data
The dataset used for model development and evaluation consists of 33,984 MRI images
from patients with AD and from healthy controls. The images were acquired with
various scanners and resized to a resolution of 100×100 pixels for our
purposes. The original dataset is available at https://www.kaggle.com/datasets/uraninjo/
augmented-alzheimer-mri-dataset and is distributed under GNU Lesser General Public License
(https://www.gnu.org/licenses/lgpl-3.0.html)[16].
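For orientation, the sketch below shows how such a dataset could be loaded and split in Python. The pickle layout (a pandas DataFrame with "image" and "label" columns) and the file name are assumptions made for illustration; the actual MEDIGUI-ConvNet format may differ.

```python
# Illustrative sketch only: assumes the pickle holds a pandas DataFrame with
# an 'image' column of 100x100 arrays and an integer 'label' column.
import pickle

import numpy as np
from sklearn.model_selection import train_test_split

with open("alzheimer_mri.pickle", "rb") as f:   # hypothetical file name
    df = pickle.load(f)

X = np.stack(df["image"].to_numpy()).astype("float32") / 255.0  # normalize
X = X[..., np.newaxis]          # add a channel axis: (33984, 100, 100, 1)
y = df["label"].to_numpy()      # four classes, encoded 0..3

# An 80/20 split matches the 27,187 training / 6,797 test images reported.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```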
2.2. CNN Architecture
CNNs rely on two key principles: convolution and pooling. Through convolution, CNNs apply
convolution operations to the input data, which extract local and invariant features. Through
pooling, CNNs reduce the dimensionality of the intermediate representations, improving
computational efficiency and model robustness.
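As a small illustration of these two operations (not taken from the MEDIGUI-ConvNet code), the shapes below match the first two rows of Table 1: an unpadded 3×3 convolution trims two pixels per axis, and 2×2 max-pooling halves each spatial dimension.

```python
import tensorflow as tf

x = tf.random.normal((1, 100, 100, 1))               # one 100x100 grayscale image
h = tf.keras.layers.Conv2D(44, (3, 3), activation="relu")(x)
print(h.shape)                                       # (1, 98, 98, 44)
h = tf.keras.layers.MaxPooling2D((2, 2))(h)
print(h.shape)                                       # (1, 49, 49, 44)
```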
The proposed CNN architecture (Figure 1) is designed to classify magnetic resonance imaging
(MRI) images of patients affected by Alzheimer’s Disease into four classes: Non-Demented,
Moderate-Demented, Mild-Demented, and Very-Mild-Demented. The dataset consists of 33,984
MRI images with a resolution of 100x100 pixels, split into 27,187 images for training and 6,797
images for testing. The architecture comprises a series of convolutional (Conv2D) and pooling
(MaxPooling2D) layers, followed by two fully connected (Dense) layers for final classification.
Table 1 summarizes the CNN architecture, which has 3,952,572 trainable parameters in total
and is trained with cross-entropy loss using the Adam algorithm.

Figure 1: The MEDIGUI-ConvNet architecture.

Table 1
Detailed architecture of MEDIGUI-ConvNet

Layer (type)        Output Shape          Param #
conv2d              (None, 98, 98, 44)    440
max_pooling2d       (None, 49, 49, 44)    0
conv2d_1            (None, 47, 47, 128)   50816
max_pooling2d_1     (None, 23, 23, 128)   0
conv2d_2            (None, 21, 21, 256)   295168
max_pooling2d_2     (None, 10, 10, 256)   0
conv2d_3            (None, 8, 8, 512)     1180160
max_pooling2d_3     (None, 4, 4, 512)     0
conv2d_4            (None, 2, 2, 512)     2359808
max_pooling2d_4     (None, 1, 1, 512)     0
flatten             (None, 512)           0
dense               (None, 128)           65664
dense_1             (None, 4)             516

We
have extended the methods by introducing a graphical interface to guide users through the
management of CNN training, as well as the ability to test the model with test images. For
the graphical interface, we opted to use ipywidgets to facilitate software integration within
a Jupyter Notebook environment. For the implementation of the CNN, we utilized Python 3
along with TensorFlow and Keras libraries. Accessing the GUI is straightforward: users simply
import the module into a Jupyter Notebook as a Python object. Once the module is imported,
users can also call individual functions directly through the object, bypassing the GUI
entirely.
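The architecture in Table 1 can be expressed compactly in Keras. The sketch below is our reading of the table (not the authors' published code), assuming 100×100 grayscale inputs and ReLU activations in the hidden layers; it reproduces the reported layer shapes and the total of 3,952,572 trainable parameters.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(44, (3, 3), activation="relu",
                  input_shape=(100, 100, 1)),       # -> (98, 98, 44)
    layers.MaxPooling2D((2, 2)),                    # -> (49, 49, 44)
    layers.Conv2D(128, (3, 3), activation="relu"),  # -> (47, 47, 128)
    layers.MaxPooling2D((2, 2)),                    # -> (23, 23, 128)
    layers.Conv2D(256, (3, 3), activation="relu"),  # -> (21, 21, 256)
    layers.MaxPooling2D((2, 2)),                    # -> (10, 10, 256)
    layers.Conv2D(512, (3, 3), activation="relu"),  # -> (8, 8, 512)
    layers.MaxPooling2D((2, 2)),                    # -> (4, 4, 512)
    layers.Conv2D(512, (3, 3), activation="relu"),  # -> (2, 2, 512)
    layers.MaxPooling2D((2, 2)),                    # -> (1, 1, 512)
    layers.Flatten(),                               # -> (512,)
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),          # four dementia classes
])
model.summary()   # total trainable parameters: 3,952,572
```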
2.3. Training
For this study, we employed the CNN architecture detailed in Table 1. The training set includes
27,187 MRI images. After 30 epochs using the Adam optimizer with a learning rate of 0.001 and
two regularization parameters set to 0.001, the model reached a validation accuracy of 96.2%
with a validation loss of 0.19, while attaining a training accuracy of 99.5% with a training
loss of 0.031.

Figure 2: A screenshot of the MEDIGUI-ConvNet graphical user interface. Users can upload a dataset
from a pickle archive and perform training.

Figure 3: Users can adjust training epochs, batch size, and the two regularization parameters to
fine-tune training performance. The Training tab also provides selection menus and sliders for
modifying the CNN architecture by adjusting filters, the number of neurons, and activation functions.

However, as we will describe later, we made our CNN open to various customizations, thus allowing
for a wide range of experiments.
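A minimal sketch of such a training run, reusing `model` and the train/test arrays from the sketches above. We assume the two regularization parameters are L2 penalties on the dense layers (the GUI exposes them as dense-layer regularization controls) and pick an illustrative batch size; neither detail is stated exactly in the text.

```python
# (Assumes the two Dense layers above were built with
#  kernel_regularizer=regularizers.l2(0.001), per Section 2.3.)
from tensorflow.keras import optimizers

model.compile(
    optimizer=optimizers.Adam(learning_rate=0.001),  # Adam, lr = 0.001
    loss="sparse_categorical_crossentropy",          # cross-entropy, 4 classes
    metrics=["accuracy"],
)
history = model.fit(
    X_train, y_train,
    epochs=30,                        # 30 epochs, as reported
    batch_size=64,                    # illustrative; adjustable in the GUI
    validation_data=(X_test, y_test),
)
```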
2.4. The GUI
MEDIGUI-ConvNet presents a multi-tab interface designed to provide an optimal interactive
experience [17]. The "Load Dataset" tab allows users to upload a dataset in pickle format.
The dataset must be structured to provide pixel intensity values for each image and a label
representing a category to classify. In the case of the Alzheimer’s dataset, we have four labels
representing four categories of dementia (ND = Non-Demented, MoD = Moderate-Demented,
MiD = Mild-Demented, and ViMD = Very-Mild-Demented). Once the dataset has been loaded,
the user can proceed to the training phase (Figure 2). While the fundamental architecture of the
CNN in MEDIGUI-ConvNet, consisting of five convolutional layers, five max-pooling layers,
one flatten layer, and two dense layers, is fixed, users retain significant control over its design:
they can adjust the number of filters and neurons for all layers, including the flatten layer;
change the activation function associated with each layer, choosing between ReLU and softmax;
and vary the regularization parameters on the dense layers, the number of epochs, the batch size,
the test size, and the seed that controls the randomization used to generate the training and
testing datasets (Figure 3).
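To give a flavour of this interface style, the sketch below assembles a multi-tab ipywidgets layout of the kind described. It is illustrative only: the tab names follow the paper, but the widget contents are placeholders, not the actual MEDIGUI-ConvNet implementation.

```python
import ipywidgets as widgets
from IPython.display import display

load_tab = widgets.VBox([widgets.FileUpload(accept=".pickle")])
train_tab = widgets.VBox([
    widgets.IntSlider(value=30, min=1, max=100, description="Epochs"),
    widgets.IntSlider(value=64, min=8, max=256, description="Batch size"),
    widgets.FloatLogSlider(value=0.001, base=10, min=-5, max=-1,
                           description="L2 reg."),
])
test_tab = widgets.VBox([
    widgets.IntSlider(description="Image #"),
    widgets.Checkbox(description="Feature Mapping"),
])

tabs = widgets.Tab(children=[load_tab, train_tab, test_tab])
for i, title in enumerate(["Load Dataset", "Training", "Model Testing"]):
    tabs.set_title(i, title)
display(tabs)
```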
The ’Model Testing’ tab in MEDIGUI-ConvNet allows users to load trained models with a single
click. Once a model has been chosen and loaded, users can browse the previously uploaded image
dataset using a slider. A window on the right displays the MRI image together with information
such as the image’s class and the prediction made by the model. The GUI always displays a Log
window, which tracks all processes performed by the algorithm, ensuring transparency and
traceability (Figure 4).

Figure 4: The ’Model Testing’ tab in MEDIGUI-ConvNet facilitates loading pre-trained models from
disk and accessing an uploaded image dataset. A window displays MRI images, their class, and model
predictions, while a Log window ensures process transparency and traceability.
MEDIGUI-ConvNet automatically generates a plot containing predictions for the first 100
images of the testing dataset. Each prediction in the plot is annotated with the image number
and the actual class to which the image belongs. This lets users quickly inspect predictions and
conduct in-depth pattern analysis by exploring the CNN filters (Figure 5).

Figure 5: The plot displays a selection of 25 out of 100 predictions for visual clarity: each image
is accompanied by a title showing the image number, the real class, and the predicted class. For
example, ’i:18, r:MiD, p:MiD’ signifies image 18, real class Mild-Demented, predicted class
Mild-Demented.
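A grid like Figure 5 takes only a few lines of matplotlib. The sketch below is illustrative: the class-name order is an assumption, and `model`, `X_test`, and `y_test` come from the earlier sketches; it mirrors the i/r/p title convention of the figure.

```python
import matplotlib.pyplot as plt

class_names = ["ND", "MoD", "MiD", "ViMD"]        # assumed label order
preds = model.predict(X_test[:25]).argmax(axis=1)

fig, axes = plt.subplots(5, 5, figsize=(10, 10))
for i, ax in enumerate(axes.flat):
    ax.imshow(X_test[i].squeeze(), cmap="gray")
    ax.set_title(f"i:{i}, r:{class_names[y_test[i]]}, "
                 f"p:{class_names[preds[i]]}", fontsize=8)
    ax.axis("off")
plt.tight_layout()
plt.show()
```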
Within the Model Testing tab, clicking the ’Feature Mapping’ checkbox generates a series of plots
displaying the activations from all layers of the CNN. This analysis is particularly useful for
identifying patterns characteristic of the various classes, enabling physicians and researchers to
identify morphological-degenerative aspects of brain tissue that can be correlated with disease
progression (Figure 6).

Figure 6: Upon toggling the ’Feature Mapping’ checkbox in the Model Testing tab, a series of plots
illustrating activations from all CNN layers can be generated, facilitating the identification of
characteristic patterns across classes and enabling correlation with disease progression in brain
tissue.
Figure 7 partially represents the patterns identified using the Feature Mapping approach. It
illustrates the information captured by filters from selected CNN layers (max_pooling_0, conv2D_1,
and conv2D_4) for MRI image number 58 from the test dataset. These images come from a test
subset randomly sampled as 20% of the dataset using the medigui.splitDataset function.
Figure 7 also provides insight into the activation levels of the 512 units in the flatten layer and
of the two dense layers. Of particular note is the last dense layer, which shows the activity of the
four output neurons: the plot clearly shows an activation of 1.0 for output neuron 1, corresponding
to the label of the Moderate-Demented class. Following the representation convention, brighter
colours towards yellow correspond to higher activation levels, indicating what the CNN layers deem
most relevant for classification purposes (Figure 7).

Figure 7: Patterns identified through Feature Mapping in MRI image 58 from the test dataset,
showcasing information from selected CNN layers and activation levels of the flatten and dense
layers, providing insights into the classification process.
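Per-layer feature maps of this kind can be extracted with a generic Keras activation-extraction pattern. The sketch below is not the MEDIGUI-ConvNet code; it reuses `model` and `X_test` from the earlier sketches and plots the first convolutional layer's filters for image 58, as in Figure 7.

```python
import matplotlib.pyplot as plt
from tensorflow.keras import models

# One auxiliary model whose outputs are the activations of every layer.
activation_model = models.Model(
    inputs=model.inputs,
    outputs=[layer.output for layer in model.layers],
)
activations = activation_model.predict(X_test[58:59])   # image 58, as in Fig. 7

# Plot the first 16 filters of the first convolutional layer.
first_conv = activations[0]                              # shape (1, 98, 98, 44)
fig, axes = plt.subplots(4, 4, figsize=(8, 8))
for i, ax in enumerate(axes.flat):
    ax.imshow(first_conv[0, :, :, i], cmap="viridis")    # yellow = high activation
    ax.axis("off")
plt.show()
```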
2.5. Evaluation
The CNN was evaluated on the independent test set of 6,797 MRI images. The classification
performance was evaluated using the F1 score, which is particularly useful when classes are imbalanced
or when a good balance between precision and recall is important. It is calculated as follows:
$$F_1 = 2 \times \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}} \qquad (1)$$
The scikit-learn f1_score function was applied using the weighted averaging strategy to
handle class imbalance. A repeated cross-validation technique was also employed to evaluate
the model’s performance on the dataset, using the scikit-learn RepeatedKFold function.
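As a sketch of the first step (reusing `model` and the test arrays from the earlier sketches), the weighted F1 computation amounts to:

```python
from sklearn.metrics import f1_score

y_pred = model.predict(X_test).argmax(axis=1)        # predicted class per image
print(f1_score(y_test, y_pred, average="weighted"))  # weighted for imbalance
```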
Repeated k-fold lets us train and evaluate the model multiple times on different subsets of the
data, providing a more reliable estimate of its performance. The dimensions of the input data
were also extended to accommodate the model’s requirements, ensuring compatibility during
training and evaluation. Finally, we computed the mean accuracy score and standard deviation
across all folds to quantify the model’s performance and its variability.
The pre-trained model (30 epochs with two regularization parameters set to 0.001) was
validated using a 15-fold cross-validation.

Figure 8: Feature mapping of a Non-Demented sample: our analysis of filter information and dense-layer
activations reveals distinct patterns between Non-Demented and Moderate-Demented images (see
also Figure 9). While insights may seem redundant within filters, the complexity of layer connections
sometimes yields ’black-box’ information. Comparison at level 8 highlights differing patterns between
ND and MoD images, while examination of activation levels at levels 10 and 11 provides insights into
potential correlations with output-layer activation.

Figure 9: Feature mapping of a Moderate-Demented sample; the caption of Figure 8 applies.
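The repeated cross-validation protocol described above might look as follows. This is a sketch under stated assumptions: `build_model()` is a hypothetical helper returning a freshly compiled copy of the Table 1 network, and the n_splits/n_repeats values are illustrative (the paper reports a 15-fold validation).

```python
import numpy as np
from sklearn.model_selection import RepeatedKFold

rkf = RepeatedKFold(n_splits=15, n_repeats=3, random_state=42)
scores = []
for train_idx, val_idx in rkf.split(X):
    m = build_model()                  # hypothetical: fresh, compiled Table 1 CNN
    m.fit(X[train_idx], y[train_idx], epochs=30, batch_size=64, verbose=0)
    _, acc = m.evaluate(X[val_idx], y[val_idx], verbose=0)
    scores.append(acc)

print(f"mean accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```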
3. Results and discussion
3.1. Experimental results
The CNN demonstrated robust generalization, effectively classifying previously unseen MRI
images into the appropriate categories of Alzheimer’s Disease severity. The test accuracy
reached 96.2%, highlighting the reliability of this CNN-based model for clinical applications.
Furthermore, the precision, recall, and F1-score metrics provide insight into the CNN’s ability
to identify each class accurately while minimizing false positives and false negatives. In our
case, the weighted F1-score was 0.96, indicating a good balance between precision and recall
and, consequently, strong overall classification performance.
3.2. Discussion
Analyzing the information contained within the filters and visualizing the activation levels of
units in the dense layers using our tool, we observed that our system could highlight distin-
guishable patterns between images classified as Non-Demented compared to those classified as
Moderate-Demented. Many of the insights appear redundant within the filters, and in some
cases, "black-box" information is obtained due to the complexity of connections between the
various layers. For instance, comparing the patterns highlighted at level 8, corresponding to the
Conv2D_4 convolution layer, reveals how the arrangement of patterns within the filters may
differ between ND and MoD samples. However, directly correlating these arrangements
with the obtained outcome requires further analysis. In contrast, comparing the activation levels
of the units plotted at levels 10 and 11, corresponding to the flatten layer and the
first dense layer, offers a better understanding of a potential correlation with the activation of
the output layers (Figures 8 and 9).
A noteworthy finding is a clear difference between ND and MoD samples in the activations
of neurons with indices ranging from 20 to 40 and from 70 to 120, as observed in dense layer 11.
These neural units’ distinct activation strongly correlates with the output levels of dense layer
12, representing the entire CNN’s final output layer. These distinct activation patterns might
arise from a series of alterations related to the widening of certain cerebral sulci detectable by
MRI, albeit challenging to identify with the naked eye.
Our analysis suggests that the convolutional neural network (CNN) has the potential to
detect subtle and intricate anatomical features linked to severe dementia, such as widened
convoluted sulci. By examining the activation patterns within the CNN’s dense layers, we
observed a significant correlation between these features and the level of dementia severity.
However, further investigation is necessary to confirm and interpret these associations defini-
tively. The CNN might also be adept at identifying other anatomical features or patterns of
neuronal activation related to dementia. These findings could provide valuable insights into
the neuroanatomical underpinnings of the disease. In conclusion, our study suggests that
the CNN approach holds promise for uncovering neuroanatomical markers of dementia. In
addition, our analysis suggests the possibility of identifying recurring patterns within the fully
connected layer. These patterns could represent specific configurations of neuroanatomical
features characteristic of different stages or subtypes of dementia. By exploring the activations
and weights within the dense layer, we may uncover meaningful associations between these
patterns and clinical manifestations of the disease. However, further research is warranted
to investigate these hypotheses and elucidate the clinical relevance of the identified patterns.
Overall, detecting recurring patterns within the dense layer holds promise for enhancing our
understanding of the underlying neurobiology of dementia and may lead to more targeted
diagnostic and therapeutic interventions.
4. Conclusions
Our model’s performance on the test data is pivotal, reflecting its capacity to generalize beyond
the confines of the training set. By emphasizing the model’s overall accuracy and juxtaposing it
against alternative methodologies, we glean valuable insights into the efficacy of our approach
in the realm of Alzheimer’s Disease diagnosis. As a next step, we plan to test our software
on magnetic resonance images of size 256×256. Navigating challenges encountered during the
training and testing phases, such as mitigating overfitting and addressing data imbalance, yields
a nuanced understanding of our model’s performance. Delving into these challenges not only
elucidates areas for refinement but also paves the way for future advancements in neuroimaging
analysis.
Proposing innovative solutions or enhancements derived from the identified challenges helps
push the boundaries of model performance. Whether fine-tuning the model architecture,
exploring novel training paradigms, or enriching the dataset with diverse clinical data, these
endeavours contribute to ongoing progress in Alzheimer’s Disease research and diagnostics.
In conclusion, evaluating our model’s performance transcends mere accuracy assessment;
it necessitates a holistic examination of its strengths, weaknesses, and avenues for improve-
ment. This comprehensive approach underscores our commitment to continual refinement and
innovation, driving forward the integration of AI-powered CNNs in the clinical diagnosis of
Alzheimer’s Disease. The robust performance of the trained CNN underscores its potential
as a valuable tool for assisting clinicians in diagnosing and monitoring Alzheimer’s Disease
progression.
References
[1] A. Burns, S. Iliffe, Alzheimer’s disease, BMJ: British Medical Journal (Online) 338 (2009).
[2] C. Patterson, World alzheimer report 2018 (2018).
[3] A. Serrano-Pozo, M. P. Frosch, E. Masliah, B. T. Hyman, Neuropathological alterations in alzheimer
disease, Cold Spring Harbor perspectives in medicine 1 (2011) a006189.
[4] B. D. Carpenter, C. Xiong, E. K. Porensky, M. M. Lee, P. J. Brown, M. Coats, D. Johnson, J. C.
Morris, Reaction to a dementia diagnosis in individuals with alzheimer’s disease and mild cognitive
impairment, Journal of the American Geriatrics society 56 (2008) 405–412.
[5] J. T. Becker, F. Boller, J. Saxton, K. L. McGonigle-Gibson, Normal rates of forgetting of verbal and
non-verbal material in alzheimer’s disease., Cortex; a journal devoted to the study of the nervous
system and behavior 23 (1987) 59–72.
[6] G. M. McKhann, D. S. Knopman, H. Chertkow, B. T. Hyman, C. R. Jack Jr, C. H. Kawas, W. E. Klunk,
W. J. Koroshetz, J. J. Manly, R. Mayeux, et al., The diagnosis of dementia due to alzheimer’s disease:
Recommendations from the national institute on aging-alzheimer’s association workgroups on
diagnostic guidelines for alzheimer’s disease, Alzheimer’s & dementia 7 (2011) 263–269.
[7] Y. A. Pijnenburg, J. L. Mulder, J. C. Van Swieten, B. M. Uitdehaag, M. Stevens, P. Scheltens, C. Jonker,
Diagnostic accuracy of consensus diagnostic criteria for frontotemporal dementia in a memory
clinic population, Dementia and geriatric cognitive disorders 25 (2008) 157–164.
[8] L. M. Bloudek, D. E. Spackman, M. Blankenburg, S. D. Sullivan, Review and meta-analysis of
biomarkers and diagnostic imaging in alzheimer’s disease, Journal of Alzheimer’s Disease 26 (2011)
627–645.
[9] K. A. Johnson, S. Minoshima, N. I. Bohnen, K. J. Donohoe, N. L. Foster, P. Herscovitch, J. H. Karlawish,
C. C. Rowe, M. C. Carrillo, D. M. Hartley, et al., Appropriate use criteria for amyloid pet: a report
of the amyloid imaging task force, the society of nuclear medicine and molecular imaging, and the
alzheimer’s association, Alzheimer’s & Dementia 9 (2013) E1–E16.
[10] S. Klöppel, C. M. Stonnington, C. Chu, B. Draganski, R. I. Scahill, J. D. Rohrer, N. C. Fox, C. R. Jack Jr,
J. Ashburner, R. S. Frackowiak, Automatic classification of mr scans in alzheimer’s disease, Brain
131 (2008) 681–689.
[11] R. Cuingnet, E. Gerardin, J. Tessieras, G. Auzias, S. Lehéricy, M.-O. Habert, M. Chupin, H. Benali,
O. Colliot, A. D. N. Initiative, et al., Automatic classification of patients with alzheimer’s disease
from structural mri: a comparison of ten methods using the adni database, neuroimage 56 (2011)
766–781.
[12] C. Salvatore, A. Cerasa, P. Battista, M. C. Gilardi, A. Quattrone, I. Castiglioni, A. D. N. Initiative,
Magnetic resonance imaging biomarkers for the early diagnosis of alzheimer’s disease: a machine
learning approach, Frontiers in neuroscience 9 (2015) 307.
[13] S. Rathore, M. Habes, M. A. Iftikhar, A. Shacklett, C. Davatzikos, A review on neuroimaging-based
classification studies and associated feature extraction methods for alzheimer’s disease and its
prodromal stages, NeuroImage 155 (2017) 530–548.
[14] K. Fukushima, Neocognitron: A self-organizing neural network model for a mechanism of pattern
recognition unaffected by shift in position, Biological cybernetics 36 (1980) 193–202.
[15] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition,
Proceedings of the IEEE 86 (1998) 2278–2324.
[16] A. Yakkundi, Alzheimer’s disease dataset, Mendeley Data, Version 1, 2023. doi:10.17632/ch87yswbz4.1.
[17] G. Rauterberg, Quantitative test metrics to measure the quality of user interfaces, in: 4th Annual
conference software testing analysis and review-EuroSTAR 96, Amsterdam, 2-6 December 1996,
EuroSTAR Secretariat, 1996, pp. TQ2P2–1.
A. Online Resources
The MEDIGUI-ConvNet code is available at https://github.com/lucazammataro/MEDIGUI-ConvNet.git
and is released under the GNU General Public License v3.0.