<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Advancing e-health with AI: Insights from our research experience in neuroimaging, acoustic signals, and vital parameter monitoring</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Gabriella</forename><surname>Casalino</surname></persName>
							<email>gabriella.casalino@uniba.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giovanna</forename><surname>Castellano</surname></persName>
							<email>giovanna.castellano@uniba.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Gennaro</forename><surname>Vessio</surname></persName>
							<email>gennaro.vessio@uniba.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Gianluca</forename><surname>Zaza</surname></persName>
							<email>gianluca.zaza@uniba.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Advancing e-health with AI: Insights from our research experience in neuroimaging, acoustic signals, and vital parameter monitoring</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">63A58BBDA7B4327C394236DC01331192</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:57+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>artificial intelligence</term>
					<term>explainability</term>
					<term>e-health</term>
					<term>neuroimaging</term>
					<term>acoustic signals</term>
					<term>vital parameters</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This contribution briefly describes the research being carried out in the Computational Intelligence Laboratory of the Department of Computer Science, University of Bari Aldo Moro, in AI-based e-health. Our research encompasses a wide array of methodologies and applications aimed at leveraging the capability of AI to empower the diagnosis, monitoring, and treatment of various health conditions. Through multifaceted research that covers neuroimaging analysis, acoustic signal processing, and vital parameter monitoring, our goal is to shed light on the potential of AI in enhancing healthcare services.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Integrating Artificial Intelligence (AI) into healthcare is transforming how we diagnose, monitor, and care for patients. At the Computational Intelligence Laboratory (CILab) of the Department of Computer Science, University of Bari Aldo Moro, we are contributing to this transformation by applying AI to neuroimaging, acoustic signal analysis, and vital signs monitoring. Our work aims to address current healthcare challenges by developing innovative and practical AI solutions.</p><p>This paper presents an overview of our research efforts and achievements in these areas. By sharing our findings and methodologies, we aim to highlight AI's significant impact on improving healthcare services and patient outcomes. Our goal is to showcase our work and encourage ongoing innovation and dialogue in the rapidly evolving field of AI in healthcare.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Neuroimaging</head><p>Neuroimaging is a pivotal area within our research portfolio, where the application of AI-driven algorithms plays a crucial role in enabling the early and precise diagnosis of neurological disorders.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Alzheimer's disease detection</head><p>Dementia, with Alzheimer's disease (AD) being its most common form, poses a significant global health challenge, especially among the aging population. It currently affects around 55 million people worldwide, predominantly in low- and middle-income countries, and this number is expected to increase as the global population ages. Unfortunately, effective cures remain elusive, with available treatments focusing more on symptom management than addressing the underlying causes. This underscores the critical need for early and accurate diagnosis to improve patient care. AI, mainly through advanced machine and deep learning techniques, is increasingly recognized for its potential to revolutionize the diagnosis of dementia, including AD. Neuroimaging techniques, such as Magnetic Resonance Imaging (MRI) and amyloid Positron Emission Tomography (PET) scans, have been identified as promising tools for early detection. MRI provides detailed images of the brain, enabling the identification of brain atrophy patterns characteristic of AD, while amyloid PET scans offer insights into the pathophysiology of the disease by detecting amyloid plaques in the brain, a hallmark of AD.</p><p>Our research has advanced the application of Convolutional Neural Network (CNN) models for the automated diagnosis of AD, leveraging the strengths of both MRI and PET scans <ref type="bibr" target="#b0">[1]</ref>. We examined these neuroimaging techniques' efficacy in uni-modal and multi-modal setups, underscoring the advantage of integrating data from diverse modalities to refine diagnostic precision.
Additionally, we incorporated an explainable AI method to address the demand for transparency in medical AI applications, offering insights into the AI-driven diagnostic process and contributing to a deeper understanding of AD's underlying pathology. Specifically, our investigation has yielded several key insights, highlighting the potential of multi-modal imaging strategies. By analyzing the classification results from various model configurations on the OASIS-3 benchmark dataset, we discovered that models utilizing 3D inputs consistently outperformed those using 2D inputs, likely due to the richer spatial information available in 3D scans. Moreover, regardless of whether in 2D or 3D, MRI scans significantly surpassed amyloid PET scans in diagnostic performance, emphasizing MRI's inherent value in AD detection within our study. However, multi-modal strategies, particularly our "fusion" model (shown in Fig. <ref type="figure" target="#fig_0">1</ref>), demonstrated a clear advantage, achieving up to 95% accuracy. This underlines the complementary nature of MRI and PET scans in AD diagnosis.</p><p>Our adoption of the Grad-CAM technique further allowed us to pinpoint the brain regions most relevant for classification, offering valuable insights into the neuroanatomical underpinnings of AD. This supports the validity of our models and enhances our understanding of AD's neuropathology.</p><p>In another exploratory study <ref type="bibr" target="#b1">[2]</ref>, we delved into Diffusion Tensor Imaging (DTI), a sophisticated MRI technique that assesses the integrity of white matter fiber tracts in the brain. We explored fractional anisotropy (FA), a DTI metric that measures the uniformity of water diffusion directionality, which exhibits notable changes in AD patients, suggesting its utility as a diagnostic indicator.</p><p>We introduced a dual-stage deep learning approach, combining unsupervised and supervised techniques.
Initially, a 3D convolutional autoencoder was employed to extract low-dimensional representations from FA images in an unsupervised manner. These representations were then used to train a supervised 3D CNN for AD detection. This innovative strategy demonstrated encouraging outcomes on the OASIS-3 dataset and lessened the dependence on extensively annotated datasets, setting the stage for more autonomous and quantitative AD detection in clinical practice. Our future endeavors will aim to assess this method across broader and more varied datasets to affirm its diagnostic validity further.</p></div>
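The dual-stage idea described above, unsupervised representation learning followed by supervised classification, can be sketched in a few lines. The snippet below is purely illustrative: PCA (via SVD) stands in for the 3D convolutional autoencoder, a nearest-centroid rule stands in for the supervised 3D CNN, and all data, shapes, and labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for FA volumes: 40 subjects, each flattened to 500 voxels.
X = rng.normal(size=(40, 500))
y = np.array([0, 1] * 20)              # 0 = control, 1 = AD (synthetic labels)
X[y == 1] += 0.5                       # inject a weak class signal

# Stage 1 (unsupervised): learn a low-dimensional representation.
# PCA via SVD replaces the 3D convolutional autoencoder of the paper.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T                     # 10-dimensional latent codes

# Stage 2 (supervised): train a classifier on the latent codes.
# A nearest-centroid rule stands in for the supervised 3D CNN.
centroids = np.stack([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
train_acc = (pred == y).mean()
```

Because the representation is learned without labels, the supervised stage only needs annotations for the (typically small) subset of subjects used to fit the classifier, which is the practical benefit noted above.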
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Brain tumor segmentation</head><p>Brain tumor segmentation from MRI scans is crucial for accurate diagnosis, treatment planning, and patient monitoring. Recent strides in deep learning, particularly with CNNs, have significantly advanced the automation and precision of tumor segmentation. Nevertheless, these models' "black-box" nature raises challenges in explainability, a vital aspect of clinician trust and decision-making.</p><p>Graph Neural Networks (GNNs) have recently gained attention as a novel approach to medical image analysis, offering an alternative that might bridge the gap between accuracy and explainability. By conceptualizing brain images as graphs, where nodes represent voxels or regions of interest and edges depict spatial relationships, GNNs use relational dependencies to achieve fine segmentation. This capability to capture both local and global contexts through message passing between nodes positions GNNs as a promising tool for achieving high precision in brain tumor segmentation and providing a pathway to model understandability.</p><p>In a recent study <ref type="bibr" target="#b2">[3]</ref>, we analyzed GNN models for segmenting brain tumors, focusing on their explainability. Using GNNExplainer, we aimed to improve the transparency of GNN models, making their decision-making processes accessible and understandable to clinicians. Our exploration highlighted the effectiveness of GNNs in medical imaging and laid the foundation for future research, suggesting potential synergies between GNNs and CNNs, such as integrating GNNs with 3D U-Net architectures, to refine segmentation results further. In addition, collaboration with medical experts to examine critical features identified by GNNExplainer could further solidify the role of GNNs in clinical practice, combining accuracy and explainability in brain tumor management.</p></div>
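To make the message-passing intuition concrete, the toy sketch below performs one round of degree-normalized neighbor averaging on a four-node "voxel" graph. Real GNN layers add learnable weight matrices and nonlinearities, and the graph and features here are invented for illustration only.

```python
import numpy as np

# Tiny voxel graph: 4 nodes, edges encode spatial adjacency (illustrative).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.array([[1.0], [0.0], [0.0], [1.0]])   # one feature per node

# One message-passing step: each node averages features over its
# neighborhood (A_hat = A + I adds a self-loop; D^-1 normalizes by degree).
A_hat = A + np.eye(4)
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
H = D_inv @ A_hat @ X                         # smoothed node features
```

Stacking such steps lets information flow across increasingly distant voxels, which is how a GNN combines the local and global context mentioned above.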
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Acoustic signals</head><p>The analysis of vocal characteristics from speech samples is an effective approach to identifying conditions associated with mental diseases, notably bipolar disorder (BD). Our research focused on extracting acoustic features from patients with BD using a specialized mobile application, developed at the Department of Affective Disorders, Institute of Psychiatry and Neurology in Warsaw, Poland, under the project "Smartphone-based diagnostics of phase changes in the course of bipolar disorder". BD manifests through fluctuating mood states, including euthymia, depression, mixed states, and mania, traditionally diagnosed through regular consultations using standard psychiatric tools like the Hamilton Depression Rating Scale (HDRS) and the Young Mania Rating Scale (YMRS). These instruments allow healthcare providers to detect symptoms and evaluate the intensity of depressive and manic episodes, facilitating precise diagnoses and the development of customized treatment strategies.</p><p>Our research examined several critical dimensions of data related to BD, specifically focusing on the importance of continuous monitoring to track temporal fluctuations. We addressed the challenge of missing labels while also handling the uncertainty in labeling due to the inherent ambiguity and variability in data classification. Moreover, we worked on generating readily understandable explanations of BD state classifications leveraging the availability of multi-layered information.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Monitoring bipolar disorder states</head><p>The specialized application captures acoustic features daily, but patient assessments are less frequent, resulting in a scarcity of labeled data. This gap leaves many acoustic features without clear annotations of the patient's condition.</p><p>We explored semi-supervised learning algorithms to harness the geometric data properties and the predefined knowledge of patient states. These algorithms have shown promise in enhancing classification accuracy, even with limited labeled data <ref type="bibr" target="#b3">[4]</ref>.</p><p>We introduced a novel algorithm, the Dynamic Incremental Semi-Supervised Fuzzy C-Means (DISSFCM), designed to monitor BD states while considering the temporal acquisition of acoustic features. DISSFCM, an extension of the Semi-Supervised Fuzzy C-Means (SSFCM) algorithm, analyzes data chunks sequentially in near real-time, maintaining historical data insights without extensive storage. It adapts to new information, refining the classification through an increased cluster count representing the patient's condition states. This method has proven effective in predicting episodes of health and illness with as little as 25% labeled data <ref type="bibr" target="#b4">[5]</ref>.</p><p>DISSFCM operates on labeled prototypes, summarizing data clusters for each segment. It generates membership matrices, clarifying each data point's cluster association and facilitating outcome explanation. Initially, we applied visual analytics for interpretation <ref type="bibr" target="#b5">[6]</ref>, advancing to natural language explanations, or linguistic summaries, which translate complex data relations into understandable sentences <ref type="bibr" target="#b6">[7]</ref>. For example, we could deduce that "Most calls in the state of hypomania have low loudness compared to the state of euthymia".
This approach segments acoustic features into semantic categories (loudness, pitch, spectrum, and voice quality), guided by psychiatric expertise. Our experiments have demonstrated the practical application of linguistic summaries as informative granules for smartphone-based BD monitoring. They offer clear, insightful linguistic descriptions, making the complex data and sparse psychiatric evaluations comprehensible.</p></div>
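The core of semi-supervised fuzzy clustering can be sketched as follows. This is a deliberately simplified stand-in for DISSFCM: there are no incremental chunks and no cluster splitting, memberships follow the standard fuzzy c-means update, and the sparse labels are crudely pinned rather than blended into the objective as in the actual algorithm. All data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D acoustic feature stream drawn from two latent states.
X = np.concatenate([rng.normal(-2, 0.5, 30), rng.normal(2, 0.5, 30)])[:, None]
labels = np.full(60, -1)          # -1 = unlabeled (no psychiatric assessment)
labels[:5], labels[30:35] = 0, 1  # ~17% labeled, mimicking sparse assessments

m, C = 2.0, 2                     # fuzzifier and number of clusters
V = np.array([[-1.0], [1.0]])     # initial prototypes

for _ in range(20):
    # Fuzzy c-means membership update: u_ic = 1 / sum_j (d_ic/d_ij)^(2/(m-1)).
    d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-9
    p = 2 / (m - 1)
    U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    # Simplistic partial supervision: pin memberships of labeled points.
    for c in range(C):
        U[labels == c] = np.eye(C)[c]
    # Prototype update as the membership-weighted mean.
    W = U ** m
    V = (W.T @ X) / W.sum(axis=0)[:, None]
```

The membership matrix `U` is what makes the outcome explainable: each row quantifies how strongly a daily feature vector belongs to each state prototype, which is exactly the information the linguistic summaries above verbalize.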
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Explaining bipolar disorder states</head><p>We designed a versatile, multi-task neural network to leverage the detailed symptom information captured during patient assessments. This network is trained to generate several outputs, each aligning with the various levels of labels obtained from intermediate assessment stages. These intermediate outputs fulfill dual roles: they enhance the model's overall predictive accuracy and provide insights into classifying mid-level labels. Our architecture, designed to handle data with a hierarchical class structure, is a crucial component of PLENARY (exPlaining bLack-box modEls in Natural lAnguage thRough fuzzY linguistic summaries) <ref type="bibr" target="#b7">[8]</ref>. PLENARY aims to categorize tabular data across different class levels and render the model's explanations into natural language, employing fuzzy linguistic summaries for clarity.</p><p>In collaboration with a neuropsychiatrist, we identified ten critical symptoms as intermediate labels, including anxiety, decreased activity, mood changes, disorganization, and sleep disorders, among others. The model's outcomes and explanations focus on the patient's state and these specific symptoms. For instance, we found that "Among records that contribute positively to predicting mania, most of them have spectral-related features at low level" and "Among records that contribute against predicting decreased activity, most of them have quality-related features at low level".</p><p>Through rigorous experimental evaluation, we have demonstrated that augmenting model explanations with fuzzy linguistic summarization, especially those derived from SHAP analyses, significantly enhances understanding of the model's predictions. This approach effectively combines domain-specific knowledge with technical insight, providing a comprehensive and accessible explanation framework.</p></div>
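A linguistic summary of the kind quoted above is typically scored with a fuzzy quantifier applied to a relative sigma-count. The sketch below illustrates that calculus on synthetic data; the membership functions and the shape of the quantifier "most" are assumptions for illustration, not those used in PLENARY.

```python
import numpy as np

def mu_low(x, lo=0.0, hi=1.0):
    """Membership in 'low': decreasing ramp on [lo, hi] (illustrative shape)."""
    return np.clip((hi - x) / (hi - lo), 0.0, 1.0)

def mu_most(r):
    """Fuzzy quantifier 'most': 0 below 0.3, 1 above 0.8, linear in between."""
    return float(np.clip((r - 0.3) / 0.5, 0.0, 1.0))

rng = np.random.default_rng(2)
loudness = rng.uniform(0, 1, 200)          # normalized acoustic feature
is_mania = rng.uniform(0, 1, 200) < 0.5    # filter: records from mania episodes

# Truth of "Most records in mania have low loudness":
# relative sigma-count of 'low loudness' among the filtered records,
# passed through the quantifier.
mu_f = is_mania.astype(float)
r = np.minimum(mu_f, mu_low(loudness)).sum() / mu_f.sum()
truth = mu_most(r)
```

Only summaries whose truth degree exceeds a chosen threshold are reported, which is how a large membership matrix is compressed into a handful of readable sentences.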
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Vital parameters</head><p>Our research into vital parameter monitoring leverages AI to anticipate the onset of serious diseases, equipping patients and physicians with critical insights for preemptive healthcare management. Herein, we detail our efforts in remote vital parameter estimation and in creating eXplainable AI (XAI) models to support medical diagnosis. These models use vital signs data to aid medical professionals in the early detection of cardiovascular diseases and stress-related conditions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Contact-less monitoring of vital parameters</head><p>Our endeavors in vital parameter monitoring have been concentrated on heart rate (HR), breathing rate (BR), blood oxygen saturation (SpO2), and systolic and diastolic blood pressure, key indicators for cardiovascular health. Traditional methods, like ECG, require direct skin contact, often necessitating cumbersome wearable devices. To overcome the limitations and discomfort of contact-based monitoring, advancements have been made toward developing photoplethysmography (PPG) techniques that operate using camera-based systems. However, these can be expensive and not user-friendly for daily home use. Addressing these challenges, our lab has developed an innovative, cost-effective approach for monitoring cardiovascular parameters that seamlessly integrates into everyday living environments <ref type="bibr" target="#b8">[9]</ref>. This system employs a non-invasive, contactless device consisting of a transparent mirror equipped with a camera that identifies the user's face and uses remote photoplethysmography (rPPG) to analyze video frames. The prototype of the smart mirror is shown in Fig. <ref type="figure" target="#fig_1">2</ref>(a). This method calculates vital parameters like blood oxygen saturation, heart rate, and breathing rate and includes a novel technique for automatic lip color detection through clustering-based color quantization. With this new method, we aim to relieve individuals from the discomfort of traditional contact-based monitoring, making it a more convenient and user-friendly option for daily home use.</p><p>Our methodological pipeline (shown in Fig. <ref type="figure" target="#fig_1">2</ref>(b)) initiates with the detection of the subject's face, focusing specifically on the forehead as the region of interest (ROI) for signal extraction. The rPPG signal is then processed using Independent Component Analysis (ICA) and Fast Fourier Transform (FFT) to estimate HR and BR.
At the same time, SpO2 measurements are derived by applying the Beer-Lambert law. For lip color detection, the system identifies the lip ROI and determines the dominant color using clustering methods. Our contactless approach has not only demonstrated measurement accuracy within acceptable ranges for both stationary and minimally moving subjects, but it has also shown superior performance compared to traditional contact devices, instilling confidence in its reliability and accuracy.</p><p>Further enhancements included the addition of new ROIs and a face-tracking feature to accommodate head movements, improving usability on mobile devices <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>. This comprehensive framework is adaptable to any camera-equipped device, leading to the creation of a smartphone application that facilitates easy, widespread monitoring of vital health parameters <ref type="bibr" target="#b11">[12]</ref>.</p></div>
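The FFT stage of such a pipeline can be illustrated on a synthetic trace: locate the dominant spectral peak within a physiologically plausible cardiac band and convert it to beats per minute. The ROI extraction and ICA steps are omitted, and the signal parameters below are arbitrary, not those of our system.

```python
import numpy as np

fps = 30.0                          # assumed camera frame rate
t = np.arange(0, 20, 1 / fps)       # 20 s of frames
hr_true = 72.0                      # beats per minute (synthetic ground truth)

# Synthetic rPPG trace: a cardiac pulse at 1.2 Hz buried in noise.
rng = np.random.default_rng(3)
signal = np.sin(2 * np.pi * (hr_true / 60.0) * t) + 0.5 * rng.normal(size=t.size)

# FFT-based estimate: pick the spectral peak within a plausible HR band.
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
band = (freqs >= 0.7) & (freqs <= 4.0)       # 42-240 bpm
hr_est = 60.0 * freqs[band][np.argmax(power[band])]
```

Breathing rate is estimated the same way over a lower frequency band (roughly 0.1 to 0.5 Hz), which is why a single spectral analysis of the rPPG signal can yield both parameters.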
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Cardiovascular risk assessment</head><p>While traditional machine learning algorithms have significantly aided physicians in diagnosing symptoms early to prevent disease progression, their often opaque nature presents a challenge. These "black box" models deliver accurate predictions but lack an intuitive explanation for their results. This makes them less practical in fields where end-users are non-technical professionals, notably in healthcare.</p><p>Our work concentrates on advancing XAI models, mainly aimed at supporting medical decisions in cardiovascular disease (CVD) assessment. CVDs are a primary global health concern, responsible for approximately 17.9 million deaths annually,<ref type="foot" target="#foot_0">1</ref> spanning conditions such as coronary heart disease and stroke. Given the multifactorial causes of CVDs, including lifestyle and genetic predispositions, early intervention and continuous monitoring of vital signs are crucial to prevention.</p><p>In our efforts, we have developed a fuzzy rule-based system to assist clinicians in evaluating cardiovascular risks with greater interpretability <ref type="bibr" target="#b12">[13]</ref>. This system utilizes IF-THEN rules, a natural language format that simplifies understanding and application, incorporating patient data like heart rate and blood oxygen saturation to estimate CVD risk. Developed in collaboration with medical experts, this model prioritizes accuracy while ensuring user-friendly interpretability, offering a slight trade-off in precision for much greater transparency.</p><p>To bridge the gap between data-driven precision and expert intuition, we explored neuro-fuzzy systems, which automate the generation of fuzzy rule-based models from data, streamlining the otherwise manual and labor-intensive process of rule formation.
Our research demonstrates that models created through neuro-fuzzy systems maintain accuracy and significantly enhance interpretability, outperforming manually designed models in cardiovascular risk prediction <ref type="bibr" target="#b13">[14]</ref>.</p><p>Expanding beyond cardiovascular health, we have applied neuro-fuzzy systems to diagnose hypertension and stress, focusing on minimizing complexity for clearer understanding. We have balanced accuracy and interpretability by employing feature selection to refine the number of relevant indicators and fuzzy rules, making these models highly practical for real-world medical applications <ref type="bibr" target="#b14">[15]</ref>.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Diagram illustrating the proposed multi-modal CNN architecture designed for AD detection, which simultaneously processes 3D MRI and PET scan inputs for enhanced diagnostic accuracy.</figDesc><graphic coords="2,130.96,84.20,333.33,189.18" type="bitmap" /></figure>
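A minimal sketch of a fuzzy IF-THEN rule base of this kind is shown below. The membership functions, thresholds, and rules are invented for illustration and do not reproduce the expert-designed rule base of <ref type="bibr" target="#b12">[13]</ref>.

```python
import numpy as np

# Ramp-shaped memberships for vital signs (illustrative shapes and
# thresholds only; a clinical rule base is built with medical experts).
def hr_high(hr):   return float(np.clip((hr - 90) / 30, 0, 1))
def hr_normal(hr): return float(np.clip(1 - abs(hr - 70) / 20, 0, 1))
def spo2_low(s):   return float(np.clip((95 - s) / 5, 0, 1))
def spo2_ok(s):    return float(np.clip((s - 92) / 4, 0, 1))

def cvd_risk(hr, spo2):
    """Tiny Mamdani-style rule base, defuzzified as a weighted average."""
    rules = [
        (min(hr_high(hr), spo2_low(spo2)), 0.9),   # IF HR high AND SpO2 low THEN risk high
        (min(hr_normal(hr), spo2_ok(spo2)), 0.1),  # IF HR normal AND SpO2 ok THEN risk low
        (max(hr_high(hr), spo2_low(spo2)), 0.5),   # IF HR high OR SpO2 low THEN risk medium
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den
```

Each rule reads as a natural-language sentence, and the firing strengths show a clinician exactly which conditions drove a given risk score; a neuro-fuzzy system learns such memberships and rules from data instead of requiring them to be hand-written.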
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: (a) Prototype of the smart mirror developed in our lab and (b) methodological pipelines for vital sign measurement.</figDesc><graphic coords="5,255.97,90.24,250.01,99.07" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://www.who.int/health-topics/cardiovascular-diseases</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>G.C. and G.Z. acknowledge the support from the FAIR -Future AI Research (PE00000013) project, Spoke 6 -Symbiotic AI (CUP H97G22000210007), under the NRRP MUR program funded by NextGenerationEU. Ga.C. acknowledges funding from the European Union PON project Ricerca e Innovazione 2014-2020, D.M. 1062/2021. All authors are members of the INdAM GNCS research group. Ga.C., G.C., and G.V. are members of the CITEL -Centro Interdipartimentale della ricerca in Telemedicina, University of Bari Aldo Moro.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Automated detection of Alzheimer&apos;s disease: a multi-modal approach with 3D MRI and amyloid PET</title>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Esposito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Lella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Montanaro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Vessio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Scientific Reports</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page">5210</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Combining Unsupervised and Supervised Deep Learning for Alzheimer&apos;s Disease Detection by Fractional Anisotropy Imaging</title>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Lella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Longo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Placidi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Polsinelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Vessio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 36th International Symposium on Computer-Based Medical Systems (CBMS), IEEE</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="511" to="516" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">From Voxels to Insights: Exploring the Effectiveness and Transparency of Graph Neural Networks in Brain Tumor Segmentation</title>
		<author>
			<persName><forename type="first">D</forename><surname>Amendola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Basile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Vessio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zaza</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Joint Conference on Neural Networks (IJCNN 2024)</title>
				<imprint>
			<publisher>IEEE</publisher>
		</imprint>
	</monogr>
	<note>to appear</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Semi-Supervised vs. Supervised Learning for Mental Health Monitoring: A Case Study on Bipolar Disorder</title>
		<author>
			<persName><forename type="first">G</forename><surname>Casalino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Hryniewicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Leite</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Opara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Radziszewska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kaczmarek-Majer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Applied Mathematics and Computer Science</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="419" to="428" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Dynamic incremental semi-supervised fuzzy clustering for bipolar disorder episode prediction</title>
		<author>
			<persName><forename type="first">G</forename><surname>Casalino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Galetta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kaczmarek-Majer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Discovery Science</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="79" to="93" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Intelligent analysis of data streams about phone calls for bipolar disorder monitoring</title>
		<author>
			<persName><forename type="first">G</forename><surname>Casalino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kaczmarek-Majer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Hryniewicz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Explaining smartphone-based acoustic data in bipolar disorder: Semi-supervised fuzzy clustering and relative linguistic summaries</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kaczmarek-Majer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Casalino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Hryniewicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Dominiak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Sciences</title>
		<imprint>
			<biblScope unit="volume">588</biblScope>
			<biblScope unit="page" from="174" to="195" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">PLENARY: Explaining black-box models in natural language through fuzzy linguistic summaries</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kaczmarek-Majer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Casalino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Dominiak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Hryniewicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kamińska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Vessio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Díaz-Rodríguez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Sciences</title>
		<imprint>
			<biblScope unit="volume">614</biblScope>
			<biblScope unit="page" from="374" to="399" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Contact-less real-time monitoring of cardiovascular risk using video imaging and fuzzy inference rules</title>
		<author>
			<persName><forename type="first">G</forename><surname>Casalino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Pasquadibisceglie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zaza</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page">9</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A mHealth solution for contact-less self-monitoring of blood oxygen saturation</title>
		<author>
			<persName><forename type="first">G</forename><surname>Casalino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zaza</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Symposium on Computers and Communications (ISCC)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="7" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Evaluating the robustness of a contact-less mHealth solution for personal and remote monitoring of blood oxygen saturation</title>
		<author>
			<persName><forename type="first">G</forename><surname>Casalino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zaza</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Ambient Intelligence and Humanized Computing</title>
		<imprint>
			<biblScope unit="page" from="1" to="10" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A mobile app for contactless measurement of vital signs through remote photoplethysmography</title>
		<author>
			<persName><forename type="first">G</forename><surname>Casalino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nisio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Pasquadibisceglie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zaza</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Systems, Man, and Cybernetics (SMC)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="2675" to="2680" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">A fuzzy rule-based decision support system for cardiovascular risk assessment</title>
		<author>
			<persName><forename type="first">G</forename><surname>Casalino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Castiello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Pasquadibisceglie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zaza</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Fuzzy Logic and Applications: WILF 2018</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="97" to="108" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Balancing accuracy and interpretability through neuro-fuzzy models for cardiovascular risk assessment</title>
		<author>
			<persName><forename type="first">G</forename><surname>Casalino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Kaymak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zaza</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Symposium Series on Computational Intelligence (SSCI)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Interpretable Neuro-Fuzzy Models for Stress Prediction</title>
		<author>
			<persName><forename type="first">G</forename><surname>Casalino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Castellano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zaza</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conference of the European Society for Fuzzy Logic and Technology</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="630" to="641" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
