=Paper=
{{Paper
|id=Vol-3762/576
|storemode=property
|title=Comparison of Machine Learning approaches for Stress Detection from Wearable Sensors Data
|pdfUrl=https://ceur-ws.org/Vol-3762/576.pdf
|volume=Vol-3762
|authors=Michela Quadrini,Denise Falcone,Gianluca Gerard
|dblpUrl=https://dblp.org/rec/conf/ital-ia/QuadriniFG24
}}
==Comparison of Machine Learning approaches for Stress Detection from Wearable Sensors Data==
Michela Quadrini¹,*,†, Denise Falcone¹ and Gianluca Gerard²

¹ School of Science and Technology, University of Camerino, Via Madonna delle Carceri 9, Camerino, 62032, Italy
² Sorint.Tek, Zanica 17, Grassobbio (BG), 24050, Italy
Abstract
Stress is a prevalent and growing phenomenon in the modern world, potentially leading to significant repercussions on both physical and mental health. The analysis of physiological signals collected from wearable sensors has emerged as a promising approach to predicting and managing stress. Methods based on machine learning techniques have been defined in the literature and have achieved promising results using handcrafted features extracted from the signals. However, there is no consensus on the list of features, while deep learning approaches that overcome this problem require significant computational power and a large amount of data. In this paper, we present a comprehensive view of the most common representative machine learning algorithms applied to the stress detection domain, giving a reference point for both academia and industry professionals in this application field. This study considers fragments of signals without extracting any features and uses a public dataset, WESAD, that contains high-resolution physiological signals, including blood volume pulse, electrocardiogram and electromyogram. The data, collected from 15 subjects during a lab study, are heterogeneous and characterized by different sampling frequencies and device-dependent noise. After preprocessing, we assess the performance of ten machine learning algorithms belonging to four model families (tree, ensemble, linear and neighbours) on WESAD, framing the problem as both binary (stress/no-stress) and multiclass (baseline, stress, and amusement) classification. Our results, evaluated in terms of classical metrics, show that Random Forest outperforms the others in both the binary and multi-class settings.
Keywords
Physiological signals, Binary and multi-class classification, Wearable sensor data, Time series
Ital-IA 2024: 4th National Conference on Artificial Intelligence, organized by CINI, May 29-30, 2024, Naples, Italy
* Corresponding author.
† These authors contributed equally.
michela.quadrini@unicam.it (M. Quadrini); denise.facone@studenti.unicam.it (D. Falcone); michela.quadrini@unicam.it (G. Gerard)
ORCID: 0000-0003-0539-0290 (M. Quadrini); 0000-0003-0539-0290 (G. Gerard)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1. Introduction

Stress is a non-specific body reaction to any demand placed upon it. Its effects influence overall behaviour, well-being, and potential personal and professional success [1]. Chronic stress may give rise to significant physical and mental health issues, such as cancer, cardiovascular disease, depression, and diabetes. It is an increasingly prevalent and pervasive phenomenon in the modern world: more than 50% of all work-related ill health cases in 2020/21 were due to stress [2]. Assessments based on psychologically designed questions, such as the Perceived Stress Scale (PSS) [3], are frequently used to detect stress. However, these methods may be time-consuming, psychologically invasive and lack reliability. Therefore, the definition of non-invasive approaches for rapid and accurate stress detection influences the quality and wellness of people's lives: managing stress before it causes health issues is fundamental. The literature has demonstrated that physiological signals, a response of the Autonomic Nervous System, allow us to detect and monitor stress. Hovsepian et al. [4] pioneered stress detection from physiological signals, facing it as a binary classification problem, whereas Gjoreski et al. [5] aimed at distinguishing different levels of stress (no stress versus low stress versus high stress). Such biosignals can be captured non-invasively by wearable devices, such as smartphones and smartwatches, commonly used among people. Such devices can monitor physiological parameters such as Blood Volume Pulse (BVP), Electrodermal Activity (EDA), temperature (TEMP), and heart rate (HR). In the scenario of stress detection, machine learning and deep learning methodologies achieve promising results by analyzing these data. These approaches include support vector machines, random forests and k-nearest neighbours, and they use handcrafted features extracted from the pre-processed signal in order to reduce data noise [6]. Moreover, no consensus on the list of features to extract from physiological data has been reached [7]. To solve this problem, advanced deep learning approaches have been applied, since they have the ability to automatically learn patterns and thus extract features. Nevertheless, these require significant computational power and a large amount of data. The choice of the appropriate machine learning algorithm for a particular task is not trivial: no single classifier works best across all possible scenarios, as stated by the no free lunch theorem [8]. To the best of our knowledge, no scientific work compares machine learning methods for stress detection on the same datasets without feature extraction or dimensionality reduction.
In this paper, we present a comprehensive view of the most common representative machine learning algorithms applied to the stress detection domain, giving a reference point for both academia and industry professionals in this application field. In the analysis, we consider fragments of signals without extracting any features, due to the nature of the problem: stress determines nonspecific human responses, and feature selection depends on the subject and cannot be generalized. Such signal fragments contain samples of all the measured physiological parameters. After appropriate resampling and noise reduction, these values are linearized and constitute the input of the considered ML models, following the neural network approach. This study uses the WESAD [9] dataset, which is public and stores 12 physiological signals, such as blood volume pulse and electrocardiogram, collected from 15 subjects during a lab study. After preprocessing (consisting of resampling, outlier removal, and normalization), we build a dataset of samples that are signal fragments obtained using the sliding window approach. Over these entries, we evaluate the most common and popular methods, widely used in various application areas. We consider eight machine learning algorithms, i.e., Decision Tree (DT), Random Forest (RF), AdaBoost (AB), Extra Trees (ExT), Passive Aggressive Classifier (PA), Logistic Regression (LR), K-Nearest Neighbors (KNN) and Nearest Centroid (NC). We face both the binary (stress/no-stress) and multi-class (baseline, stress, and amusement) classification problems. The results, evaluated in terms of classical metrics, show that RF outperforms the others in both the binary and multi-class settings. We also compare the results obtained with the ones in the literature [9].

The paper is organized as follows. Section 2 describes the materials and the methods used in this study. The pipeline of the approach and the main results are described in Section 3. The paper ends with some conclusions and future work in Section 4.

2. MATERIALS AND METHODS

This work proposes a comparative evaluation of ML approaches to understand the best approach for real-time analytics. For this study, we consider the WESAD dataset.

2.1. Dataset

WESAD is a public dataset designed for stress and affective detection. It is a high-quality multimodal dataset storing physiological and movement data of 15 subjects (12 male and 3 female) during a controlled lab experiment [9]. None of the participants were heavy smokers or suffered from chronic mental or cardiovascular disorders. Furthermore, the female subjects were not pregnant. The dataset includes blood volume pulse (BVP), electrocardiogram (ECG), electrodermal activity (EDA), electromyogram (EMG), respiration (RESP), body temperature (TEMP), and three-axis acceleration (ACC). ECG, EDA, EMG, RESP, TEMP and ACC were recorded by a chest-worn device (RespiBAN) and sampled at 700 Hz, whereas a wrist-worn device (Empatica E4) recorded BVP (sampled at 64 Hz), EDA (at 4 Hz), TEMP (at 4 Hz), and ACC (at 32 Hz). The dataset comprises 14 time series, each spanning approximately 2 hours of total experimental duration. The experiments were conducted to capture three distinct affective states: baseline, stress, and amusement, with durations of 20 minutes, 392 seconds and 7 minutes, respectively. They also included two meditation periods. To capture the data during the experiment, a particular protocol, depicted in Figure 1, was used. It consists of two different versions, in which the amusement and stress conditions are interchanged between subjects to avoid order effects.

Figure 1: The two protocol versions used to collect data.

2.2. Preprocessing

The varied sampling frequencies in WESAD, as detailed in Section 2.1, necessitated a harmonization step. We resampled all data to match the 700 Hz frequency of the RespiBAN. Therefore, the resampling is applied only to the time series recorded by the Empatica E4, using the Fourier method as an upsampling technique.
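As a concrete illustration, this Fourier-based upsampling can be sketched with SciPy's FFT-based resample function (the method named in Section 3.1); the function name, signal names and durations below are illustrative assumptions, not the exact code used in this study.

```python
# A minimal sketch of the Fourier-based upsampling step, assuming each
# signal is a 1-D NumPy array; names and durations are illustrative.
import numpy as np
from scipy.signal import resample

TARGET_HZ = 700  # sampling frequency of the chest-worn RespiBAN

def resample_to_target(signal: np.ndarray, original_hz: int) -> np.ndarray:
    """Resample a signal to TARGET_HZ using SciPy's FFT-based method."""
    duration_s = len(signal) / original_hz
    return resample(signal, int(round(duration_s * TARGET_HZ)))

# Example: two minutes of a BVP signal recorded at 64 Hz by the Empatica E4
bvp_64hz = np.random.randn(64 * 120)
bvp_700hz = resample_to_target(bvp_64hz, 64)  # now aligned with the chest signals
```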
After the resampling, we remove the outliers caused by occasional anomalous peaks in some signals, which may be attributed to instrumental errors or measurement noise. We removed the anomalies from each time series by using a Hampel filter, discussed in [10]. Such a filter uses 1-minute sliding windows as input and calculates the mean (μ) and standard deviation (σ) of the values within the corresponding interval. Observations farther than a threshold of 3σ from the mean within the respective window are classified as outliers (following Pearson's rule) and are substituted with the nearest chronological value. This strategy ensures that outlier substitution does not introduce significant high-frequency variations.
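A simplified sketch of this windowed outlier replacement, built on the pandas rolling primitives mentioned in Section 3.1, is shown below; the function name and the nearest-value interpolation strategy are assumptions for illustration.

```python
# A simplified Hampel-style filter using pandas rolling windows; assumes a
# Series indexed by timestamps (e.g., at 700 Hz). Names are illustrative.
import pandas as pd

def hampel_filter(series: pd.Series, window: str = "1min",
                  n_sigmas: float = 3.0) -> pd.Series:
    rolling = series.rolling(window)
    mu, sigma = rolling.mean(), rolling.std()
    # Flag points farther than n_sigmas standard deviations from the window mean
    outliers = (series - mu).abs() > n_sigmas * sigma
    # Mask outliers and substitute them with the nearest chronological value
    return series.mask(outliers).interpolate(method="nearest").ffill().bfill()
```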
After outlier removal, we normalize all signals to the interval [−1, 1] to treat all inputs equally. Let S = {x₁, x₂, . . . , xₙ} be the considered time series with n components, where each component corresponds to a biophysical signal. Each component is rescaled to the interval [−1, 1] by applying mean normalization:

    x̂ᵢ = [(xᵢ − max(S)) + (xᵢ − min(S))] / [max(S) − min(S)]

where max(S) and min(S) are the maximum and minimum values of each component of S, respectively. Therefore, the input is the scaled time series Ŝ = {x̂₁, x̂₂, . . . , x̂ₙ}. This rescaling is algebraically equivalent to min-max scaling onto [−1, 1].

2.3. Dataset Entry

After the data preprocessing phase, we create two datasets: one for binary classification and the other for multiclass classification. All entries are obtained by applying the sliding window technique to the preprocessed signals. Specifically, the entries consist of time series fragments characterized by a single emotional state (or label), obtained with a window of 60 seconds and a stride of 30 seconds, according to the study in [11]. To create the multiclass dataset, we consider the parts of the time series associated with Stress, Baseline and Amusement, as described in Section 2.1. For the binary classification, both the Baseline and Amusement states were aggregated under a single "non-stress" label. The label distributions of the two datasets are shown in Fig. 2.

Figure 2: Label distributions of the datasets created for multiclass and binary classification.
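The segmentation into labelled fragments can be sketched as follows; the array shapes, the label handling and the single-label purity check are illustrative assumptions based on the description above (60 s windows, 30 s stride, one emotional state per fragment).

```python
# A minimal sketch of the sliding-window segmentation, assuming `signals` is
# a (num_samples, num_channels) array at 700 Hz and `labels` holds one state
# per sample; names and the purity check are illustrative.
import numpy as np

HZ = 700
WINDOW = 60 * HZ   # 60-second fragments
STRIDE = 30 * HZ   # 30-second stride

def make_entries(signals: np.ndarray, labels: np.ndarray):
    X, y = [], []
    for start in range(0, len(signals) - WINDOW + 1, STRIDE):
        window_labels = labels[start:start + WINDOW]
        # Keep only fragments characterized by a single emotional state
        if (window_labels == window_labels[0]).all():
            X.append(signals[start:start + WINDOW].ravel())  # linearized input
            y.append(window_labels[0])
    return np.array(X), np.array(y)
```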
2.4. Machine Learning Algorithms

In this section, we describe some machine learning classification techniques. Interested readers can refer to [12] for a complete treatment of machine learning approaches.

2.4.1. Decision Tree

A DT is a non-parametric supervised learning algorithm for classification and regression in the form of a tree structure [13]. It predicts the value of a target variable by learning simple decision rules inferred from the data features. The method exploits the "divide et impera" approach to learning: it learns from the data a set of if-then-else decision rules. The depth of the tree directly correlates with the complexity of these decision rules. The output is a tree comprising decision nodes and leaf nodes: a decision node has two or more branches, and a leaf node represents a classification or decision. The root of the tree corresponds to the best predictor. Usually, a DT is pruned by combining adjacent nodes to avoid overfitting.

2.4.2. Ensemble models

Ensemble learning is a kind of model that makes predictions by considering and combining a number of different models. By such a combination, an ensemble learner tends to be more flexible and less data-sensitive.

Random Forest. Random Forest is an ensemble model by Breiman [14] for both classification and regression. It constructs a set of decision trees during training and determines the prediction by selecting the most common class among the trees in the classification problem, or by calculating the mean/average prediction of the individual trees in the regression problem. This model combines the bagging approach with the random selection of features to ensure low correlation among the decision trees of the forest. Feature randomness generates a random subset of features, ensuring low correlation among decision trees. In bagging, the decision trees are built from different bootstrap samples, i.e., samples whose entries may appear more than once with respect to the training dataset. Differently from decision trees, which consider all the possible feature splits, random forests only select a subset of those features.

AdaBoost. AdaBoost, Adaptive Boosting, is an ensemble model developed by Freund and Schapire [15]. It employs an iterative approach to improve poor classifiers by learning from their errors. Unlike random forests, which use parallel ensembling, AdaBoost uses sequential ensembling; therefore, it is not possible to parallelize jobs on a multiprocessor machine as with Random Forest. It creates a classifier by combining many poorly performing classifiers to obtain a good classifier of high accuracy. The resulting classifier is accomplished with sequential weight adjustments, individual voting powers and a weighted sum of the final algorithm classifiers.

Extremely Randomized Trees. Extremely Randomized Trees, introduced in [16], are ensemble methods that perform regression or classification. The method creates a large number of unpruned decision trees from the training dataset and uses majority voting among the decision trees for classification. Differently from Random Forest, it uses the entire dataset to train the decision trees. Moreover, it randomly selects the values at which to split a feature and create child nodes, ensuring sufficient differences between individual decision trees.
2.4.3. Linear Models

Logistic Regression. Logistic Regression, introduced in [17], is a supervised learning algorithm mainly used for classification tasks, where the aim is to estimate the probability of an instance belonging to a specific class based on the values of the input features. The method uses the sigmoid function to map any real-valued number into a value between 0 and 1. More specifically, it calculates a weighted sum of the input features, applies the logistic function to this sum, and then classifies the input as belonging to one of the two classes based on a chosen threshold.

Passive Aggressive. The passive-aggressive algorithm, introduced in [18], is one of the few "online learning" algorithms: the input data comes in sequential order, and the model is updated step-by-step. It is useful in applications that receive data as a continuous flow and need to adapt rapidly or autonomously, or when computing resources are limited. The algorithm is based on passive and aggressive updates: if the prediction is correct, the model is kept unchanged (passive), while if the prediction is incorrect, the model is updated (aggressive).

2.4.4. Neighbors-based Models

Supervised neighbors-based models can be applied to classification and regression. The principle behind nearest neighbor methods is to find a predefined number of training samples closest in distance to the new point, and to predict the label from these.

K-Nearest Neighbors. The k-nearest neighbours algorithm, introduced by Fix and Hodges in 1951 [19] and expanded by [20], is a non-parametric supervised learning method for classification and regression. It exploits proximity to make classifications or predictions about the grouping of an individual data point. KNN searches for the k nearest labelled training samples according to a distance metric and attributes to the new observation the label that appears most often among them. In our study, we use the Minkowski distance as the metric. The input consists of the k closest training examples in a dataset, whereas the output depends on the task, classification or regression: it is a class membership or a property value, respectively.

Nearest Centroid. Nearest Centroid, defined in [21], is arguably the simplest classifier. It operates on an intuitive principle: it takes data samples as input and classifies them into the class of training examples whose centroid (the geometric centre of a data distribution) is closest. The algorithm assumes that the centroids are distinct for each class (target label). The training data is divided into clusters based on the class labels, and the centroid is then computed for each cluster; each centroid is simply the mean value of each of the input variables. Such centroids represent the "model": given new examples, the algorithm assigns the label by computing the distance between the given data point and each centroid.
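Since Section 3.1 states that the models are taken from scikit-learn and used with default parameters, the compared classifiers can be instantiated as in the sketch below; the dictionary layout and the choice of the ensemble Extra Trees variant are illustrative assumptions.

```python
# A sketch of the compared classifiers with scikit-learn default parameters;
# grouping and names are illustrative.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              ExtraTreesClassifier)
from sklearn.linear_model import LogisticRegression, PassiveAggressiveClassifier
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid

models = {
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
    "AB": AdaBoostClassifier(),
    "ExT": ExtraTreesClassifier(),      # ensemble variant, as in Section 2.4.2
    "LR": LogisticRegression(),
    "PA": PassiveAggressiveClassifier(),
    "KNN": KNeighborsClassifier(),      # Minkowski distance by default
    "NC": NearestCentroid(),
}
```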
2.5. Metrics

We evaluate the performance and effectiveness of the approaches by using Accuracy (Acc), Precision (P), Recall (R), and F-measure (F1), defined as follows:

    Acc = (TP + TN) / (TP + TN + FP + FN)

    P = TP / (TP + FP)

    R = TP / (TP + FN)

    F1 = 2 · (P · R) / (P + R)

where TP represents the number of true positives, FN denotes the number of false negatives, FP represents the number of false positives, and TN denotes the number of true negatives.
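Equivalently, these metrics can be computed with scikit-learn; the macro averaging shown for the multiclass case is an assumption, since the paper does not state the averaging strategy.

```python
# Computing the four metrics with scikit-learn; 'macro' averaging for the
# multiclass case is an assumption, not stated in the paper.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate(y_true, y_pred, average: str = "macro") -> dict:
    return {
        "Acc": accuracy_score(y_true, y_pred),
        "P": precision_score(y_true, y_pred, average=average),
        "R": recall_score(y_true, y_pred, average=average),
        "F1": f1_score(y_true, y_pred, average=average),
    }
```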
3. RESULTS

This work aims to compare various machine learning algorithms to detect stress from signals captured by wearable devices. The workflow is described in Section 3.1, while the results of the experiments are described in Section 3.2.

3.1. Methodology

Our pipeline, depicted in Fig. 3, is implemented in Python using the scikit-learn package for the machine learning approaches and SciPy for data manipulation and analysis. In particular, some methods of the SciPy library are used in the data preprocessing phase: the resample method permits the resampling of the signals, and in our approach all signals are resampled at 700 Hz. For the outlier removal, the Hampel filter is implemented using the "rolling", "mean", "std", "fillna", "mask", and "interpolate" methods from the pandas library. The "MinMaxScaler" class of the scikit-learn package is used to perform data normalization. The machine learning methods Decision Tree, Random Forest, K-Nearest Neighbors and Logistic Regression are implemented via the tree, ensemble, neighbors and linear_model modules, respectively. The K-Folds method is used to split the dataset into k consecutive folds without shuffling; each fold is then used once as a validation set while the k − 1 remaining folds form the training set. The code used in this manuscript is available from the corresponding author upon reasonable request.

Figure 3: Pipeline used for the method comparison.
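The split described above corresponds to scikit-learn's KFold without shuffling; a minimal sketch, where the data and the number of folds are illustrative choices:

```python
# A minimal sketch of the k-fold split without shuffling; the data and
# n_splits are placeholders.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestClassifier

X = np.random.randn(100, 20)            # placeholder entries
y = np.random.randint(0, 2, size=100)   # placeholder labels

kf = KFold(n_splits=5, shuffle=False)
for train_idx, val_idx in kf.split(X):
    clf = RandomForestClassifier().fit(X[train_idx], y[train_idx])
    print(clf.score(X[val_idx], y[val_idx]))  # fold-wise accuracy
```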
3.2. Experiments

Given the small number of subjects involved in the experiment, we consider Leave-One-Subject-Out Cross-Validation (LOSOCV), i.e., an approach that uses each subject as a "test" set and the remaining 14 subjects as the "training" set. The experiments have been performed considering the decision tree, random forest, K-Nearest Neighbors and logistic regression machine learning methods. For all experiments, we use the default parameters. We evaluate the experiments by considering Accuracy, Precision, Recall and F1-Score as metrics. Table 1 shows the average values, with the standard deviation, of the considered metrics obtained for binary and multiclass classification. Appendix A reports the values for each experiment.
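LOSOCV corresponds to scikit-learn's LeaveOneGroupOut with subject identifiers as groups; a sketch under that assumption, with placeholder data and classifier:

```python
# A sketch of Leave-One-Subject-Out CV via LeaveOneGroupOut; the data,
# subject ids and classifier are placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import RandomForestClassifier

X = np.random.randn(150, 20)             # placeholder entries
y = np.random.randint(0, 2, size=150)    # placeholder labels
subjects = np.repeat(np.arange(15), 10)  # one group id per subject

logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    clf = RandomForestClassifier().fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print(np.mean(scores), np.std(scores))   # average accuracy with deviation
```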
Binary Classification
        Accuracy        Precision       Recall          F1-Score
DT      0.869 ± 0.150   0.924 ± 0.105   0.868 ± 0.209   0.882 ± 0.160
RF      0.920 ± 0.103   0.944 ± 0.100   0.945 ± 0.092   0.940 ± 0.076
AB      0.846 ± 0.154   0.883 ± 0.143   0.907 ± 0.128   0.885 ± 0.112
ExT     0.909 ± 0.109   0.943 ± 0.089   0.915 ± 0.119   0.925 ± 0.092
LR      0.822 ± 0.232   0.843 ± 0.208   0.925 ± 0.199   0.871 ± 0.186
PA      0.823 ± 0.225   0.842 ± 0.200   0.934 ± 0.187   0.874 ± 0.173
KNN     0.845 ± 0.193   0.939 ± 0.118   0.816 ± 0.251   0.851 ± 0.203
NC      0.929 ± 0.100   0.953 ± 0.097   0.949 ± 0.083   0.945 ± 0.075

Multiclass Classification
        Accuracy        Precision       Recall          F1-Score
DT      0.629 ± 0.222   0.658 ± 0.195   0.629 ± 0.222   0.599 ± 0.233
RF      0.707 ± 0.171   0.663 ± 0.157   0.707 ± 0.171   0.664 ± 0.173
ExT     0.687 ± 0.166   0.642 ± 0.152   0.687 ± 0.165   0.645 ± 0.165
KNN     0.570 ± 0.239   0.615 ± 0.249   0.570 ± 0.239   0.563 ± 0.242
LR      0.623 ± 0.241   0.703 ± 0.208   0.623 ± 0.242   0.588 ± 0.249
NC      0.680 ± 0.219   0.685 ± 0.242   0.680 ± 0.220   0.662 ± 0.228

Table 1: Average values of the metrics, with their standard deviations, for the binary and multiclass classification.

The Random Forest model outpaces its counterparts in both the binary and multiclass classification scenarios. For the RF model, the obtained accuracy stands at 92% (binary) and 70% (multiclass); corresponding F1-scores are 88.2% and 60%, respectively. While multiclass classification offers insights for emotion detection via wearables, there remains room for improvement. Comparing our results with Schmidt et al.'s benchmark on the WESAD dataset [9], which utilized standardized machine learning techniques and features, our study finds that the RF algorithm delivers superior performance; the corresponding accuracy and F1-score are reported in Table 2.

Binary Classification
        Accuracy        F1-score
DT      83.60 ± 1.08    80.83 ± 1.13
RF      74.97 ± 1.11    64.08 ± 1.68
KNN     74.20           69.14

Multiclass Classification
        Accuracy        F1-score
DT      63.56 ± 1.73    58.05 ± 1.61
RF      74.97 ± 1.11    64.08 ± 1.68
KNN     56.14           48.70

Table 2: Average values of the metrics, with their standard deviations, for the binary and multiclass classification with features extracted from the signals [9].

Comparing the results, we note that the methods perform better using raw signal values than signal features.
4. CONCLUSIONS AND FUTURE WORK

In this work, we have compared various classical machine learning algorithms. We have used a public dataset, WESAD, to perform our study. Analyzing the results, we have noted that the best results have been achieved by the random forest algorithm. This evidence is in line with the results proposed in the literature [9]. We have also observed that classifications based on the signal values outperform the ones that consider signal features.

In future work, we intend to conduct additional experiments to discern the most relevant physiological signals. This represents another fundamental aspect of detecting stress for real-time analysis using wearable sensors and smartphones. In this case, the aim is to store the minimum information, so as to be non-invasive and reduce the storage space while maintaining high model performance. We also intend to consider and employ deep learning approaches, such as graph convolution networks or recurrent neural networks, motivated by the results obtained in other scenarios [22, 23]. Moreover, we intend to study the role of the length of the sliding windows from a theoretical perspective, taking into account various entropy-based methods that have produced valuable outcomes in the scenario of protein-protein interaction site prediction [24]. Another crucial future investigation is to explore and define approaches to extract and describe the correlation that sliding windows represent. We will also explore other representations, such as arc-annotated sequences for the analysis and comparison of time series, utilizing tools like [25], and strings or simplicial complexes, which allow applying techniques from formal methods to identify patterns [26] or verify properties [27].
Acknowledgements. This work has been funded by the European Union - NextGenerationEU under the Italian Ministry of University and Research (MUR) National Innovation Ecosystem grant ECS00000041 - VITALITY - CUP J13C22000430001.

References

[1] B. S. McEwen, Protective and damaging effects of stress mediators, New England Journal of Medicine 338 (1998) 171–179.
[2] Health and Safety Executive, HSE on work-related stress, 2021, http://www.hse.gov.uk/statistics/causdis/-ffstress/index.htm. Accessed on March 7, 2022.
[3] E.-H. Lee, Review of the psychometric evidence of the perceived stress scale, Asian Nursing Research 6 (2012) 121–127.
[4] K. Hovsepian, M. Al'Absi, E. Ertin, T. Kamarck, M. Nakajima, S. Kumar, cStress: towards a gold standard for continuous stress assessment in the mobile environment, in: Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015, pp. 493–504.
[5] M. Gjoreski, M. Luštrek, M. Gams, H. Gjoreski, Monitoring stress with a wrist device using context, Journal of Biomedical Informatics 73 (2017) 159–170.
[6] L. Shu, J. Xie, M. Yang, Z. Li, Z. Li, D. Liao, X. Xu, X. Yang, A review of emotion recognition using physiological signals, Sensors 18 (2018) 2074.
[7] R. Li, Z. Liu, Stress detection using deep neural networks, BMC Medical Informatics and Decision Making 20 (2020) 1–10.
[8] D. H. Wolpert, The lack of a priori distinctions between learning algorithms, Neural Computation 8 (1996) 1341–1390.
[9] P. Schmidt, A. Reiss, R. Duerichen, C. Marberger, K. Van Laerhoven, Introducing WESAD, a multimodal dataset for wearable stress and affect detection, in: Proceedings of the 20th ACM International Conference on Multimodal Interaction, 2018, pp. 400–408.
[10] J. Astola, P. Kuosmanen, Fundamentals of Nonlinear Digital Filtering, CRC Press, 2020.
[11] M. Quadrini, S. Daberdaku, A. Blanda, A. Capuccio, L. Bellanova, G. Gerard, Stress detection from wearable sensor data using Gramian angular fields and CNN, in: International Conference on Discovery Science, Springer, 2022, pp. 173–183.
[12] S. Shalev-Shwartz, S. Ben-David, Understanding Machine Learning: From Theory to Algorithms, Cambridge University Press, 2014.
[13] J. R. Quinlan, Induction of decision trees, Machine Learning 1 (1986) 81–106.
[14] L. Breiman, Random forests, Machine Learning 45 (2001) 5–32.
[15] Y. Freund, R. E. Schapire, et al., Experiments with a new boosting algorithm, in: ICML, volume 96, Citeseer, 1996, pp. 148–156.
[16] P. Geurts, D. Ernst, L. Wehenkel, Extremely randomized trees, Machine Learning 63 (2006) 3–42.
[17] D. R. Cox, The regression analysis of binary sequences, Journal of the Royal Statistical Society Series B: Statistical Methodology 20 (1958) 215–232.
[18] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, Y. Singer, Online passive-aggressive algorithms (2006).
[19] E. Fix, J. L. Hodges, Discriminatory analysis. Nonparametric discrimination: Consistency properties, International Statistical Review/Revue Internationale de Statistique 57 (1989) 238–247.
[20] T. Cover, P. Hart, Nearest neighbor pattern classification, IEEE Transactions on Information Theory 13 (1967) 21–27.
[21] R. Tibshirani, T. Hastie, B. Narasimhan, G. Chu, Diagnosis of multiple cancer types by shrunken centroids of gene expression, Proceedings of the National Academy of Sciences 99 (2002) 6567–6572.
[22] M. Quadrini, S. Daberdaku, C. Ferrari, Hierarchical representation and graph convolutional networks for the prediction of protein–protein interaction sites, in: Machine Learning, Optimization, and Data Science: 6th International Conference, LOD 2020, Siena, Italy, July 19–23, 2020, Revised Selected Papers, Part II 6, Springer, 2020, pp. 409–420.
[23] M. Quadrini, S. Daberdaku, C. Ferrari, Hierarchical representation for PPI sites prediction, BMC Bioinformatics 23 (2022) 96.
[24] M. Quadrini, M. Cavallin, S. Daberdaku, C. Ferrari, ProSPs: protein sites prediction based on sequence fragments, in: International Conference on Machine Learning, Optimization, and Data Science, Springer, 2021, pp. 568–580.
[25] M. Quadrini, L. Tesei, E. Merelli, ASPRAlign: a tool for the alignment of RNA secondary structures with arbitrary pseudoknots, Bioinformatics 36 (2020) 3578–3579.
[26] M. Quadrini, E. Merelli, R. Piergallini, Loop grammars to identify RNA structural patterns, in: Bioinformatics, 2019, pp. 302–309.
[27] M. Loreti, M. Quadrini, A spatial logic for simplicial models, Logical Methods in Computer Science 19 (2023).