=Paper=
{{Paper
|id=Vol-3793/paper15
|storemode=property
|title=Interpretable Vital Sign Forecasting with Model Agnostic Attention Maps
|pdfUrl=https://ceur-ws.org/Vol-3793/paper_15.pdf
|volume=Vol-3793
|authors=Yuwei Liu,Chen Dan,Anubhav Bhatti,Bingjie Shen,Divij Gupta,Suraj Parmar,San Lee
|dblpUrl=https://dblp.org/rec/conf/xai/Liu0BSGPL24
}}
==Interpretable Vital Sign Forecasting with Model Agnostic Attention Maps==
Yuwei Liu¹, Chen Dan¹, Anubhav Bhatti¹, Bingjie Shen¹, Divij Gupta¹, Suraj Parmar¹ and San Lee¹
¹SpassMed Inc., Toronto, Ontario, Canada
Abstract
Sepsis is a leading cause of mortality in intensive care units (ICUs), representing a substantial medical
challenge. The complexity of analyzing diverse vital signs to predict sepsis further aggravates this issue.
While deep learning techniques have been advanced for early sepsis prediction, their "black-box" nature
obscures the internal logic, impairing interpretability in critical settings like ICUs. This paper introduces
a framework that combines a deep learning model with an attention mechanism that highlights the
critical time steps in the forecasting process, thus improving model interpretability and supporting
clinical decision-making. We show that the attention mechanism could be adapted to various black box
time series forecasting models such as N-HiTS and N-BEATS. Our method preserves the accuracy of
conventional deep learning models while enhancing interpretability through attention-weight-generated
heatmaps. We evaluated our model on the eICU-CRD dataset, focusing on forecasting vital signs for
sepsis patients. We assessed its performance using mean squared error (MSE) and dynamic time warping
(DTW) metrics. We explored the attention maps of N-HiTS and N-BEATS, examining the differences in
their performance and identifying crucial factors influencing vital sign forecasting.
Keywords
Time Series Forecasting, Deep Learning, Interpretable Machine Learning, Attention Map, Vital Signs,
Sepsis, Explainable AI
1. Introduction
Sepsis is a life-threatening condition that occurs when the immune system of the body responds
incorrectly to an infection and causes rapid organ dysfunction and failure [1]. A meta-analysis
conducted on articles published in PubMed and the Cochrane Database revealed that the average
30-day mortality rate for sepsis was 24.4%, and the average 90-day mortality rate was 32.2%
between 2009 and 2019 [2]. While sepsis has been acknowledged for a long time, its clinical
definition did not emerge until the late 20th century [3]. In 1991, a consensus conference
posited that sepsis arises from the individual's inflammatory response to infection, marked
by systemic inflammatory response syndrome (SIRS), emphasizing the human response to
invading organisms. This syndrome is characterized by variations in temperature, heart rate
(HR), respiratory rate (RR), blood pressure (BP), and white blood cell (WBC) count [4]. In 2016,
the definition of sepsis was revised to multiple organ dysfunction syndrome (MODS) [5]. Systolic
Late-breaking work, Demos and Doctoral Consortium, co-located with The 2nd World Conference on eXplainable Artificial Intelligence: July 17–19, 2024, Valletta, Malta
yuwei.liu@spassmed.ca (Y. Liu); chen.dan@spassmed.ca (C. Dan); anubhav.bhatti@spassmed.ca (A. Bhatti); bingjie.shen@spassmed.ca (B. Shen); divij.gupta@spassmed.ca (D. Gupta); suraj.parmar@spassmed.ca (S. Parmar); sanlee@spassmed.ca (S. Lee)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
blood pressure (SBP) and RR abnormalities indicate organ dysfunction [6]. Thus, creating precise
models for forecasting vital signs becomes essential in predicting sepsis [7]. Accurate vital sign
predictions can promptly aid clinicians in identifying and intervening in sepsis cases, potentially
saving lives and improving intensive care unit (ICU) patient outcomes.
The growth in explainable artificial intelligence (XAI) research is mainly attributed to the rapid
rise in the popularity of deep learning and its widespread healthcare applications. However,
most models developed using these technologies are considered "black-boxes" by experts due to
their intricate, non-linear structures that are challenging for non-experts to understand [8]. The
proposed research contributes to the following two aspects: (1) Adding an attention mechanism
to show the relationship between input time steps and forecasted results; (2) Providing analysis
and interpretation of the findings derived from the attention map.
1.1. Literature Review
In recent years, the significance of model explainability has been widely recognized, leading to
the integration of an increasing number of explainable methods into data-driven models [9].
Prior research has demonstrated the development of deep learning neural networks incorporating
attention mechanisms, resulting in interpretable models with strong performance within the
medical field. Kaji et al. demonstrated that integrating an attention mechanism into the LSTM
network, trained with Electronic Health Record (EHR) data, not only improves the daily sepsis
onset prediction's Area Under the Receiver Operating Characteristic Curve (AUROC) score
to 0.876 but also highlights critical time points for prediction [10]. An attention-based gated
recurrent unit (GRU) was developed by Shickel et al. Self-attention was applied to focus on
significant time steps when predicting in-hospital mortality [11]. Choi et al. proposed reverse
time attention (RETAIN), processing EHR data in reverse order, achieving an Area Under the
ROC Curve (AUC) of 0.87 in heart failure prediction. It adds interpretability using a two-level
neural attention model [12].
While previous XAI research integrating deep learning models with interpretable modules has
excelled in time series classification, attention mechanisms in interpretable time series forecasting
remain underexplored. Our approach aims to explore attention mechanism interpretability
in time series forecasting.
2. Method
In this section, we begin by detailing the information of the eICU Collaborative Research
Database (eICU-CRD) [13], followed by an outline of the composition of our input data. Subsequently,
we dive into the specifics of the attention mechanism and the frameworks of our
forecasting models.
2.1. Dataset Description and Data Preprocessing
The eICU-CRD is a publicly accessible repository containing data from over 200,000 ICU
admissions across 208 hospitals in the United States between 2014 and 2015 [13]. This
comprehensive dataset comprises diverse patient information, including demographics, diagnoses,
medications, and laboratory results. Our research focuses on the "diagnosis" and "vitalAperiodic"
tables, from which we extract dynamic physiological data such as temperature, HR, and BP
at 5-minute intervals. The core of our study revolves around forecasting two crucial dynamic
variables: HR and mean blood pressure (MBP), derived from SBP and diastolic blood pressure
(DBP) measurements. Following the works of [14, 15], we create one or more groups within a
9-hour time window for each patient to predict vital signs for the subsequent 3 hours based on
the preceding 6 hours of data. Data preprocessing involves imputing missing values, filtering
outliers, and scaling using domain-specific knowledge. Clinically reasonable boundaries for
each critical vital sign were set using this specialized knowledge: HR ranged from 0 to 300 bpm,
MBP from 0 to 190 mmHg, and RR from 0 to 100 bpm [16].
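The clinically bounded outlier filtering and imputation described above can be sketched as follows. The boundary values (HR 0–300 bpm, MBP 0–190 mmHg, RR 0–100 bpm) come from the text; the forward-fill imputation and min-max scaling choices are illustrative assumptions, not the authors' exact pipeline.

```python
# Clinically reasonable ranges from the paper (illustrative preprocessing sketch).
CLINICAL_BOUNDS = {"HR": (0, 300), "MBP": (0, 190), "RR": (0, 100)}

def preprocess(series, vital):
    """Clip out-of-range readings, forward-fill missing values, min-max scale."""
    lo, hi = CLINICAL_BOUNDS[vital]
    cleaned, last = [], None
    for v in series:
        # Treat missing or clinically implausible readings as gaps...
        if v is None or not (lo <= v <= hi):
            v = last  # ...and impute by carrying the last valid value forward.
        cleaned.append(v)
        if v is not None:
            last = v
    # Scale to [0, 1] using the fixed clinical bounds (leading gaps stay None).
    return [None if v is None else (v - lo) / (hi - lo) for v in cleaned]
```

For example, `preprocess([80, None, 400, 90], "HR")` replaces the missing reading and the implausible 400 bpm spike with the preceding valid heart rate before scaling.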
2.2. Experiment Setup
The dataset is divided into training, validation, and test sets in an 80:10:10 ratio. Within these
intervals, the initial 6 hours consist of 72 time steps, while the subsequent 3 hours encompass
36 time points. The forecasting model uses either HR alone, or HR combined with RR, as
covariates to forecast MBP, or vice versa. The model is trained on the first 72 time
steps and then predicts the remaining 36 time steps. Finally, model performance is
assessed using Mean Squared Error (MSE) and Dynamic Time Warping (DTW).
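The two evaluation metrics named above can be written out directly; the DTW variant below is a minimal dynamic-programming implementation with squared-difference cost, an assumption standing in for whatever library implementation the authors used.

```python
def mse(y_true, y_pred):
    """Mean squared error over a forecast horizon."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance.

    Unlike MSE, DTW aligns the two series elastically in time, so it
    rewards forecasts whose shape matches the target even when peaks
    arrive a few steps early or late.
    """
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # Extend the cheapest of the three admissible alignment moves.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Identical series give a DTW distance of zero, while a time-shifted but shape-matching forecast scores far better under DTW than under pointwise MSE.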
2.3. Deep Learning Forecasting Model
Based on the forecasting performance of the N-HiTS and N-BEATS models [17, 18, 19], as well as
the idea proposed by Pantiskas et al. [20], we aim to address their inherent lack of interpretability
and to understand why the models perform differently. To achieve this, we implemented
an attention mechanism that can be applied to the N-HiTS and N-BEATS architectures and
may also be applied to other black-box deep learning models. Both N-HiTS and N-BEATS
consist of a series of stacks, each responsible for learning the residual values from the preceding
stack.
Within each stack are blocks comprising several fully connected layers, which generate
backward ($\theta^b_l$) and forward ($\theta^f_l$) expansion coefficients according to Equation 1, where $h_{l,4}$
represents the output of the fourth fully connected layer in the basic block, and $\mathrm{Linear}$ denotes
a linear projection layer [17]:

$$\theta^b_l = \mathrm{Linear}^b(h_{l,4}), \qquad \theta^f_l = \mathrm{Linear}^f(h_{l,4}). \tag{1}$$

Figure 1: Structure of our implementation. The attention layer is added on top of the stacks, and the results are taken from its output.
Additionally, each block includes backward ($v^b_l$) and forward ($v^f_l$) basis layers that produce
backcast and forecast outputs as per Equation 2, where $\hat{y}_l$ and $\hat{x}_l$ denote the forecast and backcast
outputs, respectively:

$$\hat{y}_l = \sum_{i=1}^{\dim(\theta^f_l)} \theta^f_{l,i}\, v^f_{l,i}, \qquad \hat{x}_l = \sum_{i=1}^{\dim(\theta^b_l)} \theta^b_{l,i}\, v^b_{l,i}. \tag{2}$$
Here, $v^f_{l,i}$ and $v^b_{l,i}$ represent the forecast and backcast basis vectors. Notably, N-HiTS has a
max-pooling layer (Equation 3) before passing the values to the fully connected layers, which is
applied to enable multi-rate signal sampling for the $l$th basic block [17]:

$$y^{(p)}_{t-L:t,l} = \mathrm{MaxPool}\left(y_{t-L:t,l},\, k_l\right), \tag{3}$$

where $k_l$ is the kernel size of the MaxPool layer.
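A single doubly-residual block of this kind can be sketched numerically as below. All weights are random stand-ins for trained parameters, the generic (learned) basis is assumed rather than the interpretable fixed bases, and the layer sizes are illustrative.

```python
import numpy as np

def nbeats_block(x, p):
    """One basic block (Eqs. 1-2): four ReLU fully connected layers produce
    h_{l,4}; two linear heads produce theta_b / theta_f; basis matrices map
    them to a backcast x_hat and a forecast y_hat."""
    h = x
    for W, b in p["fc"]:                 # h_{l,1} ... h_{l,4}
        h = np.maximum(0.0, h @ W + b)   # ReLU fully connected layer
    theta_b = h @ p["Wb"]                # Eq. 1: backward expansion coefficients
    theta_f = h @ p["Wf"]                # Eq. 1: forward expansion coefficients
    x_hat = theta_b @ p["Vb"]            # Eq. 2: backcast over the input window
    y_hat = theta_f @ p["Vf"]            # Eq. 2: forecast over the horizon
    return x_hat, y_hat

# Illustrative random parameters (a trained model would learn these).
rng = np.random.default_rng(0)
L_in, H, hid, nth = 72, 36, 32, 8        # history length, horizon, hidden, theta dims
params = {
    "fc": [(0.1 * rng.normal(size=(L_in, hid)), np.zeros(hid))]
          + [(0.1 * rng.normal(size=(hid, hid)), np.zeros(hid)) for _ in range(3)],
    "Wb": 0.1 * rng.normal(size=(hid, nth)),
    "Wf": 0.1 * rng.normal(size=(hid, nth)),
    "Vb": 0.1 * rng.normal(size=(nth, L_in)),   # backcast basis vectors v^b
    "Vf": 0.1 * rng.normal(size=(nth, H)),      # forecast basis vectors v^f
}
x = rng.normal(size=L_in)
x_hat, y_hat = nbeats_block(x, params)
residual = x - x_hat                     # input to the next block/stack
```

The residual connection at the end is what makes the architecture doubly residual: each subsequent block only has to model what the previous blocks failed to explain.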
Subsequently, inspired by the idea of Pantiskas et al. [20], we introduced an attention mechanism
to explore the relationship between the learned information and the original inputs after obtaining the
residuals from the final stack. The forecasted result is utilized to construct the Query (Q), while
the original input forms the basis for the Value (V) and Key (K) [20]. The resulting output is
computed as follows:

$$O^{N \times H} = D \cdot V = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{L}}\right)V, \tag{4}$$

$$K^{N \times 1 \times L} = I^{N \times 1 \times L} \cdot W_K^{N \times L \times L} + b_K^{N \times 1 \times L}, \tag{5}$$

$$V^{N \times 1 \times L} = I^{N \times 1 \times L} \cdot W_V^{N \times L \times L}, \tag{6}$$

where $N$ is the number of input time series, $H$ is the forecasting horizon length, and $L$
is the history input horizon. As shown in Figure 1, after the attention layer, a normalizer is
applied, and skip connections are employed to mitigate the vanishing gradient issue. Finally, a
fully connected layer is utilized to generate the forecasted results.
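A simplified single-series sketch of this attention step is shown below. For readability, the per-series $L \times L$ projections of Equations 5-6 are replaced by a small shared embedding, the key bias is omitted, and all weights are random stand-ins for trained parameters; only the Q-from-forecast / K,V-from-history structure follows the text.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forecast_attention(y_hat, x, W_q, W_k, W_v):
    """Queries come from the stack's forecast, keys and values from the
    original history; each row of D weights how much every history step
    contributes to one forecast step (simplified Eqs. 4-6)."""
    Q = y_hat[:, None] @ W_q            # (H, d) queries from the forecast
    K = x[:, None] @ W_k                # (L, d) keys from history (Eq. 5, bias omitted)
    V = x[:, None] @ W_v                # (L, d) values from history (Eq. 6)
    L = x.shape[0]
    D = softmax(Q @ K.T / np.sqrt(L))   # (H, L) attention weights (Eq. 4)
    return D @ V, D                     # attended output, plus D for the map

rng = np.random.default_rng(0)
H, L, d = 36, 72, 16                    # horizon, history length, embed dim (illustrative)
y_hat, x = rng.normal(size=H), rng.normal(size=L)
W_q, W_k, W_v = (0.1 * rng.normal(size=(1, d)) for _ in range(3))
out, D = forecast_attention(y_hat, x, W_q, W_k, W_v)
```

Each row of `D` sums to one, so it can be read directly as a distribution over history time steps for the corresponding forecast step.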
2.4. Interpretable Attention Map
To illustrate the attention map for a specific item, we computed [20]:

$$A^{H \times L \times N} = D^{H \times L} \cdot \mathrm{abs}\!\left(W_V^{N \times L \times L}\right)^{T}. \tag{7}$$

Here, $A^{H \times L \times N}$ denotes the attention map, where $A^{H \times L}_n$ represents the $n$th series in the
multivariate time series. Each row $h$ in $A^{H \times L}_n$ signifies the relationship between the $h$th forecasted
data point and the historical input of length $L$.
This computation enables the visualization of how the model attends to different historical
inputs when forecasting specific data points across the multivariate time series.
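For a single series $n$, this map computation reduces to one matrix product, as in the hedged sketch below; the toy uniform `D` and identity `W_V` are illustrative placeholders for trained quantities.

```python
import numpy as np

def attention_map(D, W_V):
    """Per-series version of Eq. 7: combine the (H x L) attention weights D
    with the magnitude of that series' value projection W_V to obtain one
    heatmap row per forecasted step (darker = more influence, as in Fig. 2)."""
    return D @ np.abs(W_V).T            # (H, L) attention map A_n

H, L = 36, 72
D = np.full((H, L), 1.0 / L)            # toy uniform attention weights
W_V = np.eye(L)                         # toy value projection (identity)
A = attention_map(D, W_V)               # with |I|^T = I, A equals D here
```

Rendering `A` with any heatmap plotter (one row per forecast step, one column per history step) reproduces the kind of visualization discussed in Section 3.2.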
Table 1
Performance of forecasting models on forecasting MBP and HR. Here, covariates (W C) for MBP are HR & RR, and covariates for HR are MBP & RR. *The MSE values are scaled by 1e-4 for better representation. †The DTW values are scaled by 1e-3 for better representation.

| Models                  | Cov.  | MBP (MSE*) | MBP (DTW†) | HR (MSE*) | HR (DTW†) |
|-------------------------|-------|------------|------------|-----------|-----------|
| Persistence [19]        | -     | 24.55      | 34.50      | 7.35      | 17.52     |
| N-HiTS [19]             | W C   | 18.46      | 18.70      | 7.37      | 13.12     |
| N-HiTS [19]             | W/o C | 18.02      | 20.46      | 7.22      | 13.97     |
| N-BEATS [19]            | W C   | 19.79      | 19.37      | 8.73      | 14.36     |
| N-BEATS [19]            | W/o C | 18.52      | 17.63      | 7.48      | 10.71     |
| TFT [19]                | W C   | 18.89      | 25.93      | 7.71      | 16.12     |
| TFT [19]                | W/o C | 19.45      | 25.65      | 8.12      | 16.65     |
| N-BEATS with Attention  | W C   | 21.86      | 21.07      | 8.04      | 14.32     |
| N-BEATS with Attention  | W/o C | 18.71      | 18.03      | 8.40      | 11.33     |
| N-HiTS with Attention   | W C   | 18.78      | 20.44      | 7.24      | 13.32     |
| N-HiTS with Attention   | W/o C | 19.73      | 20.42      | 6.97      | 12.24     |
3. Results and Discussion
3.1. Forecasting Benchmarks
Table 1 shows the results using different deep learning time series forecasting models.
We compared against N-HiTS [17], N-BEATS [18], and the Temporal Fusion Transformer (TFT) [21],
whose results were computed by Bhatti et al. [19], using MSE and DTW as the evaluation metrics.
The results indicate that the N-HiTS model, both with and without an attention mechanism,
consistently outperforms other models across MBP and HR predictions when considering MSE.
Similarly, the N-BEATS model also performs well both with and without attention mechanisms.
Furthermore, the TFT model demonstrates competitive performance, especially when considering
MSE. However, as reported by Bhatti et al. [19], the TFT forecasts are relatively smooth
and do not capture fluctuations.
In conclusion, the N-HiTS model, when augmented with an attention mechanism, emerges
as a robust choice for forecasting MBP and HR, showcasing its efficacy in capturing complex
temporal patterns. However, further exploration and experimentation are warranted to optimize
model performance, particularly regarding temporal alignment and covariate incorporation.
3.2. Interpretability Analysis
In the heatmap provided (Fig 2a, Fig 2b), darker colors indicate higher attention weights at
specific time points, which correspondingly have a greater influence on prediction outcomes.
Conversely, lighter colors suggest a lesser impact. The "N-HiTS + Attention" map in Fig 2a demonstrates
that areas after the 20th time point exhibit darker shades compared to earlier sections.
Notably, significant changes or peaks at certain points (like the 35th, 54th, and 63rd points)
are increasingly dark, highlighting their crucial role in shaping the prediction. This pattern suggests
that N-HiTS places a stronger emphasis on data after the 20th point, effectively capturing
both data fluctuations and overall trends. As a result, the predictions closely align with the
actual data and accurately reflect downward trends.

Figure 2: N-HiTS & N-BEATS with attention using covariates to forecast MBP after minmax filter. (a) N-HiTS attention distribution. (b) N-BEATS attention distribution.
On the other hand, the predictions from N-BEATS do not closely follow the downward trend
of the actual data and display considerable fluctuation. This model's attention map (Fig 2b) reveals that
N-BEATS assigns larger weights to almost every rise and fall (such as at the 3rd, 10th,
and 29th points) without considering whether the trend is worth focusing on, which contributes
to less effective information capture. Moreover, it appears that N-BEATS prioritizes data from
the initial 1-2 hours more than N-HiTS does, contributing to less stable prediction outcomes.
Both models indicate that the initial 1-3 hours are crucial for prediction, suggesting that
medical staff should focus on interventions during this period. Significant changes occurring
up to three hours prior also substantially impact the predictions.
Figure 3: N-HiTS forecasting results with attention using covariates after minmax filter
4. Conclusion
In this paper, we presented an interpretable time series forecasting algorithm that combines
black-box deep learning models (N-HiTS & N-BEATS) with a general attention mechanism. This
approach allows us to observe how the deep learning algorithm assigns importance to inputs
while transparently generating each step of its output. Upon applying this advanced architecture
to the eICU-CRD dataset, our findings demonstrate that the attention mechanism can enhance
interpretability in deep learning time series forecasting models with minimal reduction or even
no change in accuracy. By visualizing attention distributions, clinicians can identify which
vital signs and historical data points are most influential in predicting sepsis. Furthermore, our
model-agnostic attention mechanism is applicable to various deep learning forecasting models.
References
[1] F. Gül, M. K. Arslantaş, İ. Cinel, A. Kumar, Changing definitions of sepsis, Turkish Journal of Anaesthesiology and Reanimation 45 (2017) 129.
[2] M. Bauer, H. Gerlach, T. Vogelmann, F. Preissing, J. Stiefel, D. Adam, Mortality in sepsis and septic shock in Europe, North America and Australia between 2009 and 2019 – results from a systematic review and meta-analysis, Critical Care 24 (2020) 1–9.
[3] J. E. Gotts, M. A. Matthay, Sepsis: pathophysiology and clinical management, BMJ 353 (2016).
[4] J.-L. Vincent, S. M. Opal, J. C. Marshall, K. J. Tracey, Sepsis definitions: time for change, The Lancet 381 (2013) 774–775.
[5] Z. Cheng, S. T. Abrams, J. Toh, S. S. Wang, Z. Wang, Q. Yu, W. Yu, C.-H. Toh, G. Wang, The critical roles and mechanisms of immune cell death in sepsis, Frontiers in Immunology 11 (2020) 1918.
[6] M. Singer, C. S. Deutschman, C. W. Seymour, M. Shankar-Hari, D. Annane, M. Bauer, R. Bellomo, G. R. Bernard, J.-D. Chiche, C. M. Coopersmith, et al., The third international consensus definitions for sepsis and septic shock (Sepsis-3), JAMA 315 (2016) 801–810.
[7] B. Behinaein, A. Bhatti, D. Rodenburg, P. Hungler, A. Etemad, A transformer architecture for stress detection from ECG, in: Proceedings of the 2021 ACM International Symposium on Wearable Computers, 2021, pp. 132–134.
[8] G. Vilone, L. Longo, Notions of explainability and evaluation approaches for explainable artificial intelligence, Information Fusion 76 (2021) 89–106.
[9] L. Longo, R. Goebel, F. Lecue, P. Kieseberg, A. Holzinger, Explainable artificial intelligence: Concepts, applications, research challenges and visions, in: International Cross-Domain Conference for Machine Learning and Knowledge Extraction, Springer, 2020, pp. 1–16.
[10] D. A. Kaji, J. R. Zech, J. S. Kim, S. K. Cho, N. S. Dangayach, A. B. Costa, E. K. Oermann, An attention based deep learning model of clinical events in the intensive care unit, PLoS ONE 14 (2019) e0211057.
[11] B. Shickel, T. J. Loftus, L. Adhikari, T. Ozrazgat-Baslanti, A. Bihorac, P. Rashidi, DeepSOFA: a continuous acuity score for critically ill patients using clinically interpretable deep learning, Scientific Reports 9 (2019) 1879.
[12] E. Choi, M. T. Bahadori, J. Sun, J. Kulas, A. Schuetz, W. Stewart, RETAIN: An interpretable predictive model for healthcare using reverse time attention mechanism, Advances in Neural Information Processing Systems 29 (2016).
[13] T. J. Pollard, A. E. Johnson, J. D. Raffa, L. A. Celi, R. G. Mark, O. Badawi, The eICU Collaborative Research Database, a freely available multi-center database for critical care research, Scientific Data 5 (2018) 1–13.
[14] A. Bhatti, N. Thangavelu, M. Hassan, C. Kim, S. Lee, Y. Kim, J. Y. Kim, Interpreting forecasted vital signs using N-BEATS in sepsis patients, arXiv preprint arXiv:2306.14016 (2023).
[15] H. M. O'Halloran, K. Kwong, R. A. Veldhoen, D. M. Maslove, Characterizing the patients, hospitals, and data quality of the eICU Collaborative Research Database, Critical Care Medicine 48 (2020) 1737–1743.
[16] S. Parmar, T. Shan, S. Lee, Y. Kim, J. Y. Kim, Extending machine learning-based early sepsis detection to different demographics, in: 2024 IEEE First International Conference on Artificial Intelligence for Medicine, Health and Care (AIMHC), IEEE, 2024, pp. 70–71.
[17] C. Challu, K. G. Olivares, B. N. Oreshkin, F. G. Ramirez, M. M. Canseco, A. Dubrawski, NHITS: Neural hierarchical interpolation for time series forecasting, in: Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 2023, pp. 6989–6997.
[18] B. N. Oreshkin, D. Carpov, N. Chapados, Y. Bengio, N-BEATS: Neural basis expansion analysis for interpretable time series forecasting, 2020. arXiv:1905.10437.
[19] A. Bhatti, Y. Liu, C. Dan, B. Shen, S. Lee, Y. Kim, J. Y. Kim, Vital sign forecasting for sepsis patients in ICUs, arXiv preprint arXiv:2311.04770 (2023).
[20] L. Pantiskas, K. Verstoep, H. Bal, Interpretable multivariate time series forecasting with temporal attention convolutional neural networks, in: 2020 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, 2020, pp. 1687–1694.
[21] B. Lim, S. Ö. Arık, N. Loeff, T. Pfister, Temporal fusion transformers for interpretable multi-horizon time series forecasting, International Journal of Forecasting 37 (2021) 1748–1764.