<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Ensemble deep learning for blood pressure estimation using facial videos</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Wei Liu</string-name>
          <email>liuw2@ihpc.a-star.edu.sg</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bingjie Wu</string-name>
          <email>wu_bingjie@ihpc.a-star.edu.sg</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Menghan Zhou</string-name>
          <email>zhou_menghan@ihpc.a-star.edu.sg</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xingjian Zheng</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xingyao Wang</string-name>
          <email>wang_xingyao@ihpc.a-star.edu.sg</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yiping Xie</string-name>
          <email>yipingx1123@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chaoqi Luo</string-name>
          <email>chaoqiluo7@gmail.com</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Liangli Zhen</string-name>
          <email>llzhen@outlook.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>College of Computer Science and Software Engineering, Shenzhen University</institution>
          ,
          <addr-line>Shenzhen</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR)</institution>
          ,
          <country country="SG">Singapore</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>School of Electrical Engineering, Southwest Jiaotong University</institution>
          ,
          <addr-line>Chengdu</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Blood pressure (BP) estimation is a standard and critical component of routine health assessment, especially for cardiac disease patients. Traditional methods typically require direct contact with the patient, which can cause discomfort and inconvenience. Remote photoplethysmography (rPPG), which enables non-contact measurement of the blood volume pulse using subtle cues from facial videos, has drawn attention for measuring vital signs. This paper presents an ensemble deep learning approach for estimating BP remotely using facial videos. Specifically, to address the vulnerabilities and biases in deep learning models for BP measurement, we emphasize both the accuracy of individual models and the diversity within the ensemble. We utilize advanced deep learning architectures to construct several regression models incorporating convolutional neural networks and transformer blocks, which learn the spatiotemporal relationships between different frames and locations. These trained models are then combined to produce BP readings. Additionally, to enhance the system's robustness under varying lighting conditions, data augmentation techniques are employed to generate more training data. The proposed method is tested on an unseen dataset and the average root mean squared error (RMSE) is 12.95 mmHg, ranking 1st in the 3rd Vision-based Remote Physiological Signal Sensing (RePSS) Challenge.</p>
      </abstract>
      <kwd-group>
        <kwd>Blood pressure measurement</kwd>
        <kwd>remote photoplethysmography</kwd>
        <kwd>deep learning</kwd>
        <kwd>ensemble learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        Blood pressure (BP) measurement is a fundamental diagnostic tool in medical practice, serving
as a crucial indicator of cardiovascular health. For instance, elevated BP, or hypertension, is
a significant risk factor for cardiovascular diseases, including stroke, heart attack, and renal
failure, making accurate and timely measurement vital for early detection and management [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
The gold standard for continuous BP monitoring is invasive arterial pressure monitoring,
which is mainly adopted in critical care [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In addition, traditional noninvasive BP
measurement methods rely on cuffs, which increase discomfort for patients receiving long-term
or frequent monitoring and discourage real-time measurement [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Currently, cuffless BP monitoring methods have been explored for real-time measurement,
providing convenience and comfort. There are two main approaches: pulse transit time (PTT) and
pulse wave analysis (PWA) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. PTT requires two simultaneously recorded physiological signals to calculate,
such as electrocardiography (ECG), phonocardiography (PCG), and seismocardiography (SCG).
Compared to PTT, PWA exclusively extracts features from PPG to estimate BP. In recent years,
machine learning and deep learning have also been employed to establish the mapping
between PPG and BP [
        <xref ref-type="bibr" rid="ref5 ref6 ref7">5, 6, 7</xref>
        ].
      </p>
      <p>
        Note that these methods are contact-based and require specific devices, such as smart watches,
to make the measurement. Over the past years, remote PPG (rPPG) techniques have been
developed for vital sign measurement, especially heart rate (HR) estimation [
        <xref ref-type="bibr" rid="ref10 ref11 ref8 ref9">8, 9, 10, 11</xref>
        ].
Compared to PPG techniques, rPPG-based methods are contactless and can work with digital
cameras, which are easily accessible nowadays. Beyond HR estimation, rPPG techniques
have also been applied to BP estimation from facial videos [
        <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
        ]. While rPPG provides
a convenient and cost-effective method for BP estimation, its accuracy can be easily affected
by factors such as lighting conditions, skin tones, and motion blur, making rPPG-based BP
measurement extremely challenging.
      </p>
      <p>This paper proposes to achieve rPPG-based BP measurement with ensemble deep learning
using facial videos. Specifically, to address the vulnerabilities and biases in deep learning models
for BP measurement, we prioritize the accuracy of individual models and the diversity within
the ensemble. We construct individual regression models by adding a regression head to
CNN- and transformer-based backbones. For training each model, we use not only the original RGB
images but also features obtained by transforming the color space from RGB to YUV. To enhance
the models’ performance under varying lighting conditions, data augmentation techniques are
employed. Finally, an aggregator is used to combine the outputs from these individual models.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Related Work</title>
      <sec id="sec-3-1">
        <title>2.1. Invasive BP Monitoring</title>
        <p>
          Invasive BP estimation can provide continuous and accurate monitoring and is therefore essential in
certain clinical settings, particularly for patients under critical care or during surgery [
          <xref ref-type="bibr" rid="ref14 ref2">14, 2</xref>
          ]. This
method involves the insertion of a catheter into a suitable artery, commonly the radial or femoral
artery [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. The catheter is connected to a pressure transducer, which converts the mechanical
pressure exerted by the blood into an electrical signal that can be continuously displayed and
monitored. In general, invasive methods provide accurate and continuous monitoring of BP but
are only used in certain circumstances due to the significant discomforts to patients.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>2.2. Cuff-based BP Estimation</title>
        <p>
          Cuff-based BP measurement is the most common non-invasive method used in both clinical
and home settings to assess arterial blood pressure (ABP) [
          <xref ref-type="bibr" rid="ref3">16, 3</xref>
          ]. This technique utilizes a sphygmomanometer, which
includes a cuff that is wrapped around the upper arm and inflated to constrict blood flow.
As the cuff deflates, measurements are taken either manually by auscultation, listening to
the Korotkoff sounds through a stethoscope, or automatically by oscillometric monitors that
detect blood flow vibrations [17]. Cuff-based methods provide the convenience of quick and
easy readings and have been extensively validated for clinical use. However, they impose mild
discomfort on patients, and their accuracy can be easily affected by factors like cuff size, arm
position, and patient movement.
        </p>
      </sec>
      <sec id="sec-3-3">
        <title>2.3. PPG-based BP Estimation</title>
        <p>
          PPG-based BP estimation is becoming more widely used with the emergence of deep learning
algorithms and PPG sensors that can be placed on the finger, earlobe, or wrist [18].
Variations in light absorption during the cardiac cycle are measured, providing information about
blood flow, heart rate, and other cardiovascular attributes. By analyzing these variations,
algorithms can estimate systolic and diastolic BP values [
          <xref ref-type="bibr" rid="ref6">6, 19</xref>
          ]. PPG-based methods offer ease
of use, the potential for continuous monitoring, and freedom from the discomfort of cuff methods.
However, their accuracy is sensitive to motion artifacts and changes in sensor placement.
        </p>
      </sec>
      <sec id="sec-3-4">
        <title>2.4. rPPG-based BP Estimation</title>
        <p>Recently, rPPG-based methods have offered a non-contact way to estimate BP by using video
cameras to detect blood volume changes in facial skin [20]. This technology, which can be
implemented with standard RGB cameras found in common devices like smartphones and
tablets, captures subtle changes in light reflection off the skin due to pulsating blood flow [21,
22, 23]. The rPPG-based methods are non-invasive and use widely accessible cameras, making
them potentially cost-effective and convenient for regular BP checks. However, their accuracy can
be compromised by factors such as motion and variable lighting conditions, posing challenges
for use in dynamic or uncontrolled environments.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. Methodology</title>
      <p>The overall framework of our ensemble deep learning method is illustrated in Fig. 1, from which
we can see that there are multiple regression models. To introduce diversity, multiple models
are trained using different input feature vectors, backbones, or random seeds. The outputs of
individual models are then fused with an aggregator.</p>
      <sec id="sec-4-1">
        <title>3.1. Data Preprocessing</title>
        <p>A short clip is extracted from the original full video and then partitioned into frames. It is worth
pointing out that we select the clip closest to the time when BP is measured to mitigate the
impact of BP fluctuation during video taking. If the video is recorded before BP measurement,
the last part of the video is selected, and vice versa for videos taken after BP measurement. The face
region of each frame is then cropped and resized to 128 × 128. To improve model performance
in different lighting conditions, data augmentation techniques are applied during the training
process.</p>
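The clip-selection rule above can be sketched as follows (a minimal NumPy version; the function name, `clip_len` parameter, and toy data are illustrative, not from the paper):

```python
import numpy as np

def select_clip(frames: np.ndarray, clip_len: int, video_before_bp: bool) -> np.ndarray:
    """Pick the clip temporally closest to the BP measurement.

    If the video was recorded before the BP reading, the last `clip_len`
    frames are closest to it; otherwise the first `clip_len` frames are.
    `frames` has shape (T, H, W, C).
    """
    if video_before_bp:
        return frames[-clip_len:]
    return frames[:clip_len]

# Toy example: 10 frames of 128x128 RGB, where pixel value == frame index.
video = np.arange(10)[:, None, None, None] * np.ones((10, 128, 128, 3))
clip = select_clip(video, clip_len=4, video_before_bp=True)  # frames 6..9
```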
        <p>
          It has been demonstrated in [
          <xref ref-type="bibr" rid="ref11">11, 24</xref>
          ] that alternative color spaces derived from RGB videos
are beneficial for better representation of the HR signal. Other than the original RGB images, we also
convert the frames from the RGB to the YUV color space as
Y = 0.299R + 0.587G + 0.114B,
U = 0.492(B − Y),
V = 0.877(R − Y),
where R, G, and B represent the red, green, and blue color components of an image, respectively.
Y represents the luminance component, while U and V represent the chrominance components,
capturing the color information minus the brightness.</p>
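The RGB-to-YUV conversion can be sketched in NumPy as below (a minimal version assuming the standard BT.601 coefficients; the function name is illustrative):

```python
import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB image in [0, 1] to YUV (BT.601 coefficients)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance
    u = 0.492 * (b - y)                    # blue-difference chrominance
    v = 0.877 * (r - y)                    # red-difference chrominance
    return np.stack([y, u, v], axis=-1)

# A pure-gray pixel carries no color information: zero chrominance.
gray = np.full((1, 1, 3), 0.5)
yuv = rgb_to_yuv(gray)
```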
      </sec>
      <sec id="sec-4-2">
        <title>3.2. Network Structure</title>
        <sec id="sec-4-2-1">
          <title>3.2.1. Backbones</title>
          <p>
            We utilize two state-of-the-art models as backbones for our BP estimation model: a
3D CNN model named PhysNet [25] and a transformer-based model named PhysFormer [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ].
The outputs of the two backbones are both estimated PPG signals, which have been used to recover
ABP [
            <xref ref-type="bibr" rid="ref6">6, 26, 27</xref>
            ]. Therefore, we keep all the layers of the backbones so that the output of each
backbone remains a PPG signal. The output of the backbone is a 1D signal that has the same
length as the number of input frames. The details of the backbones can be found in [
            <xref ref-type="bibr" rid="ref8">25, 8</xref>
            ].
          </p>
        </sec>
        <sec id="sec-4-2-2">
          <title>3.2.2. Regression head</title>
          <p>We stack a regression head with one hidden layer on top of the backbone, and the regression
head has two output nodes corresponding to SBP and DBP, respectively. The regression head
can be formulated as
h = σ(W(1)x + b(1)),  (1)
y = W(2)h + b(2),  (2)
where σ is the standard sigmoid function. W and b are the weights and biases, respectively. x
is the output signal from the backbone. h denotes the vector at the hidden layer. y denotes the
output vector consisting of DBP and SBP.</p>
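The regression head above can be sketched in NumPy (a minimal forward pass with random placeholder weights; in the paper the head is trained end-to-end with the backbone, and the dimensions here are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regression_head(x, W1, b1, W2, b2):
    """Map the backbone's 1D rPPG signal x to the two BP outputs.

    h = sigmoid(W1 @ x + b1)   # hidden layer
    y = W2 @ h + b2            # two output nodes: DBP and SBP
    """
    h = sigmoid(W1 @ x + b1)
    return W2 @ h + b2

rng = np.random.default_rng(0)
T, H = 160, 32                              # signal length, hidden width (illustrative)
x = rng.standard_normal(T)                  # stand-in for the backbone output
W1, b1 = rng.standard_normal((H, T)), np.zeros(H)
W2, b2 = rng.standard_normal((2, H)), np.zeros(2)
y = regression_head(x, W1, b1, W2, b2)      # y[0] ~ DBP, y[1] ~ SBP
```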
        </sec>
      </sec>
      <sec id="sec-4-3">
        <title>3.3. Loss Function</title>
        <p>The average RMSE of SBP and DBP is used as the loss function to train our models, defined as
L = 0.5 × sqrt( (1/N) Σ_{i=1}^{N} (d_i − d̂_i)² ) + 0.5 × sqrt( (1/N) Σ_{i=1}^{N} (s_i − ŝ_i)² ),  (3)
where d_i and s_i are the ground-truth DBP and SBP of the i-th sample, respectively, d̂_i and
ŝ_i are the predicted DBP and SBP of the i-th sample, respectively, and N is the number of samples.</p>
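The loss follows directly from the definition of RMSE (a NumPy sketch; the paper trains with PyTorch, but the arithmetic is identical):

```python
import numpy as np

def avg_rmse_loss(dbp_true, dbp_pred, sbp_true, sbp_pred):
    """0.5 * RMSE(DBP) + 0.5 * RMSE(SBP), the average RMSE of the two outputs."""
    rmse_d = np.sqrt(np.mean((np.asarray(dbp_true) - np.asarray(dbp_pred)) ** 2))
    rmse_s = np.sqrt(np.mean((np.asarray(sbp_true) - np.asarray(sbp_pred)) ** 2))
    return 0.5 * rmse_d + 0.5 * rmse_s

# DBP errors of 3 and 4 mmHg, perfect SBP predictions.
loss = avg_rmse_loss([80, 70], [83, 74], [120, 110], [120, 110])
```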
      </sec>
      <sec id="sec-4-4">
        <title>3.4. Aggregation</title>
        <p>As mentioned above, multiple individual models are trained with different input features (RGB
or YUV), backbones (PhysNet or PhysFormer), and random seeds to introduce diversity to our
ensemble method. Ensemble learning is used to aggregate the outputs of individual models. For
each sample, we remove the top-k and bottom-k results and then calculate the average of the
remaining outputs as
d_ens = (1 / (M − 2k)) Σ_{i=k+1}^{M−k} d̂_(i),
s_ens = (1 / (M − 2k)) Σ_{i=k+1}^{M−k} ŝ_(i),  (4)
where d_ens and s_ens are the aggregated predictions of DBP and SBP, respectively. d̂_(i) and ŝ_(i)
represent the predicted DBP and SBP of the i-th model when they are arranged in ascending
order. M is the number of individual models. The top-k and bottom-k values are neglected.</p>
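The trimmed-mean aggregation can be sketched as follows (k is the number of extreme predictions dropped from each end; the eight example predictions are illustrative):

```python
import numpy as np

def trimmed_mean(preds, k):
    """Sort model predictions, drop the k lowest and k highest, average the rest."""
    s = np.sort(np.asarray(preds, dtype=float))
    return s[k:len(s) - k].mean()

# Eight individual-model SBP predictions; two outliers on each side get trimmed.
sbp_preds = [118, 150, 121, 90, 119, 122, 95, 140]
sbp_ens = trimmed_mean(sbp_preds, k=2)  # averages 118, 119, 121, 122
```

Trimming before averaging makes the ensemble robust to a few badly wrong individual models, which plain averaging is not.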
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Experimental Study</title>
      <sec id="sec-5-1">
        <title>4.1. Experimental Setup</title>
        <p>The proposed method is implemented in PyTorch and tested on a server equipped with an Intel(R) Xeon(R)
Gold 6430 CPU and an RTX 4090 GPU. The models are trained for 150 epochs using the AdamW
optimizer [28] with a learning rate of 1 × 10⁻⁵ and a weight decay of 1 × 10⁻⁵. The value of k in
Equ. (4) is set as 3.</p>
      </sec>
      <sec id="sec-5-2">
        <title>4.2. Datasets</title>
        <p>Two datasets are used for model training and validation: the VV-medium dataset [29]
and our private dataset. A brief summary of these two datasets is reported in Table 1, and their
distributions are illustrated in Fig. 2. The VV-medium dataset [29] has more videos than BP labels
because each BP label corresponds to multiple videos. It is shown that BP in the VV-medium
dataset [29] is more diversely distributed compared to our dataset.</p>
        <p>For testing, the OBF Database – Oulu BioFace Database [30, 31] consisting of 100 subjects
and 200 facial videos with DBP/SBP labels is used for evaluation. Note that for testing, we only
have access to the facial videos and have no access to the ground-truth BP labels.</p>
      </sec>
      <sec id="sec-5-3">
        <title>4.3. Evaluation Metrics</title>
        <p>Three metrics, including the root mean squared error (RMSE), mean absolute error (MAE),
and Pearson correlation coefficient r, are used to evaluate model performance on the validation
dataset, defined as
RMSE = sqrt( (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)² ),
MAE = (1/N) Σ_{i=1}^{N} |y_i − ŷ_i|,
r = Σ_{i=1}^{N} (y_i − ȳ)(ŷ_i − ŷ̄) / sqrt( Σ_{i=1}^{N} (y_i − ȳ)² Σ_{i=1}^{N} (ŷ_i − ŷ̄)² ),
where y and ŷ are the ground-truth and predicted SBP/DBP, respectively, N is the number of
samples, and ȳ and ŷ̄ indicate the average values of the ground truth and prediction, respectively.</p>
        <p>For testing, the average RMSE of DBP and SBP is used to evaluate model performance,
calculated as
RMSE_avg = 0.5 × RMSE_DBP + 0.5 × RMSE_SBP,  (5)
where RMSE_DBP and RMSE_SBP are the RMSE of DBP and SBP, respectively.</p>
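The three validation metrics can be computed directly from their definitions (a NumPy sketch; `y` holds ground-truth values, `p` predictions, and the sample values are illustrative):

```python
import numpy as np

def rmse(y, p):
    """Root mean squared error."""
    return np.sqrt(np.mean((y - p) ** 2))

def mae(y, p):
    """Mean absolute error."""
    return np.mean(np.abs(y - p))

def pearson_r(y, p):
    """Pearson correlation coefficient between ground truth and prediction."""
    yc, pc = y - y.mean(), p - p.mean()
    return (yc * pc).sum() / np.sqrt((yc ** 2).sum() * (pc ** 2).sum())

y = np.array([120.0, 110.0, 130.0, 125.0])  # ground-truth SBP (mmHg)
p = np.array([118.0, 112.0, 131.0, 121.0])  # predicted SBP (mmHg)
scores = rmse(y, p), mae(y, p), pearson_r(y, p)
```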
      </sec>
      <sec id="sec-5-4">
        <title>4.4. Experimental Results</title>
        <p>We randomly split the available data into a training set (80%) and a validation set
(20%). The learning curve of one of our individual models is shown in Fig. 3, and it shows that
the model converges well within 150 epochs. The scatter plots on the validation set show that the
predictions and ground-truth values are strongly correlated and the errors of most samples are within ±10 mmHg.</p>
        <p>The RMSE of DBP and SBP are 8.93 mmHg and 11.03 mmHg, respectively. The MAE and
RMSE of SBP are larger than those of DBP because the SBP range is larger and more diversely
distributed, as can be seen from Fig. 2. On the testing dataset, the average RMSE is 12.95
mmHg, and a comparison is reported in Table 2. One can see that our method outperforms
competing methods by more than 0.5 mmHg.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusion</title>
      <p>This paper presented an ensemble deep learning method for BP estimation using facial videos.
To improve the diversity of models in ensemble learning, multiple models are built with different
backbones and input feature vectors. Besides, data augmentation techniques are used to improve
model performance under different lighting conditions. The outputs of individual models are
fused with an aggregator. Our method is tested on an unseen dataset in the RePSS challenge,
and the average RMSE of SBP and DBP is 12.95 mmHg, which outperforms all the peer methods
and indicates the effectiveness of our proposed method.</p>
    </sec>
    <sec id="sec-7">
      <title>6. Acknowledgement</title>
      <p>This work is supported by A*STAR Gap project Face AI (Phase 1) under project No.
SC36/19000801-A042 and A*STAR Career Development Fund under grant No. C233312006.</p>
      <p>an observational study, Critical care 18 (2014) 1–11.
[16] P. Palatini, R. Asmar, Cuff challenges in blood pressure measurement, The Journal of</p>
      <p>Clinical Hypertension 20 (2018) 1100–1103.
[17] M. Forouzanfar, H. R. Dajani, V. Z. Groza, M. Bolic, S. Rajan, I. Batkin, Oscillometric blood
pressure estimation: past, present, and future, IEEE reviews in biomedical engineering 8
(2015) 44–63.
[18] D. Castaneda, A. Esparza, M. Ghamari, C. Soltanpur, H. Nazeran, A review on wearable
photoplethysmography sensors and their potential future applications in health care,
International journal of biosensors &amp; bioelectronics 4 (2018) 195.
[19] M. Panwar, A. Gautam, D. Biswas, A. Acharyya, Pp-net: A deep learning framework
for ppg-based blood pressure and heart rate estimation, IEEE Sensors Journal 20 (2020)
10000–10011.
[20] Y. Lu, C. Wang, M. Q.-H. Meng, Video-based contactless blood pressure estimation: A
review, in: 2020 IEEE International Conference on Real-time Computing and Robotics
(RCAR), IEEE, 2020, pp. 62–67.
[21] F. Schrumpf, P. Frenzel, C. Aust, G. Osterhof, M. Fuchs, Assessment of deep learning based
blood pressure prediction from ppg and rppg signals, in: Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, 2021, pp. 3820–3830.
[22] B.-F. Wu, B.-J. Wu, B.-R. Tsai, C.-P. Hsu, A facial-image-based blood pressure measurement
system without calibration, IEEE Transactions on Instrumentation and Measurement 71
(2022) 1–13.
[23] Y. Chen, J. Zhuang, B. Li, Y. Zhang, X. Zheng, Remote blood pressure estimation via the
spatiotemporal mapping of facial videos, Sensors 23 (2023) 2963.
[24] X. Niu, S. Shan, H. Han, X. Chen, Rhythmnet: End-to-end heart rate estimation from face
via spatial-temporal representation, IEEE Transactions on Image Processing 29 (2019)
2409–2423.
[25] Z. Yu, X. Li, G. Zhao, Remote photoplethysmograph signal measurement from facial
videos using spatio-temporal networks, in: Proceedings of the British Machine Vision
Conference, 2019.
[26] M. A. Mehrabadi, S. A. H. Aqajari, A. H. A. Zargari, N. Dutt, A. M. Rahmani, Novel blood
pressure waveform reconstruction from photoplethysmography using cycle generative
adversarial networks, in: 2022 44th Annual International Conference of the IEEE Engineering
in Medicine &amp; Biology Society (EMBC), IEEE, 2022, pp. 1906–1909.
[27] L. N. Harfiya, C.-C. Chang, Y.-H. Li, Continuous blood pressure estimation using exclusively
photoplethysmography by lstm-based signal-to-signal translation, Sensors 21 (2021) 2952.
[28] I. Loshchilov, F. Hutter, Decoupled weight decay regularization, arXiv preprint
arXiv:1711.05101 (2017).
[29] P.-J. Toye, Vital videos: A dataset of videos with ppg and blood pressure ground truths,
arXiv preprint arXiv:2306.11891 (2023).
[30] X. Li, I. Alikhani, J. Shi, T. Seppanen, J. Junttila, K. Majamaa-Voltti, M. Tulppo, G. Zhao, The
obf database: A large face video database for remote physiological signal measurement
and atrial fibrillation detection, in: 2018 13th IEEE international conference on automatic
face &amp; gesture recognition (FG 2018), IEEE, 2018, pp. 242–249.
[31] Z. Yu, W. Peng, X. Li, X. Hong, G. Zhao, Remote heart rate measurement from highly
compressed facial videos: an end-to-end deep learning solution with video enhancement,
in: Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp.
151–160.
[32] Z. Sun, The 3rd repss track 2, 2024. URL: https://kaggle.com/competitions/the-3rd-repss-t2.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F. D.</given-names>
            <surname>Fuchs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Whelton</surname>
          </string-name>
          ,
          <article-title>High blood pressure and cardiovascular disease</article-title>
          ,
          <source>Hypertension</source>
          <volume>75</volume>
          (
          <year>2020</year>
          )
          <fpage>285</fpage>
          -
          <lpage>292</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Saugel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kouz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Meidert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Schulte-Uentrop</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Romagnoli</surname>
          </string-name>
          ,
          <article-title>How to measure blood pressure using an arterial catheter: a systematic 5-step approach</article-title>
          ,
          <source>Critical Care</source>
          <volume>24</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Picone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Schultz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Otahal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Aakhus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Al-Jumaily</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Black</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. J.</given-names>
            <surname>Bos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. B.</given-names>
            <surname>Chambers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-H.</given-names>
            <surname>Chen</surname>
          </string-name>
          , H.-M. Cheng, et al.,
          <article-title>Accuracy of cuff-measured blood pressure: systematic reviews and meta-analyses</article-title>
          ,
          <source>Journal of the American College of Cardiology</source>
          <volume>70</volume>
          (
          <year>2017</year>
          )
          <fpage>572</fpage>
          -
          <lpage>586</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Mukkamala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-O.</given-names>
            <surname>Hahn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chandrasekhar</surname>
          </string-name>
          ,
          <article-title>Photoplethysmography in noninvasive blood pressure monitoring</article-title>
          , in: Photoplethysmography, Elsevier,
          <year>2022</year>
          , pp.
          <fpage>359</fpage>
          -
          <lpage>400</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Konstantinidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Iliakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Tatakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Thomopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Dimitriadis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tousoulis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Tsioufis</surname>
          </string-name>
          ,
          <article-title>Wearable blood pressure measurement devices and new approaches in hypertension management: the digital era</article-title>
          ,
          <source>Journal of human hypertension 36</source>
          (
          <year>2022</year>
          )
          <fpage>945</fpage>
          -
          <lpage>951</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>N.</given-names>
            <surname>Ibtehaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mahmud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Chowdhury</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Khandakar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Salman</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Ayari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Tahir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Rahman</surname>
          </string-name>
          ,
          <article-title>Ppg2abp: Translating photoplethysmogram (ppg) signals to arterial blood pressure (abp) waveforms</article-title>
          ,
          <source>Bioengineering</source>
          <volume>9</volume>
          (
          <year>2022</year>
          )
          <fpage>692</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Vardhan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vedanth</surname>
          </string-name>
          , G. Poojah,
          <string-name>
            <given-names>K.</given-names>
            <surname>Abhishek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vijayaraghavan</surname>
          </string-name>
          ,
          <article-title>BP-Net: Efficient deep learning for continuous arterial blood pressure estimation using photoplethysmogram</article-title>
          ,
          <source>in: 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)</source>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>1495</fpage>
          -
          <lpage>1500</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. H.</given-names>
            <surname>Torr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <article-title>PhysFormer: Facial video-based physiological measurement with temporal difference transformer</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>4186</fpage>
          -
          <lpage>4196</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Torr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <article-title>PhysFormer++: Facial video-based physiological measurement with slowfast temporal difference transformer</article-title>
          ,
          <source>International Journal of Computer Vision</source>
          <volume>131</volume>
          (
          <year>2023</year>
          )
          <fpage>1307</fpage>
          -
          <lpage>1330</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>McDuff</surname>
          </string-name>
          ,
          <article-title>EfficientPhys: Enabling simple, fast and accurate camera-based cardiac measurement</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF winter conference on applications of computer vision</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>5008</fpage>
          -
          <lpage>5017</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>H.</given-names>
            <surname>Shao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Qian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>TranPhys: Spatiotemporal masked transformer steered remote photoplethysmography estimation</article-title>
          ,
          <source>IEEE Transactions on Circuits and Systems for Video Technology</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>H.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Barszczyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Vempala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. P.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.-P.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <article-title>Smartphone-based blood pressure measurement using transdermal optical imaging technology</article-title>
          ,
          <source>Circulation: Cardiovascular Imaging</source>
          <volume>12</volume>
          (
          <year>2019</year>
          )
          <fpage>e008857</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <article-title>The noninvasive blood pressure measurement based on facial images processing</article-title>
          ,
          <source>IEEE Sensors Journal</source>
          <volume>19</volume>
          (
          <year>2019</year>
          )
          <fpage>10624</fpage>
          -
          <lpage>10634</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Li-wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Saeed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Talmor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Malhotra</surname>
          </string-name>
          ,
          <article-title>Methods of blood pressure measurement in the ICU</article-title>
          ,
          <source>Critical Care Medicine</source>
          <volume>41</volume>
          (
          <year>2013</year>
          )
          <fpage>34</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Romagnoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ricci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Quattrone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tofani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Tujjar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Villa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Romano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>De Gaudio</surname>
          </string-name>
          ,
          <article-title>Accuracy of invasive arterial pressure monitoring in cardiovascular patients:</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>