<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Video-based remote blood pressure measurement using convolutional networks and random forest</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Wei Zhuo</string-name>
          <email>weizhuo@njust.edu.cn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jianjun Qian</string-name>
          <email>csjqian@njust.edu.cn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hang Shao</string-name>
          <email>shaohang@njust.edu.cn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lei Luo</string-name>
          <email>cslluo@njust.edu.cn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jian Yang</string-name>
          <email>csjyang@njust.edu.cn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>PCA Lab, School of Computer Science and Engineering, Nanjing University of Science and Technology</institution>
          ,
          <addr-line>Nanjing, 210094</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Blood pressure (BP) is an important vital sign that is highly correlated with human health. With the development and maturation of remote photoplethysmography (rPPG) technology, the analysis of facial video makes it possible to measure BP in a non-contact way. In this paper, we propose a network for remote BP measurement, named RBP-CNN. Specifically, we first extract blood volume pulse (BVP), heart rate (HR), age and body mass index (BMI) from the facial video and analyze their correlation with BP, during which we find a close correlation between diastolic blood pressure (DBP) and systolic blood pressure (SBP). Then, RBP-CNN is designed based on residual convolution and local and global attention mechanisms to extract the implicit BP-related features, which are hard to discover and extract manually. Finally, we use the ensemble learning algorithm random forest (RF) to fuse these features to measure BP, and verify our method via RF's feature importance. Our approach is trained and tested on 322 and 200 samples provided by Track 2 of the challenge, respectively, and it achieves a root mean squared error (RMSE) of 13.48281, which ranks second in the final leaderboard. The code is publicly available at https://github.com/zhuowei123/3rd-RePSS-track2.git</p>
      </abstract>
      <kwd-group>
<kwd>RePSS</kwd>
        <kwd>remote photoplethysmography</kwd>
        <kwd>ensemble learning</kwd>
        <kwd>blood pressure estimation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Blood pressure (BP) is an important vital sign in diagnosing certain cardiovascular diseases
such as hypertension [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ]. There are two kinds of BP in the human body, namely diastolic
blood pressure (DBP) and systolic blood pressure (SBP), which represent the pressure of blood
on the vessel walls during contraction and relaxation of the heart, respectively. In real life, BP is usually
measured by contact detection instruments or wearable medical devices. Auscultation is the
most traditional method of BP measurement and can reliably determine the BP state at the
time of measurement, but it is often influenced by the experience of the auscultator and the
environment, resulting in measurement errors. Although cuff oscillometry overcomes some
shortcomings of auscultation, the inflatable cuff tends to make the person being tested
uncomfortable. Therefore, it is of great significance to study convenient and accurate
non-contact measurement of BP and other physiological signals for health monitoring.
To address the discomfort and inconvenience caused by contact measuring equipment,
remote photoplethysmography (rPPG) [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7 ref8">4, 5, 6, 7, 8</xref>
        ] methods have been developing fast in recent
years. They aim to measure heart activity remotely without any contact, making non-contact
physiological signal measurement possible. To study more robust computer vision algorithms
and biomedical signal processing methods for extracting physiological signals from facial videos,
the 3rd Vision-based Remote Physiological Signal Sensing (RePSS) challenge is held in
conjunction with the International Joint Conference on Artificial Intelligence (IJCAI 2024).
There are two tracks in the 3rd RePSS challenge, and the task of Track 2 is facial video-based
BP measurement.
      </p>
      <p>
        BP is closely related to various physiological signals, among which the pulse transit time (PTT)
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] is the most representative. To be specific, PTT refers to the time a pulse takes to travel between
two different body parts. According to whether PTT is used, contactless BP measurement
methods can be divided into PTT-based methods and non-PTT methods. Both types of
methods have their limitations: PTT-based methods have high requirements for video frame
rate, content and stability, while non-PTT methods are vulnerable to individual differences.
      </p>
      <p>Considering the feasibility of BP measurement based on facial videos, our approach focuses on
the analysis of physiological signals in the facial video. To be specific, blood volume pulse (BVP),
heart rate (HR), body mass index (BMI) and age are extracted from the facial video and its frames.
Then, RBP-CNN captures high-dimensional features from the BVP to characterize the dynamic
information in the facial video. Finally, the ensemble algorithm random forest (RF) is used for
feature fusion and BP measurement.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related works</title>
      <p>
        PTT-based methods. Remote BP measurement is sensitive to head shaking [
        <xref ref-type="bibr" rid="ref10">10, 11</xref>
        ].
Non-contact PTT-based BP measurements often require video of multiple body parts or other signal
support to improve robustness. For example, Fan et al. [12] extract a palm-to-face PTT from
the video and feed it into a physical BP model. Wu et al. [13] employ PTT from two face
regions and fuse heart rate variability (HRV), BMI and BVP into a multi-modal model for BP
measurement.
      </p>
      <p>Non-PTT methods. Different from the PTT-based methods, the non-PTT methods measure
BP by fusing physiological signals such as BVP, HR, HRV, BMI, and age. Zhou et al. [14] input
the peaks and troughs of the BVP waveform into a linear regression model to predict BP. Rong
et al. [15] extract 26 features from BVP for BP estimation and train them with four machine
learning algorithms. In addition to BVP features, Luo et al. [16] take 29 meta-features (room
temperature, subjects' ages, weight, etc.) into account to estimate BP.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>Our method can be divided into two stages: RBP-CNN training to extract the BP-related feature
from BVP and RF training for multi-feature fusion and BP measuring. In this section we will
detail each of these stages in turn.</p>
      <sec id="sec-3-1">
        <title>3.1. RBP-CNN for BVP feature extraction</title>
        <p>As shown in Figure 1, the first stage of our method is BVP feature extraction using RBP-CNN.
In this part, we introduce, in order: BVP extraction, the principle and structure of
RBP-CNN, and the loss function.</p>
        <p>Nowadays, there are many excellent unsupervised [17, 18, 19, 20, 21] and supervised
[22, 23, 24, 25] methods that can extract BVP signals from facial videos. Robust pulse rate
from chrominance-based rPPG (CHROM) [18] is a traditional and effective unsupervised method,
which we use for BVP extraction. The extracted BVP signal is a one-dimensional time series.
In previous studies, BVP signals are often used to measure physiological
indicators such as HR and HRV. However, we believe that in addition to these important medical
reference indicators, high-dimensional features related to BP are implicit in the BVP signal.</p>
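<p>As an illustrative sketch of this BVP extraction step, the CHROM chrominance projection can be written in a few lines; the helper name chrom_bvp, the frame rate and the pulse band below are our assumptions, not the exact pipeline configuration.</p>

```python
import numpy as np
from scipy.signal import butter, filtfilt

def chrom_bvp(rgb, fs=30.0, band=(0.7, 3.0)):
    """Recover a BVP waveform from mean RGB skin traces with CHROM [18].

    rgb: (T, 3) array of per-frame mean R, G, B values over the face ROI.
    fs:  video frame rate in Hz.
    """
    # Normalize each channel by its temporal mean to reduce illumination bias.
    rgb_n = rgb / rgb.mean(axis=0)
    r, g, b = rgb_n[:, 0], rgb_n[:, 1], rgb_n[:, 2]

    # CHROM chrominance projections.
    x = 3.0 * r - 2.0 * g
    y = 1.5 * r + g - 1.5 * b

    # Band-pass to the plausible pulse band (here 0.7-3.0 Hz, i.e. 42-180 bpm).
    lo, hi = band
    bb, ba = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    xf, yf = filtfilt(bb, ba, x), filtfilt(bb, ba, y)

    # Alpha-tuned combination suppresses motion distortions.
    alpha = xf.std() / (yf.std() + 1e-9)
    return xf - alpha * yf
```

<p>The returned one-dimensional series is the BVP trace that the later stages consume.</p>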
        <p>We design RBP-CNN based on residual convolution and local and global attention mechanisms
to learn features of BVP signals. ResNet [26] has a strong feature representation ability thanks to
residual connections, and it is widely used in time series analysis. The local attention mechanism
dynamically and selectively focuses on local regions within sequence data. In the context
of time series, local attention enables the model to adjust its focus based
on specific parts of the input sequence, allowing more effective capture of key information
within the sequence. The global attention mechanism considers information from
the entire input sequence when making predictions or building feature representations. When processing
time series data, global attention enables the model to assign a weight to every time
step, allowing it to capture global patterns and relationships within the sequence.</p>
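<p>One common formulation of the two mechanisms described above is softmax score weighting, sketched here in NumPy; the function names, the scoring vector and the window radius are illustrative assumptions.</p>

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def global_attention(h, w):
    """Weight every time step of h (T, D) by a score h @ w, then pool."""
    scores = h @ w                      # (T,) score per time step
    alpha = softmax(scores)             # attention over the whole sequence
    return alpha @ h, alpha             # context (D,), weights (T,)

def local_attention(h, w, radius=5):
    """Re-weight each time step using only a +/-radius neighbourhood."""
    T = h.shape[0]
    scores = h @ w
    out = np.empty_like(h)
    for t in range(T):
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        alpha = softmax(scores[lo:hi])  # softmax restricted to the window
        out[t] = alpha @ h[lo:hi]       # local context vector for step t
    return out
```

<p>Global attention produces one context vector for the whole sequence, while local attention keeps a per-step representation built from each step's neighbourhood.</p>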
        <p>As illustrated in Figure 1, RBP-CNN consists of three 1D residual blocks (depicted in the left
part of Figure 1), local attention, global attention and two fully connected layers. We feed the BVP
signal and BP into RBP-CNN, and the BVP is first mapped to a high-dimensional feature space through
the three residual blocks. Then, the weight of each time step of the BVP is adjusted by the
local and global attention mechanisms. Finally, the dimension is reduced by the two fully
connected layers, and the BP is predicted.</p>
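<p>The architecture described above can be sketched in PyTorch as follows; the channel widths, attention window and head dimensions are illustrative assumptions rather than the exact RBP-CNN configuration.</p>

```python
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    """1D residual block: two convolutions with a skip connection."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(c_in, c_out, 3, padding=1), nn.BatchNorm1d(c_out), nn.ReLU(),
            nn.Conv1d(c_out, c_out, 3, padding=1), nn.BatchNorm1d(c_out),
        )
        self.skip = nn.Conv1d(c_in, c_out, 1) if c_in != c_out else nn.Identity()
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

class RBPCNN(nn.Module):
    """Sketch of RBP-CNN: residual blocks -> local/global attention -> FC head."""
    def __init__(self, channels=(16, 32, 64), window=9):
        super().__init__()
        c = (1,) + channels
        self.blocks = nn.Sequential(*[ResBlock1D(c[i], c[i + 1]) for i in range(3)])
        # Local attention as a windowed score; global attention over all steps.
        self.local_score = nn.Conv1d(channels[-1], 1, window, padding=window // 2)
        self.global_score = nn.Linear(channels[-1], 1)
        self.head = nn.Sequential(nn.Linear(channels[-1], 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, bvp):                       # bvp: (B, T)
        h = self.blocks(bvp.unsqueeze(1))         # (B, C, T) feature maps
        h = h * torch.sigmoid(self.local_score(h))                # local re-weighting
        w = torch.softmax(self.global_score(h.transpose(1, 2)), dim=1)  # (B, T, 1)
        feat = (h.transpose(1, 2) * w).sum(dim=1)                 # weighted pooling
        return self.head(feat)                    # predicted (DBP, SBP)
```

<p>Dropping the last linear layer turns this network into the BVP feature extractor used in the second stage.</p>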
        <p>It is worth noting that BP measurement based on multiple physiological signals is essentially
an imbalanced regression [27, 28, 29] task: most training
samples for BP regression come from adults and middle-aged people, while samples from
children and elderly people are fewer, so the labels are imbalanced. To cope with this problem,
balanced mean squared error (BMSE) [30] has been proposed, which addresses the label imbalance
from a statistical perspective; below we give a brief introduction to its principle.</p>
        <p>The regressor's prediction can be modeled as a Gaussian distribution, and the mean squared
error (MSE) is equivalent to the negative log-likelihood loss of this distribution p(y | x; θ):
p(y | x; θ) = N(y; y_pred, σ²_noise · I), (1)
where y is the label, x is the input, θ is the regressor's parameter, y_pred is the regressor's
prediction and σ_noise is the scale of an i.i.d. error term ε ~ N(0, σ²_noise · I). So, training
the MSE regression model is equivalent to modeling this distribution. In the imbalanced regression
task, we train on an imbalanced distribution p_train(y | x) and test on a balanced distribution
p_bal(y | x), which leads to a distribution mismatch. By Bayes' rule we get:
p_train(y | x) / p_bal(y | x) ∝ p_train(y) / p_bal(y). (2)
Since p_bal(y) is constant for the balanced test distribution, Equation 2 shows that the ratio of
p_train(y | x) to p_bal(y | x) is proportional to p_train(y), so for less-represented labels, that is,
for lower p_train(y), a regressor trained with MSE will underestimate on rare labels. BMSE assumes
that the conditional distributions p_train(x | y) and p_bal(x | y) are identical. Then
p_train(y | x) can always be expressed by p_bal(y | x) and p_train(y) as:
p_train(y | x) = p_bal(y | x) · p_train(y) / ∫ p_bal(y′ | x) · p_train(y′) dy′. (3)
Finally, for a regressor's prediction y_pred and a training label distribution prior p_train(y),
the BMSE loss is defined as:
L = − log p_train(y | x; θ)
  = − log [ p_bal(y | x; θ) · p_train(y) / ∫ p_bal(y′ | x; θ) · p_train(y′) dy′ ]
  ≅ − log N(y; y_pred, σ²_noise · I) + log ∫ N(y′; y_pred, σ²_noise · I) · p_train(y′) dy′, (4)
where ≅ hides a constant term − log p_train(y). It can be noted that the calculation of the BMSE
loss involves an integral over the label distribution that is generally intractable. For simplicity,
we use its batch-based Monte-Carlo (BMC) approximate implementation for the loss calculation
of RBP-CNN.</p>
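<p>A minimal sketch of the BMC approximation: the integral over the training label prior p_train(y′) in Equation 4 is replaced by the labels of the current batch, which turns the BMSE loss into a cross-entropy over pairwise squared distances. The function name and this NumPy formulation are ours.</p>

```python
import numpy as np

def bmc_loss(pred, target, sigma_noise=1.0):
    """Batch-based Monte-Carlo (BMC) approximation of the BMSE loss [30].

    pred, target: (B, D) arrays (here D = 2 for DBP/SBP).
    """
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    # logits[i, j] = -||pred_i - target_j||^2 / (2 * sigma_noise^2)
    diff = pred[:, None, :] - target[None, :, :]
    logits = -(diff ** 2).sum(axis=-1) / (2.0 * sigma_noise ** 2)
    # Cross-entropy with "class" j = i: pred_i should be closest to target_i.
    m = logits.max(axis=1, keepdims=True)
    log_z = m[:, 0] + np.log(np.exp(logits - m).sum(axis=1))  # stable logsumexp
    return float(np.mean(log_z - np.diag(logits)))
```

<p>When every prediction matches its own label and labels are well separated, the loss approaches zero; rare labels contribute through the batch-level normalizer rather than being down-weighted.</p>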
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Multi-feature fusion with Random Forest</title>
        <p>As demonstrated in Figure 2, the second stage of our method is multi-feature fusion and BP
prediction based on RF. In this part, we will introduce our scheme in the order of feature
extraction, feature correlation analysis, and feature fusion.</p>
        <p>In medicine, the primary cause of hypertension is arteriosclerosis, and the factors most directly
associated with arteriosclerosis are age and BMI, which is why hypertension is more
prevalent in obese and middle-aged to elderly populations. Additionally, these individuals
are also prone to abnormalities in HR. Therefore, we treat age, BMI and HR as
important features for BP measurement. Specifically, we input any frame from the facial video
into pre-trained models [31] to estimate age and BMI. Heart rate is calculated from the BVP signal
by Fourier transform.</p>
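<p>The HR computation from the BVP signal can be sketched as follows; the helper name and the search band are our assumptions.</p>

```python
import numpy as np

def heart_rate_from_bvp(bvp, fs=30.0, band=(0.7, 3.0)):
    """Estimate HR (bpm) as the dominant FFT frequency of the BVP signal."""
    bvp = np.asarray(bvp, dtype=float) - np.mean(bvp)
    spec = np.abs(np.fft.rfft(bvp))
    freqs = np.fft.rfftfreq(bvp.size, d=1.0 / fs)
    # Search only inside a plausible pulse band (here 0.7-3.0 Hz, 42-180 bpm).
    mask = (freqs >= band[0]) & (freqs <= band[1])
    f_peak = freqs[mask][spec[mask].argmax()]
    return 60.0 * f_peak
```
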
        <p>It can be seen from Figure 3 that the DBP's Pearson correlation coefficients with age, BMI and
HR are 0.301, 0.238 and 0.133 respectively (p&lt;0.001), among which DBP is moderately correlated
with age and weakly correlated with BMI and HR. As shown in Figure 4, the Pearson correlation
coefficients of SBP with age, BMI and DBP are 0.562, 0.286 (p&lt;0.001) and 0.704 (p&lt;0.05), indicating
that they have moderate, weak and strong correlations with SBP, respectively. Commonly used
physiological information such as age, BMI and HR has appeared in previous works, and
researchers have focused on its relationship with BP; however, few pay
attention to the internal correlation between DBP and SBP. We notice it and utilize DBP in SBP
prediction.</p>
        <p>Multi-feature fusion is realized by RF [32], a widely used ensemble algorithm. RF is
composed of multiple decision trees, and the final prediction is determined by combining the
outputs of the individual trees. In regression tasks, the output of each decision tree is a continuous
value, and the average of the outputs of all decision trees is taken as the final result.
RF can deal with high-dimensional and imbalanced datasets, and has the advantages of high
accuracy and robustness. At the same time, we can also evaluate the importance of each feature,
which helps us verify the effectiveness of the features through experiments.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments</title>
      <p>In the 3rd RePSS challenge Track 2, we use 322 samples from Vital Videos for training, of which
162 samples are used for training RBP-CNN and 160 for training the random forest. 200 unlabeled
samples from OBF are used for testing.</p>
      <sec id="sec-4-1">
        <title>4.1. Datasets</title>
        <p>Vital Videos (VV) [33] is a public dataset of videos with PPG and BP ground truths, which in
total contains information about 900 different participants. For each participant, two or three 30-second
uncompressed videos are collected, along with personal information (gender, age, skin color),
PPG, HR, blood oxygen saturation and BP. The dataset includes roughly equal numbers of males
and females, as well as participants of all ages and skin colors, recorded in different locations,
ensuring a variety of backgrounds and lighting conditions.</p>
        <p>OBF [34] is a large face video database for remote physiological signal measurement and
atrial fibrillation (AF) detection. It contains data from 100 healthy individuals as well as six AF
patients. For each participant, multi-modal data (RGB videos, NIR video, ECG, BVP, RF) are
recorded simultaneously during two phases, each lasting 5 minutes. For healthy participants,
the first and second phases are a resting state and a post-exercise elevated-HR state; for AF
patients, they are before and after cardioversion treatment, respectively.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Training Procedure</title>
        <p>For the RBP-CNN training stage, the BVP is extracted from the corresponding facial video, after
which the HR is calculated. Subsequently, the BVP signals and BP labels are fed into RBP-CNN
for training. After that, the last fully connected layer of RBP-CNN is removed so that it becomes
a BVP feature extractor. We then feed the BVP into the feature extractor to capture BP-related
features. At last, both the BVP features and the other physiological information are used to build the RF
training set, and the DBP and SBP are fitted through the RF regression model.</p>
        <p>For the RF regressor training stage, two regressors (DBP regressor and SBP regressor) are
trained. The BVP and HR are calculated from the facial video. At the same time, the first frame
of each sample’s facial video is used for BMI estimation (age is available in VV). Lastly, the
features learned from BVP, age, BMI, HR, DBP (SBP regressor training only) are fused to train
DBP and SBP RF regressors.</p>
        <p>There are 507 samples from 250 participants available for training initially. To improve performance
across datasets, we use the estimated age to bring the distribution of the training set closer to
the test set distribution. Finally, 322 samples from 160 participants are selected, among which
162 and 160 samples are used for training RBP-CNN and the RF regressor, respectively.</p>
        <p>The RBP-CNN model is implemented with the PyTorch framework and trained on an NVIDIA
GeForce GTX 1650 GPU for 200 epochs with a learning rate of 0.001. The RF regressor is
implemented with scikit-learn and trained on an Intel(R) Core(TM) i5-9300H CPU, and the
n_estimators, max_depth and criterion are set to 1000, 6 and absolute_error, respectively.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Evaluation Metric</title>
        <p>In the training process of the RBP-CNN model, we use mean absolute error (MAE) and MSE as the
model evaluation metrics. For the actual test of Track 2, the root mean squared errors (RMSE) of
the ground truth DBP and SBP against the submitted ones are calculated separately, and then they
are averaged as the final race score:</p>
        <p>Score = 0.5 · sqrt( (1/n) Σ_{i=1}^{n} (S_i − S′_i)² ) + 0.5 · sqrt( (1/n) Σ_{i=1}^{n} (D_i − D′_i)² ), (5)
where S_i is the ground truth SBP of the i-th test sample and S′_i is the submitted SBP of the i-th test
sample. Similarly, D_i and D′_i are the ground truth DBP and the submitted one of the i-th test sample.</p>
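<p>The score of Equation 5 can be sketched directly in NumPy (the function name is ours):</p>

```python
import numpy as np

def track2_score(sbp_true, sbp_pred, dbp_true, dbp_pred):
    """Final race score: the average of the SBP and DBP RMSEs (Equation 5)."""
    rmse = lambda a, b: np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
    return 0.5 * rmse(sbp_true, sbp_pred) + 0.5 * rmse(dbp_true, dbp_pred)
```
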
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Results</title>
        <p>As shown in Table 1, our team (PCA_Vital) achieves second place in the 3rd RePSS Challenge
Track 2. The final score of our submitted BP prediction is 13.48281, which is 0.53023 mmHg behind
first place, 0.11026 mmHg ahead of third place, and significantly ahead of fourth
place.</p>
        <p>Figure 5 shows the top 20 feature importances of the DBP and SBP RF regressors. It can be
observed that for both regressors, BMI and age rank among the top three in feature
importance. Notably, the feature importance of DBP in the SBP RF regressor reaches 0.52,
which is clearly ahead of the other features. This confirms the strong correlation between DBP and SBP.
At the same time, the BVP features also contribute substantially to BP measurement. Through
calculation, the cumulative importance of the BVP features reaches 0.56 and 0.30 in the DBP and
SBP RF regressors, respectively, verifying the effectiveness of RBP-CNN based BVP feature
extraction.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>This paper presents a video-based remote BP measurement scheme via a convolutional network
and RF feature fusion. We combine residual convolution with local and global attention mechanisms
to design RBP-CNN for learning the implicit BP-related information in BVP, both spatially and
temporally. Subsequently, we capture BMI, age and HR from the facial video and analyze their correlation
with BP. In this process, we find a strong correlation between DBP and SBP. At last, we use
RF to fuse these features to achieve BP measurement and verify the rationality of our method
by using the feature importance of RF. Our method achieves second place in the 3rd RePSS
Challenge Track 2, and we believe we can do better in the future.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Acknowledgments</title>
      <p>This work is supported by the National Natural Science Foundation of China under
Grant 62176124, Grant 62276135, and Grant 62361166670.</p>
      <p>[11] Jeong, C. I., and Finkelstein, J, Introducing Contactless Blood Pressure Assessment Using a</p>
      <p>High Speed Video Camera. Journal of Medical Systems 40 (2016) 1–10.
[12] Fan, X., Ye, Q., Yang, X., and Choudhury, D. S, Robust blood pressure estimation using
an RGB camera. Journal of Ambient Intelligence and Humanized Computing 11 (2020)
4329–4336.
[13] Wu, F. B., Wu, J. B., Tsai, R. B., and Hsu, P. C, A facial-image-based blood pressure
measurement system without calibration. IEEE Transactions on Instrumentation and
Measurement 71 (2022) 1–13.
[14] Zhou, Y., Ni, H., Zhang, Q., and Wu, Q, The noninvasive blood pressure measurement
based on facial images processing. IEEE Sensors Journal 19 (2019) 10624–10634.
[15] Rong, M., and Li, K, A blood pressure prediction method based on imaging
photoplethysmography in combination with machine learning. Biomedical Signal Processing and
Control 64 (2021) 102328.
[16] Luo, H., Yang, D., Barszczyk, A., Vempala, N., Wei, J., Wu, J. S., ... and Feng, P. Z,
Smartphone-based blood pressure measurement using transdermal optical imaging technology.
Circulation: Cardiovascular Imaging 12 (2019) e008857.
[17] Poh, Z. M., McDuff, J. D., and Picard, W. R, Advancements in noncontact, multiparameter
physiological measurements using a webcam. IEEE transactions on biomedical engineering
58 (2010) 7–11.
[18] De Haan, G., and Jeanne, V, Robust pulse rate from chrominance-based rPPG. IEEE
transactions on biomedical engineering 60 (2013) 2878–2886.
[19] X. Li, J. Chen, G. Zhao and M. Pietikäinen, Remote Heart Rate Measurement from Face
Videos under Realistic Situations, 2014 IEEE Conference on Computer Vision and Pattern
Recognition, Columbus, OH, USA, 2014, pp. 4264–4271
[20] Wang, W., Den Brinker, C. A., Stuijk, S., and De Haan, G, Algorithmic principles of remote</p>
      <p>PPG. IEEE Transactions on Biomedical Engineering 64 (2016) 1479–1491.
[21] Casado, A. C., and López, B. M, Face2PPG: An unsupervised pipeline for blood volume
pulse extraction from faces. IEEE Journal of Biomedical and Health Informatics 27 (2023)
5530–5541.
[22] Chen, W., and McDuff, D, DeepPhys: Video-based physiological measurement using
convolutional attention networks. In Proceedings of the european conference on computer
vision (ECCV), 2018, pp. 349–365.
[23] Liu, X., Fromm, J., Patel, S., and McDuff, D, Multi-task temporal shift attention networks
for on-device contactless vitals measurement. Advances in Neural Information Processing
Systems 33 (2020) 19400–19411.
[24] Liu, X., Hill, B., Jiang, Z., Patel, S., and McDuff, D, EfficientPhys: Enabling simple, fast
and accurate camera-based cardiac measurement. In Proceedings of the IEEE/CVF winter
conference on applications of computer vision, 2023, pp. 5008–5017.
[25] Yu, Z., Shen, Y., Shi, J., Zhao, H., Torr, H. P., and Zhao, G, PhysFormer: Facial video-based
physiological measurement with temporal difference transformer. In Proceedings of the
IEEE/CVF conference on computer vision and pattern recognition, 2022, pp. 4186–4196.
[26] He, K., Zhang, X., Ren, S., and Sun, J, Deep residual learning for image recognition. In
Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp.
770–778.
[27] Branco, P., Torgo, L., and Ribeiro, P. R, SMOGN: a pre-processing approach for imbalanced
regression. In First international workshop on learning with imbalanced domains: Theory
and applications. PMLR, 2017, pp. 36–50.
[28] Steininger, M., Kobs, K., Davidson, P., Krause, A., and Hotho, A, Density-based weighting
for imbalanced regression. Machine Learning 110 (2021) 2187–2211.
[29] Yang, Y., Zha, K., Chen, Y., Wang, H., and Katabi, D, Delving into deep imbalanced
regression. In International conference on machine learning. PMLR, 2021, pp. 11842–11851.
[30] Ren, J., Zhang, M., Yu, C., and Liu, Z, Balanced mse for imbalanced visual regression. In
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,
2022, pp. 7926–7935.
[31] Kuprashevich, M., and Tolstykh, I, Mivolo: Multi-input transformer for age and gender
estimation. In International Conference on Analysis of Images, Social Networks and Texts.</p>
      <p>Cham: Springer Nature Switzerland, 2023, pp. 212–226.
[32] Breiman, L, Random forests. Machine learning 45 (2001) 5–32.
[33] McDuff, D, Camera Measurement of Physiological Vital Signs. ACM Computing Surveys
55 (2023) 1–40.
[34] Li, X., Alikhani, I., Shi, J., Seppanen, T., Junttila, J., Majamaa-Voltti, K., ... and Zhao, G, The
obf database: A large face video database for remote physiological signal measurement
and atrial fibrillation detection. In 2018 13th IEEE international conference on automatic
face and gesture recognition, 2018, pp. 242–249.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>S. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vos</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Flaxman</surname>
            ,
            <given-names>D. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Danaei</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shibuya</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adair-Rohani</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , ... and
          <string-name>
            <surname>Pelizzari</surname>
            ,
            <given-names>M. P,</given-names>
          </string-name>
          <article-title>A comparative risk assessment of burden of disease and injury attributable to 67 risk factors and risk factor clusters in 21 regions,</article-title>
          <year>1990</year>
          -
          <fpage>2010</fpage>
          :
          <article-title>a systematic analysis for the Global Burden of Disease Study 2010</article-title>
          .
          <source>The lancet 380</source>
          (
          <year>2012</year>
          )
          <fpage>2224</fpage>
          -
          <lpage>2260</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perel</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mensah</surname>
            ,
            <given-names>G. A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Ezzati</surname>
            ,
            <given-names>M,</given-names>
          </string-name>
          <article-title>Global epidemiology, health burden and effective interventions for elevated blood pressure and hypertension</article-title>
          .
          <source>Nature Reviews Cardiology</source>
          <volume>18</volume>
          (
          <year>2021</year>
          )
          <fpage>785</fpage>
          -
          <lpage>802</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Olsen</surname>
            ,
            <given-names>H. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Angell</surname>
            ,
            <given-names>Y. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Asma</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boutouyrie</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burger</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chirinos</surname>
            ,
            <given-names>A. J.</given-names>
          </string-name>
          , ... and
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>G. J,</given-names>
          </string-name>
          <article-title>A call to action and a lifecourse strategy to address the global burden of raised blood pressure on current and future generations: the Lancet Commission on hypertension</article-title>
          .
          <source>The Lancet</source>
          <volume>388</volume>
          (
          <year>2016</year>
          )
          <fpage>2665</fpage>
          -
          <lpage>2712</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Hassan</surname>
            ,
            <given-names>A. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Malik</surname>
            ,
            <given-names>S. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fofi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saad</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karasfi</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ali</surname>
          </string-name>
          , S. Y., and
          <string-name>
            <surname>Meriaudeau</surname>
            ,
            <given-names>F</given-names>
          </string-name>
          ,
          <article-title>Heart rate estimation using facial video: A review</article-title>
          .
          <source>Biomedical Signal Processing and Control</source>
          <volume>38</volume>
          (
          <year>2017</year>
          )
          <fpage>346</fpage>
          -
          <lpage>360</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Rouast</surname>
            ,
            <given-names>V. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adam</surname>
            ,
            <given-names>T. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chiong</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cornforth</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Lux</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <article-title>Remote heart rate measurement using low-cost RGB face video: a technical literature review</article-title>
          .
          <source>Frontiers of Computer Science</source>
          <volume>12</volume>
          (
          <year>2018</year>
          )
          <fpage>858</fpage>
          -
          <lpage>872</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ward</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Video-based heart rate measurement: Recent advances and future prospects</article-title>
          .
          <source>IEEE Transactions on Instrumentation and Measurement</source>
          <volume>68</volume>
          (
          <year>2018</year>
          )
          <fpage>3600</fpage>
          -
          <lpage>3615</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          , and
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <article-title>Facial-video-based physiological signal measurement: Recent advances and affective applications</article-title>
          .
          <source>IEEE Signal Processing Magazine</source>
          <volume>38</volume>
          (
          <year>2021</year>
          )
          <fpage>50</fpage>
          -
          <lpage>58</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Xiao</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Avolio</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <article-title>Remote photoplethysmography for heart rate measurement: A review</article-title>
          .
          <source>Biomedical Signal Processing and Control</source>
          <volume>88</volume>
          (
          <year>2024</year>
          )
          <fpage>105608</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>P. R.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Argod</surname>
          </string-name>
          ,
          <string-name>
            <surname>Pépin</surname>
            ,
            <given-names>L. J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Lévy</surname>
            ,
            <given-names>A. P.</given-names>
          </string-name>
          ,
          <article-title>Pulse transit time: an appraisal of potential clinical applications</article-title>
          .
          <source>Thorax</source>
          <volume>54</volume>
          (
          <year>1999</year>
          )
          <fpage>452</fpage>
          -
          <lpage>457</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Shao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Tsow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Tao</surname>
          </string-name>
          ,
          <article-title>Noncontact Monitoring Breathing Pattern, Exhalation Flow Rate and Pulse Transit Time</article-title>
          .
          <source>IEEE Transactions on Biomedical Engineering</source>
          <volume>61</volume>
          (
          <year>2014</year>
          )
          <fpage>2760</fpage>
          -
          <lpage>2767</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>