<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Dynamic Blurring Approach with EfficientNet and LSTM to Enhance Privacy in Video-Based Elderly Fall Detection</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ivan Ursul</string-name>
          <email>Ivanon2@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Junaid Hussain Muzamal</string-name>
          <email>junaidhocane6728@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CMIS-2024: Seventh International Workshop on Computer Modeling and Intelligent Systems</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>FAST National University of Computer and Emerging Sciences</institution>
          ,
          <addr-line>Lahore</addr-line>
          ,
          <country country="PK">Pakistan</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Ivan Franko National University of Lviv</institution>
          ,
          <addr-line>Lviv, Universitetska, 1, 79090</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This research paper introduces a novel approach to address privacy concerns in video-based elderly fall detection systems without compromising such technologies' efficacy and real-time response. The methodology integrates EfficientNetB0 for robust feature extraction from video sequences and Long Short-Term Memory networks for accurate fall classification. While the system achieves exemplary performance metrics, including 100% scores in accuracy, area under the curve (AUC), recall, and precision, the pervasive issue of privacy infringement in video surveillance remains a significant challenge. To tackle this, we propose a dynamic blurring technique that selectively obscures identifiable features within video frames, such as faces and distinguishing clothing, thus maintaining individual anonymity. This method ensures that the privacy of the monitored individuals is preserved while retaining the essential details necessary for the fall detection algorithm to function effectively. This paper details this privacy-preserving technique and demonstrates its feasibility without detracting from the system's performance. Our findings indicate that integrating dynamic blurring into the fall detection pipeline offers a promising solution to the privacy concerns associated with video-based monitoring systems. It protects sensitive personal information while supporting a high standard of care and safety. This research contributes to the broader discourse on ethical technology use in healthcare. Moreover, it emphasizes the importance of balancing advanced monitoring capabilities with the imperative of privacy preservation.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The growing demographic of the elderly population has precipitated an increased incidence of
falls [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], a leading cause of morbidity and mortality among this group [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Recent technological
solutions for fall detection have emerged as a critical component in mitigating these risks [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
Among these, video-based fall detection systems have shown significant promise due to their
non-invasiveness and capability for real-time monitoring [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. However, video surveillance in
healthcare, particularly in homes and care facilities, raises significant privacy concerns [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This
research aims to find a balance between ensuring safety through surveillance and upholding the
right to privacy.
      </p>
      <p>
        There is an inherent tension between the efficacy of video-based fall detection and
the imperative to protect individual privacy [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. While effective in identifying falls, traditional
approaches often overlook the privacy implications of constant video monitoring [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Possible
workarounds, such as avoiding video data or implementing basic obfuscation techniques [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], either
compromise effectiveness or provide insufficient privacy [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Previous research has proposed various
methods, including wearable devices [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13">10-13</xref>
        ] and environmental sensors [
        <xref ref-type="bibr" rid="ref14 ref15 ref16">14-16</xref>
        ], to circumvent
the associated privacy issues. However, these alternatives fall short in accuracy and real-time
response capabilities compared to video-based systems.
      </p>
      <p>In response to these challenges, this paper proposes an innovative solution that retains the
advantages of video surveillance while addressing privacy concerns. Our approach employs
dynamic blurring, selectively obscuring identifiable features within video frames. Thus,
individuals are anonymized without compromising the system’s ability to detect falls. This
method differs from existing solutions by offering a real-time, privacy-preserving mechanism
that does not detract from the system’s performance. Integrating EfficientNetB0 [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] for feature
extraction and Long Short-Term Memory (LSTM) [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] networks for classifying fall events ensures high
precision in fall detection.
      </p>
      <p>This research aims to develop a fall detection system that balances the need for efficient,
real-time monitoring with the imperative of privacy preservation. Our objectives include designing
and implementing a dynamic blurring technique within a video-based fall detection framework.
Moreover, we also aim to evaluate this system’s accuracy and privacy protection performance
and demonstrate its applicability in real-world settings. This research can potentially contribute
to the development of ethically responsible technological solutions in healthcare, particularly in
the context of elderly care. This work seeks to pave the way for broader acceptance by addressing
the privacy concerns associated with video-based monitoring. Moreover, deploying such systems
enhances the safety and well-being of the elderly population.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature Review</title>
      <p>The exploration of fall detection systems, particularly for the elderly, is an area of research that
has seen substantial evolution over time. Karantonis et al. [19] implemented the first real-time
fall detection system using a triaxial accelerometer in 2006. At that time, most traditional techniques
centered around simplistic, mechanical solutions and gradually transitioned towards
incorporating technology [20]. Among the earliest methods were basic alert systems, which
relied on the user to trigger an alert manually in case of a fall [21]. While pioneering for their time,
these systems were limited by their dependence on the users to activate the alarm post-fall, which
could be compromised due to injury.</p>
      <p>
        Advancements in technology brought in a new wave of methodologies, primarily categorized
into sensor-based [
        <xref ref-type="bibr" rid="ref14 ref15 ref16">14-16</xref>
        ], wearable devices [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], and video surveillance systems [22],
[23-25], alongside other innovative approaches. Sensor-based systems often utilize accelerometers
and gyroscopes to detect sudden movements or orientations indicative of a fall. Wearable devices,
such as smartwatches [26], integrate these sensors and offer portability. However, sensor-based
and wearable systems face challenges related to user compliance, discomfort, and the potential
for false positives due to non-fall-related abrupt movements [27]. In contrast, video surveillance
systems offer a less intrusive alternative, capturing a broader context of the individual’s
environment [28]. This method’s appeal lies in its passive nature, requiring no active input or
wearables from the monitored individuals. Despite these advantages, video-based systems have
challenges [29]. High-quality video processing demands significant computational resources, and
managing vast data volumes poses storage and efficiency concerns. Moreover, the critical issue
of privacy infringement emerges, given the intrusive nature of continuous video monitoring [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>Traditional algorithms such as Support Vector Machines (SVMs) [30] and Decision Trees [31]
were widely employed in the early stages of machine learning applications for fall detection.
These methods primarily relied on handcrafted features extracted from sensor data or basic
video analytics, including motion vectors and silhouette shapes. Flow-based methods,
particularly optical flow [32], were also prominent, enabling the detection of movement patterns
by analyzing the apparent motion of objects, surfaces, and edges. While effective to a certain
extent, these approaches faced limitations in handling the high variability and complexity of
human falls. They often struggled to distinguish falls from other activities involving rapid
movements, leading to high false alarm rates [33], [34]. Additionally, their dependency on
manually crafted features restricted their adaptability, as these features might not generalize well
across different scenarios.</p>
      <p>The recent emergence of deep learning architectures like Convolutional Neural Networks
(CNNs) and Recurrent Neural Networks (RNNs) has reshaped the field [35]. Advanced
models such as ResNet [36], LSTM [37], and YOLO especially marked a leap forward in fall
detection. CNNs, with their ability to perform automatic feature extraction, have proven
particularly adept at analyzing spatial characteristics in video frames [38]. RNNs and
LSTMs, in turn, excel at capturing temporal dependencies, which is crucial for understanding
the sequence of movements leading to a fall. YOLO [39], an object detection model, brought further advancements
by enabling real-time processing. Despite their successes, the search for enhanced performance
led to exploring hybrid methods that combine multiple deep learning models. For instance,
integrating CNNs with LSTMs allows for the effective processing of video data both spatially and
temporally, offering a better understanding of fall events [40]. These hybrid approaches [41],
alongside innovative methods within deep learning frameworks, promise to address the
dynamic and varied nature of fall events [42].</p>
      <p>Recent advancements aim to address these privacy concerns while maintaining system
efficacy. Techniques such as dynamic blurring and real-time anonymization have been explored
to obscure identifiable features in video feeds. This can help safeguard individual privacy without
significantly compromising detection capabilities. Despite these efforts, there is a gap in the
literature concerning developing a system that seamlessly integrates high detection accuracy
with robust privacy protection. Our contribution to this field addresses this gap by proposing a
novel fall detection system that employs EfficientNetB0 for advanced feature extraction and
LSTM networks for accurate temporal classification, complemented by a dynamic blurring
mechanism to ensure privacy. This integrated approach promises high performance, as
evidenced by optimal accuracy, recall, and precision scores. Moreover, it introduces a viable
solution to the privacy concerns that have long shadowed video-based monitoring systems. By
achieving this delicate balance, our research paves the way for the broader acceptance of
video-based fall detection systems, ensuring the safety of the elderly population.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>This research presents a methodological framework to address the challenge of detecting falls
through video surveillance while safeguarding the privacy of those being monitored. The proposed
approach is founded on mathematical models and techniques that ensure precision, efficiency, and reliability.
The proposed method integrates state-of-the-art EfficientNetB0 for spatial feature extraction and
LSTM networks for temporal sequence analysis. Additionally, we introduce a dynamic blurring
mechanism formulated to preserve privacy by selectively obscuring identifiable features within
video frames. Figure 1 provides the overall architecture of the proposed approach.</p>
      <sec id="sec-3-1">
        <title>3.1. Dataset and Processing</title>
        <p>The dataset employed in this study was sourced from the UR Fall Detection Dataset [43],
encompassing 70 sequences, of which 30 are fall events, and 40 represent activities of daily living
(ADL). The fall events were captured using two Microsoft Kinect cameras, accompanied by
accelerometric data, whereas the ADL events were documented using a single camera (camera 0)
alongside accelerometer data. The accelerometric data was acquired through PS Move (60Hz)
and x-IMU (256Hz) devices. The dataset is structured such that each sequence comprises depth
and RGB images from both camera perspectives (parallel to the floor and ceiling-mounted),
synchronization data, and raw accelerometer readings. Each video stream is archived separately
as a sequence of PNG images. The depth data, stored in PNG16 format, necessitates rescaling to
accurately represent depth in millimeters, as follows:
D_i(x, y) = P(x, y) · s_i / 65535   (1)</p>
        <p>Where D_i(x, y) denotes the depth at position (x, y) for the i-th camera, P(x, y) represents the
pixel value at position (x, y) in the PNG16 image, and s_i is the scale ratio for the i-th camera. The
scale ratios are defined as s_0 = 6000 for fall sequences using camera 0, s_1 = 3640 for fall
sequences using camera 1, and s_0 = 7000 for ADL sequences using camera 0. The preprocessing
of video data involves a series of steps to prepare the frames for feature extraction. Initially, each
video is accessed frame by frame using OpenCV’s VideoCapture functionality. Subsequently, each
frame is resized to a uniform dimension of 224 × 224 pixels to align with the input requirements
of the EfficientNetB0 model. This resizing operation can be mathematically represented as a
function R that maps the original frame dimensions to the target dimensions, preserving the
aspect ratio and interpolating pixel values as necessary:</p>
        <p>R : ℝ^(w×h×3) → ℝ^(224×224×3)   (2)</p>
        <p>
          Where w and h denote the original width and height of the frame, respectively. After resizing,
the frames undergo normalization to scale the pixel values to the [0, 1] range, facilitating more
stable and efficient model training. The normalization process for a frame F can be defined as:
F_norm = F / 255   (3)
        </p>
        <p>This operation ensures that each pixel value in the frame is proportionally reduced to a
decimal between 0 and 1, thus standardizing the input data for subsequent processing through
the EfficientNetB0 architecture.</p>
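        <p>To make the preprocessing pipeline concrete, the following minimal sketch implements the
depth rescaling of Eq. (1) and the resizing and normalization of Eqs. (2)-(3) with OpenCV and
NumPy. The function names, file-handling details, and interpolation choice are illustrative
assumptions rather than the authors’ released code.</p>
        <preformat>
import cv2
import numpy as np

def depth_png16_to_mm(png_path, scale_ratio):
    """Rescale a PNG16 depth frame to millimeters per Eq. (1): D = P * s / 65535."""
    raw = cv2.imread(png_path, cv2.IMREAD_UNCHANGED)  # 16-bit depth image
    return raw.astype(np.float32) * scale_ratio / 65535.0

def preprocess_frame(frame):
    """Resize to 224x224 (Eq. 2) and scale pixel values to [0, 1] (Eq. 3)."""
    resized = cv2.resize(frame, (224, 224), interpolation=cv2.INTER_AREA)
    return resized.astype(np.float32) / 255.0

def load_video_frames(video_path):
    """Read a video frame by frame with OpenCV's VideoCapture, as in Section 3.1."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(preprocess_frame(frame))
    cap.release()
    return np.stack(frames)  # shape: (num_frames, 224, 224, 3)
        </preformat>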
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Feature Extraction Using EfficientNetB0</title>
        <p>The feature extraction component of our methodology is built upon the EfficientNetB0
architecture, a cutting-edge CNN known for its scalability and efficiency. EfficientNetB0 uniformly
scales the network’s depth, width, and resolution, optimizing its performance across various
constraints. At the core of EfficientNetB0 are its convolutional operations, which form the backbone of its
feature extraction capabilities. A convolutional operation on an input image or feature map can
be mathematically described as:
F_out(x, y) = Σ_{m=−k}^{k} Σ_{n=−k}^{k} F_in(m, n) · K(x − m, y − n)   (4)</p>
        <p>Where F_out is the output feature map, F_in is the input image or feature map, K is the kernel or
filter of size (2k + 1) × (2k + 1), and (x, y) denotes the pixel coordinates. This operation is
applied across the entire input feature map, extracting features through the weighted summation
of pixel values within the kernel’s receptive field. EfficientNetB0 also leverages batch
normalization to enhance training stability and convergence. Batch normalization can be defined
as:
BN(x) = γ · (x − μ_B) / √(σ_B² + ε) + β   (5)</p>
        <p>Where x is the input to the batch normalization layer, μ_B and σ_B² are the mean and variance
of the batch, respectively, γ and β are learnable parameters of the layer, and ε is a small constant
added for numerical stability. Furthermore, EfficientNetB0 employs depthwise separable
convolutions, a technique that reduces computational cost without sacrificing depth or
expressivity. A depthwise separable convolution comprises two stages: depthwise and pointwise
convolution. The depthwise convolution applies a single filter per input channel, and the
pointwise convolution then combines the output channels using a 1×1 convolution. This can
be represented as:
DW_c(x, y) = Σ_m Σ_n F_in,c(m, n) · K_c(x − m, y − n)   (6)
PW_c′(x, y) = Σ_{c=1}^{C} DW_c(x, y) · K′_c,c′   (7)</p>
        <p>Where DW_c denotes the output of the depthwise convolution for channel c, PW_c′ is the output of
the pointwise convolution for channel c′, K_c is the kernel for the depthwise convolution, and K′ is
the 1×1 kernel for the pointwise convolution. C is the number of channels. Activation
functions such as the Swish function, defined as f(x) = x · sigmoid(x), are applied after
convolutional operations to introduce non-linearity, enabling the network to learn complex
features. By integrating these elements, EfficientNetB0 provides a powerful and efficient
framework.</p>
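        <p>As a brief illustration, frame-level features can be extracted with the Keras implementation of
EfficientNetB0, using ImageNet weights and global average pooling; the paper does not state the
exact configuration, so the choices below (frozen backbone, pooled 1280-dimensional output) are
assumptions for the sketch.</p>
        <preformat>
import numpy as np
import tensorflow as tf

# EfficientNetB0 without its classification head; global average pooling yields
# one 1280-dimensional feature vector per 224x224 input frame.
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3),
)
backbone.trainable = False  # used here as a fixed feature extractor

def extract_features(frames):
    """Map (num_frames, 224, 224, 3) frames to (num_frames, 1280) features."""
    # Keras' EfficientNet rescales internally and expects inputs in [0, 255],
    # so frames normalized to [0, 1] are scaled back up first.
    return backbone.predict(frames * 255.0, verbose=0)
        </preformat>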
      </sec>
      <sec id="sec-3-2a">
        <title>3.3. Dynamic Blurring for Privacy Preservation</title>
        <p>We address privacy concerns in video-based monitoring by implementing dynamic blurring for
privacy preservation. This process involves the selective obfuscation of regions of interest (ROI)
within video frames, specifically targeting identifiable features of individuals to maintain
anonymity while preserving the utility of the data for fall detection. The identification of ROIs for
blurring is governed by a detection function D(F_t, θ), where F_t represents an input frame, and
θ denotes the parameters of the detection model, which may include facial recognition, pose
estimation, or other relevant feature detection algorithms. The output of this function is a set of
bounding boxes B = {b_1, b_2, ..., b_n}, where each b_i specifies the coordinates and dimensions of an
ROI within the frame. The dynamic blurring is then applied to these identified ROIs using a
Gaussian blur operation, mathematically described as:
G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))   (8)</p>
        <p>Where (x, y) are the coordinates relative to the center of the kernel, and σ is the standard
deviation, which controls the extent of blurring. The size of the kernel, k × k, is chosen based on
the desired level of blurriness, typically set to several times the value of σ to ensure that the edges
of the kernel contribute negligibly to the blur. The application of the Gaussian blur to an ROI b_i
within the frame F_t can be represented as:
F_blurred(x, y) = (F_t ∗ G)(x, y) = Σ_{m=−a}^{a} Σ_{n=−b}^{b} F_t(x − m, y − n) · G(m, n, σ)   (9)
for all (x, y) within b_i, where ∗ denotes the convolution operation, and a and b are half the
width and height of the Gaussian kernel, respectively.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.4. Temporal Analysis with LSTM</title>
        <p>The LSTM network is a specialized RNN designed to model temporal dependencies in sequence
data effectively. Its architecture is uniquely suited to address the vanishing gradient problem,
enabling it to capture long-term dependencies. An LSTM unit comprises three main gates: the
input gate (i), the forget gate (f), and the output gate (o), each responsible for regulating the flow
of information. We utilized bidirectional LSTM with the following structure, as shown in Figure</p>
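        <p>A minimal Keras sketch of the temporal model is shown below: a bidirectional LSTM over
per-frame EfficientNetB0 features, followed by the dense SoftMax head of Section 3.5. The
sequence length and layer sizes are illustrative assumptions, as the paper does not list these
hyperparameters.</p>
        <preformat>
import tensorflow as tf

SEQ_LEN, FEAT_DIM = 30, 1280  # frames per sequence, EfficientNetB0 feature size

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, FEAT_DIM)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dropout(0.5),                    # regularization (Section 3.6)
    tf.keras.layers.Dense(2, activation="softmax"),  # Eqs. (10)-(11)
])
        </preformat>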
      </sec>
      <sec id="sec-3-4">
        <title>3.5. Classification Framework</title>
        <p>Following the extraction of temporal features, the next step classifies each sequence as a fall
or non-fall event. This process typically involves passing the LSTM
output through one or more fully connected layers followed by a SoftMax layer for binary classification:
z = W_h · h_t + b_h   (10)
p_i = e^(z_i) / Σ_j e^(z_j)   (11)
where h_t is the output from the LSTM at time t, W_h and b_h are the weights and biases for the
dense layer, respectively, z is the logit vector, and p represents the probabilities for each class obtained
through the SoftMax function. The class with the highest probability is selected as the predicted
class for each input sequence. This framework facilitates the effective classification of video
sequences into fall or non-fall categories based on the temporal patterns identified by the LSTM
network.</p>
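        <p>For a small numerical illustration of Eqs. (10)-(11), the snippet below computes the logits and
SoftMax probabilities with NumPy; the vector and weight values are made up purely for
demonstration.</p>
        <preformat>
import numpy as np

h_t = np.array([0.2, -0.5, 0.9])   # LSTM output at time t (illustrative values)
W_h = np.array([[0.1, -0.2, 0.3],  # dense-layer weights for the two classes
                [-0.1, 0.4, 0.2]])
b_h = np.zeros(2)                  # dense-layer biases

z = W_h @ h_t + b_h              # Eq. (10): logits
p = np.exp(z) / np.exp(z).sum()  # Eq. (11): SoftMax probabilities
print("predicted class:", ["non-fall", "fall"][int(p.argmax())])
        </preformat>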
      </sec>
      <sec id="sec-3-5">
        <title>3.6. Training Process</title>
        <p>The training of the integrated model is underpinned by a mathematical framework that includes
the definition of a loss function, the selection of an optimization algorithm, and the application of
regularization techniques to prevent overfitting. The loss function L quantifies the discrepancy
between the predicted outputs p and the true labels y. For binary classification tasks, such as fall
detection, the binary cross-entropy loss is commonly used:
L(p, y) = −(1/N) Σ_{i=1}^{N} [y_i log(p_i) + (1 − y_i) log(1 − p_i)]   (15)</p>
        <p>Where N is the number of samples, y_i is the true label, and p_i is the predicted probability for
the i-th sample. The optimization of the model parameters is achieved through Stochastic
Gradient Descent (SGD), which iteratively updates the weights W based on the gradients of the
loss function:
W_{t+1} = W_t − η∇L(W_t)   (16)</p>
        <p>Where η is the learning rate, and ∇L denotes the gradient of the loss function with respect to
the weights at time t. L2 regularization and dropout techniques are applied to mitigate overfitting by adding
a penalty term to the loss function or randomly omitting units from the network during training,
respectively. The backpropagation process facilitates the computation of gradients ∇L through
the network, employing the chain rule to propagate errors from the output layer back through
the LSTM and EfficientNetB0 layers, enabling the model to learn and adjust its parameters to
minimize the loss function.</p>
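        <p>The training setup of this section can be expressed in a few lines of Keras; the learning rate,
regularization strength, batch size, and epoch count below are illustrative assumptions. With a
two-unit SoftMax output and one-hot labels, categorical cross-entropy coincides with the binary
cross-entropy of Eq. (15).</p>
        <preformat>
import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),  # Eq. (16) updates
    loss="categorical_crossentropy",  # Eq. (15) for one-hot fall/non-fall labels
    metrics=["accuracy", tf.keras.metrics.AUC(name="auc")],
)
# L2 regularization can be attached per layer, e.g.
#   tf.keras.layers.Dense(2, activation="softmax",
#                         kernel_regularizer=tf.keras.regularizers.l2(1e-4))
# history = model.fit(train_sequences, train_labels,
#                     validation_split=0.2, batch_size=8, epochs=50)
        </preformat>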
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results and Analysis</title>
      <p>The confusion matrix in Figure 4 provides a quantitative assessment of the model’s
classification accuracy. It shows the number of true positive (TP) and true negative (TN)
predictions, along with false positive (FP) and false negative (FN) predictions. In this case, the
matrix reveals perfect classification on the test data, with all fall events being correctly identified
(6 TP) and no ADL events being misclassified as falls (0 FP), implying an exceptional level of
model performance.</p>
      <p>The Receiver Operating Characteristic (ROC) curve in Figure 5 plots the true positive rate
(TPR) against the false positive rate (FPR) at various threshold settings. The area under the curve
(AUC) in this ROC curve approaches 1, which suggests excellent model performance, with a high
true positive rate and a low false positive rate across threshold values.</p>
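      <p>The reported quantities (confusion matrix counts, ROC and AUC) can be reproduced from
model outputs with scikit-learn, as in the sketch below; the variable names are placeholders for
the held-out test sequences and labels.</p>
      <preformat>
from sklearn.metrics import confusion_matrix, roc_auc_score

probs = model.predict(test_sequences)[:, 1]   # P(fall) per test sequence
preds = (probs >= 0.5).astype(int)            # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(test_labels, preds).ravel()
print(f"TP={tp} TN={tn} FP={fp} FN={fn}")         # entries of Figure 4
print("AUC:", roc_auc_score(test_labels, probs))  # area under the ROC curve
      </preformat>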
      <p>Figure 6 shows the model’s performance on real test videos; the model correctly predicted
this particular scenario as a ‘Fall,’ corroborated by the RGB image on the right. This clearly shows
an individual in a prone position on the floor. Moreover, the depth image reveals the successful
application of the dynamic blurring method. The individual’s features are indistinguishable, and
the privacy-preserving objective of the method is evident. The contours and the general posture
of the person are discernible, which is sufficient for fall detection purposes, but the finer details
necessary for personal identification have been effectively obfuscated. The blurring technique
implemented in the system is designed to activate upon detecting a human figure within the video
frame, applying a Gaussian blur where the person is detected. This ensures that any potentially
sensitive information is rendered non-identifiable, addressing privacy concerns paramount in
real-world applications of surveillance-based systems. The obscured depth image confirms that
the privacy-preserving measures do not impede the algorithm’s ability to detect a fall.</p>
      <p>Figure 7 shows the system’s prediction for this scene, labeled ‘ADL,’ which is validated by the
RGB image on the right. It depicts an individual in an upright position, supporting the prediction
that no fall has occurred. The prediction’s accuracy is a testament to the model’s ability to
effectively discern between falls and non-fall events. Furthermore, similar to the previous fall
scenario, the depth image demonstrates the application of the dynamic blurring technique. The
individual’s detailed features are indistinct, ensuring privacy is maintained. Despite the blurring,
essential characteristics for ADL recognition, such as the vertical orientation of the body and the
absence of unusual postures associated with falls, are preserved and remain detectable by the
system.</p>
      <p>The analysis of the presented results underscores the robustness and reliability of the
implemented fall detection model. This is evidenced by the convergence of the accuracy and loss
metrics, the unequivocal classification outcomes depicted in the confusion matrix, and the
favorable diagnostic characteristics portrayed by the ROC curve. These results collectively affirm
the model’s efficacy in accurately detecting fall events. It preserves the privacy of individuals
through dynamic blurring, as no identifiable features are discernible in the depth visualizations.</p>
      <p>As shown in Table 1, the comparative analysis of fall detection methodologies yields a
substantive understanding of the advancements and varying efficacies of diverse approaches in
this research domain. The table encapsulates the True Positive Rate (TPR), True Negative Rate
(TNR), and overall Accuracy, serving as pivotal metrics for the assessment of each method. The
model presented by Eltahir et al. [40] manifests a commendable balance between sensitivity and
specificity, with a TPR of 95.88% and a TNR of 97.02%, culminating in an accuracy of 97.56%.
Su et al.’s model slightly improves sensitivity to 98.07% and specificity to 99.03%, with an
analogous accuracy of 98.06%. These two models set a robust baseline in fall detection,
evidencing high efficacy. The single-stream models using RGB and Optical Flow (OF) data
individually attain a TPR of 100%, indicative of their flawless identification of fall events.
However, their specificity scores, 96.61% and 96.34%, respectively, although high, suggest a
slightly less robust capacity to classify non-fall activities accurately. This slight discrepancy is
reflected in their accuracy scores, which, while impressive at 96.99% and 96.75%, do not reach
the pinnacle of Su et al.’s model.</p>
      <p>The multi-stream approach amalgamating RGB, OF, and Pose Estimation (PE) data represents
a significant leap forward, yielding a perfect TPR and an enhanced TNR of 98.61%, leading to an
accuracy of 98.77%. This approach underscores the utility of integrating multiple data streams
for improved specificity without compromising sensitivity. EfficientNet-B0, despite a lower TPR
of 93.33%, achieves a perfect TNR of 100%. This accentuates the model’s exceptional
performance in identifying non-fall events, though it falls short of the multi-stream model’s
balanced accuracy. The improved YOLOv5s model and the single frame human binary image
approach using YOLOv5s do not disclose TPR or TNR but report accuracies of 97.2% and 96.7%,
respectively. While these figures suggest competent models, the lack of detailed TPR and TNR
data precludes a complete comparative analysis. Our proposed methodology establishes a new
benchmark, recording a flawless TPR and TNR of 100% and an unmatched accuracy of 100%.
This unprecedented performance indicates a superior ability to correctly identify fall incidents
and an unparalleled precision in confirming non-fall activities.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In this research, we have successfully developed and evaluated a novel video-based fall detection
system that prioritizes privacy without compromising the real-time detection efficacy of elderly
falls. By integrating EfficientNetB0 and LSTM networks, our methodology ensures robust feature
extraction and accurate fall event classification. The introduction of dynamic blurring as a
privacy-preserving technique represents a significant advancement, allowing for anonymizing
identifiable features within video frames while maintaining the system’s operational integrity.
Our findings reveal that this approach achieves perfect accuracy, recall, precision, and AUC
scores. It also effectively addresses the critical privacy concerns of video surveillance in sensitive
environments such as homes and elderly care facilities. Implementing dynamic blurring ensures
that the privacy of monitored individuals is safeguarded, setting a new precedent in the ethical
application of surveillance technologies in healthcare.</p>
      <p>Our future research will focus on further enhancing the adaptability and generalizability of the
system across diverse settings and populations. This includes exploring additional
privacy-preserving mechanisms and integrating multimodal data sources to enrich the system’s
contextual understanding. This research contributes significantly to elderly care technology,
presenting a practical solution to the long-standing challenge of balancing effective fall detection
with stringent privacy requirements. Our work advances the technological capabilities in this
domain and addresses critical ethical considerations. This paves the way for broader acceptance
and deployment of video-based monitoring systems in healthcare settings.</p>
      <p>S. A. Carneiro, G. P. da Silva, G. V. Leite, R. Moreno, S. J. F. Guimaraes, and H. Pedrini,
“Multi-stream deep convolutional network using high-level features applied to fall detection in video
sequences,” in 2019 International Conference on Systems, Signals and Image Processing
(IWSSIP), IEEE, 2019, pp. 293–298. Accessed: Mar. 18, 2024.</p>
      <p>N. Kaur, S. Rani, and S. Kaur, “Real-time video surveillance based human fall detection
system using hybrid haar cascade classifier,” Multimed. Tools Appl., pp. 1–19, 2024.</p>
      <p>A. Núñez-Marcos and I. Arganda-Carreras, “Transformer-based fall detection in videos,”
Eng. Appl. Artif. Intell., vol. 132, p. 107937, 2024.</p>
      <p>J. Moore et al., “Contextualizing remote fall risk: Video data capture and implementing
ethical AI,” NPJ Digit. Med., vol. 7, no. 1, p. 61, 2024.</p>
      <p>V. Fula and P. Moreno, “Wrist-Based Fall Detection: Towards Generalization across
Datasets,” Sensors, vol. 24, no. 5, p. 1679, 2024.</p>
      <p>A. Bansal, R. Sharma, and M. Kathuria, “A Vision-Based Approach to Enhance Fall
Detection with Fine-Tuned Faster R-CNN,” in 2023 International Conference on Advanced
Computing &amp; Communication Technologies (ICACCTech), IEEE, 2023, pp. 678–684.</p>
      <p>J. Gutiérrez, V. Rodríguez, and S. Martin, “Comprehensive review of vision-based fall
detection systems,” Sensors, vol. 21, no. 3, p. 947, 2021.</p>
      <p>S. Ezatzadeh and M. R. Keyvanpour, “Fall detection for elderly in assisted environments:
Video surveillance systems and challenges,” in 2017 9th international conference on
information and knowledge technology (ikt), IEEE, 2017, pp. 93–98. Accessed: Mar. 19, 2024.</p>
      <p>I. Charfi, J. Miteran, J. Dubois, M. Atri, and R. Tourki, “Optimized spatio-temporal
descriptors for real-time fall detection: comparison of support vector machine and
AdaBoost-based classification,” J. Electron. Imaging, vol. 22, no. 4, pp. 041106–041106, 2013.</p>
      <p>F.-Y. Leu, C.-Y. Ko, Y.-C. Lin, H. Susanto, and H.-C. Yu, “Fall detection and motion
classification by using decision tree on mobile phone,” in Smart Sensors Networks, Elsevier,
2017, pp. 205–237. Accessed: Mar. 19, 2024.</p>
      <p>Y.-Z. Hsieh and Y.-L. Jeng, “Development of home intelligent fall detection IoT system
based on feedback optical flow convolutional neural network,” IEEE Access, vol. 6, pp. 6048–
6057, 2017.</p>
      <p>P. Vallabh and R. Malekian, “Fall detection monitoring systems: a comprehensive review,”
J. Ambient Intell. Humaniz. Comput., vol. 9, no. 6, pp. 1809–1833, 2018.</p>
      <p>R. Igual, C. Medrano, and I. Plaza, “Challenges, issues and trends in fall detection systems,”
Biomed. Eng. OnLine, vol. 12, no. 1, p. 66, 2013, doi: 10.1186/1475-925X-12-66.</p>
      <p>Y. Fan, M. D. Levine, G. Wen, and S. Qiu, “A deep neural network for real-time detection of
falling humans in naturally occurring scenes,” Neurocomputing, vol. 260, pp. 43–58, 2017.</p>
      <p>D. Singh, M. Gupta, and R. Kumar, “BGR Images-Based Human Fall Detection Using
ResNet50 and LSTM,” in Third Congress on Intelligent Systems, vol. 608, S. Kumar, H. Sharma, K.
Balachandran, J. H. Kim, and J. C. Bansal, Eds., in Lecture Notes in Networks and Systems, vol.
608. Singapore: Springer Nature Singapore, 2023, pp. 175–186. doi:
10.1007/978-981-19-9225-4_14.</p>
      <p>N. Lu, Y. Wu, L. Feng, and J. Song, “Deep learning for fall detection: Three-dimensional CNN
combined with LSTM on video kinematic data,” IEEE J. Biomed. Health Inform., vol. 23, no. 1,
pp. 314–323, 2018.</p>
      <p>C. Su, J. Wei, D. Lin, L. Kong, and Y. L. Guan, “A novel model for fall detection and action
recognition combined lightweight 3D-CNN and convolutional LSTM networks,” Pattern Anal.
Appl., vol. 27, no. 1, pp. 1–16, 2024.</p>
      <p>T. Chen, Z. Ding, and B. Li, “Elderly fall detection based on improved YOLOv5s network,”
IEEE Access, vol. 10, pp. 91273–91282, 2022.</p>
      <p>M. M. Eltahir et al., “Deep Transfer Learning-Enabled Activity Identification and Fall
Detection for Disabled People.,” Comput. Mater. Contin., vol. 75, no. 2, 2023, Accessed: Mar.
18, 2024.</p>
      <p>S. Hwang, M. Ki, S.-H. Lee, S. Park, and B.-K. Jeon, “Cut and continuous paste towards
real-time deep fall detection,” in ICASSP 2022-2022 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP), IEEE, 2022, pp. 1775–1779. Accessed: Mar. 18, 2024.</p>
      <p>Y. Wang, R. Song, and X. Zhang, “Real-time human fall recognition based on deep learning
methods and single depth image with privacy requirements,” in 2022 37th Youth Academic
Annual Conference of Chinese Association of Automation (YAC), IEEE, 2022, pp. 1548–1553.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>W. W.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. S.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Jing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. R.</given-names>
            <surname>McFaull</surname>
          </string-name>
          , and
          <string-name>
            <surname>M. D. Cusimano</surname>
          </string-name>
          , “
          <article-title>Predictors of falls and mortality among elderly adults with traumatic brain injury: a nationwide, population-based study</article-title>
          ,”
          <source>PloS One</source>
          , vol.
          <volume>12</volume>
          , no.
          <issue>4</issue>
          , p.
          <fpage>e0175868</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kehoe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. E.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Edwards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Yates</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Lecky</surname>
          </string-name>
          , “
          <article-title>The changing face of major trauma in the UK,” Emerg</article-title>
          . Med. J., vol.
          <volume>32</volume>
          , no.
          <issue>12</issue>
          , pp.
          <fpage>911</fpage>
          -
          <lpage>915</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Gharghan</surname>
          </string-name>
          and
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Hashim</surname>
          </string-name>
          , “
          <article-title>A comprehensive review of elderly fall detection using wireless communication and artificial intelligence techniques</article-title>
          ,” Measurement, p.
          <fpage>114186</fpage>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Egeonu</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Jia</surname>
          </string-name>
          , “
          <article-title>A systematic literature review of computer vision-based biomechanical models for physical workload estimation</article-title>
          ,” Ergonomics, pp.
          <fpage>1</fpage>
          -
          <lpage>24</lpage>
          , Jan.
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P.</given-names>
            <surname>Khatiwada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-C.</given-names>
            <surname>Lin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Blobel</surname>
          </string-name>
          , “
          <article-title>Patient-Generated Health Data (PGHD): Understanding, Requirements, Challenges, and Existing Techniques for Data Security and Privacy,”</article-title>
          <string-name>
            <given-names>J.</given-names>
            <surname>Pers</surname>
          </string-name>
          . Med., vol.
          <volume>14</volume>
          , no.
          <issue>3</issue>
          , p.
          <fpage>282</fpage>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Qaraqe</surname>
          </string-name>
          et al.,
          <article-title>“PublicVision: A Secure Smart Surveillance System for Crowd Behavior Recognition,” IEEE Access</article-title>
          , vol.
          <volume>12</volume>
          , pp.
          <fpage>26474</fpage>
          -
          <lpage>26491</lpage>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Rajagopalan</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Litvan</surname>
          </string-name>
          , and T.-P. Jung, “
          <article-title>Fall prediction and prevention systems: recent trends, challenges</article-title>
          , and future research directions,” Sensors, vol.
          <volume>17</volume>
          , no.
          <issue>11</issue>
          , p.
          <fpage>2509</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>V.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dhall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pal</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Khan</surname>
          </string-name>
          , “
          <article-title>Motion and region aware adversarial learning for fall detection with thermal imaging,” in 2020 25th international conference on pattern recognition (ICPR)</article-title>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>6321</fpage>
          -
          <lpage>6328</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ravi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Climent-Pérez</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Florez-Revuelta</surname>
          </string-name>
          ,
          <article-title>“A review on visual privacy preservation techniques for active and assisted living,” Multimed</article-title>
          .
          <source>Tools Appl.</source>
          , vol.
          <volume>83</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>14715</fpage>
          -
          <lpage>14755</lpage>
          , Jul.
          <year>2023</year>
          , doi: 10.1007/s11042-023-15775-2.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Pierleoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Belli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Palma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pellegrini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pernini</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Valenti</surname>
          </string-name>
          ,
          <article-title>“A high reliability wearable device for elderly fall detection,” IEEE Sens</article-title>
          . J., vol.
          <volume>15</volume>
          , no.
          <issue>8</issue>
          , pp.
          <fpage>4544</fpage>
          -
          <lpage>4553</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Alrasheedy</surname>
            ,
            <given-names>M. N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muniyandi</surname>
            ,
            <given-names>R. C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Fauzi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2022</year>
          ,
          <article-title>October)</article-title>
          .
          <article-title>Text-Based Emotion Detection and Applications: A Literature Review</article-title>
          .
          <source>In 2022 International Conference on Cyber Resilience (ICCR)</source>
          (pp.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          ). IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Kwong</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muzamal</surname>
            ,
            <given-names>J. H.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Khan</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          (
          <year>2022</year>
          , November).
          <article-title>Privacy Pro: Spam Calls Detection Using Voice Signature Analysis and Behavior-Based Filtering</article-title>
          .
          <source>In 2022 17th International Conference on Emerging Technologies (ICET)</source>
          (pp.
          <fpage>184</fpage>
          -
          <lpage>189</lpage>
          ). IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>E.</given-names>
            <surname>Torti</surname>
          </string-name>
          et al.,
          <article-title>“Embedded real-time fall detection with deep learning on wearable devices,” in 2018 21st euromicro conference on digital system design (DSD)</article-title>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>405</fpage>
          -
          <lpage>412</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Y. S.</given-names>
            <surname>Delahoz</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Labrador</surname>
          </string-name>
          , “
          <article-title>Survey on fall detection and fall prevention using wearable and external sensors</article-title>
          ,
          <source>” Sensors</source>
          , vol.
          <volume>14</volume>
          , no.
          <issue>10</issue>
          , pp.
          <fpage>19806</fpage>
          -
          <lpage>19842</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nooruddin</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. M. Islam</surname>
            ,
            <given-names>F. A.</given-names>
          </string-name>
          <string-name>
            <surname>Sharna</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Alhetari</surname>
            , and
            <given-names>M. N.</given-names>
          </string-name>
          <string-name>
            <surname>Kabir</surname>
          </string-name>
          , “
          <article-title>Sensor-based fall detection systems: a review,”</article-title>
          <string-name>
            <given-names>J. Ambient</given-names>
            <surname>Intell</surname>
          </string-name>
          .
          <source>Humaniz. Comput.</source>
          , vol.
          <volume>13</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>2735</fpage>
          -
          <lpage>2751</lpage>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. U.</given-names>
            <surname>Rehman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yongchareon</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P. H. J.</given-names>
            <surname>Chong</surname>
          </string-name>
          , “
          <article-title>Sensor technologies for fall detection systems: A review,” IEEE Sens</article-title>
          . J., vol.
          <volume>20</volume>
          , no.
          <issue>13</issue>
          , pp.
          <fpage>6889</fpage>
          -
          <lpage>6919</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>B.</given-names>
            <surname>Koonce</surname>
          </string-name>
          , “EfficientNet,” in
          <source>Convolutional Neural Networks with Swift for Tensorflow</source>
          , Berkeley, CA: Apress,
          <year>2021</year>
          , pp.
          <fpage>109</fpage>
          -
          <lpage>123</lpage>
          . doi: 10.1007/978-1-4842-6168-2_10.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Graves</surname>
          </string-name>
          <article-title>, “Long Short-Term Memory,” in Supervised Sequence Labelling with Recurrent Neural Networks</article-title>
          , vol.
          <volume>385</volume>
          , in Studies in
          <source>Computational Intelligence</source>
          , vol.
          <volume>385</volume>
          . , Berlin, Heidelberg: Springer Berlin Heidelberg,
          <year>2012</year>
          , pp.
          <fpage>37</fpage>
          -
          <lpage>45</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>642</fpage>
          -24797-
          <issue>2</issue>
          _4.
          <string-name>
            <surname>D. M. Karantonis</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          <string-name>
            <surname>Narayanan</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Mathie</surname>
            ,
            <given-names>N. H.</given-names>
          </string-name>
          <string-name>
            <surname>Lovell</surname>
            , and
            <given-names>B. G.</given-names>
          </string-name>
          <string-name>
            <surname>Celler</surname>
          </string-name>
          , “
          <article-title>Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring</article-title>
          ,
          <source>” IEEE Trans. Inf. Technol. Biomed.</source>
          , vol.
          <volume>10</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>156</fpage>
          -
          <lpage>167</lpage>
          ,
          <year>2006</year>
          . A.
          <string-name>
            <surname>K. Bourke</surname>
            ,
            <given-names>J. V.</given-names>
          </string-name>
          <article-title>O'brien, and</article-title>
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Lyons</surname>
          </string-name>
          , “
          <article-title>Evaluation of a threshold-based tri-axial accelerometer fall detection algorithm,” Gait Posture</article-title>
          , vol.
          <volume>26</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>194</fpage>
          -
          <lpage>199</lpage>
          ,
          <year>2007</year>
          . D. A.
          <string-name>
            <surname>Ganz</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Bao</surname>
            ,
            <given-names>P. G.</given-names>
          </string-name>
          <string-name>
            <surname>Shekelle</surname>
            , and
            <given-names>L. Z.</given-names>
          </string-name>
          <string-name>
            <surname>Rubenstein</surname>
          </string-name>
          , “
          <article-title>Will my patient fall?,”</article-title>
          <string-name>
            <surname>Jama</surname>
          </string-name>
          , vol.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          297, no.
          <issue>1</issue>
          , pp.
          <fpage>77</fpage>
          -
          <lpage>86</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <source>Annual Conference of Chinese Association of Automation (YAC)</source>
          , IEEE,
          <year>2022</year>
          , pp.
          <fpage>1548</fpage>
          -
          <lpage>1553</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <source>Accessed: Mar. 18</source>
          ,
          <year>2024</year>
          .
          <string-name>
            <given-names>B.</given-names>
            <surname>Kwolek</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Kepski</surname>
          </string-name>
          , “
          <article-title>Human fall detection on embedded platform using depth maps and wireless accelerometer,” Comput. Methods Programs Biomed</article-title>
          ., vol.
          <volume>117</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>489</fpage>
          -
          <lpage>501</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>