<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Dynamic Blurring Approach with EfficientNet and LSTM to Enhance Privacy in Video-Based Elderly Fall Detection</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Ivan</forename><surname>Ursul</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Ivan Franko National University of Lviv</orgName>
								<address>
									<addrLine>Universitetska</addrLine>
									<postCode>1, 79090</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Junaid</forename><surname>Hussain Muzamal</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">National University of Computer and Emerging Sciences</orgName>
								<address>
									<settlement>Lahore</settlement>
									<country key="PK">Pakistan</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A Dynamic Blurring Approach with EfficientNet and LSTM to Enhance Privacy in Video-Based Elderly Fall Detection</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">9C63DF28C990A0629AB99B6CBFE8382A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:23+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Elderly Fall Detection</term>
					<term>Privacy Preservation</term>
					<term>Video Surveillance</term>
					<term>EfficientNetB0</term>
					<term>Long Short-Term Memory (LSTM)</term>
					<term>Dynamic Blurring</term>
					<term>Real-Time Monitoring</term>
					<term>Feature Extraction</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This research paper introduces a novel approach to address privacy concerns in video-based elderly fall detection systems without compromising such technologies' efficacy and real-time response. The methodology integrates EfficientNetB0 for robust feature extraction from video sequences and Long Short-Term Memory networks for accurate fall classification. Despite achieving exemplary performance metrics, including 100% scores in accuracy, Area Under the Curve (AUC), recall, and precision, the pervasive issue of privacy infringement in video surveillance remains a significant challenge. To tackle this, we propose a dynamic blurring technique that selectively obscures identifiable features within video frames, such as faces and distinguishing clothing, thus maintaining individual anonymity. This method ensures that the privacy of the monitored individuals is preserved while retaining the essential details necessary for the fall detection algorithm to function effectively. This paper details this privacy-preserving technique and demonstrates its feasibility without detracting from the system's performance. Our findings indicate that integrating dynamic blurring into the fall detection pipeline offers a promising solution to the privacy concerns associated with video-based monitoring systems. It protects sensitive personal information while providing a high standard of care and safety. This research contributes to the broader discourse on ethical technology use in healthcare. Moreover, it emphasizes the importance of balancing advanced monitoring capabilities with the imperative of privacy preservation.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The growing demographic of the elderly population has precipitated an increased incidence of falls <ref type="bibr" target="#b0">[1]</ref>, a leading cause of morbidity and mortality among this group <ref type="bibr" target="#b1">[2]</ref>. Recent technological solutions for fall detection have emerged as a critical component in mitigating these risks <ref type="bibr" target="#b2">[3]</ref>. Among these, video-based fall detection systems have shown significant promise due to their non-invasiveness and capability for real-time monitoring <ref type="bibr" target="#b3">[4]</ref>. However, video surveillance in healthcare, particularly in homes and care facilities, raises significant privacy concerns <ref type="bibr" target="#b4">[5]</ref>. This research aims to find a balance between ensuring safety through surveillance and upholding the right to privacy.</p><p>There is an inherent tension between the efficacy of video-based fall detection and the imperative to protect individual privacy <ref type="bibr" target="#b5">[6]</ref>. While effective in identifying falls, traditional approaches often overlook the privacy implications of constant video monitoring <ref type="bibr" target="#b6">[7]</ref>. Possible solutions either avoid video data altogether or implement basic obfuscation techniques <ref type="bibr" target="#b7">[8]</ref>, but these compromise effectiveness or provide insufficient privacy <ref type="bibr" target="#b8">[9]</ref>. Previous research has proposed various methods, including wearable devices <ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref><ref type="bibr" target="#b11">[12]</ref><ref type="bibr" target="#b12">[13]</ref> and environmental sensors <ref type="bibr" target="#b13">[14]</ref><ref type="bibr" target="#b14">[15]</ref><ref type="bibr" target="#b15">[16]</ref>, to circumvent the associated privacy issues. 
However, these alternatives fall short in accuracy and real-time response capabilities compared to video-based systems.</p><p>In response to these challenges, this paper proposes an innovative solution that retains the advantages of video surveillance while addressing privacy concerns. Our approach employs dynamic blurring, selectively obscuring identifiable features within video frames. Thus, individuals are anonymized without compromising the system's ability to detect falls. This method differs from existing solutions by offering a real-time, privacy-preserving mechanism that does not detract from the system's performance. Integrating EfficientNetB0 <ref type="bibr" target="#b16">[17]</ref> for feature extraction and Long Short-Term Memory (LSTM) <ref type="bibr" target="#b17">[18]</ref> networks for classifying fall events ensures high precision in fall detection.</p><p>This research aims to develop a fall detection system that reconciles the need for efficient, real-time monitoring with the imperative of privacy preservation. Our objectives include designing and implementing a dynamic blurring technique within a video-based fall detection framework. Moreover, we also aim to evaluate this system's accuracy and privacy protection performance and demonstrate its applicability in real-world settings. This research can potentially contribute to the development of ethically responsible technological solutions in healthcare, particularly in the context of elderly care. By addressing the privacy concerns associated with video-based monitoring, this work seeks to pave the way for the broader acceptance and deployment of systems that enhance the safety and well-being of the elderly population.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Literature Review</head><p>The exploration of fall detection systems, particularly for the elderly, is an area of research that has seen substantial evolution over time. In 2006, Dean et al. <ref type="bibr" target="#b18">[19]</ref> implemented the first real-time fall detection system using a triaxial accelerometer. At that time, most traditional techniques centered around simplistic, mechanical solutions and gradually transitioned towards incorporating technology <ref type="bibr" target="#b19">[20]</ref>. Among the earliest methods were basic alert systems, which relied on the user to trigger an alert manually in case of a fall <ref type="bibr" target="#b20">[21]</ref>. While pioneering for their time, these systems were limited by their dependence on the users to activate the alarm post-fall, which could be compromised due to injury.</p><p>Advancements in technology brought in a new wave of methodologies, primarily categorized into sensor-based systems <ref type="bibr" target="#b13">[14]</ref><ref type="bibr" target="#b14">[15]</ref><ref type="bibr" target="#b15">[16]</ref>, wearable devices <ref type="bibr" target="#b9">[10]</ref>, <ref type="bibr" target="#b11">[12]</ref>, and video surveillance systems <ref type="bibr" target="#b21">[22]</ref>, <ref type="bibr" target="#b22">[23]</ref><ref type="bibr" target="#b23">[24]</ref><ref type="bibr" target="#b24">[25]</ref>, alongside other innovative approaches. Sensor-based systems often utilize accelerometers and gyroscopes to detect sudden movements or orientations indicative of a fall. Wearable devices, such as smartwatches <ref type="bibr" target="#b25">[26]</ref>, integrate these sensors and offer portability. However, sensor-based and wearable systems face challenges related to user compliance, discomfort, and the potential for false positives due to non-fall-related abrupt movements <ref type="bibr" target="#b26">[27]</ref>. 
In contrast, video surveillance systems offer a less intrusive alternative, capturing a broader context of the individual's environment <ref type="bibr" target="#b27">[28]</ref>. This method's appeal lies in its passive nature, requiring no active input or wearables from the monitored individuals. Despite these advantages, video-based systems face challenges <ref type="bibr" target="#b28">[29]</ref>. High-quality video processing demands significant computational resources, and managing vast data volumes poses storage and efficiency concerns. Moreover, the critical issue of privacy infringement emerges, given the intrusive nature of continuous video monitoring <ref type="bibr" target="#b8">[9]</ref>.</p><p>Traditional algorithms such as Support Vector Machines (SVMs) <ref type="bibr" target="#b29">[30]</ref> and Decision Trees <ref type="bibr" target="#b30">[31]</ref> were widely employed in the early stages of machine learning applications for fall detection. These methods primarily relied on handcrafted features extracted from sensor data or basic video analytics, including motion vectors and silhouette shapes. Flow-based methods, particularly optical flow <ref type="bibr" target="#b31">[32]</ref>, were also prominent, enabling the detection of movement patterns by analyzing the apparent motion of objects, surfaces, and edges. While effective to a certain extent, these approaches faced limitations in handling the high variability and complexity of human falls. They often struggled to distinguish falls from other activities involving rapid movements, leading to high false alarm rates <ref type="bibr" target="#b32">[33]</ref>, <ref type="bibr" target="#b33">[34]</ref>. 
Additionally, their dependency on manually crafted features restricted their adaptability, as these features might not generalize well across different scenarios.</p><p>The recent emergence of deep learning architectures like Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) has reshaped the field <ref type="bibr" target="#b34">[35]</ref>. Advanced models such as ResNet <ref type="bibr" target="#b35">[36]</ref>, LSTM <ref type="bibr" target="#b36">[37]</ref>, and YOLO marked a particular leap forward in fall detection. CNNs, with their ability to perform automatic feature extraction, have proven particularly adept at analyzing spatial characteristics in video frames <ref type="bibr" target="#b37">[38]</ref>. RNNs and LSTMs, meanwhile, excel at capturing temporal dependencies, which is crucial for understanding the sequence of movements leading to a fall. YOLO <ref type="bibr" target="#b38">[39]</ref>, an object detection model, brought further advancements by enabling real-time processing. Despite their successes, the search for enhanced performance led to exploring hybrid methods that combine multiple deep learning models. For instance, integrating CNNs with LSTMs allows for the effective processing of video data both spatially and temporally, offering a better understanding of fall events <ref type="bibr" target="#b39">[40]</ref>. These hybrid approaches <ref type="bibr" target="#b40">[41]</ref>, alongside innovative methods within deep learning frameworks, promise to address the dynamics of fall detection <ref type="bibr" target="#b41">[42]</ref>.</p><p>Recent advancements aim to address these privacy concerns while maintaining system efficacy. Techniques such as dynamic blurring and real-time anonymization have been explored to obscure identifiable features in video feeds. This can help safeguard individual privacy without significantly compromising detection capabilities. 
Despite these efforts, there is a gap in the literature concerning the development of a system that seamlessly integrates high detection accuracy with robust privacy protection. Our contribution to this field addresses this gap by proposing a novel fall detection system that employs EfficientNetB0 for advanced feature extraction and LSTM networks for accurate temporal classification, complemented by a dynamic blurring mechanism to ensure privacy. This integrated approach promises high performance, as evidenced by optimal accuracy, recall, and precision scores. Moreover, it introduces a viable solution to the privacy concerns that have long shadowed video-based monitoring systems. By achieving this delicate balance, our research paves the way for the broader acceptance of video-based fall detection systems, ensuring the safety of the elderly population.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology</head><p>This research presents a methodological framework to address the challenge of detecting falls through video surveillance while safeguarding the privacy of monitored individuals. The foundation of the proposed approach rests on mathematical models and techniques to ensure precision, efficiency, and reliability. The proposed method integrates the state-of-the-art EfficientNetB0 for spatial feature extraction and LSTM networks for temporal sequence analysis. Additionally, we introduce a dynamic blurring mechanism formulated to preserve privacy by selectively obscuring identifiable features within video frames. Figure <ref type="figure" target="#fig_0">1</ref> provides the overall architecture of the proposed approach.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Dataset and Processing</head><p>The dataset employed in this study was sourced from the UR Fall Detection Dataset <ref type="bibr" target="#b42">[43]</ref>, encompassing 70 sequences, of which 30 are fall events and 40 represent activities of daily living (ADL). The fall events were captured using two Microsoft Kinect cameras, accompanied by accelerometric data, whereas the ADL events were documented using a single camera (camera 0) alongside accelerometer data. The accelerometric data was acquired through PS Move (60 Hz) and x-IMU (256 Hz) devices. The dataset is structured such that each sequence comprises depth and RGB images from both camera perspectives (parallel to the floor and ceiling-mounted), synchronization data, and raw accelerometer readings. Each video stream is archived separately as a sequence of PNG images. The depth data, stored in PNG16 format, necessitates rescaling to accurately represent depth in millimeters (D) as follows:</p><formula xml:id="formula_0">D_i(x, y) = (V(x, y) ⋅ S_i) / 65535 (1)</formula><p>Where D_i(x, y) denotes the depth at position (x, y) for the i-th camera, V(x, y) represents the pixel value at position (x, y) in the PNG16 image, and S_i is the scale ratio for the i-th camera. The scale ratios are defined as S_0 = 6000 for fall sequences using camera 0, S_1 = 3640 for fall sequences using camera 1, and S_0 = 7000 for ADL sequences using camera 0. The preprocessing of video data involves a series of steps to prepare the frames for feature extraction. Initially, each video is accessed frame by frame using OpenCV's VideoCapture functionality. Subsequently, each frame is resized to a uniform dimension of 224 × 224 pixels to align with the input requirements of the EfficientNetB0 model. 
This resizing operation can be mathematically represented as a function R that maps the original frame dimensions to the target dimensions, preserving the aspect ratio and interpolating pixel values as necessary:</p><formula xml:id="formula_1">R: ℝ^(w×h×3) → ℝ^(224×224×3)<label>(2)</label></formula><p>Where w and h denote the original width and height of the frame, respectively. After resizing, the frames undergo normalization to scale the pixel values to the [0, 1] range, facilitating more stable and efficient model training. The normalization process for a frame F can be defined as:</p><formula xml:id="formula_2">F_normalized = F / 255<label>(3)</label></formula><p>This operation ensures that each pixel value in the frame is proportionally reduced to a decimal between 0 and 1, thus standardizing the input data for subsequent processing through the EfficientNetB0 architecture.</p></div>
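To make the preprocessing concrete, the depth rescaling of Eq. (1) and the pixel normalization of Eq. (3) can be sketched in NumPy as follows; the function names and the synthetic frames are illustrative, and the OpenCV-based frame capture and resizing described above are omitted for brevity:

```python
import numpy as np

def rescale_depth(png16_frame, scale_ratio):
    """Rescale a raw PNG16 depth frame to millimetres, per Eq. (1):
    D_i(x, y) = V(x, y) * S_i / 65535."""
    return png16_frame.astype(np.float64) * scale_ratio / 65535.0

def normalize_frame(frame):
    """Scale 8-bit pixel values to the [0, 1] range, per Eq. (3)."""
    return frame.astype(np.float64) / 255.0

# Synthetic 2x2 depth patch from camera 0 in a fall sequence (S_0 = 6000)
raw = np.array([[0, 65535], [32768, 16384]], dtype=np.uint16)
depth_mm = rescale_depth(raw, 6000)

# Synthetic RGB frame already resized to the 224x224 EfficientNetB0 input
rgb = np.full((224, 224, 3), 128, dtype=np.uint8)
norm = normalize_frame(rgb)
```

A pixel holding the maximum PNG16 value (65535) thus maps exactly to the camera's scale ratio in millimetres (here 6000 mm).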
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Feature Extraction Using EfficientNetB0</head><p>The feature extraction component of our methodology is built upon the EfficientNetB0 architecture, a cutting-edge CNN known for its scalability and efficiency. EfficientNetB0 uniformly scales the network's depth, width, and resolution, optimizing its performance across various constraints. Central to EfficientNetB0 are its convolutional operations, which form the backbone of its feature extraction capabilities. A convolutional operation on an input image or feature map can be mathematically described as:</p><formula xml:id="formula_3">F_out(x, y) = ∑_(i=−a)^(a) ∑_(j=−b)^(b) K(i, j) ⋅ F_in(x − i, y − j)<label>(4)</label></formula><p>Where F_out is the output feature map, F_in is the input image or feature map, K is the kernel or filter of size (2a + 1) × (2b + 1), and (x, y) denotes the pixel coordinates. This operation is applied across the entire input feature map, extracting features through the weighted summation of pixel values within the kernel's receptive field. EfficientNetB0 also leverages batch normalization to enhance training stability and convergence. Batch normalization can be defined as:</p><formula xml:id="formula_4">BN(x) = γ ⋅ (x − μ_B) / √(σ²_B + ϵ) + β<label>(5)</label></formula><p>Where x is the input to the batch normalization layer, μ_B and σ²_B are the mean and variance of the batch, respectively, γ and β are learnable parameters of the layer, and ϵ is a small constant added for numerical stability. Furthermore, EfficientNetB0 employs depthwise separable convolutions, a technique that reduces computational cost without sacrificing depth or expressivity. A depthwise separable convolution comprises two stages: depthwise and pointwise convolution. The depthwise convolution applies a single filter per input channel, and the pointwise convolution then combines the output channels using a 1×1 convolution. 
This can be represented as:</p><formula xml:id="formula_5">DW(x, y, c) = ∑_(i=−a)^(a) ∑_(j=−b)^(b) K_c(i, j) ⋅ F_in(x − i, y − j, c) (6) PW(x, y, c′) = ∑_(c=1)^(C) K′(c, c′) ⋅ DW(x, y, c)<label>(7)</label></formula><p>Where DW denotes the output of the depthwise convolution for channel c, PW is the output of the pointwise convolution for output channel c′, K_c is the kernel for the depthwise convolution, and K′ is the 1×1 kernel for the pointwise convolution. C is the number of input channels. Activation functions such as the Swish function, defined as f(x) = x ⋅ sigmoid(x), are applied after convolutional operations to introduce non-linearity, enabling the network to learn complex features. By integrating these elements, EfficientNetB0 provides a powerful and efficient framework.</p></div>
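The depthwise and pointwise stages of Eqs. (6) and (7), together with the Swish activation, can be illustrated with a minimal NumPy sketch. This uses the cross-correlation form with 'valid' padding, as most deep learning frameworks do; the shapes and loop-based implementation are for clarity only and do not reflect EfficientNetB0's actual optimized kernels:

```python
import numpy as np

def depthwise_conv(F_in, kernels):
    """Depthwise stage, Eq. (6): one k x k filter per input channel.
    F_in: (H, W, C); kernels: (k, k, C); 'valid' padding."""
    H, W, C = F_in.shape
    k = kernels.shape[0]
    out = np.zeros((H - k + 1, W - k + 1, C))
    for c in range(C):
        for x in range(out.shape[0]):
            for y in range(out.shape[1]):
                out[x, y, c] = np.sum(kernels[:, :, c] * F_in[x:x + k, y:y + k, c])
    return out

def pointwise_conv(DW, kernels_1x1):
    """Pointwise stage, Eq. (7): a 1 x 1 convolution mixing channels.
    DW: (H, W, C); kernels_1x1: (C, C_out)."""
    return DW @ kernels_1x1

def swish(x):
    """Swish activation f(x) = x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
F = rng.standard_normal((8, 8, 3))            # toy input feature map
dw = depthwise_conv(F, rng.standard_normal((3, 3, 3)))
pw = pointwise_conv(dw, rng.standard_normal((3, 4)))
act = swish(pw)
```

Note how the depthwise stage leaves the channel count unchanged while the pointwise stage alone mixes and expands channels, which is where the computational savings over a full convolution come from.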
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Dynamic Blurring for Privacy Preservation</head><p>We address privacy concerns in video-based monitoring by implementing dynamic blurring for privacy preservation. This process involves the selective obfuscation of regions of interest (ROI) within video frames, specifically targeting identifiable features of individuals to maintain anonymity while preserving the utility of the data for fall detection. The identification of ROIs for blurring is governed by a detection function D(F_in, θ), where F_in represents an input frame, and θ denotes the parameters of the detection model, which may include facial recognition, pose estimation, or other relevant feature detection algorithms. The output of this function is a set of bounding boxes B = {b_1, b_2, ..., b_n}, where each b_i specifies the coordinates and dimensions of an ROI within the frame. The dynamic blurring is then applied to these identified ROIs using a Gaussian blur operation, mathematically described as:</p><formula xml:id="formula_6">G(x, y, σ) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))<label>(8)</label></formula><p>Where (x, y) are the coordinates relative to the center of the kernel, and σ is the standard deviation, which controls the extent of blurring. The size of the kernel, k × k, is chosen based on the desired level of blurriness, typically set to several times the value of σ to ensure that the edges of the kernel contribute negligibly to the blur. The application of the Gaussian blur to an ROI b_i within the frame F_in can be represented as:</p><formula xml:id="formula_7">F_blurred(x, y) = (F_in ∗ G)(x, y) = ∑_(m=−a)^(a) ∑_(n=−b)^(b) F_in(x − m, y − n) ⋅ G(m, n, σ) (9)</formula><p>for all (x, y) within b_i, where ∗ denotes the convolution operation, and a and b are half the width and height of the Gaussian kernel, respectively.</p></div>
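A minimal sketch of this blurring step, assuming a single-channel frame and an ROI given as an (x, y, w, h) bounding box: the kernel is sampled from Eq. (8) and renormalized to sum to one, and the convolution of Eq. (9) is applied only inside the box. In a deployed system a library routine such as OpenCV's GaussianBlur would typically replace the explicit loops:

```python
import numpy as np

def gaussian_kernel(k, sigma):
    """Discrete k x k Gaussian kernel sampled from Eq. (8), then
    renormalized so its entries sum to 1 (preserving brightness)."""
    a = k // 2
    ys, xs = np.mgrid[-a:a + 1, -a:a + 1]
    g = np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()

def blur_roi(frame, box, k=9, sigma=2.0):
    """Apply the Gaussian blur of Eq. (9) only inside the bounding box
    (x, y, w, h) of a detected ROI; pixels outside it are untouched.
    The ROI is edge-padded, so border pixels reuse ROI values."""
    x, y, w, h = box
    out = frame.astype(np.float64).copy()
    g = gaussian_kernel(k, sigma)
    a = k // 2
    padded = np.pad(out[y:y + h, x:x + w], a, mode='edge')
    roi = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            roi[i, j] = np.sum(padded[i:i + k, j:j + k] * g)
    out[y:y + h, x:x + w] = roi
    return out

rng = np.random.default_rng(1)
frame = rng.uniform(0.0, 255.0, size=(64, 64))   # synthetic frame
blurred = blur_roi(frame, (10, 10, 20, 20))      # blur one detected box
```

Because only the detected boxes are convolved, the rest of the frame, including the body posture cues the classifier relies on, passes through unchanged.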
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Temporal Analysis with LSTM</head><p>The LSTM network is a specialized RNN designed to model temporal dependencies in sequence data effectively. Its architecture is uniquely suited to address the vanishing gradient problem, enabling it to capture long-term dependencies. An LSTM unit comprises three main gates: the input gate (i), the forget gate (f), and the output gate (o), each responsible for regulating the flow of information. We utilized a bidirectional LSTM with the following structure, as shown in Figure <ref type="figure" target="#fig_1">2</ref>.</p></div>
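The gate computations described above can be sketched as a single NumPy LSTM step; parameter names and sizes are illustrative, and the bidirectional variant would simply run a second pass over the reversed sequence and concatenate the two hidden states:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM step with input (i), forget (f), and output (o) gates
    plus a candidate cell state (g); illustrative parameter layout."""
    Wi, Ui, bi, Wf, Uf, bf, Wo, Uo, bo, Wg, Ug, bg = params
    i = sigmoid(Wi @ x_t + Ui @ h_prev + bi)   # input gate
    f = sigmoid(Wf @ x_t + Uf @ h_prev + bf)   # forget gate
    o = sigmoid(Wo @ x_t + Uo @ h_prev + bo)   # output gate
    g = np.tanh(Wg @ x_t + Ug @ h_prev + bg)   # candidate cell state
    c_t = f * c_prev + i * g                   # updated cell state
    h_t = o * np.tanh(c_t)                     # updated hidden state
    return h_t, c_t

rng = np.random.default_rng(2)
d_in, d_h = 5, 4                               # toy feature / hidden sizes
params = [rng.standard_normal(s) * 0.1 for s in
          [(d_h, d_in), (d_h, d_h), (d_h,)] * 4]
h, c = np.zeros(d_h), np.zeros(d_h)
for t in range(3):                             # unroll over a short sequence
    h, c = lstm_step(rng.standard_normal(d_in), h, c, params)
```

The forget gate's multiplicative path through c_t is what lets gradients flow across many frames, which is why the architecture suits fall sequences spanning dozens of video frames.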
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Classification Framework</head><p>Following the extraction of temporal features, the next step is classification, which distinguishes between fall and non-fall events. This process typically involves passing the LSTM output through one or more fully connected layers followed by a softmax layer for binary classification:</p><formula xml:id="formula_8">z = W_h ⋅ h_t + b_h (10) p_i = softmax(z)_i = e^(z_i) / ∑_j e^(z_j) (11)</formula><p>where h_t is the output from the LSTM at time t, W_h and b_h are the weights and biases for the dense layer, respectively, z is the vector of logits, and p represents the probabilities for each class obtained through the softmax function. The class with the highest probability is selected as the predicted class for each input sequence. This framework facilitates the effective classification of video sequences into fall or non-fall categories based on the temporal patterns identified by the LSTM network.</p></div>
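A minimal sketch of this classification head, with an illustrative 4-dimensional LSTM output and randomly initialized weights (the names and sizes are assumptions for the example, not the trained model's):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax, Eq. (11)."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(h_t, W_h, b_h, labels=('non-fall', 'fall')):
    """Dense layer (Eq. 10) followed by softmax (Eq. 11); returns the
    most probable label and the full probability vector."""
    z = W_h @ h_t + b_h            # logits
    p = softmax(z)
    return labels[int(np.argmax(p))], p

rng = np.random.default_rng(3)
h_t = rng.standard_normal(4)                       # mock LSTM output
W_h, b_h = rng.standard_normal((2, 4)), np.zeros(2)
label, p = classify(h_t, W_h, b_h)
```

Subtracting the maximum logit before exponentiating leaves the probabilities unchanged but avoids overflow for large logits.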
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.6.">Training Process</head><p>The training of the integrated model is underpinned by a mathematical framework that includes the definition of a loss function, the selection of an optimization algorithm, and the application of regularization techniques to prevent overfitting. The loss function L quantifies the discrepancy between the predicted outputs p and the true labels y. For binary classification tasks, such as fall detection, the binary cross-entropy loss is commonly used:</p><formula xml:id="formula_9">L(y, p) = −(1/N) ∑_(i=1)^(N) [y_i log(p_i) + (1 − y_i) log(1 − p_i)]<label>(15)</label></formula><p>Where N is the number of samples, y_i is the true label, and p_i is the predicted probability for the i-th sample. The optimization of the model parameters is achieved through Stochastic Gradient Descent (SGD), which iteratively updates the weights W based on the gradients of the loss function:</p><formula xml:id="formula_11">W_(t+1) = W_t − η ∇L<label>(16)</label></formula><p>Where η is the learning rate, and ∇L denotes the gradient of the loss function with respect to the weights at time t. L2 regularization and dropout techniques are applied to mitigate overfitting by adding a penalty term to the loss function or randomly omitting units from the network during training, respectively. The backpropagation process facilitates the computation of gradients ∇L through the network, employing the chain rule to propagate errors from the output layer back through the LSTM and EfficientNetB0 layers, enabling the model to learn and adjust its parameters to minimize the loss function. The confusion matrix in Figure 4 summarizes true positive (TP) and true negative (TN) predictions, along with false positive (FP) and false negative (FN) predictions. 
In this case, the matrix reveals perfect classification on the test data, with all fall events correctly identified (6 TP, 0 FN) and no ADL events misclassified as falls (0 FP), indicating an exceptional level of model performance.</p></div>
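The loss of Eq. (15) and the update rule of Eq. (16) can be sketched directly; the labels, probabilities, and learning rate below are illustrative values, not taken from the experiments:

```python
import numpy as np

def bce_loss(y, p, eps=1e-12):
    """Binary cross-entropy, Eq. (15); eps guards against log(0)."""
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def sgd_step(W, grad, lr=0.01):
    """Vanilla SGD update, Eq. (16): W_(t+1) = W_t - eta * grad(L)."""
    return W - lr * grad

# Illustrative labels and predicted fall probabilities
y = np.array([1.0, 0.0, 1.0, 1.0])
p_good = np.array([0.9, 0.1, 0.8, 0.95])   # close to the labels
p_bad = np.array([0.5, 0.5, 0.5, 0.5])     # uninformative
loss_good, loss_bad = bce_loss(y, p_good), bce_loss(y, p_bad)

# One illustrative weight update
W_new = sgd_step(np.array([0.5, -0.2]), grad=np.array([0.1, -0.1]), lr=0.1)
```

As expected, confident correct predictions yield a much smaller loss than uninformative ones, which is the signal the gradient step then propagates back through the LSTM and EfficientNetB0 layers.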
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Results and Analysis</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 4. Confusion matrix of unseen test videos</head><p>The Receiver Operating Characteristic (ROC) curve in Figure <ref type="figure">5</ref> plots the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. The area under the curve (AUC) of this ROC curve approaches 1, which suggests excellent model performance, with a high true positive rate and a low false positive rate across threshold values.</p></div>
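The (FPR, TPR) pairs underlying such a ROC curve come from sweeping a decision threshold over the model's predicted fall probabilities; a hand-rolled NumPy sketch with illustrative scores and labels:

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    """Compute (FPR, TPR) pairs by sweeping a decision threshold over
    predicted fall probabilities."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores)
    pts = []
    for t in thresholds:
        pred = scores >= t                            # classify as fall above t
        tpr = np.sum(pred & labels) / labels.sum()    # true positive rate
        fpr = np.sum(pred & ~labels) / (~labels).sum()  # false positive rate
        pts.append((fpr, tpr))
    return pts

scores = [0.95, 0.9, 0.8, 0.3, 0.2, 0.1]   # illustrative fall probabilities
labels = [1, 1, 1, 0, 0, 0]                # ground truth (1 = fall)
pts = roc_points(scores, labels, thresholds=[0.05, 0.5, 0.99])
```

When the score distributions for falls and non-falls are perfectly separated, as here, every threshold between them yields the corner point (FPR 0, TPR 1), which is what drives the AUC to 1.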
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 5. ROC curve of the proposed model on unseen test data</head><p>Figure <ref type="figure" target="#fig_5">6</ref> shows the model's performance on real test videos; the model correctly predicted this particular scenario as a 'Fall,' corroborated by the RGB image on the right, which clearly shows an individual in a prone position on the floor. Moreover, the depth image reveals the successful application of the dynamic blurring method. The individual's features are indistinguishable, and the privacy-preserving objective of the method is evident. The contours and the general posture of the person are discernible, which is sufficient for fall detection purposes, but the finer details necessary for personal identification have been effectively obfuscated. The blurring technique implemented in the system is designed to activate upon detecting a human figure within the video frame, applying a Gaussian blur where the person is detected. This ensures that any potentially sensitive information is rendered non-identifiable, addressing the privacy concerns paramount in real-world applications of surveillance-based systems. The obscured depth image confirms that the privacy-preserving measures do not impede the algorithm's ability to detect a fall.</p><p>As shown in Table <ref type="table" target="#tab_0">1</ref>, the comparative analysis of fall detection methodologies yields a substantive understanding of the advancements and varying efficacies of diverse approaches in this research domain. The table encapsulates the True Positive Rate (TPR), True Negative Rate (TNR), and overall Accuracy, serving as pivotal metrics for the assessment of each method. The model presented by Eltahir et al. <ref type="bibr" target="#b39">[40]</ref> manifests a commendable balance between sensitivity and specificity, with a TPR of 95.88% and a TNR of 97.02%, culminating in an accuracy of 97.56%. 
Chan Su's model slightly improves on this, with a sensitivity of 98.07%, a specificity of 99.03%, and an overall accuracy of 98.06%. These two models set a robust baseline in fall detection, evidencing high efficacy. The single-stream models using RGB and Optical Flow (OF) data individually attain a TPR of 100%, indicative of their flawless identification of fall events. However, their specificity scores, 96.61% and 96.34%, respectively, although high, suggest a slightly less robust capacity to classify non-fall activities accurately. This slight discrepancy is reflected in their accuracy scores, which, while impressive at 96.99% and 96.75%, do not reach the pinnacle of Chan Su's model. The multi-stream approach amalgamating RGB, OF, and Pose Estimation (PE) data represents a significant leap forward, yielding a perfect TPR and an enhanced TNR of 98.61%, leading to an accuracy of 98.77%. This approach underscores the utility of integrating multiple data streams for improved specificity without compromising sensitivity. EfficientNet-B0, despite a lower TPR of 93.33%, achieves a perfect TNR of 100%. This accentuates the model's exceptional performance in identifying non-fall events, though it falls short of the multi-stream model's balanced accuracy. The improved YOLOv5s model and the single frame human binary image approach using YOLOv5s do not disclose TPR or TNR but report accuracies of 97.2% and 96.7%, respectively. While these figures suggest competent models, the lack of detailed TPR and TNR data precludes a complete comparative analysis. Our proposed methodology establishes a new benchmark, recording a flawless TPR and TNR of 100% and an unmatched accuracy of 100%. This unprecedented performance indicates a superior ability to correctly identify fall incidents and an unparalleled precision in confirming non-fall activities.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>In this research, we have successfully developed and evaluated a novel video-based fall detection system that prioritizes privacy without compromising the real-time detection efficacy of elderly falls. By integrating EfficientNetB0 and LSTM networks, our methodology ensures robust feature extraction and accurate fall event classification. The introduction of dynamic blurring as a privacy-preserving technique represents a significant advancement, allowing for anonymizing identifiable features within video frames while maintaining the system's operational integrity. Our findings reveal that this approach achieves perfect accuracy, recall, precision, and AUC scores. It also effectively addresses the critical privacy concerns of video surveillance in sensitive environments such as homes and elderly care facilities. Implementing dynamic blurring ensures that the privacy of monitored individuals is safeguarded, setting a new precedent in the ethical application of surveillance technologies in healthcare.</p><p>Our future research will focus on further enhancing the adaptability and generalizability of the system across diverse settings and populations. This includes exploring additional privacy-preserving mechanisms and integrating multimodal data sources to enrich the system's contextual understanding. This research contributes significantly to elderly care technology, presenting a practical solution to the long-standing challenge of balancing effective fall detection with stringent privacy requirements. Our work advances the technological capabilities in this domain and addresses critical ethical considerations. This paves the way for broader acceptance and deployment of video-based monitoring systems in healthcare settings.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1.</head><label>1</label><figDesc>Figure 1. 
Overall Architecture of the Proposed Methodology</figDesc><graphic coords="3,155.30,446.30,284.34,160.55" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 .</head><label>2</label><figDesc>Figure 2. The architecture used for the LSTM Network</figDesc><graphic coords="6,154.60,56.70,299.96,182.15" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 .</head><label>3</label><figDesc>Figure 3. (Left) provides 'Model Accuracy' plots, which show the proportion of correctly classified instances (accuracy) against the number of epochs for both the training and validation datasets. It is observed that the training accuracy shows a consistent upward trend, indicating that the model is learning and improving its performance on the training data as the epochs progress. While generally following an upward trajectory, the validation accuracy exhibits some fluctuations. This could indicate the model's encounters with challenging or previously unseen data in the validation set. These expected fluctuations indicate how the model might perform when exposed to new data. It is worth noting that both the training and validation accuracies converge to high values close to 1.0, suggesting that the model has achieved a high level of proficiency in distinguishing between fall and non-fall events. Figure 3. (Right) graph shows the model's loss over the same number of epochs. For the training set, the loss decreases sharply and continues to decline steadily, which is typical behavior as the model adjusts its weights to minimize the prediction error. For the validation set, the loss decreases in tandem with the training loss but with notable spikes at specific points. These spikes often signify that the model made predictions significantly off the actual labels for some batches in the validation set. This can happen if the model encounters data points that differ from the learned patterns during training.</figDesc><graphic coords="7,118.60,303.34,371.94,158.55" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 3 .</head><label>3</label><figDesc>Figure 3. Accuracy and Loss graphs of the proposed LSTM model. The confusion matrix in Figure 4 provides a quantitative assessment of the model's classification accuracy. It shows the number of true positive (TP) and true negative (TN) predictions, along with false positive (FP) and false negative (FN) predictions. In this case, the matrix reveals perfect classification on the test data: all fall events are correctly identified (6 TP, 0 FN) and no ADL events are misclassified as falls (0 FP), indicating an exceptional level of model performance.</figDesc><graphic coords="7,184.60,565.06,225.76,184.55" type="bitmap" /></figure>
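The confusion-matrix entries described in the caption can be tallied directly from label vectors. A small sketch, using invented label vectors that merely mirror the perfect outcome reported above (every fall detected, no ADL clip flagged as a fall); the clip counts for the ADL class are assumptions for illustration:

```python
def confusion_counts(y_true, y_pred):
    """Tally TP, FN, FP, TN for binary fall/ADL labels (1 = fall, 0 = ADL)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fn, fp, tn

# Invented labels: 6 fall clips and 4 ADL clips, all classified correctly
y_true = [1] * 6 + [0] * 4
y_pred = [1] * 6 + [0] * 4
print(confusion_counts(y_true, y_pred))  # (6, 0, 0, 4)
```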
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 .</head><label>6</label><figDesc>Figure 6. Model evaluation on the test video from the fall folder</figDesc><graphic coords="8,137.05,501.62,320.83,132.35" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 7</head><label>7</label><figDesc>Figure 7 shows the system's prediction for this scene, labeled 'ADL,' which is validated by the RGB image on the right. It depicts an individual in an upright position, supporting the prediction that no fall has occurred. The prediction's accuracy is a testament to the model's ability to effectively discern between falls and non-fall events. Furthermore, similar to the previous fall scenario, the depth image demonstrates the application of the dynamic blurring technique. The individual's detailed features are indistinct, ensuring privacy is maintained. Despite the blurring, essential characteristics for ADL recognition, such as the vertical orientation of the body and the absence of unusual postures associated with falls, are preserved and remain detectable by the system.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 7 .</head><label>7</label><figDesc>Figure 7. Model Evaluation on the test video from the ADL folder. The analysis of the presented results underscores the robustness and reliability of the implemented fall detection model. This is evidenced by the convergence of the accuracy and loss metrics, the unequivocal classification outcomes depicted in the confusion matrix, and the favorable diagnostic characteristics portrayed by the ROC curve. These results collectively affirm the model's efficacy in accurately detecting fall events. It preserves the privacy of individuals through dynamic blurring, as no identifiable features are discernible in the depth visualizations.</figDesc><graphic coords="9,132.05,69.60,330.90,135.44" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 8 .</head><label>8</label><figDesc>Figure 8. Comparison of Accuracy with other state-of-the-art models</figDesc><graphic coords="10,125.57,56.70,343.80,217.45" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Comparative analysis of results with other state-of-the-art models</figDesc><table><row><cell>Model/Study</cell><cell>TPR</cell><cell>TNR</cell><cell>Accuracy</cell></row><row><cell>Eltahir et al. [40]</cell><cell>95.88</cell><cell>97.02</cell><cell>97.56</cell></row><row><cell>Chan Su [38]</cell><cell>98.07</cell><cell>99.03</cell><cell>98.06</cell></row><row><cell>Single stream (RGB) [22]</cell><cell>100</cell><cell>96.61</cell><cell>96.99</cell></row><row><cell>Single stream (OF) [22]</cell><cell>100</cell><cell>96.34</cell><cell>96.75</cell></row><row><cell>Multi-stream (RGB+OF+PE) [22]</cell><cell>100</cell><cell>98.61</cell><cell>98.77</cell></row><row><cell>EfficientNet-B0 [41]</cell><cell>93.33</cell><cell>100</cell><cell>97.14</cell></row><row><cell>Improved YOLOv5s [39]</cell><cell>-</cell><cell>-</cell><cell>97.2</cell></row><row><cell>A single-frame human binary image with</cell><cell>-</cell><cell>-</cell><cell>96.7</cell></row><row><cell>YOLOv5s [42]</cell><cell></cell><cell></cell><cell></cell></row><row><cell>Our Method</cell><cell>100</cell><cell>100</cell><cell>100</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Predictors of falls and mortality among elderly adults with traumatic brain injury: a nationwide, population-based study</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">W</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">S</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Jing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">R</forename><surname>Mcfaull</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Cusimano</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PloS One</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page">e0175868</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">The changing face of major trauma in the UK</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kehoe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Edwards</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Yates</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lecky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Emerg. Med. J</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="911" to="915" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A comprehensive review of elderly fall detection using wireless communication and artificial intelligence techniques</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Gharghan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Hashim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Measurement</title>
		<imprint>
			<biblScope unit="page">114186</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A systematic literature review of computer vision-based biomechanical models for physical workload estimation</title>
		<author>
			<persName><forename type="first">D</forename><surname>Egeonu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Jia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Ergonomics</title>
		<imprint>
			<biblScope unit="page" from="1" to="24" />
			<date type="published" when="2024-01">Jan. 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Patient-Generated Health Data (PGHD): Understanding, Requirements, Challenges, and Existing Techniques for Data Security and Privacy</title>
		<author>
			<persName><forename type="first">P</forename><surname>Khatiwada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-C</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Blobel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Pers. Med</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page">282</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">PublicVision: A Secure Smart Surveillance System for Crowd Behavior Recognition</title>
		<author>
			<persName><forename type="first">M</forename><surname>Qaraqe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="26474" to="26491" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Fall prediction and prevention systems: recent trends, challenges, and future research directions</title>
		<author>
			<persName><forename type="first">R</forename><surname>Rajagopalan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Litvan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T.-P</forename><surname>Jung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page">2509</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Motion and region aware adversarial learning for fall detection with thermal imaging</title>
		<author>
			<persName><forename type="first">V</forename><surname>Mehta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Dhall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Khan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2020 25th International Conference on Pattern Recognition (ICPR)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="6321" to="6328" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A review on visual privacy preservation techniques for active and assisted living</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ravi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Climent-Pérez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Florez-Revuelta</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11042-023-15775-2</idno>
	</analytic>
	<monogr>
		<title level="j">Multimed. Tools Appl</title>
		<imprint>
			<biblScope unit="volume">83</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="14715" to="14755" />
			<date type="published" when="2023-07">Jul. 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A high reliability wearable device for elderly fall detection</title>
		<author>
			<persName><forename type="first">P</forename><surname>Pierleoni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Belli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Palma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pellegrini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Pernini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Valenti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Sens. J</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="4544" to="4553" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Text-Based Emotion Detection and Applications: A Literature Review</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">N</forename><surname>Alrasheedy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Muniyandi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fauzi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Cyber Resilience (ICCR)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2022-10">October 2022</date>
			<biblScope unit="page" from="1" to="9" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Privacy Pro: Spam Calls Detection Using Voice Signature Analysis and Behavior-Based Filtering</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kwong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Muzamal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Khan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">17th International Conference on Emerging Technologies (ICET)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2022-11">November 2022</date>
			<biblScope unit="page" from="184" to="189" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Embedded real-time fall detection with deep learning on wearable devices</title>
		<author>
			<persName><forename type="first">E</forename><surname>Torti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2018 21st Euromicro Conference on Digital System Design (DSD)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="405" to="412" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Survey on fall detection and fall prevention using wearable and external sensors</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">S</forename><surname>Delahoz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Labrador</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">10</biblScope>
			<biblScope unit="page" from="19806" to="19842" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Sensor-based fall detection systems: a review</title>
		<author>
			<persName><forename type="first">S</forename><surname>Nooruddin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Islam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Sharna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Alhetari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">N</forename><surname>Kabir</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Ambient Intell. Humaniz. Comput</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="2735" to="2751" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Sensor technologies for fall detection systems: A review</title>
		<author>
			<persName><forename type="first">A</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">U</forename><surname>Rehman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yongchareon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">H J</forename><surname>Chong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Sens. J</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">13</biblScope>
			<biblScope unit="page" from="6889" to="6919" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">EfficientNet</title>
		<author>
			<persName><forename type="first">B</forename><surname>Koonce</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-1-4842-6168-2_10</idno>
	</analytic>
	<monogr>
		<title level="m">Convolutional Neural Networks with Swift for Tensorflow</title>
				<meeting><address><addrLine>Berkeley, CA</addrLine></address></meeting>
		<imprint>
			<publisher>Apress</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="109" to="123" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Long Short-Term Memory</title>
		<author>
			<persName><forename type="first">A</forename><surname>Graves</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-642-24797-2_4</idno>
	</analytic>
	<monogr>
		<title level="m">Supervised Sequence Labelling with Recurrent Neural Networks</title>
		<title level="s">Studies in Computational Intelligence</title>
		<meeting><address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="volume">385</biblScope>
			<biblScope unit="page" from="37" to="45" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Karantonis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R</forename><surname>Narayanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mathie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">H</forename><surname>Lovell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">G</forename><surname>Celler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Inf. Technol. Biomed</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="156" to="167" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Evaluation of a threshold-based tri-axial accelerometer fall detection algorithm</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Bourke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">V</forename><surname>O'Brien</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">M</forename><surname>Lyons</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Gait Posture</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="194" to="199" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Will my patient fall?</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Ganz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">G</forename><surname>Shekelle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">Z</forename><surname>Rubenstein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">JAMA</title>
		<imprint>
			<biblScope unit="volume">297</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="77" to="86" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Multistream deep convolutional network using high-level features applied to fall detection in video sequences</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Carneiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">P</forename><surname>Da Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">V</forename><surname>Leite</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Moreno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J F</forename><surname>Guimaraes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Pedrini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2019 International Conference on Systems, Signals and Image Processing (IWSSIP)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="293" to="298" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Real-time video surveillance based human fall detection system using hybrid haar cascade classifier</title>
		<author>
			<persName><forename type="first">N</forename><surname>Kaur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kaur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimed. Tools Appl</title>
		<imprint>
			<biblScope unit="page" from="1" to="19" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Transformer-based fall detection in videos</title>
		<author>
			<persName><forename type="first">A</forename><surname>Núñez-Marcos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Arganda-Carreras</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Eng. Appl. Artif. Intell</title>
		<imprint>
			<biblScope unit="volume">132</biblScope>
			<biblScope unit="page">107937</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Contextualizing remote fall risk: Video data capture and implementing ethical AI</title>
		<author>
			<persName><forename type="first">J</forename><surname>Moore</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">NPJ Digit. Med</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">61</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Wrist-Based Fall Detection: Towards Generalization across Datasets</title>
		<author>
			<persName><forename type="first">V</forename><surname>Fula</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Moreno</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page">1679</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">A Vision-Based Approach to Enhance Fall Detection with Fine-Tuned Faster R-CNN</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bansal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kathuria</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2023 International Conference on Advanced Computing &amp; Communication Technologies (ICACCTech)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="678" to="684" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Comprehensive review of vision-based fall detection systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Gutiérrez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Martin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page">947</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Fall detection for elderly in assisted environments: Video surveillance systems and challenges</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ezatzadeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R</forename><surname>Keyvanpour</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2017 9th International Conference on Information and Knowledge Technology (IKT)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="93" to="98" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Optimized spatio-temporal descriptors for real-time fall detection: comparison of support vector machine and Adaboost-based classification</title>
		<author>
			<persName><forename type="first">I</forename><surname>Charfi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Miteran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dubois</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Atri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Tourki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Electron. Imaging</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page">041106</biblScope>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Fall detection and motion classification by using decision tree on mobile phone</title>
		<author>
			<persName><forename type="first">F.-Y</forename><surname>Leu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-Y</forename><surname>Ko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y.-C</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Susanto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H.-C</forename><surname>Yu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Smart Sensors Networks</title>
				<imprint>
			<publisher>Elsevier</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="205" to="237" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Development of home intelligent fall detection IoT system based on feedback optical flow convolutional neural network</title>
		<author>
			<persName><forename type="first">Y.-Z</forename><surname>Hsieh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y.-L</forename><surname>Jeng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="6048" to="6057" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Fall detection monitoring systems: a comprehensive review</title>
		<author>
			<persName><forename type="first">P</forename><surname>Vallabh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Malekian</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Ambient Intell. Humaniz. Comput</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="1809" to="1833" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Challenges, issues and trends in fall detection systems</title>
		<author>
			<persName><forename type="first">R</forename><surname>Igual</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Medrano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Plaza</surname></persName>
		</author>
		<idno type="DOI">10.1186/1475-925X-12-66</idno>
	</analytic>
	<monogr>
		<title level="j">Biomed. Eng. OnLine</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">A deep neural network for real-time detection of falling humans in naturally occurring scenes</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Levine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Wen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Qiu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">260</biblScope>
			<biblScope unit="page" from="43" to="58" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">BGR Images-Based Human Fall Detection Using ResNet-50 and LSTM</title>
		<author>
			<persName><forename type="first">D</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Kumar</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-981-19-9225-4_14</idno>
	</analytic>
	<monogr>
		<title level="m">Third Congress on Intelligent Systems</title>
		<title level="s">Lecture Notes in Networks and Systems</title>
		<editor>
			<persName><forename type="first">S</forename><surname>Kumar</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Sharma</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Balachandran</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Kim</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Bansal</surname></persName>
		</editor>
		<meeting><address><addrLine>Singapore</addrLine></address></meeting>
		<imprint>
			<publisher>Springer Nature Singapore</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">608</biblScope>
			<biblScope unit="page" from="175" to="186" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Deep learning for fall detection: Three-dimensional CNN combined with LSTM on video kinematic data</title>
		<author>
			<persName><forename type="first">N</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Feng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Song</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE J. Biomed. Health Inform</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="314" to="323" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">A novel model for fall detection and action recognition combined lightweight 3D-CNN and convolutional LSTM networks</title>
		<author>
			<persName><forename type="first">C</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">L</forename><surname>Guan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Anal. Appl</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="16" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Elderly fall detection based on improved YOLOv5s network</title>
		<author>
			<persName><forename type="first">T</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Ding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="91273" to="91282" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Deep Transfer Learning-Enabled Activity Identification and Fall Detection for Disabled People</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Eltahir</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Comput. Mater. Contin</title>
		<imprint>
			<biblScope unit="volume">75</biblScope>
			<biblScope unit="issue">2</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Cut and continuous paste towards realtime deep fall detection</title>
		<author>
			<persName><forename type="first">S</forename><surname>Hwang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-H</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B.-K</forename><surname>Jeon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1775" to="1779" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">Real-time human fall recognition based on deep learning methods and single depth image with privacy requirements</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2022 37th Youth Academic Annual Conference of Chinese Association of Automation (YAC)</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1548" to="1553" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Human fall detection on embedded platform using depth maps and wireless accelerometer</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kwolek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kepski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Comput. Methods Programs Biomed</title>
		<imprint>
			<biblScope unit="volume">117</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="489" to="501" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
