<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Modality Informativeness and Stability for Behavioural Authentication</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andraž Krašovec</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Veljko Pejović</string-name>
          <email>veljko.pejovic@fri.uni-lj.si</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>European Commission, Joint Research Centre</institution>
          ,
          <addr-line>Via Enrico Fermi 2749, 21027 Ispra (VA)</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Ljubljana, Faculty of Computer and Information Science</institution>
          ,
          <addr-line>Večna pot 113, 1000 Ljubljana</addr-line>
          ,
          <country country="SI">Slovenia</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Close friends can recognise each other from subtle behavioural cues: the way one walks, the proficiency with which they complete certain tasks, their posture, and other aspects. With an ever-increasing number of sensors built into everyday objects, computers should be capable of identifying users from their behaviour as reflected in sensor-sampled signals. While behaviour-based authentication systems have recently been demonstrated, their utility is still questionable. More specifically, whether and to what extent different sensing modalities capture long-term persistent user behavioural traits remains an open question. In this paper we tackle this issue by analysing the informativeness and temporal invariance of data collected for the purpose of user identification in an Internet-of-Things office-like environment. We discover that the most informative sensors do not necessarily reflect the most stable behavioural traits, which may have consequences for the future development of sensor-based authentication systems.</p>
      </abstract>
      <kwd-group>
        <kwd>behavioural authentication</kwd>
        <kwd>IoT environments</kwd>
        <kwd>informativeness analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The weaknesses of one-off authentication methods, such as passwords [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ], as well as the
increasing privacy issues related to storing and using human biometric information [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], have
paved the way for behavioural biometric-based authentication. Rather than collecting
sensitive information (i.e. fingerprints, face images, etc.), such authentication harnesses inherent
behavioural patterns of individuals and uses them for identification. Traditionally, these
patterns could only be unpicked from data stemming from a user’s interaction with an interface
that would also serve as a sensor. Examples of early behavioural biometric-based solutions,
thus, include systems for authentication based on keyboard typing patterns and touchscreen
interactions [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ].
      </p>
      <p>
        The emergence of the Internet of Things (IoT) brought approximately 30 billion sensor-enabled
devices to a wide range of environments [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. It has recently been demonstrated that, once
users are placed in an IoT-rich environment, human behavior reflected in sensor readings can
be used for authentication [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Moving beyond single device sensing brings clear scalability
and robustness benefits. For instance, authenticating users based on data coming from diverse
sensors implies that different aspects of behaviour could be captured when relevant – e.g.
keyboard-based authentication is only informative when a user is typing, but motion sensors
could be used when a user is walking around a room instead; moreover, with an added dimension
(i.e. sensing modality) the authentication space is broadened – e.g. two users who exhibit the
exact same typing patterns could still be discerned if their walking patterns are different.
      </p>
      <p>Although multimodal IoT-based authentication has been demonstrated, it is still not clear
whether and to what extent different sensing modalities contribute to the end goal of user
identification. Answering this question would sharpen the focus, so that sensors that reflect only very
general kinds of behaviour could be discarded and more informative sensors deployed
instead. Furthermore, behaviour-based authentication relies on (machine learning) models of
individual behaviour that have to be constructed beforehand. The models enable the actual
authentication only if the sensor data collected at test time matches the data used for
model training. Large discrepancies between the test and training datasets, stemming from
the variability of the patterns registered by the sensors, would render the system unusable. Whether
the behaviour reflected in a certain sensor modality is “stable” or not is yet
another open research question.</p>
      <p>In this paper we tackle the above issues and study the informativeness and temporal
persistence of IoT-based sensor data when it comes to user authentication. Our work is based on the
analysis of a previously compiled dataset containing real-world sensor data collected while 20
users performed three different tasks two times each. The richness of the dataset (six different
sensing modalities) enables us to answer the above research questions and bring the following
specific contributions to the area:
• We calculate two metrics of feature informativeness brought by different IoT sensors and
identify the most informative (combinations of) sensors for the task of behaviour-based
authentication;
• We examine the stability (temporal invariance) of different sensing modalities and identify
sensors which can perform robust behaviour-based authentication across different usage
sessions;
• We propose and discuss alternative sensing modalities that could address the deficiencies
of the sensors used in the analysed dataset and potentially increase the reliability of
behaviour-based authentication.</p>
      <p>
        The analysis performed in this paper points towards limited informativeness and a rather
low stability of any single modality used for sensor-based authentication. Indeed, this has been
hinted before, as multimodal continuous authentication has been shown to be far more accurate
than any one-off authentication method relying on IoT sensors [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. Nevertheless, we believe
that our work does not imply that opportunistic use of IoT sensors for authentication should be
discarded. Instead, we believe that more sophisticated context-aware models as well as more
diverse sensing modalities should be explored.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>Behavioural biometrics-based authentication employs a user’s various behavioural
patterns to validate their identity. Modern approaches mostly rely either on sensor-rich
handheld and wearable devices, such as smartphones and smartwatches, or on exploiting
sensor-equipped IoT environments.</p>
      <p>
        The most common techniques on smart devices focus either on the user’s interaction with the
screen, which includes the analysis of navigation gestures, known as touch dynamics [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], or
keystroke dynamics [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], which studies a user’s typing patterns and even predates the modern
computer [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Instead of requiring direct interaction from the user, it is sufficient for a device
to be in a user’s pocket or purse, or worn on a relevant body part, for approaches relying on
inertial measurement units (IMUs) to function. This extends the applicability of authentication
with smart devices: for example, a user can unlock her smartphone while running and listening
to music, without the hassle of taking the phone out of her pocket. The most common authentication
approach dependent on IMUs is gait dynamics, which identifies users based on their walking
patterns [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ].
      </p>
      <p>
        Rather than burdening the user to never forget her authentication device, IoT
environment-based authentication focuses on identifying the user without any conscious actions on her part,
such as carrying a device, providing her fingerprint, or remembering her password.
Even with the sensors moved from the user to the environment, IMUs prove to be an effective modality
for authentication [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Furthermore, wireless sensing technologies such as RFID [13] and
short-range mmWave radars [14] are closely tied to environmental IoT authentication.
      </p>
      <p>
        The issue with most behavioural biometrics is the monomodality of their approaches: for
example, gait dynamics works well when a person is walking, but the moment she sits down, it is
impossible to recognise her based on the way she walks. To overcome this limitation, multimodal
approaches, which combine different sensing modalities into a more complete authentication
system, started emerging. An overview of modality usage in biometric authentication by
Ryu et al. [15] suggests that the most commonly used modalities in such systems include all of the
above, frequently tied to physiological biometrics such as fingerprints and facial
recognition. We follow the multimodal doctrine by collecting and publishing a dataset of
different environmental sensors [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] which is further analysed in this paper.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Dataset</title>
      <p>
        There are not many publicly available multi-modal IoT datasets that extend beyond data from
smartphones and wearables. Therefore, we utilise a dataset we collected in our previous
research [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. It consists of data from fifteen different sensors, combined into five different
sensor modalities. Twenty-one participants completed three different everyday office tasks in
an office-like environment. Additionally, we made the dataset publicly available.
      </p>
      <p>We asked the participants to perform each of the devised tasks twice. The first task, which
functions as a keyboard typing exercise, involves copying a body of text from a cardboard card
on a PC and sending it via email. The second task requires the participants to look up the weather
forecast for a place of their choosing and, based on the forecast, suggest a couple of tourist
attractions to visit, which should stimulate different PC usage patterns. The third task sends the
participants on a “treasure hunt”: navigating through a series of boxes, placed around the
environment, that contain instructions on how to proceed. The instructions in the final box ask the
participants to produce a graph from the data provided alongside them.</p>
      <p>The sensors we employ include: a) an accelerometer and b) a gyroscope, purposed to infer the
user’s keyboard typing patterns; c) four force sensors, positioned on the corners of a plate that is
placed under the mouse and keyboard, purposed to infer the user’s posture behind the computer; d)
a PC monitoring tool collecting information about CPU, memory, and network usage, purposed
to infer the user’s computer usage patterns; and e) six infrared sensors, positioned around the
environment, purposed to infer the user’s office navigation patterns. The complete dataset contains
115M raw datapoints over a span of fifteen hours and forty minutes.</p>
      <p>
        With the dataset acquired, we first apply some basic filtering to exclude
participants who either failed to follow the instructions or for whom we experienced technical difficulties with
the data collection process. We retain fifteen users who successfully completed all six (2 × 3)
tasks and do not have any missing sensor data. We process the remaining data by calculating
time-domain and, in the case of the accelerometer, gyroscope, and force sensors, also frequency-domain features.
A complete list of generated features is presented in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
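      <p>As an illustration, the time-domain features referenced throughout the paper (mean, minimum, maximum, mean crossing rate, standard deviation) can be computed per sensor window roughly as follows; the function name and the per-window treatment are our own illustrative assumptions, not taken from the dataset’s actual pipeline:</p>

```python
import numpy as np

def time_domain_features(window):
    """Time-domain features for one window of raw sensor samples.

    The abbreviations (me, min, max, mcr, sd) follow the feature key used
    later in the paper; the windowing itself is an illustrative assumption.
    """
    window = np.asarray(window, dtype=float)
    mean = window.mean()
    above = (window > mean).astype(int)
    return {
        "me": mean,
        "min": window.min(),
        "max": window.max(),
        # mean crossing rate: fraction of consecutive sample pairs that
        # cross the window mean
        "mcr": float(np.mean(np.diff(above) != 0)),
        "sd": window.std(),
    }
```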
      <p>In the following sections we utilise either the complete dataset (Section 4) or split it at the
task level, i.e. taking both sessions of a given task (Section 5).</p>
    </sec>
    <sec id="sec-4">
      <title>4. Sensor Modality Informativeness</title>
      <p>To better understand the inner workings of a multi-modal, environmental IoT authentication
system, we calculate and analyse the informativeness of our generated feature set. This will
enable us to better understand how to devise such systems in the future, especially in terms of
selecting the best-performing sensing modalities and replacing those that do not provide sufficient
user inference power. We estimate the informativeness of each feature based on two different
informativeness metrics: relative mutual information and Gini importance.</p>
      <sec id="sec-4-1">
        <title>4.1. Relative Mutual Information</title>
        <p>
          Our first informativeness evaluation metric is the Relative Mutual Information as defined in [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
It separately calculates the mutual information between each observed feature and the user
target variable, scaled by the entropy of the users:
        </p>
        <p>RMI(F) = (H(U) − H(U|F)) / H(U)</p>
        <p>where H(U) is the user entropy and H(U|F) is the conditional user entropy given a
feature F. We plot the thirteen best-scoring features in Figure 2. Each bar represents a feature score.
The key follows a simple pattern: xy_zzz, where x denotes a given sensor (a – accelerometer,
g – gyroscope, f – force sensor, m – memory usage), y (optional) denotes an axis (x, y, z)
for the accelerometer and gyroscope, or a specific force sensor (a, b, c, d), while zzz denotes the
generated feature (me – mean, min – minimum, max – maximum, mcr – mean crossing rate, sd
– standard deviation).</p>
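        <p>A direct way to compute this metric is to discretise each feature and estimate the entropies from empirical counts. The sketch below is our own; the equal-width binning and the bin count are illustrative assumptions, not details from the cited definition:</p>

```python
import numpy as np

def relative_mutual_information(feature, users, n_bins=10):
    """RMI = (H(U) - H(U|F)) / H(U), estimated from empirical counts.

    `feature` is one column of the feature matrix, `users` the user labels;
    equal-width binning with `n_bins` bins is an illustrative assumption.
    """
    feature = np.asarray(feature, dtype=float)
    users = np.asarray(users)
    edges = np.histogram_bin_edges(feature, bins=n_bins)
    bins = np.digitize(feature, edges)

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log2(p)))

    h_u = entropy(users)                      # H(U): user entropy
    h_u_given_f = sum(                        # H(U|F) = sum_f P(F=f) H(U|F=f)
        np.mean(bins == b) * entropy(users[bins == b])
        for b in np.unique(bins)
    )
    return (h_u - h_u_given_f) / h_u
```

        <p>A feature that perfectly separates users yields an RMI of 1, while a feature independent of user identity yields a value near 0.</p>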
        <p>[Figure 2: Relative mutual information scores of the top-ranked features: ax_me, gz_me,
ay_me, a_me, gx_me, fa_me, m_min, fd_me, m_me, m_max, gy_me, ay_mcr, fd_sd, fb_me, az_me.]</p>
        <p>We observe a prevalence of time-domain features, with the mean values of different sensors
being the most informative features in most cases. Regarding the sensor modalities, the
accelerometer provides the best scores, followed by the gyroscope, force sensors, and memory
usage. A similar trend extends beyond the displayed scores, as most frequency-domain features,
as well as data gathered from the infrared sensors, are found to be the least informative aspects of
the gathered dataset.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Gini Importance</title>
        <p>The other metric we utilise to evaluate the informativeness of the generated features is the
normalised Gini importance. It is calculated by building a random forest machine learning model
and averaging each feature’s contribution to the decrease of impurity over the trees generated by
the model. We choose information gain as the decrease-of-impurity criterion, with the other
available criterion – the Gini index – yielding similar results. The importance scores are normalised,
meaning that the scores of all features sum up to one. We note that the Gini importance score
tends to favour either numerical features or categorical features with a high number of values [16].
All of our generated features are numerical, hence they are treated equally by this method.</p>
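        <p>With scikit-learn, assumed here purely for illustration (the synthetic stand-in data and variable names are our own), the normalised scores come directly from a fitted forest:</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins: X would be the generated feature matrix,
# y the user labels; here feature 0 fully determines the label.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] > 0).astype(int)

# criterion="entropy" corresponds to using information gain as the
# decrease-of-impurity criterion.
forest = RandomForestClassifier(n_estimators=100, criterion="entropy",
                                random_state=0).fit(X, y)

importances = forest.feature_importances_  # normalised: sums to one
ranking = np.argsort(importances)[::-1]    # most informative feature first
```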
        <p>Top fiteen gini importance scores are displayed in Figure 3. We reuse the key to note features
in the plotted graph. Similarly to relative mutual information, features from the accelerometer
and gyroscope are ranked highest, followed by the force sensors and memory usage patterns.
Furthermore we observe the dominance of time domain features.</p>
        <p>[Figure 3: Normalised Gini importance scores of the top-ranked features: ax_me, ay_me,
gx_me, gz_me, a_me, fd_me, gy_me, m_me, ay_mcr, fa_me, m_max, az_me, az_mcr, a_mcr, m_min.]</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. User Behaviour Invariance</title>
      <p>
        One major hurdle in behaviour-based authentication is the variability of user behaviour patterns,
which may be affected by the user’s mood, cognitive load, and comfort level. Our work so far is no
exception, and the drop in user inference accuracy when taking multiple sessions into account
is significant. For instance, in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] we achieve a 99% identification accuracy when the users are
tracked through a single session, whereas when introducing multiple sessions our accuracy
drops to 70%. Therefore, we investigate the potential of different sensor combinations not only
in terms of raw accuracy, but also in terms of their ability to capture context-independent behavioural
patterns that remain consistent throughout variations of a user’s behaviour.
      </p>
      <sec id="sec-5-1">
        <title>5.1. Linear Discriminant Analysis Projection</title>
        <p>We base our analysis on a two-dimensional linear discriminant analysis (LDA) projection, which
is a supervised dimensionality reduction technique. To capture potential inter-sensor
dependencies, we take all possible combinations of sensor pairs, as well as all possible combinations
of user triplets. With five different sensor modalities and fifteen different users, we end up
with C(5, 2) × C(15, 3) = 10 × 455 = 4550 different projections.</p>
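        <p>The enumeration of projections can be sketched as follows; the sensor abbreviations match those used for the Figure 4 subfigure titles:</p>

```python
from itertools import combinations
from math import comb

sensors = ["acc", "gyr", "frc", "pc", "ir"]   # five sensor modalities
users = range(15)                             # fifteen retained users

sensor_pairs = list(combinations(sensors, 2))   # C(5, 2) = 10
user_triplets = list(combinations(users, 3))    # C(15, 3) = 455

# One 2D LDA projection per (sensor pair, user triplet) combination.
n_projections = len(sensor_pairs) * len(user_triplets)
assert n_projections == comb(5, 2) * comb(15, 3) == 4550
```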
        <p>In Figure 4 we display an example of LDA projections for three random users and all different
sensor combinations. In each subfigure, circles represent projected LDA points, while pentagons
are their centroids. Each colour corresponds to a single user session, where the following colour pairs
correspond to the same user: (green, purple), (brown, light blue), and (yellow, dark blue). The
titles denote the two sensors that were taken into account in the specific projection, divided by
an underscore (acc – accelerometer, gyr – gyroscope, frc – force sensors, pc – pc monitor, and ir
– infrared sensors). All subfigures are on the same scale, hence we are able to directly compare
the clustering capability of each sensor modality pair.</p>
        <p>Even to the naked eye, the ability to successfully cluster the data of different users varies with
the sensor modality pair. With either the accelerometer or gyroscope included, the user
clusters are clearly separable from one another, while in the absence of these two modalities
we observe the clusters starting to blend.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Silhouette Scores</title>
        <p>The convenience of the visual representation of data with a 2D LDA enables us to quickly
estimate result trends. However, to empirically evaluate the clustering capabilities of different sensor
modality pairs, we utilise the silhouette score, which is defined for each point i as:
s(i) = (b(i) − a(i)) / max(a(i), b(i)),
where a(i) is the mean intra-cluster distance to a given point, and b(i) the mean distance to the
points of the nearest cluster the given point is not a part of. Each individual value ranges from
−1 to 1, with the latter representing the best possible value, while being closer to the former
signifies a mis-clustered point. The mean of all silhouette scores represents the mean silhouette score
of a sample, i.e. the data of three users and a sensor modality pair. To obtain the per-sensor-modality-pair
score, we average all mean silhouette scores of a given sensor modality pair.</p>
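        <p>The per-point definition can be implemented directly; the sketch below is our own (the paper most likely relies on a library implementation) and computes the mean silhouette score of one sample:</p>

```python
import numpy as np

def mean_silhouette(points, labels):
    """Mean of s(i) = (b(i) - a(i)) / max(a(i), b(i)) over all points.

    a(i): mean distance to the other points of i's own cluster;
    b(i): mean distance to the points of the nearest other cluster.
    """
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    # Pairwise Euclidean distance matrix.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    scores = []
    for i in range(len(points)):
        same = labels == labels[i]
        a = dists[i, same].sum() / (same.sum() - 1)  # exclude the point itself
        b = min(dists[i, labels == other].mean()
                for other in np.unique(labels) if other != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

        <p>Two tight, well-separated clusters yield a mean score close to 1, while overlapping clusters push it towards 0 or below.</p>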
        <p>In Figure 5 we observe the final silhouette scores. In line with our assumptions from visually
observing the LDA clusters, the accelerometer and gyroscope sensor modalities yield good
results, with the combination of the two being the best overall. More surprisingly, the force
sensor modality, which scored much lower in the informativeness analysis (Section 4), is almost
on a par with them in terms of silhouette scores. On the other side of the spectrum, including the infrared,
and to a lesser extent the PC monitor, sensor modalities lowers the silhouette score, as those
modalities do not contribute much to user inference.</p>
        <p>[Figure 5: Mean silhouette scores per sensor modality pair: accelerometer &amp; gyroscope,
accelerometer &amp; force, accelerometer &amp; pc monitor, accelerometer &amp; infrared, gyroscope &amp; force,
gyroscope &amp; pc monitor, gyroscope &amp; infrared, force &amp; pc monitor, force &amp; infrared, and
pc monitor &amp; infrared.]</p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Centroid Distances</title>
        <p>Finally, the main goal of this section is to assess the consistency of different sensor modalities in
inter-session user behaviour. We evaluate this by measuring the centroid distance of the session
clusters that represent the same user performing the same task twice (two sessions). These are
the distances between neighbouring pentagons in Figure 4. Each projection is standardised by
removing the mean and scaling to unit standard deviation (z = (x − μ) / σ) before calculating the centroid
distances.</p>
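        <p>Under these definitions, the centroid distance for one user, task, and sensor modality pair can be sketched as follows; the function name and the pooled standardisation detail are our assumptions:</p>

```python
import numpy as np

def centroid_distance(session_a, session_b):
    """Distance between the centroids of two LDA-projected sessions of the
    same user, after standardising the pooled points to zero mean and unit
    standard deviation per dimension."""
    a = np.asarray(session_a, dtype=float)
    b = np.asarray(session_b, dtype=float)
    pooled = np.vstack([a, b])
    z = (pooled - pooled.mean(axis=0)) / pooled.std(axis=0)
    za, zb = z[: len(a)], z[len(a):]
    # Persistent behaviour should yield a small distance between centroids.
    return float(np.linalg.norm(za.mean(axis=0) - zb.mean(axis=0)))
```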
        <p>Once more the accelerometer and gyroscope perform well. However, the best obtained
(shortest) average centroid distance is achieved by pairing the accelerometer with the force
sensors, while the pairing with the gyroscope is not far behind. This result matches our findings
from the mean silhouette score analysis and shows that while force sensors are not best suited
for recognising users (as shown in Section 4), they have merit when it comes to inferring user
identity across sessions. We elaborate on this phenomenon in the Discussion section.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>
        Numerous alternatives to password-based authentication have been proposed by the research
community over the last two decades [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3, 4, 5, 13</xref>
        ]. The occasionally impressive results cited in
the literature, however, have recently been questioned on the grounds of very modest sample
sizes and limited insights into why an approach may or may not work [17]. Testing a novel
authentication approach is extremely laborious; thus, small sample sizes are likely to persist for
a while. Nevertheless, a detailed examination of an authentication algorithm’s inner workings
may shed light on its potential to provide scalable and robust authentication over a long
period of time.
      </p>
      <p>In this paper we analysed a behaviour-based authentication technique that harnesses IoT
sensors placed in an office-like environment. The multitude of sensor modalities used in the
corresponding machine learning model poses the question of how and to what extent a
particular modality contributes to user identification. For this, we conducted an informativeness
analysis. We find that the relative mutual information, commonly used in behavioural
biometrics-based authentication, does not clearly identify the most informative sensors. Instead, the
Gini importance metric tends to clearly point to sensors that are “closer” to the user, in our
case the accelerometer and gyroscope placed on the keyboard, as the sensors that are likely to enable
discernibility among individuals.</p>
      <p>Human behaviour tends to vary over time. Typing speed, for instance, might be modulated
by a person’s knowledge of the typed text, tiredness, emotions, and other factors. Whether
the particular aspect of human behaviour captured by a sensor is going to persist in the same
manner over time is a question we aimed to answer in the second part of the paper. Here we first
presented the LDA projections to visually compare users’ behaviour across the same tasks
reexecuted at diferent points in time. Then, we calculated the silhouette scores to assess whether
a particular modality is likely to be informative over a longer period of time. Surprisingly, we
ifnd that a sensor that was not itself highly informative for user authentication – the force sensor
– introduces stability not observed with some more informative sensors, such as the gyroscope.
We further confirm this by calculating the centroid distances among clusters formed upon data
stemming from diferent combinations of sensors. Centroids corresponding to the same person
should not “move” across sessions, if the sensor-reflected behaviour is persistent. We find that
including the force sensor in the mix indeed leads to smaller centroid displacements.</p>
      <p>Why is the behaviour reflected in the force sensor persistent? In the setup we analysed, the force
sensors were placed under the front panel of a desk the users were working at. The sensors,
thus, likely capture the behaviour related to a user’s general posture at a desk – the way one
leans on the desk and the way arms are held during the typing. We postulate that unlike some
other modalities, such as the typing speed, the force sensor reflects behaviour that does not
change much between two sessions in the described experiments. While people may type faster
or slower depending on whether they are familiar with a task or not (and indeed, we noticed
that in the second repetition of the same task they often tend to type faster), they are unlikely
to change their posture due to their perception of the task.</p>
      <p>Our identification of a rather crude force sensor as a pillar of stability in behaviour-based
authentication calls for the reconsideration of sensor types used for identification in IoT
environments. Recently, wireless sensing has demonstrated impressive results: millimeter wave
radars have been used to construct detailed 3D meshes of human users [18] and the same radars
have been used for gesture recognition [19]. Guided by the above results and the fact that
the sensing range can be tuned according to the distances and the details of interest, in our
future work we aim to include wireless sensing in order to capture general postures of users
and harness these for authentication.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>In this paper we analysed real-world sensor data collected from an office-like IoT environment
where 20 users conducted three different tasks in two sessions each. We focused on identifying
the most informative sensors for behaviour-based authentication. We show that while the sensors
that the users interact with the most carry the highest identification potential, the behaviour they
reflect is not necessarily the most stable. Instead, we observe that the sensors reflecting
more intrinsic properties of users, such as their posture, tend to exhibit a higher temporal
invariance. Consequently, our work stresses the need for additional sensors to be included,
should multimodal IoT sensing become a viable authentication method in the future.</p>
      <p>Method for Wearable IoT Devices, IEEE Internet of Things Journal 6 (2019) 820–830.
doi:10.1109/JIOT.2018.2860592.
[13] C. Feng, J. Xiong, L. Chang, F. Wang, J. Wang, D. Fang, RF-Identity: Non-Intrusive
Person Identification Based on Commodity RFID Devices 5 (2021) 1–23.
[14] P. Zhao, C. X. Lu, J. Wang, C. Chen, W. Wang, N. Trigoni, A. Markham, Human tracking
and identification through a millimeter wave radar, Ad Hoc Networks 116 (2021) 102475.
URL: https://doi.org/10.1016/j.adhoc.2021.102475. doi:10.1016/j.adhoc.2021.102475.
[15] R. Ryu, S. Yeom, S. H. Kim, D. Herbert, Continuous Multimodal Biometric Authentication
Schemes: A Systematic Review, IEEE Access 9 (2021) 34541–34557. doi:10.1109/ACCESS.2021.3061589.
[16] S. Nembrini, I. R. König, M. N. Wright, The revival of the Gini importance?, Bioinformatics
34 (2018) 3711–3718. doi:10.1093/bioinformatics/bty373.
[17] S. Sugrim, C. Liu, J. Lindqvist, Recruit until it fails: Exploring performance limits for
identification systems, Proceedings of the ACM on Interactive, Mobile, Wearable and
Ubiquitous Technologies 3 (2019) 1–26.
[18] H. Xue, Y. Ju, C. Miao, Y. Wang, S. Wang, A. Zhang, L. Su, mmMesh: towards 3D real-time
dynamic human mesh construction using millimeter-wave, in: Proceedings of the 19th
Annual International Conference on Mobile Systems, Applications, and Services, 2021, pp.
269–282.
[19] S. Palipana, D. Salami, L. A. Leiva, S. Sigg, Pantomime: Mid-air gesture recognition with
sparse millimeter-wave radar point clouds, Proceedings of the ACM on Interactive, Mobile,
Wearable and Ubiquitous Technologies 5 (2021) 1–27.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Morris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Thompson</surname>
          </string-name>
          , Password Security: A Case History
          ,
          <source>Communications of the ACM</source>
          <volume>22</volume>
          (
          <year>1979</year>
          )
          <fpage>594</fpage>
          -
          <lpage>597</lpage>
          .
          doi:10.1145/359168.359172.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>V.</given-names>
            <surname>Zimmermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Gerber</surname>
          </string-name>
          ,
          <article-title>The password is dead, long live the password - A laboratory study on user perceptions of authentication schemes</article-title>
          ,
          <source>International Journal of Human Computer Studies</source>
          <volume>133</volume>
          (
          <year>2020</year>
          )
          <fpage>26</fpage>
          -
          <lpage>44</lpage>
          . URL: https://doi.org/10.1016/j.ijhcs.2019.08.006. doi:10.1016/j.ijhcs.2019.08.006.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D. F.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wiliem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. C.</given-names>
            <surname>Lovell</surname>
          </string-name>
          ,
          <article-title>Face recognition on consumer devices: Reflections on replay attacks</article-title>
          ,
          <source>IEEE Transactions on Information Forensics and Security</source>
          <volume>10</volume>
          (
          <year>2015</year>
          )
          <fpage>736</fpage>
          -
          <lpage>745</lpage>
          .
          doi:10.1109/TIFS.2015.2398819.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Roy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Roy</surname>
          </string-name>
          ,
          <article-title>User authentication: keystroke dynamics with soft biometric features</article-title>
          ,
          <source>Internet of Things (IoT): Technologies, Applications, Challenges and Solutions</source>
          (
          <year>2017</year>
          )
          <fpage>99</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Frank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Biedert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Martinovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <article-title>Touchalytics: On the applicability of touchscreen input as a behavioral biometric for continuous authentication</article-title>
          ,
          <source>IEEE Transactions on Information Forensics and Security</source>
          (
          <year>2013</year>
          ).
          doi:10.1109/TIFS.2012.2225048.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <collab>IoT Analytics</collab>
          ,
          <source>State of IoT</source>
          <year>2021</year>
          ,
          <year>2021</year>
          . URL: https://iot-analytics.com/number-connected-iot-devices/.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Krašovec</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pellarini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Geneiatakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Baldini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pejović</surname>
          </string-name>
          ,
          <article-title>Not Quite Yourself Today: Behaviour-Based Continuous Authentication in IoT Environments</article-title>
          ,
          <source>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies</source>
          <volume>4</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Khamis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hassib</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. Von</given-names>
            <surname>Zezschwitz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bulling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Alt</surname>
          </string-name>
          ,
          <article-title>GazeTouchPIN: Protecting sensitive data on mobile devices using secure multimodal authentication</article-title>
          ,
          <source>ICMI 2017 - Proceedings of the 19th ACM International Conference on Multimodal Interaction</source>
          (
          <year>2017</year>
          )
          <fpage>446</fpage>
          -
          <lpage>450</lpage>
          .
          doi:10.1145/3136755.3136809.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <article-title>Freely typed keystroke dynamics-based user authentication for mobile devices based on heterogeneous features</article-title>
          ,
          <source>Pattern Recognition</source>
          <volume>108</volume>
          (
          <year>2020</year>
          ).
          doi:10.1016/j.patcog.2020.107556.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Buschek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>De Luca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Alt</surname>
          </string-name>
          ,
          <article-title>Improving accuracy, applicability and usability of keystroke biometrics on mobile touchscreen devices</article-title>
          ,
          <source>Conference on Human Factors in Computing Systems - Proceedings</source>
          (
          <year>2015</year>
          )
          <fpage>1393</fpage>
          -
          <lpage>1402</lpage>
          .
          doi:10.1145/2702123.2702252.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>F.</given-names>
            <surname>Juefei-Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bhagavatula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jaech</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Prasad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Savvides</surname>
          </string-name>
          ,
          <article-title>Gait-ID on the move: Pace independent human identification using cell phone accelerometer dynamics</article-title>
          ,
          <source>2012 IEEE 5th International Conference on Biometrics: Theory, Applications and Systems, BTAS 2012</source>
          (
          <year>2012</year>
          )
          <fpage>8</fpage>
          -
          <lpage>15</lpage>
          .
          doi:10.1109/BTAS.2012.6374552.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Accelerometer-Based Speed-Adaptive Gait Authentication</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>