=Paper= {{Paper |id=Vol-2404/paper14 |storemode=property |title=A Theoretical Model for the Human-IoT Systems Interaction |pdfUrl=https://ceur-ws.org/Vol-2404/paper14.pdf |volume=Vol-2404 |authors=Alessandro Sapienza,Rino Falcone |dblpUrl=https://dblp.org/rec/conf/woa/SapienzaF19 }} ==A Theoretical Model for the Human-IoT Systems Interaction== https://ceur-ws.org/Vol-2404/paper14.pdf
                                        Workshop "From Objects to Agents" (WOA 2019)




    A Theoretical Model for the Human-IoT Systems
                     Interaction
Alessandro Sapienza
ISTC-CNR, Rome, Italy
alessandro.sapienza@istc.cnr.it

Rino Falcone
ISTC-CNR, Rome, Italy
rino.falcone@istc.cnr.it

Abstract— Thanks to the IoT, our lives will strongly improve in the near future. However, it is not a given that users will be able to handle all the automation it will offer, or that this automation will be compatible with the users' cognitive attitudes and their actual goals. In this paper, we address the question of the IoT from the user's point of view. We start by analyzing the reasons that undermine the acceptance of IoT systems, and then we propose a possible solution. The first contribution of this work is a characterization of the levels of autonomy a user can grant to an IoT device. The second contribution is a theoretical model to deal with users and to stimulate their acceptance. By means of simulation, we show how the model works and we prove that it leads the system to an optimal solution.

Keywords— Trust, Internet of Things, Autonomy

I. INTRODUCTION

In the near future, we expect to have billions of devices connected to the Internet [1]. The main novelty is that this technology is not limited to the classic devices, but also involves objects that are not currently smart. We are going to face unprecedented scenarios: health, education, transport, every aspect of our lives will undergo radical changes, and even our own homes will become smart [2]. The fridge could tell us that it is better to throw the eggs away because they are no longer fresh, the washing machine could propose a more efficient way to wash clothes, entire buildings could work together to save energy or other resources. This very principle of "connection between things" is the basis of the Internet of Things [3].

While it is true that we have a multitude of extremely useful scenarios, there are also considerable security and privacy issues [4]. Certainly, we are not talking about an apocalyptic perspective, but if in everyday life a hacker is able to block our computer, just think about the damage one could cause by blocking our home doors. This problem is further amplified by the heterogeneity of the devices, which makes it more difficult to control and detect security flaws. If it is already difficult to accept that an object possesses intelligence and can interface with us, the thought that it could turn against us, causing substantial damage, could make it even harder for IoT systems to spread.

We argue that a good way to address this problem is through the concept of trust [5]. The key point is in fact that users do not trust these systems; they do not know them or what they can do. The concept of trust thus comes spontaneously into play. We propose a general IoT system able to adapt to the specific user and to his disposition towards this technology, with the aim of (1) identifying the acceptance limit the user has and (2) pushing the user to accept this technology further. After describing the theoretical model, we introduce a possible implementation in a simulative context, with the aim of showing how it works. Being a general model, it can be applied to any IoT system.

The rest of the paper is organized as follows: Section II analyzes the state of the art, pointing out the necessity of a user-centric design for IoT systems; Section III provides a theoretical framework for trust, control, and feedback, also showing the computational model we used; Section IV describes how we implemented the model in the simulation of Section V; Section VI comments on the results of the simulation; Section VII concludes the work.

II. DISTRUST IN THE IOT

IoT systems represent a wide variety of technologies, so it is not easy to identify in detail the common characteristics and features they should possess. However, in a high-level vision, some key aspects recurrently come into play.

For sure, a salient topic is that of security [6][7][8][9], in the sense that computer science gives to the term. A device must be secure, and the reason is clear: if we give the green light to such a pervasive technology, able to enter deeply into every aspect of our lives, it is fundamental that there are no security breaches. For instance, even our toaster could be able to steal money from our bank account; we need to be sure that similar scenarios will not happen. Security mainly relies on encryption to solve its problems.

Then privacy comes into play. As devices will exchange impressive amounts of information, more than can concretely be processed [10], it is not clear which information will be shared and with whom [11]. We need a new way to deal with privacy, since the classical approach of authentication [12] and policies cannot work properly in such a huge and heterogeneous environment. Facing privacy is necessary, but still not enough.

A third element is trust. Usually it is applied to identify trustworthy devices in a network, separating them from the malicious ones [13]. The authors of [14] provide a review of the use of trust in IoT systems, and they identify that trust "helps people overcome perceptions of uncertainty and risk and engages in user acceptance". In fact, when we have autonomous tools able to take many different decisions, and these decisions involve our own goals and results, we have to worry not just about their correct functioning (for each of these decisions) but also about their autonomous behavior and the role it plays for our purposes.

All these components are valid and fundamental to an IoT system. However, a further point should be emphasized.






Although an IoT device requires only the connection and interfacing with the outside world to be defined as such, and then the possibility of being addressed and of exchanging information with that world, these devices are not independent systems. On the contrary, they continually interact with users and relate to them in a very strong way: the user is at the center of everything. In fact, reasoning about the goals that these objects possess, their common purpose is to make life better for users, be they the inhabitants of a house, a city, patients and doctors in a hospital, or the workers of a facility.

The user becomes the fundamental point in all of this. A technology can be potentially perfect and very useful but, if people do not accept it, every effort is useless and it falls out of use. It is necessary to keep in mind how much the user is willing to accept an IoT system and to what extent he wants to interface with it. We would like to focus on this last point, the concept of "user acceptance".

As Ghazizadeh [15] says, "technology fundamentally changes a person's role, making the system performance progressively dependent on the integrity of this relationship. In fact, automation will not achieve its potential if not properly adopted by users and seamlessly integrated into a new task structure".

Furthermore, Miranda et al. [16] talk about the Internet of People (IoP). They reiterate that technology must be integrated into the users' daily lives, since users are right at the center of the system. They focus on the fact that IoT systems must be able to adapt to the user, taking people's context into account and avoiding user intervention as much as possible. Similarly, Ashraf [17] discusses autonomy in the Internet of Things, pointing out that in this context it is necessary to minimize user intervention.

Thus, the acceptance of a new technology seems to be the key point, and it is not always obvious. It is not easy for users to understand how a complex technology like this reasons and works. Often it is not clear what it is able to do and how it does it.

So it is true that security, privacy, and trust work together to increase reliance on IoT systems. However, it is necessary to keep users at the center of this discussion.

The reasons why a user may not grant high trust levels are the fears that (a) the task is not carried out in the expected way; (b) it is not completed at all; or (c) damage is even produced. These issues become more and more complicated if we consider that these devices can operate in a network with a theoretically infinite number of nodes: we do not know the other devices' goals or whether they will be reliable. We end up with a very complex system, difficult to understand and manage.

In short, the overall picture of the functions that these devices perform is going to become much more complicated. As a whole, the devices have considerable computational power and a huge amount of data available; they could be able to identify solutions that we had not even imagined. However, we must ensure that these systems will realize a state of the world coinciding with our expectations. What if our computer decided to shut down because we worked too much? Surely, such tasks have their usefulness, but it is not a given that the concept of utility the devices possess coincides with ours. We need to identify the goals we are interested in delegating to these systems and, at the same time, to make sure that they will be able to understand these goals.

To this purpose, Kranz [18] studies a series of use cases in order to provide guidelines for embedding interfaces into people's daily lives.

Economides [19] identifies a series of characteristics that an IoT system must possess in order to be accepted by users. However, he does not provide a clear methodology about how these characteristics should be estimated and computed.

What we would like is, on the one hand, that the system adapts to the user, comparing the user's expectations with its own estimations. On the other hand, we would like the user to adapt to the system, trying to make him accept increasing levels of autonomy. Therefore, we start by proposing a categorization of the devices' tasks based on autonomy. In order to operate, the devices must continuously estimate the level of autonomy that the user grants them. In this way, the relationship between an IoT device and the user starts at a level of complexity that the user knows and can handle, eventually moving to higher levels if the user allows it, i.e., if the trust he has towards the device is sufficient. In all this, it becomes fundamental to identify the levels of user trust. Trust therefore becomes a key concept.

III. TRUST, CONTROL AND FEEDBACK

Consider a situation in which an agent X (trustor) needs a second agent Y (trustee) to perform a task for him and must decide whether or not to rely on him. The reasons why he would like to delegate that task can differ; in general, X believes that delegating the task could have some utility.

Cognitive agents in fact decide whether or not to rely on others to carry out their tasks on the basis of the expected utility and of the trust they have in whoever will perform those tasks. As for the utility, it must be more convenient for the trustor that someone else carries out the task, otherwise he will do it by himself (if he can). Here we are not interested in dwelling on this point, and for simplicity we consider that it is always convenient to rely on others, that is, the expected utility when Y performs the task is always higher than if X had done it alone.

The key point here is that when an agent Y, cognitive or not, performs a task for me, if Y is to some extent an autonomous agent, I do not know how Y intends to complete the task, nor whether he will actually manage to do it.

In this frame, the concepts of trust and control intertwine in a very special way. In fact, the more we try to control, the less we trust. Vice versa, when we trust we need less control, and we can allow greater autonomy. Thus, although control is an antagonist of trust, somehow it helps trust formation [20]. When, in fact, the level of trust is not enough for the trustor to delegate a task to the trustee, control helps to bridge this gap. The more I trust an agent Y, the more autonomy I will grant him to carry out actions. But if I do not trust Y enough, I need to exercise control mechanisms over his actions. For instance, it was shown in [21] that when the users' experience with autonomous systems involves completely losing control of the decisions, the trust they have in these systems decreases. It is then necessary to lead the user to gradually accept ever-greater levels of autonomy.

A first form of control is feedback: the trustee reports on the work while it is in progress. The feedback must be provided before the operation ends, in order to make it possible to modify that work. Otherwise, one can actively handle a possible unforeseen event (intervention).






In this way, the feedback is a lighter form of control (less invasive), which may or may not result in the active involvement of the trustor. It has a fundamental role in overcoming the borderline cases in which the trust level would not be enough to delegate a task, but the trustor delegates it anyway thanks to this form of control. In the end, it can result in the definitive acceptance of the task (or in its rejection, which then results in trustor intervention and a consequent trust decrement).

A. Trust: a Multilayered Concept

Trust comes from different cognitive ingredients. The first one is direct experience, in which the trustor X evaluates the trustee Y exploiting the past interactions he had with Y. This approach has the advantage of using direct information; there is no intermediary (we are supposing that X is able to evaluate Y's performance better than others). However, it requires a certain number of interactions to produce a proper evaluation, and initially X has to trust Y without any clues (the cold start problem). Consider also that this evaluation could depend on many different factors, and that X must be able to perceive their different contributions.

It is also possible to rely on second-hand information, exploiting recommendations [22] or reputation [23]. In this case, there is the advantage of having a ready-to-use evaluation, provided that a third agent Z in X's social network knows Y and interacted with Y in the past. The disadvantage is that this evaluation introduces uncertainty due to Z's ability and benevolence; we need to trust Z as an evaluator.

Lastly, it is possible to use some mechanisms of knowledge generalization, such as the categories of belonging [24]. A category is a general set of agents (doctors, thieves, dogs, and so on) whose members have common characteristics, determining their behavior or their ability/willingness. If I am able to associate Y to a category and I know the average performance of the members of that category on the specific task of interest, I can exploit this evaluation to decide whether to trust Y. The advantage is that I can evaluate every node of my network, even if nobody knows it. The disadvantage is that the level of uncertainty introduced by this method can be high, depending on the variability inside the category and on its granularity. A practical example in the context of the IoT is that I could believe that the devices produced by a given manufacturer are better than the others, and then I could choose to delegate my task to them.

Since in this work we are not strictly interested in how trust evaluations are produced but in their practical applications, we will just rely on direct experience. This avoids introducing the further uncertainty caused by second-hand evaluation.

In this paper, trust is taken into account under two aspects. The first is that of autonomy. Similarly to [25], where however the authors are not working with IoT devices (they use a wheelchair: not an IoT device, but an autonomous system endowed with smart functionalities and different autonomy levels), tasks are grouped into several autonomy levels. A user, based on his personal availability, will assign a certain initial level of autonomy to a device. This level can positively or negatively change over time, depending on the interactions that the user has.

We need to define what a device can do, based on the current level of autonomy. Thus, the first contribution of this work is the identification and classification of the autonomy levels at which an IoT device can operate. Applying the concepts of trust and control defined by Castelfranchi and Falcone [20], we defined 5 levels, numbered from 0 to 4.

Level 0 requires operating according to the basic function; for example, a fridge will just keep things cool. This means that the device is not going to communicate with other devices and is not going beyond its basic task. Proceeding with the metaphor of the fridge, it cannot notice that something is missing; it does not even know what it contains.

Level 1 allows communicating with other agents inside and outside the environment, but just in a passive way (i.e., giving information about the current temperature).

At level 2 a device can autonomously carry out tasks, but without cooperating with other devices; again, thinking of a fridge, if a product needs a temperature below 5 degrees and another one above 7, it can autonomously decide which temperature to set, always keeping in mind that the main goal is to maximize the user's utility.

Level 3 grants the possibility to autonomously carry out tasks cooperating with other devices. Cooperation is actually a critical element, as it involves problems like the choice of partners, as well as the recognition of merit and guilt. Although we are not going to cover this part, focusing just on the device starting the interaction, it is necessary to point it out. Again, thinking of the fridge, if it is not able to go below a certain temperature because it is hot in the house, it can ask the heating system to lower the temperature of the house. This requires a complex negotiation between two autonomous systems: they need to understand what the user's priority is, and this is not easy to solve. Furthermore, the systems in question must be able to communicate, using common protocols. This can happen if the devices adopt a communication standard enabling interoperability. Smart houses are valid examples of communication between different devices (differently from smart houses, however, in this work there is no centralized entity: we deal with an open system, in which the intelligence is distributed over the individual devices).

Level 4, called over-help [26], gives the possibility of going beyond the user's requests, proposing solutions that he could not even imagine: the same fridge could notice from our temperature that we have a fever, proceeding then to cancel the dinner with our friends and to book a medical examination. This type of interaction may be too pervasive.

It is easy to understand that these kinds of tasks require an increasing level of autonomy. Level 0 is the starting level: the device limits itself to the elementary functions it is supposed to perform. Beyond that, it is not certain that the user is going to accept the next levels.
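To fix ideas, the classification can be rendered in code as follows. This is a minimal Python sketch: the data structure and the helper function are ours, and only the level semantics come from the text.

    # The five autonomy levels of Section III, with the fridge examples
    # used in the text. Illustrative only.
    AUTONOMY_LEVELS = {
        0: "basic function only: the fridge just keeps things cool",
        1: "passive communication: e.g., report the current temperature",
        2: "autonomous tasks, no cooperation: choose the temperature "
           "that maximizes the user's utility",
        3: "autonomous tasks with cooperation: ask the heating system "
           "to lower the house temperature",
        4: "over-help [26]: go beyond the user's explicit requests",
    }

    def tolerated_levels(autonomy_trust, threshold):
        """Levels whose trust value exceeds a (user-dependent) threshold."""
        return [lvl for lvl in sorted(AUTONOMY_LEVELS)
                if autonomy_trust[lvl] > threshold]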






A trust value is associated with each level i, with i going from 0 to 4, representing the user's disposition towards the tasks of that level. The trust values for autonomy are defined as real numbers in the range [0, 1].

These trust values are related to each other: the higher level i+1 always has a trust value equal to or less than that of the previous level i. Moreover, we suppose that there is influence between them, so that when a device selects a task belonging to level i and this is accepted, both the trust value on level i and that on the next level i+1 will increase, according to Formulas (1) and (2). Here the new trust value at level i, newAutonomyTrustLi, is computed as the sum of the old trust value plus the constant increment; similarly, the new trust value on level i+1, newAutonomyTrustLi+1, is computed as the sum of the old trust value plus half of the constant increment:

    newAutonomyTrustLi = autonomyTrustLi + increment        (1)

    newAutonomyTrustLi+1 = autonomyTrustLi+1 + increment/2        (2)

Note that level i+1 exists only if i is smaller than 4; when i is equal to 4, Formula (2) is not taken into consideration.

When instead there is a trust decrease, because the task is interrupted, even the trust in the following levels is decremented. Formula (3) describes what happens to the autonomy trust value of level i, while Formula (4) shows what happens to the higher levels, where the penalty, like the increment of Formula (2), propagates with half weight:

    newAutonomyTrustLi = autonomyTrustLi - penalty        (3)

    newAutonomyTrustLj = autonomyTrustLj - penalty/2,  for j = i+1, ..., ML        (4)

In Formula (4), ML is the index of the maximal level defined in the system; here, in particular, it is equal to 4. The two variables increment and penalty are real values in the range [0, 1]. According to [27], we chose to give a higher weight to negative outcomes than to positive ones, as trust is harder to gain than to lose.

What has been said so far concerns the aspect of autonomy. However, it is necessary to take into consideration that a device can fail when doing a task. Failures are due to multiple causes, both internal and external to the device itself. A device can fail because a sensor detected a wrong measurement, because it did not manage to perform the requested action in time, because it did something differently from what the user expected, or because a partner device was wrong. All of this is modeled through the dimension called efficiency.

What matters to us in this case is that each device has a certain error probability on each level. Although these values are expected to grow as the level increases, this is not necessarily so; there may be mistakes that affect lower-level tasks but not upper-level ones.

It is therefore necessary to have a mechanism able to identify which levels create problems, without necessarily blocking the subsequent levels.

Depending on the device's performance, the trust values concerning efficiency, defined as real numbers in the range [0, 1], will be updated in a similar way to autonomy. Given that we are still dealing with trust, and both efficiency and autonomy are modeled in the same way, for the sake of simplicity we used the same parameters as for autonomy: with a positive interaction, the new trust value newEfficiencyTrustLi is computed as the sum of the old trust value efficiencyTrustLi and increment, while, in case of failure, it is decreased by penalty. Formulas (5) and (6) describe this behavior:

    newEfficiencyTrustLi = efficiencyTrustLi + increment        (5)

    newEfficiencyTrustLi = efficiencyTrustLi - penalty        (6)

Differently from autonomy, for efficiency we change just the trust value of the considered level.
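The update rules of Formulas (1)–(6) can be sketched compactly in Python. The clamping of trust values to [0, 1] and the half-weight penalty of Formula (4) follow our reading of the text; the default constants match the experimental setting of Section V.

    ML = 4  # index of the maximal level

    def clamp(x):
        # Trust values are defined in [0, 1]; we assume updates are clamped.
        return max(0.0, min(1.0, x))

    def update_autonomy(trust, i, accepted, increment=0.05, penalty=0.1):
        """Update autonomy trust after a task of level i is accepted or interrupted.

        Formulas (1)-(2): acceptance raises level i by `increment` and level
        i+1 (if any) by half of it. Formulas (3)-(4): an interruption lowers
        level i by `penalty` and every higher level by half of it.
        """
        trust = list(trust)
        if accepted:
            trust[i] = clamp(trust[i] + increment)                   # (1)
            if i < ML:
                trust[i + 1] = clamp(trust[i + 1] + increment / 2)   # (2)
        else:
            trust[i] = clamp(trust[i] - penalty)                     # (3)
            for j in range(i + 1, ML + 1):
                trust[j] = clamp(trust[j] - penalty / 2)             # (4)
        return trust

    def update_efficiency(trust, i, success, increment=0.05, penalty=0.1):
        """Formulas (5)-(6): only the level of the executed task changes."""
        trust = list(trust)
        trust[i] = clamp(trust[i] + increment if success else trust[i] - penalty)
        return trust

The user-side model is obtained by calling the same functions with user-increment and user-penalty in place of increment and penalty.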
The trust model works in a similar way for the user. The only difference is that the user has his own constants to update trust: user-increment and user-penalty, defined as real numbers in the range [0, 1]. Thus, to get the user's model, it is just necessary to replace increment with user-increment and penalty with user-penalty in Formulas (1)–(6).

IV. THE MODEL

In the realized model, a single user U is located in a given environment and interacts with a predefined series of IoT devices, which can perform different kinds of actions. The basic idea is that the devices will exploit the interaction with the user U in order to increase the autonomy U grants them.

The simulation is organized in rounds, called ticks, and on each tick U interacts with all of these devices.

The user U has a certain trust threshold for the various autonomy levels. First of all, a device needs to identify this limit value and operate within its range, periodically trying to increase it so as to obtain an ever-increasing autonomy.

When U has a positive experience with a device on an autonomy level it can access, the trust U has in that level increases. We argue that even the trust in the very next level will increase. When this trust value overcomes a threshold, the devices may attempt to perform tasks belonging to that level. In this case the user, given his trust value on that level, has three possibilities. If the trust value is high enough, he simply accepts the task. If the trust value is within a given range of uncertainty, and the user is not sure whether to accept the task or not, he asks for feedback, which will be accepted with a given probability. If the trust value is too low, he refuses the task, blocking it.

This is what happens for autonomy. The efficiency dimension behaves similarly, with the difference that if the trust on a given level increases, it will not affect the higher levels: it is not a given that if a device performs properly on a set of tasks, it will do the same on the higher level; nor is it true that if it performs badly on a level, it will do the same on the higher one. Each level is completely independent of the others. Again, given the specific trust value on that level, the user can accept the task, refuse it, or ask for feedback.

A. The User

In the simulations, we have a single user U dealing with a number of IoT devices. He uses them to pursue his own purposes, but grants them just a given trust level, which limits their autonomy. While dealing with a device D, U will update his trust values concerning D on each task level, both for efficiency and for autonomy. His decisions to accept, ask for feedback, or refuse a task depend on two internal thresholds, th-min and Th-max (equal for all the agents). In particular, when he asks for feedback, it will be accepted with a given acceptance probability, a specific value characterizing the individual user. The trust values will be updated, increasing them with the constant user-increment or decreasing them with user-penalty.

B. The Devices

There can be a variable number of devices in the world. All of them possess two purposes. The first one is to pursue the user's tasks, satisfying his needs (even if he has not explicitly requested them). The second one consists of trying to increase these trust values, so that they can operate with a higher autonomy level, performing increasingly smart and complex functions for the user.


First of all, in order to understand at what levels they can work, the devices need to estimate the user's trust values. On each turn, a device will identify which tasks it is allowed to perform; then it will select a task belonging to a specific level, with a probability proportional to the estimated trust: the more trust there is on a level, the more likely it is that a task of that level will be selected. Then it tries to perform that task. At this point the user can interact or not with the device. If the device D selected a task belonging to a sufficiently trusted level, the task will be accepted; if it is not trusted enough, it will be rejected.

But there is an intermediate interval, halfway between acceptance and rejection. In this interval, if U is not sure what to do, he will ask the device for feedback, and the device will explain what it is doing. The feedback determines the task's acceptance or its rejection (see Section IV.D below).

If the task is accepted, then U also checks D's performance, which can be positive or negative. Each device has in fact a given error probability linked to specific levels. This probability generally increases with each level, as tasks with greater autonomy usually imply greater complexity, so it is more difficult to get the result. But this is not always true; for example, some errors may occur at a specific level but not at others.

Summing up, a device is characterized by: its estimation of the user's trust on the various levels; its estimation of its own efficiency; and the error percentage on each level, an intrinsic characteristic of the device, which neither the device nor the user can directly access; they can just try to estimate it.

C. Task Selection

Once a precise task classification has been provided, it is necessary to identify a methodology for correctly selecting a task. It is fundamental that the devices select tasks (a) towards which the user is well disposed, therefore with a degree of autonomy that falls within the allowed limits; and (b) for which they can guarantee a certain level of performance.

In order to consider both these constraints, the devices compute what we call the global trust vector, computing level by level the average between the trust values of autonomy and efficiency. In order for a task to be selected, the relative trust value must be above a certain threshold. Generally, this threshold is equal to 0.5, but when a device is interrupted due to insufficient autonomy, this threshold is raised to 0.75 for a certain period.

The tasks presented to the device are multiple and of various natures; it is not the same task performed with different autonomy. So it can happen that tasks of different levels are needed. In general, however, the devices try to perform sparingly the tasks that are not certain to be accepted by the user. The selection of the task level takes place in a probabilistic manner, with probability proportional to the overall trust estimated at that level.

Let us make an example to clarify this point. Suppose that the device D estimates that the global trust values are 1 for level 0, 0.7 for level 1, and 0 for levels 2, 3, and 4. Given that only levels 0 and 1 exceed the threshold of 0.5, D can just select a task belonging to these two levels. In particular, proportionally to the trust levels, there is a 59% probability that it will select a task belonging to level 0 and a 41% probability that it will select a task belonging to level 1 (the weights are 1/1.7 ≈ 0.59 and 0.7/1.7 ≈ 0.41).
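The selection procedure can be sketched as follows (Python; restricting the proportional draw to the admissible levels and renormalizing the weights is our reading of the text):

    import random

    def global_trust(autonomy_est, efficiency_est):
        # Level-by-level average of the two estimated trust vectors.
        return [(a + e) / 2 for a, e in zip(autonomy_est, efficiency_est)]

    def select_task_level(autonomy_est, efficiency_est, threshold=0.5):
        """Pick a task level with probability proportional to global trust,
        restricted to the levels above `threshold` (0.5 normally, 0.75 for
        a while after an interruption due to insufficient autonomy)."""
        g = global_trust(autonomy_est, efficiency_est)
        admissible = [lvl for lvl, t in enumerate(g) if t > threshold]
        if not admissible:
            return None  # no level is trusted enough (our assumption)
        weights = [g[lvl] for lvl in admissible]
        return random.choices(admissible, weights=weights)[0]

    # Example from the text: global trust [1, 0.7, 0, 0, 0] admits levels
    # 0 and 1, chosen with probability 1/1.7 ~ 59% and 0.7/1.7 ~ 41%.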
D. Acceptance, Interruption, and Feedback

Here we analyze how the user can react to a task chosen by a device. As already mentioned, the user evaluates the trustworthiness of the different autonomy levels of the IoT devices, but he must also take into account the efficiency aspect.

The user will check the two trust values and compare them with the thresholds. If the specific value is lower than the first acceptance threshold (th-min), the task is interrupted. If it is greater than the second acceptance threshold (Th-max), the task is accepted. However, a situation of uncertainty arises between the two thresholds: in this case, the user U does not know whether to accept the task or not. At this point, U asks the device for feedback, which is fundamental for the continuation of the task. For feedback on autonomy, the device explains what it is doing, while for feedback on efficiency, the device clarifies the final result of the action it is performing.

The feedback is a fundamental element of this complex system. Thanks to it, it is possible to overcome the limit situations that the devices need to face.

Feedback will be accepted with a certain probability. In the case of autonomy, this probability p is an intrinsic characteristic of the user; it represents his willingness to accept a new task with greater autonomy. Regarding feedback on efficiency, the acceptance probability depends on the level of trust that the user has in the efficiency of the device. In particular, the probability c of accepting the feedback increases linearly from 0% at th-min to 100% at Th-max.
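These decision rules can be sketched as follows (Python; th_min and th_max correspond to the thresholds above, and the linear ramp implements the probability c):

    import random

    def user_decision(trust_value, th_min=0.3, th_max=0.6):
        """Accept, refuse, or ask for feedback, given a trust value."""
        if trust_value < th_min:
            return "refuse"
        if trust_value > th_max:
            return "accept"
        return "feedback"

    def efficiency_feedback_accepted(trust_value, th_min=0.3, th_max=0.6):
        """Probability c grows linearly from 0 at th-min to 1 at Th-max."""
        c = (trust_value - th_min) / (th_max - th_min)
        return random.random() < c

    def autonomy_feedback_accepted(p):
        """For autonomy, the acceptance probability p is a fixed user trait."""
        return random.random() < p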
E. The User-Device Interaction

In this section we focus on how users and devices interact, analyzing their behavior and the actions they can perform.

Starting from the idle state, when a device performs a task τ the user checks his internal state, that is, his trust values for autonomy (ta) and for efficiency (te) concerning the level of the task τ. These values trigger the different actions described in Section IV.D: to accept the task; to refuse the task; to ask for feedback on the autonomy; to ask for feedback on the efficiency.

Concerning the feedback, it will involve the acceptance or the refusal of the task with a probability equal to p for the autonomy and c for the efficiency. Both these probabilities are described in Section IV.D.

Starting from the idle state, the device selects a task according to the user model UM, which is the estimation of the user's internal state in terms of the trust values characterizing autonomy and efficiency. Once a task is selected, the device starts executing it. If the user does not interfere, the task is completed; otherwise, it can be blocked or there can be a feedback request, which will result in the acceptance of the task or in its rejection. Notice that when the user stops a device, the device does not explicitly know whether this is due to autonomy or to efficiency, but it can deduce it, since it has an estimate of the user's trust values. The trust update, both for the user and for the device, is done according to the principles and formulas of Section III.A.






V. SIMULATIONS

The simulations were realized using NetLogo [28], an agent-based framework. We aim to understand whether the described algorithm works and actually leads to the user acceptance of new autonomy levels. Therefore, we investigate two sample scenarios that can happen while interacting with IoT systems, observing their evolution and the final level of autonomy achieved. In the first one, we check what happens when there is no error, assuming that the devices are always able to get the expected result; since the devices' efficiency will always be maximal, we focus on the autonomy. In the second one, we consider that the execution of a task can be affected by errors: a sensor reporting wrong information, a partner device making a mistake, a different way to get the same result, or even a delay in getting the result can be considered by the user as a mistake. Here we focus on the relationship between autonomy and efficiency.

As we are interested in the final result of the model, we need to grant the system enough time to reach it. To do so, the experiments' duration is 1000 runs; we show the final trust values just after that period. Moreover, to eliminate the small differences randomly introduced in the individual experiments, we show the average results over 100 simulations with the same setting. In particular, we analyze the aggregate trust values that the user has (the values estimated for each device are aggregated into a single value) in autonomy and efficiency. For convenience, in the experiments we indicate the values of trust or error at the various levels in the form [x0 x1 x2 x3 x4], in which the subscript stands for the level.

A. First Experiment

The first experiment analyzes the case in which the devices make no mistakes. In this situation, we just focus on the aspect of autonomy, while efficiency plays a secondary role. The experimental setting is the following (a code sketch of this configuration is shown after the list):

        1.   Number of devices: 10

        2.   Error probability: [0 0 0 0 0]

        3.   Penalty = user-penalty = 0.1

        4.   Increment = user-increment = 0.05

        5.   User profile = (cautious, normal, open-minded)

        6.   Feedback acceptance probability: 0%, 25%, 50%, 75%, 100%

        7.   Duration: 1000 time units

        8.   th-min = 0.3

        9.   Th-max = 0.6

        10.  Initial trust values for efficiency: [0.5 0.5 0.5 0.5 0.5]
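As a reference, the setting above can be summarized in code. The paper's simulations were written in NetLogo [28]; this Python rendering of the parameters is ours.

    # Sketch of the first experiment's configuration (illustrative names).
    CONFIG = {
        "n_devices": 10,
        "error_probability": [0, 0, 0, 0, 0],   # per level; experiment 1
        "penalty": 0.1, "user_penalty": 0.1,
        "increment": 0.05, "user_increment": 0.05,
        "user_profile": "cautious",              # or "normal", "open-minded"
        "feedback_acceptance": [0.0, 0.25, 0.5, 0.75, 1.0],
        "ticks": 1000,
        "th_min": 0.3, "th_max": 0.6,
        "initial_efficiency_trust": [0.5] * 5,
    }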
Before starting the discussion of the experiment, we discuss the choice of the simulation parameters, especially for the user. We did not investigate different values of penalty and increment (and of the corresponding user-penalty and user-increment), but we made a few considerations for determining their values. First, they need to be sufficiently small to provide a stable trust evaluation, as high values would lead to an unstable evaluation, too dependent on the last experience. Second, since humans are more influenced by negative outcomes than by positive ones [27], penalty and user-penalty should be respectively greater than increment and user-increment. Third, as the devices need to estimate the user's trust values, it is very useful that their parameters coincide. A more complete solution would require that the devices estimate the user's values at runtime; however, this is beyond the aims of the experiment.

As for the user profiles, they affect the initial levels of confidence in the autonomy of the devices. The cautious user is the most restrictive; his initial values are [1 0.75 0.5 0.25 0]. This means that at the beginning only the first 2 task levels can be executed. The normal user has slightly higher values: [1 1 0.75 0.5 0.25]. With this user it is possible to perform the first 3 task levels. The last type of user is the open-minded one: [1 1 1 0.75 0.5]. Since this user is the most open towards the devices, it is possible to immediately execute the first 4 task levels. We will focus on the cautious user, as it is the most restrictive; then, when necessary, we will show the differences for the other users.

We chose to set the initial efficiency trust values to 0.5, which represents an intermediate condition: the user does not possess any clues, nor does he have an internal predisposition that could lead him to trust a specific device more or less on a specific level. Therefore, he needs to build experience to calibrate these values.

Concerning the choice of th-min and Th-max, the only constraint is that the first should be smaller than the second. We chose 0.3 and 0.6, respectively, in order to divide the trust range into three intervals of similar size.

In the tables below, we can see what happens to the user after the interaction with the devices. Each row represents the trust values that a user with a given percentage of feedback acceptance has on the five task levels. As we can see from the values of autonomy and efficiency (Tables I and II, respectively), in this situation the designed algorithm allows the optimal trust levels to be reached.

TABLE I. USER TRUST LEVELS CONCERNING AUTONOMY WHEN THE DEVICES DO NOT MAKE MISTAKES.

TABLE II. USER TRUST LEVELS CONCERNING EFFICIENCY WHEN THE DEVICES DO NOT MAKE MISTAKES.

This is just the ideal case, but it is also the proof that the whole mechanism works. The devices can estimate the user's trust values and first try to adapt to them. After that, there is a continuous phase of adaptation, both for the devices and for the user: the devices continuously try to modify the user's trust values. At the end, it is possible to execute tasks belonging to any level.






Notice that the final results are independent of the percentage of feedback acceptance and of the user profile. These parameters do not influence the final value, but only the time needed to reach it. The values shown are in fact the final results, after 1000 runs; we did not analyze the way the trust levels change during this time window. The feedback acceptance probability for the autonomy influences the speed at which these values are reached, so that a "more willing to innovate" user will reach those values first. For instance, Table III shows what happens in the first experiment after only 250 runs. Here we can see significant differences, due precisely to the fact that users with a lower feedback acceptance probability need more time to reach the final values. After a sufficiently long time, they will all converge to the same final value; the ending point is always the same.

TABLE III. USER TRUST LEVELS CONCERNING AUTONOMY AFTER 250 RUNS, WHEN THERE IS NO ERROR AND THE USER IS CAUTIOUS.

B. Second Experiment

In this second experiment, we consider the presence of errors. We made the assumption that the error probability increases as the task level increases: starting from 0% at the initial level, as the device is supposed to perform its basic functions correctly, it rises up to a maximum of 20% at the last level. This makes sense because the device is going to perform increasingly complex tasks; however, it does not always work this way, and other types of error may occur. The experimental setting is the same as before; we just changed the error probability to [0 5 10 15 20].

Introducing errors, the trust in the devices' efficiency decreases as the error increases, as shown in Table IV. As far as autonomy is concerned (Table V), we would have expected it to reach the maximum values, but it does not. Sometimes, in fact, it happens that a device makes mistakes repeatedly on level 4. If this occurs so many times as to reduce the confidence in its efficiency below the th-min threshold, the user will block all future execution attempts of that task level for that specific device. As the task level is no longer performed, its trust in autonomy will also remain low.

Concerning the user profiles, they influence the final trust value in the autonomy. Since they start from slightly higher values, at the end of the simulation they also reach higher values. For example, Table VI shows the autonomy values when the user is open-minded.

TABLE IV. USER TRUST LEVELS CONCERNING EFFICIENCY WHEN THE DEVICES' ERROR INCREASES WITH THE TASK LEVEL AND THE USER IS CAUTIOUS.

TABLE V. USER TRUST LEVELS CONCERNING AUTONOMY WHEN THE DEVICES' ERROR INCREASES WITH THE TASK LEVEL AND THE USER IS CAUTIOUS.

TABLE VI. USER TRUST LEVELS CONCERNING AUTONOMY WHEN THE DEVICES' ERROR INCREASES WITH THE TASK LEVEL AND THE USER IS OPEN-MINDED.

VI. DISCUSSION

The experiments we proposed analyze two interesting situations, with the aim of verifying the behavior of the theorized model. The first experiment proves that in the absence of errors, and therefore in ideal conditions, it is possible to reach the maximum levels of autonomy and efficiency. This depends on the fact that, in the model, we assumed that users place no constraint on their confidence in the devices as long as these are shown to perform correctly. In other words, there is no implicit limitation impeding the increase of trust when the devices perform well; this is clearly expressed by Formulas (1)–(6) in Section III.A, which regulate the dynamics of trust. Of course, this model could be further extended, making it more realistic, by considering that some users may have intrinsic limitations against a too strong autonomy of the devices. We then analyzed the factors affecting the system, trying to understand what effect they have and whether they represent a constraint for autonomy.

The first factor is efficiency. It has a very strong effect, so in the presence of a high error rate, some tasks are no longer performed. In the case of low-level tasks, there is no influence on the next levels. However, if the error concerns the highest levels, this can also prevent the highest levels of autonomy from being reached.

Concerning the initial user profile, its relevance is due to the fact that, in the presence of error, a more open profile makes it possible to reach slightly higher levels of autonomy, precisely because these values are higher at the beginning. It is important to underline that there could be many more structural differences between the user typologies than the ones we chose; these differences could be integrated as cognitive variables influencing the outcome, reducing, with respect to the results shown, the acceptance of the system. Given the absence of real data, in this work we decided to model the different user profiles based only on their initial availability. However, we plan to integrate this aspect in future works.

The last factor is the feedback acceptance probability for the autonomy, a characteristic of the specific user. As we have shown in the results (Table III), this parameter influences the speed at which the corresponding final trust values are reached, so that a "more willing to innovate" user will reach those values first.

VII. CONCLUSIONS

In this work, we propose a model for the users' acceptance of IoT systems. While the current literature is working on their security and privacy aspects, very little has been said about the user's point of view. This is actually a key topic, as even the most sophisticated technology needs to be accepted by the users, otherwise it simply will not be used. The model we proposed uses the concepts of trust and control, with particular reference to feedback.




    Our first contribution is a precise classification of the tasks an IoT device can perform according to the autonomy the user grants. We defined five levels of autonomy, depending on the functionalities a device has; the execution of a task belonging to a certain level assumes that it is also possible to execute (at least as far as autonomy is concerned) the tasks of the previous levels.
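    This cumulative structure is easy to capture in code. The sketch below is ours, not the paper's implementation: the five levels are kept abstract, and the single granted_level field encodes the fact that granting a level implies granting all the previous ones.

    MAX_LEVEL = 5  # the five autonomy levels defined above, 1 = lowest

    class Device:
        def __init__(self, granted_level=1):
            # Highest autonomy level the user currently grants to this device.
            self.granted_level = granted_level

        def may_execute(self, task_level):
            # A task is allowed iff its level does not exceed the granted one;
            # by construction, all tasks of the previous levels are allowed too.
            return 1 <= task_level <= self.granted_level

    d = Device(granted_level=3)
    assert d.may_execute(2)      # lower levels are implied
    assert not d.may_execute(4)  # higher levels require more granted autonomy

    Storing a single granted level, rather than a set of permissions, suffices precisely because the levels are nested.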
                                                                                           4738–4744, doi:10.24963/ijcai.2017/660.
    Based on this classification, we provided a theoretical                           [12] Maurya, A.K.; Sastry, V.N. Fuzzy Extractor and Elliptic Curve Based
framework for the device–user relationship, formalizing their                              Efficient User Authentication Protocol for Wireless Sensor Networks
interaction. It is in fact a complex interaction: on the one                               and Internet of Things. Information 2017, 8, 136.
hand, the device must adapt to the user, on the other hand, it                        [13] Asiri, S.; Miri, A. An IoT trust and reputation model based on
must ensure that the user adapts to it. The realized model                                 recommender systems. In Proceedings of the 2016 14th Annual
perfectly responds to these needs. We proved this by the                                   Conference on Privacy, Security and Trust (PST), Auckland, New
                                                                                           Zealand, 12–14 December 2016; pp. 561–568.
means of simulation, implementing the proposed model and
showing that it works and it allows enhancing user’s trust on                         [14] Yan, Z.; Zhang, P.; Vasilakos, A.V. A survey on trust management
                                                                                           for Internet of Things. J. Netw. Comput. Appl. 2014, 42, 120–134.
the devices and consequently the autonomy the devices have.
    In a further step, we tested the model in the presence of incremental error, i.e., error increasing with the complexity of the task. Of course, even if we did not consider them here, there can be other kinds of error, such as hardware-related errors (for instance, a non-functioning sensor or actuator) or errors due to the cooperation with other devices (wrong partner choice, wrong coordination, etc.).
    The entire work provides some hints and interesting considerations about the user's acceptance of IoT systems, which designers should keep in mind during the design phase. It is worth noting that these results have been obtained by focusing not on the specific characteristics of the device, intrinsic to its nature and bound to a specific domain, but on what the device is authorized to do based on the autonomy granted to it. This means that these results are applicable to IoT systems in general, regardless of the domain.
                                                                                      [21] Bekier, M.; Molesworth, B.R.C. Altering user’ acceptance of
                          ACKNOWLEDGMENT                                                   automation through prior automation exposure. Ergonomics 2017, 60,
   This work is partially supported by the project CLARA-                                  745–753.
CLoud plAtform and smart underground imaging for natural                              [22] Falcone, R.; Sapienza, A.; Castelfranchi, C. Recommendation of
Risk Assessment, funded by the Italian Ministry of                                         categories in an agents world: The role of (not) local communicative
                                                                                           environments. In Proceedings of the 2015 13th Annual Conference on
Education, University and Research (MIUR-PON).                                             Privacy, Security and Trust (PST), Izmir, Turkey, 21–23 July 2015;
                                                                                           pp. 7–13.
                                 REFERENCES
[1]  Internet of Things Installed Base Will Grow to 26 Billion Units by 2020. Gartner Press Release, 2013. Available online: www.gartner.com/newsroom/id/2636073
[2]  Lin, H.; Bergmann, N.W. IoT privacy and security challenges for smart home environments. Information 2016, 7, 44.
[3]  Atzori, L.; Iera, A.; Morabito, G. The internet of things: A survey. Comput. Netw. 2010, 54, 2787–2805.
[4]  Medaglia, C.M.; Serbanati, A. An overview of privacy and security issues in the internet of things. In The Internet of Things; Springer: New York, NY, USA, 2010; pp. 389–395.
[5]  Castelfranchi, C.; Falcone, R. Trust Theory: A Socio-Cognitive and Computational Model; John Wiley and Sons: Chichester, UK, 2010.
[6]  Suo, H.; Wan, J.; Zou, C.; Liu, J. Security in the internet of things: A review. In Proceedings of the 2012 International Conference on Computer Science and Electronics Engineering (ICCSEE), Hangzhou, China, 23–25 March 2012; IEEE: Los Alamitos, CA, USA, 2012; Volume 3, pp. 648–651.
[7]  Jing, Q.; Vasilakos, A.V.; Wan, J.; Lu, J.; Qiu, D. Security of the internet of things: Perspectives and challenges. Wirel. Netw. 2014, 20, 2481–2501.
[8]  Roman, R.; Najera, P.; Lopez, J. Securing the internet of things. Computer 2011, 44, 51–58.
[9]  Pecorella, T.; Brilli, L.; Mucchi, L. The role of physical layer security in IoT: A novel perspective. Information 2016, 7, 49.
[10] Sheth, A. Internet of things to smart IoT through semantic, cognitive, and perceptual computing. IEEE Intell. Syst. 2016, 31, 108–112.
[11] Kokciyan, N.; Yolum, P. Context-based reasoning on privacy in Internet of Things. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017), AI and Autonomy Track, Melbourne, Australia, 19–25 August 2017; pp. 4738–4744, doi:10.24963/ijcai.2017/660.
[12] Maurya, A.K.; Sastry, V.N. Fuzzy extractor and elliptic curve based efficient user authentication protocol for wireless sensor networks and Internet of Things. Information 2017, 8, 136.
[13] Asiri, S.; Miri, A. An IoT trust and reputation model based on recommender systems. In Proceedings of the 2016 14th Annual Conference on Privacy, Security and Trust (PST), Auckland, New Zealand, 12–14 December 2016; pp. 561–568.
[14] Yan, Z.; Zhang, P.; Vasilakos, A.V. A survey on trust management for Internet of Things. J. Netw. Comput. Appl. 2014, 42, 120–134.
[15] Ghazizadeh, M.; Lee, J.D.; Boyle, L.N. Extending the Technology Acceptance Model to assess automation. Cogn. Technol. Work 2012, 14, 39–49.
[16] Miranda, J.; Mäkitalo, N.; Garcia-Alonso, J.; Berrocal, J.; Mikkonen, T.; Canal, C.; Murillo, J.M. From the Internet of Things to the Internet of People. IEEE Internet Comput. 2015, 19, 40–47.
[17] Ashraf, Q.M.; Habaebi, M.H. Introducing autonomy in internet of things. In Proceedings of the 2015 14th International Conference on Applied Computer and Applied Computational Science (ACACOS '15), Kuala Lumpur, Malaysia, 23–25 April 2015; pp. 215–221.
[18] Kranz, M.; Holleis, P.; Schmidt, A. Embedded interaction: Interacting with the internet of things. IEEE Internet Comput. 2010, 14, 46–53.
[19] Economides, A.A. User perceptions of Internet of Things (IoT) systems. In International Conference on E-Business and Telecommunications; Springer: Cham, Switzerland, 2016; pp. 3–20.
[20] Castelfranchi, C.; Falcone, R. Trust and control: A dialectic link. Appl. Artif. Intell. 2000, 14, 799–823; Special Issue on "Trust in Agents", Part 1; Castelfranchi, C., Falcone, R., Firozabadi, B., Tan, Y., Eds.; Taylor and Francis: Abingdon, UK; ISSN 0883-9514.
[21] Bekier, M.; Molesworth, B.R.C. Altering user acceptance of automation through prior automation exposure. Ergonomics 2017, 60, 745–753.
[22] Falcone, R.; Sapienza, A.; Castelfranchi, C. Recommendation of categories in an agents world: The role of (not) local communicative environments. In Proceedings of the 2015 13th Annual Conference on Privacy, Security and Trust (PST), Izmir, Turkey, 21–23 July 2015; pp. 7–13.
[23] Conte, R.; Paolucci, M. Reputation in Artificial Societies: Social Beliefs for Social Order; Kluwer Academic Publishers: Boston, MA, USA, 2002.
[24] Falcone, R.; Sapienza, A.; Castelfranchi, C. The relevance of categories for trusting information sources. ACM Trans. Internet Technol. (TOIT) 2015, 15, 13.
[25] Jipp, M. Levels of automation: Effects of individual differences on wheelchair control performance and user acceptance. Theor. Issues Ergon. Sci. 2014, 15, 479–504, doi:10.1080/1463922X.2013.815829.
[26] Falcone, R.; Castelfranchi, C. The human in the loop of a delegated agent: The theory of adjustable social autonomy. IEEE Trans. Syst. Man Cybern. A: Syst. Hum. 2001, 31, 406–418; ISSN 1083-4427.
[27] Urbano, J.; Rocha, A.P.; Oliveira, E. Computing confidence values: Does trust dynamics matter? In Proceedings of the 14th Portuguese Conference on Artificial Intelligence (EPIA 2009), Aveiro, Portugal, 12–15 October 2009; Lopes, L.S., Lau, N., Mariano, P., Rocha, L.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; LNAI 5816, pp. 520–531.
[28] Wilensky, U. NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern University: Evanston, IL, USA, 1999. Available online: http://ccl.northwestern.edu/netlogo/