<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The problem of behaviour and preference manipulation in AI systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hal Ashton</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Matija Franklin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University College London</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Statistical AI or Machine Learning can be applied to user data in order to understand user preferences in an effort to improve various services. This involves making assumptions about either stated or revealed preferences. Human preferences are susceptible to manipulation and change over time. When iterative AI/ML is applied, it becomes difficult to ascertain whether the system has learned something about its users, whether its users have changed or learned something, or whether the system has taught its users to behave in a certain way in order to maximise its objective function. This article discusses the relationship between behaviour and preferences in AI/ML and existing mechanisms that manipulate human preferences and behaviour, and relates them to the topic of value alignment.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Increased data collection possibilities in the modern age
mean that Statistical Artificial Intelligence (AI) or Machine
Learning (ML) are often used to learn the preferences of
users in order to better (sometimes for the user, sometimes
for the system owner) deliver some service to them.
Preferences can be learned by asking subjects directly
(Stated Preferences) or they can be inferred in a process
known as Revealed Preference Theory (RPT)
        <xref ref-type="bibr" rid="ref87">(Varian 2006)</xref>
        .
Both approaches come with an extensive set of limitations
which have been demonstrated over time by experimental
economists and psychologists. One set of limitations broadly
falls into the category of ’irrational’ behaviour or beliefs.
For example
        <xref ref-type="bibr" rid="ref39">Gui, Shanahan, and Tsay-Vogel (2021</xref>
        )
discuss the phenomenon of users acting inconsistently as they
balance conflicting short and long term preferences.
Preferences might not be static across contexts; the social
norms people observe ’in-group’
        <xref ref-type="bibr" rid="ref25">(Cialdini and Trost 1998)</xref>
        might
run contrary to a person’s private preferences, revealed
through their digital behaviour. The presence of multiple
preferences, each active in different circumstances, poses the
question of which preference ’revealed’ from behaviour ought to be
selected by decision makers as the ’true’ or
’normative’ preference (Beshears et al. 2008). Decision makers
might also make mistakes
        <xref ref-type="bibr" rid="ref62">(Nishimura 2018)</xref>
        , be susceptible
to various environmental effects like framing (Tversky and
Kahneman 1985), and they may exhibit satisficing where
users do not even view the best option because of search
costs
        <xref ref-type="bibr" rid="ref18 ref26 ref37">(Caplin, Dean, and Martin 2011)</xref>
        .
      </p>
      <p>
        We will concentrate on a problem with preference elicitation
and representation which, we argue, risks causing profound
problems when combined with the iterative nature of
AI/ML. The issue stems from user preferences being quite
fluid and changeable in practice
        <xref ref-type="bibr" rid="ref16 ref55 ref59 ref81 ref86">(Bleidorn, Hopwood, and
Lucas 2018; Mathur, Moschis, and Lee 2003)</xref>
        and worse,
they can be influenced in any number of ways. The existence
of a large and successful behavioural change industry, with
practitioners in government and advertising, is evidence of
this. This is relevant to preferences because, amongst
others,
        <xref ref-type="bibr" rid="ref11">Ariely and Norton (2008)</xref>
        have shown that behaviour is
not only caused by preference but that the inverse is also true:
behaviour causes preferences to form.
      </p>
      <p>This article will explore the implications of non-static
preferences and plastic behaviour/preferences when AI/ML
systems are tasked with learning user preferences over time. It
will point to a small but growing body of research that shows
that the plasticity of human preferences under algorithmic
influence is a profound problem without obvious solutions.</p>
    </sec>
    <sec id="sec-2">
      <title>Behaviour change accepted; preference change unacknowledged</title>
      <p>
        There is a large body of research showing that the behaviour
of users can be reliably changed with a variety of techniques.
The commercial side of this behaviour change complex
comprises the advertising industry
        <xref ref-type="bibr" rid="ref78">(Sutherland 2019)</xref>
        and
the academic side falls under the umbrella of behavioural
science
        <xref ref-type="bibr" rid="ref66">(Ruggeri 2018)</xref>
        , typically distributed across but not
limited to Business schools, Psychology and Economics
departments. The practice was brought to popular attention
by
        <xref ref-type="bibr" rid="ref82">Thaler and Sunstein (2008)</xref>
        , who called the virtuous practice of behaviour
change ’nudging’. Specifically, this is the
development of choice architectures, the backgrounds against which
people make choices, aimed at influencing people’s behaviour
without limiting or forcing options or significantly changing
their economic incentives. A major consumer of nudging
expertise has been governments; to date nudging has been used
as a policy tool in over 80 countries and by supranational
institutions
        <xref ref-type="bibr" rid="ref63">(OECD 2017)</xref>
        .
      </p>
      <p>
        All environments influence behaviour to some extent, even
when people are not aware of it
        <xref ref-type="bibr" rid="ref77">(Sunstein 2016)</xref>
        . To give
a concrete example, content recommender engines, even if
not labelled as such, nudge their users because they
deliberately alter the choices that a user can make when
delivering personalised search results on the first page of results in
web browsers, or projected onto maps in cars and phones, or
when suggesting further things to watch on the TV.
The observation that behaviour can be changed by
system designers
        <xref ref-type="bibr" rid="ref44 ref46 ref52 ref55 ref73 ref86">(Schneider, Weinmann, and vom Brocke
2018; Kozyreva, Lewandowsky, and Hertwig 2020)</xref>
        through
changes in choice architectures or other techniques
immediately calls into question the practicality of user preference
elicitation and in particular RPT. This is because there is a
considerable body of evidence showing that behaviour
history forms preferences
        <xref ref-type="bibr" rid="ref11 ref13 ref4 ref42 ref5 ref61 ref79 ref82 ref92">(Ariely and Norton 2008; Albarracín
and Jr 2000; Albarracín and McNatt 2005; Hill, Kusev, and
van Schaik 2019; Wyer, Xu, and Shen 2012)</xref>
        . A response
might be to say that behaviour which has been altered does
not reflect the ’real’ or ’normative’ preferences of a user and
better efforts should be made to learn un-manipulated
preferences. Firstly this is not trivial for any preference learner
because it means they then have to distinguish between
representative and non-representative behaviours in their data.
Secondly it is naïve because it does not allow users to
autonomously change their preferences (by developing a taste
for Nollywood cinema or Mongolian throat singing say).
The behaviour change complex overcomes the difficulty
in eliciting preferences by not really modelling them;
behaviour is the key metric of success
        <xref ref-type="bibr" rid="ref14">(Atkins et al. 2017)</xref>
        .
Whether someone who has had their behaviour changed
prefers their new behaviour to their old one is not usually
a focus. Proponents of the use of behaviour change defend
the practice ethically by arguing that they only influence
behaviour, but do not limit or force options. This is described
as Libertarian Paternalism, a form of soft means
paternalism, with the central idea that institutions can positively
affect people’s behaviour, while still respecting their
freedom of choice
        <xref ref-type="bibr" rid="ref81">(Thaler and Sunstein 2003)</xref>
        . It is described
as ‘soft’ because it avoids material incentives and
coercion, thus maintaining freedom of choice; and as
‘means-orientated’ because it does not attempt to change people’s
goals (or ends), but rather gives people a sense of best
practice, given their own ends. Proponents of libertarian
paternalism favour the intentional design of choice architecture as
a policy tool
        <xref ref-type="bibr" rid="ref76">(Sunstein 2014)</xref>
        . They argue that since choice
architecture is omnipresent, unavoidable and influences
people’s behaviour even when they are not aware of it, it
might as well be harnessed to do good. Nevertheless the
Libertarian Paternalist argument seldom considers the
observation that behaviour change has a causal relationship with
preference change.
      </p>
      <p>
        Private sector companies are wary about stating an intent to
change user behaviour because of the likely public
opprobrium which may occur. This is attested by the recent
popularity of media examining the manipulative behaviour
of big tech
        <xref ref-type="bibr" rid="ref44 ref52 ref64">(Orlowski, Coombe, and Curtis 2020)</xref>
        . As
a result of the public’s sensitivity surrounding behaviour
change, the objective of behaviour change is couched in
terms of preference learning - the desire to learn about
customers to better engage with them and improve their user
experience. On the occasions that companies have been shown
to use AI to maximise profitable behaviour over maximising
an objective function based on user preferences, the public
reception has not been warm
        <xref ref-type="bibr" rid="ref55 ref86">(Lewis and McCormick 2018)</xref>
        .
Training a video recommender to maximise play-through,
because more completed videos watched equates to more
adverts consumed, fulfils a logical business objective but, in the
language of behavioural change, has spillovers
        <xref ref-type="bibr" rid="ref29">(Dolan and
Galizzi 2015)</xref>
        . As
        <xref ref-type="bibr" rid="ref7">Alfano et al. (2020)</xref>
        show, such a
system can involve recommending extremist content to
maintain users’ attention. Ignoring preference change in this case
ignores the social externality that AI/ML powered behaviour
change causes. The impact of recommender systems on user
preferences was studied by
        <xref ref-type="bibr" rid="ref3">Adomavicius et al. (2013)</xref>
        . It
stretches credulity to say that recommender system
designers do not know about their nudging power. The survey of
nudging mechanisms in recommender systems by
        <xref ref-type="bibr" rid="ref44">Jesse and
Jannach (2020)</xref>
        shows just that.
      </p>
      <fig id="fig-1">
        <caption>
          <p>Figure 1: Preference change. Preferences, behaviour and stated preferences are causally linked.</p>
        </caption>
      </fig>
      <p>
        Public discomfort concerning the practice of private
companies manipulating user behaviour is beginning to be reflected
in regulation. Article 5 of the EU draft AI Act 2021 prohibits
the use of an AI system that deploys subliminal techniques
beyond a person’s consciousness in order to materially
distort a person’s behaviour in a manner that causes or is likely
to cause that person or another person physical or
psychological harm. At present, uncertainties exist about almost
every aspect of this provision and how it will be enforced.
Given that behaviour change is possible, that behaviour can
influence preferences and that preferences change anyway in
response to exogenous events, it seems strange that models of
preference change are few and far between.
        <xref ref-type="bibr" rid="ref43">Jacobs (2016)</xref>
        provides one of the few dedicated literature reviews on the
subject that we could find. Perhaps this is because
empirical evidence concerning the effect of deployed AI systems
is hard to find. This is puzzling given the generally
acknowledged explosion in data collection possibilities that
modern technology has enabled.
        <xref ref-type="bibr" rid="ref47">Kramer, Guillory, and Hancock
(2014</xref>
        ) demonstrated that users’ moods could be
manipulated by changing what appeared on their Facebook news
feed. The ensuing public and academic reception to the
deliberate altering of people’s moods without telling them was
understandably not positive
        <xref ref-type="bibr" rid="ref88">(Verma 2014)</xref>
        . Consequently,
direct sources of proprietary data concerning the effect of
algorithm design on user preferences have not been
forthcoming for public research. Other obstacles exist; the US
Supreme Court recently ruled in Van Buren v. United States
(2021) that certain academic research on web platforms
would be protected from prosecution under the Computer Fraud
and Abuse Act 1986 (CFAA)
        <xref ref-type="bibr" rid="ref89">(Villasenor 2021)</xref>
        . Researchers
can now devise programs to monitor user-facing algorithms
without fear of custodial sentences but are still not party
to the large-scale behavioural data which would shed light
on behavioural and, by extension, preference change.
One could argue that since the incentives of governments
and large companies are not aligned with those of their users,
behaviour and preference change externalities are inevitable.
We will later argue that even a developer of an AI system
whose only objective is to learn the preferences of their users
is just as prone to manipulating their users’ preferences as
someone who is targeting behaviour or preference change
for profit. Firstly we will consider in more detail the
mechanisms that alter human preferences.
      </p>
    </sec>
    <sec id="sec-3">
      <title>The mechanisms that manipulate preference</title>
      <p>
        In this section we will briefly identify the most likely
mechanisms which alter user preferences predominantly in the
simple case of content-recommenders. We posit that
preference manipulation comes from two separate sources which
combine efficiently: 1) the mechanics of the recommender
algorithm itself and 2) the generator of the content. There is
a symbiosis between content generators who generate
popular content and recommender systems that can alter
preferences to fit that content. For the most part recommender
system owners do not yet create content though there are
some exceptions. Netflix, amongst other video content
platforms, uses its analytics to make more addictive shows.
Some internet retailers may choose to design and retail their
own branded goods using their privileged data and product
placement powers. In the near future, the advent of improved
generative text and video technology could drastically lower
the cost of developing and prototyping content and facilitate
the exploration of novel manipulation techniques for media
content platforms in an end-to-end automated manner.
      </p>
      <p>
        One feature of recommender systems independent of
preference plasticity is the phenomenon of popularity bias,
whereby certain popular items are recommended more often
than less popular items. This allows popular items to grow
ever more popular
        <xref ref-type="bibr" rid="ref12 ref2 ref58 ref63">(Abdollahpouri, Burke, and Mobasher
2017; Mansoury et al. 2020)</xref>
        and the process reinforces
itself. This is a symptom of a wider problem with
recommender systems - confounded data. The behaviour data
used to train and test algorithms has already been
influenced by the algorithm; this creates an amplifying feedback
loop which increases homogeneity of recommended content
        <xref ref-type="bibr" rid="ref55 ref86">(Chaney, Stewart, and Engelhardt 2018)</xref>
        . In summary, naive
recommenders have a natural tendency to push people
towards the same small set of content and users’ experiences
are homogenised
        <xref ref-type="bibr" rid="ref1">(Abdollahpouri 2019)</xref>
        .
      </p>
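      <p>
        The rich-get-richer dynamic described above can be illustrated with a
minimal simulation (a sketch under assumed parameters, not a model of any
real system): a recommender that serves items in proportion to their
historical click counts, where every recommendation generates the next
click, concentrates attention well beyond a uniform baseline.
      </p>

```python
import random

def simulate_feedback_loop(n_items=50, n_rounds=5000, seed=0):
    """Toy popularity-bias loop: recommend in proportion to past clicks,
    and let each recommendation produce the next click, so the training
    signal is confounded by the recommender's own output."""
    rng = random.Random(seed)
    clicks = [1] * n_items  # smoothed counts: no item starts out popular
    for _ in range(n_rounds):
        # Sample an item with probability proportional to its click count.
        r = rng.uniform(0, sum(clicks))
        cum = 0
        for i, c in enumerate(clicks):
            cum += c
            if r <= cum:
                clicks[i] += 1  # the recommendation becomes the next click
                break
    return clicks

clicks = simulate_feedback_loop()
top_share = max(clicks) / sum(clicks)
print(f"top item's share of clicks: {top_share:.2f} (uniform would be 0.02)")
```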
      <p>
        The mere-exposure effect describes the tendency for people
to adapt preferences towards things they are familiar with
        <xref ref-type="bibr" rid="ref34">(Fang, Singh, and Ahluwalia 2007)</xref>
        . Related and similar
effects are the availability bias
        <xref ref-type="bibr" rid="ref83">(Tversky and Kahneman 1973)</xref>
        ,
anchoring
        <xref ref-type="bibr" rid="ref26 ref37">(Furnham and Boo 2011)</xref>
        and the recognition
heuristic
        <xref ref-type="bibr" rid="ref38">(Goldstein and Gigerenzer 2002)</xref>
        . This suggests
that a content recommender that increases homogenisation
for certain users would change the preferences of those users
to whatever narrow band of content they are being
recommended.
      </p>
      <p>
        The combination of a recommender amplifying a few
popular items and humans changing their preferences to the
things which they are familiar with (ie recommended more
often) is a powerful combination. However it does not
explain the popularity of extreme content and the emergence of
polarisation. Looking at the specific effects of content types,
it seems certain types of content are more likely to lead
to preference change than others. For example, it has been
shown that conspiracy theory content is particularly potent
        <xref ref-type="bibr" rid="ref85">(van der Linden 2015)</xref>
        ; van Prooijen and van Vugt (2018)
hypothesise this predilection is for evolutionary reasons.
Similarly content purporting to be from an impartial news source
is effective at altering people’s preferences;
        <xref ref-type="bibr" rid="ref6">Alfano, Carter,
and Cheong (2018</xref>
        ) call this top-down technological
seduction. Content which engenders strong emotion is likely to
have manipulative effects on user preferences
        <xref ref-type="bibr" rid="ref51">(Kusev et al.
2017)</xref>
        . It is alleged that Facebook’s newsfeed algorithm
prioritised content that had received angry face emojis to
maximise user engagement
        <xref ref-type="bibr" rid="ref31 ref60 ref65">(Merrill and Oremus 2021)</xref>
        . So
serious is the problem that Roozenbeek and van der Linden (2021)
consider the effects of such content types a matter of
international security.
      </p>
      <p>
        This is a simple account of preference change dynamics and
ignores other mechanisms which tap into the many
psychological biases and heuristics that humans have been shown to
reliably exhibit.
        <xref ref-type="bibr" rid="ref6">Alfano, Carter, and Cheong (2018</xref>
        ) for
example point to auto completion systems as ways of grouping
users together and pushing them in certain directions. Other
research has looked at social effects, where
groups of people with similar views can coalesce, leading
to similar polarisation effects driven by confirmation bias
        <xref ref-type="bibr" rid="ref28">(Del Vicario et al. 2017)</xref>
        .
      </p>
      <p>
        The discussion has so far been focused on recommender
type dynamics where users are served content and their
preferences are inferred through their observed behaviour.
Preference elicitation is also vulnerable to behaviour
manipulation techniques and has been more widely studied.
Perhaps most famously people’s numerical estimates can be
adjusted based on prior exposure to higher or lower
numbers using the anchoring effect
        <xref ref-type="bibr" rid="ref26 ref37">(Furnham and Boo 2011)</xref>
        .
A simple example of this in practice is the suggested
donation figures routinely used on donation forms. Perhaps most
damning was the finding by
        <xref ref-type="bibr" rid="ref41">Hall, Johansson, and
Strandberg (2012)</xref>
        that even after having given their preferences,
when they were secretly changed by the experimenters,
participants would often alter their views to match their (falsely
recorded) ones. In short, people can be told what their
preferences are and they will change them.
      </p>
      <p>None of the preference change mechanisms in this section
are particularly complicated. In the cases of recommenders
it amounts to repeating content types which claim to be true
to users to the exclusion of other content types. We do not
think that this scheme was intentional from the outset; it has
just occurred. This raises the question: could an AI reproduce
preference manipulation from scratch? We think
generative text algorithms would recover many human preference
or behaviour manipulation techniques (framing, for instance)
with high regularity. Even a simple AI could just make up
what it thought its users’ preferences were and, provided users
believed it had listened to them in the first place, shift
whatever they really did think towards the AI’s choice. The question is
why would an AI system be incentivised to intend to change
human preferences?</p>
    </sec>
    <sec id="sec-4">
      <title>Value Alignment and Preferences</title>
      <p>
        An area where computer scientists are very interested in
learning preferences is the subject of value alignment. The
value alignment problem concerns the difficulty of writing
objective functions for AI systems which prevent
undesirable behaviour or allow AI to solve tasks that are otherwise
hard to describe. In practice, as
        <xref ref-type="bibr" rid="ref54">Lehman, Clune, and Misevic
(2020</xref>
        ) show, AI systems have a reliable habit of cheating to
find solutions to given objectives.
      </p>
      <p>
        We don’t believe the observed problems surrounding
recommender systems in the previous section are examples of the
alignment problem. Though often unintended, the changes
brought about in user preferences are favourable for system
owners, principally by making users more predictable. The
algorithms are doing what they were designed to do - make
money efficiently for their owners by increasing the time
their users spend online. In common with many persistent
externalities, the measurement and valuation of the harm
caused is difficult.
        <xref ref-type="bibr" rid="ref68">Russell, Dewey, and Tegmark (2015</xref>
        )
call this a validity problem: “validity is concerned with
undesirable behaviours that can arise despite a system’s
formal correctness”.
      </p>
      <p>
        One approach to the problem of value alignment is Inverse
Reinforcement Learning (IRL); the construction of a
human’s utility function or values by the observation of their
behaviour
        <xref ref-type="bibr" rid="ref4 ref61">(Ng and Russell 2000)</xref>
        . IRL is hard: it is an
ill-posed problem in that any number of solutions (utility
functions) can explain a given observed behaviour set in a
single setting
        <xref ref-type="bibr" rid="ref4 ref61">(Ng and Russell 2000)</xref>
        , though it can be shown that,
with a wide enough variety of settings, utility functions can
be faithfully recovered
        <xref ref-type="bibr" rid="ref8">(Amin and Singh 2016)</xref>
        . Even so,
certain assumptions need to be made about rationality, else
as
        <xref ref-type="bibr" rid="ref13">Armstrong and Mindermann (2019)</xref>
        show, any algorithm
that derives a utility function could be arbitrarily bad at
recovering an agent’s actual utility.
        <xref ref-type="bibr" rid="ref40">Hadfield-Menell et al.
(2016)</xref>
        present Cooperative Inverse Reinforcement
Learning as a better way of achieving alignment.
        <xref ref-type="bibr" rid="ref67">Russell (2020)</xref>
        presents three principles for AI developers to create
beneficial machines which all rely on preferences:
      </p>
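      <p>
        The ill-posedness of IRL noted above can be made concrete with a toy
example (an illustration of ours, not taken from Ng and Russell):
enumerate every reward function r(s, a) with values in {0, 1} over three
states and two actions, and count how many rationalise one fixed observed
policy.
      </p>

```python
from itertools import product

# Observed behaviour: in each of three states the agent always picks action 0.
observed_policy = {s: 0 for s in range(3)}

def is_consistent(reward, policy):
    """A reward rationalises the behaviour if, in every state, the chosen
    action's reward is at least that of the alternative action."""
    return all(reward[(s, policy[s])] >= reward[(s, 1 - policy[s])]
               for s in policy)

# Enumerate all 2**6 = 64 reward functions r(s, a) with values in {0, 1}.
candidates = [
    {(s, a): values[2 * s + a] for s in range(3) for a in range(2)}
    for values in product([0, 1], repeat=6)
]
consistent = [r for r in candidates if is_consistent(r, observed_policy)]

print(f"{len(consistent)} of 64 candidate rewards fit the behaviour")  # 27 of 64
# Among them is the all-zero reward, which "explains" any policy at all.
```

      <p>
        A single behaviour set leaves 27 of 64 candidate rewards standing,
including the degenerate all-zero reward; richer observation settings
shrink this set, but rationality assumptions are still required.
      </p>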
      <sec id="sec-4-1">
        <title>1. The machine’s only objective is to maximise the realization of human preferences.</title>
      </sec>
      <sec id="sec-4-2">
        <title>2. The machine is initially uncertain about what those preferences are.</title>
      </sec>
      <sec id="sec-4-3">
        <title>3. The ultimate source of information about human preferences is human behavior.</title>
        <p>The difficulty in applying these principles is the causal
relationship between behaviour and preferences as in Figure 1;
behaviour indicates preferences but behaviour change begets
preference change.</p>
        <p>
          Given the non-stationarity and plasticity of
human preferences, any AI/ML approach to the learning of
preferences seems to have a difficulty at its heart. Preference
measurement takes time and the process might affect them,
in other words, preference elicitation efforts suffer from the
Observer Effect
          <xref ref-type="bibr" rid="ref71">(Salkind 2010)</xref>
          . This also includes any other
techniques concerned with the elicitation and
representation of preferences such as CP-nets
          <xref ref-type="bibr" rid="ref17 ref56">(Boutilier et al. 2004;
Loreggia et al. 2018)</xref>
          and active learning type efforts
          <xref ref-type="bibr" rid="ref24 ref70">(Sadigh
et al. 2017; Christiano et al. 2017)</xref>
          . More problematically
the AI/ML system is often not neutral to the preferences it
learns, as Russell states: “like any rational entity, the
algorithm learns how to modify the state of its environment -
in this case the user’s mind - in order to maximise its own
reward”. The same effect is noted in
          <xref ref-type="bibr" rid="ref75">Soares (2016)</xref>
          :
“Actions which manipulate the operator to make their
preferences easier to fulfil may then be highly rated, as they lead
to highly-rated outcomes (where the system achieves the
operator’s now-easy goals)”. Further back in time still
          <xref ref-type="bibr" rid="ref93">Yudkowsky (2011)</xref>
notes that an AI might rewire its programmers’
brains to fulfil the objective of maximally pleasing them.
          <xref ref-type="bibr" rid="ref49">Krueger, Maharaj, and Leike (2020</xref>
          ) term this Auto-Induced
Distributional Shift. This effect has been modelled by
          <xref ref-type="bibr" rid="ref33">Everitt
et al. (2021)</xref>
          using causal influence diagrams. With this
technique, situations can be identified where there is an
’Instrumental Control Incentive’ over user behaviour/preferences,
that is to say settings where an algorithm has an incentive
to alter the behaviour of the users it models in order to
maximise its own objective function.
          <xref ref-type="bibr" rid="ref31">Evans and Kasirzadeh
(2021)</xref>
          show this to occur in the case of a recommender
system trained through Reinforcement Learning. In a process
that the authors term user-tampering, the recommender
polarises its users in order to increase their predictability. This
is also shown to be the case by
          <xref ref-type="bibr" rid="ref45">Jiang et al. (2019)</xref>
          with a
multi-armed bandit learning model.
        </p>
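        <p>
          The user-tampering incentive described above can be sketched with a
hypothetical greedy recommender facing a user whose preference drifts
towards whatever it serves (all parameters and the update rule are our own
assumptions, not the models of Everitt et al. or Evans and Kasirzadeh):
with no influence the user’s preference is untouched; with even a small
influence per exposure, the served user ends up with preferences different
from the ones the system set out to learn.
        </p>

```python
import random

def run_recommender(influence, n_steps=500, seed=1):
    """Greedy two-item recommender serving a user whose preference drifts
    towards each item it is shown (a mere-exposure-style update).
    Returns the user's final probability of enjoying item A."""
    rng = random.Random(seed)
    p_a = 0.6                  # user initially prefers item A slightly
    shown = {"A": 1, "B": 1}   # smoothed exposure counts
    enjoyed = {"A": 1, "B": 1}
    for _ in range(n_steps):
        # Greedily serve the item with the higher estimated enjoyment rate.
        item = max(shown, key=lambda k: enjoyed[k] / shown[k])
        shown[item] += 1
        if rng.random() < (p_a if item == "A" else 1 - p_a):
            enjoyed[item] += 1
        # Exposure shifts the user's true preference towards the shown item.
        if item == "A":
            p_a += influence * (1 - p_a)
        else:
            p_a -= influence * p_a
    return p_a

print(run_recommender(influence=0.0))   # preference unchanged: 0.6
print(run_recommender(influence=0.01))  # preference has been moved
```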
        <p>We make the observation that in practice AI/ML systems are
often inter-temporal in their nature regardless of whether the
learning algorithm behind them explicitly recognises
multiple periods or not. Users will reuse a system over time and
therefore their preferences will change as they adapt to the
system. Commercial systems are typically iterated in
practice, with a constant program of minor design improvements,
A/B testing and retraining. Unless a particular effort is made
to measure a user’s preferences before they begin
interacting with an AI/ML system, it becomes impossible to know
whether the system is doing a really good job or whether the
system has just altered the preferences of its users to do a
really good job.</p>
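        <p>
          The evaluation problem just described can be sketched numerically
(the numbers are illustrative assumptions): score a predictor against the
user’s current behaviour and against a frozen snapshot of their
pre-interaction preference. When each served prediction also nudges the
preference, the first score looks excellent while the second reveals that
the system has partly manufactured its own accuracy.
        </p>

```python
import random

def accuracy_two_ways(n_steps=2000, drift=0.02, seed=3):
    """A system predicts a user's binary choice between A and B.  The user
    starts with P(choose A) = 0.55; every served prediction drags that
    probability towards the predicted option.  We score against current
    behaviour and against the frozen initial preference."""
    rng = random.Random(seed)
    p_initial = 0.55
    p_current = p_initial
    hits_now = hits_before = 0
    for _ in range(n_steps):
        prediction = "A" if p_current >= 0.5 else "B"
        choice_now = "A" if rng.random() < p_current else "B"
        choice_before = "A" if rng.random() < p_initial else "B"
        hits_now += prediction == choice_now
        hits_before += prediction == choice_before
        # Serving the prediction shifts the preference towards it.
        if prediction == "A":
            p_current += drift * (1 - p_current)
        else:
            p_current -= drift * p_current
    return hits_now / n_steps, hits_before / n_steps

acc_now, acc_before = accuracy_two_ways()
print(f"accuracy on post-interaction behaviour: {acc_now:.2f}")
print(f"accuracy against pre-interaction preferences: {acc_before:.2f}")
```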
        <p>
          Which preferences should be learned when the topic of
preference learning arises? The preferences that might exist
before the user came into contact with the preference
elicitation system or preferences after they have been altered?
The instinctive response is to say the former, and that is
suggested as a solution by
          <xref ref-type="bibr" rid="ref33">Everitt et al. (2021)</xref>
          to the problem of
altering user preferences/behaviour to suit an objective. An
alternative is the impact regularizer of
          <xref ref-type="bibr" rid="ref10">Amodei et al. (2016)</xref>
          or a low impact learner
          <xref ref-type="bibr" rid="ref12 ref63">(Armstrong and Levinstein 2017)</xref>
          which would seek to minimise the effect of the system on
preferences. Neither solution is perfect because they might
deny the legitimacy of a user’s changed preferences. To use
the example of a video recommender system, it could be the
case that a user learns something from what they watch
and their preferences change as a result. Serving content to
them as if they couldn’t change could be just as bad as
serving them content that targets prolonged engagement since it
might trap users in a certain category of content.
Efforts are beginning to be made to address these
problems. On the subject of non-stationary preferences,
          <xref ref-type="bibr" rid="ref19">Chan
et al. (2019)</xref>
          present a bandit algorithm to aid in the
situation where a user is unsure about their preferences. In
chapter 9,
          <xref ref-type="bibr" rid="ref67">Russell (2020)</xref>
          discusses the problems associated with
preference change and the difficulty with assigning moral
valence to it. Perhaps here the more visible discussion
surrounding the ethics of behaviour change can help. Resources
are available to assess what constitutes good and bad
behaviour change
          <xref ref-type="bibr" rid="ref44 ref52">(Lades and Delaney 2020)</xref>
          . The problem
has been also considered for a long time in Welfare
Economics and the philosophy of autonomy through the prism
of Adaptive Preference Formation which was originally a
rejection of utilitarianism
          <xref ref-type="bibr" rid="ref5 ref79">(Teschl and Comim 2005)</xref>
          .
          <xref ref-type="bibr" rid="ref30">Elster
(2016)</xref>
          develops a theory to separate more desirable
preference changes like those caused by learning and
experience from some of the less desirable ones that this
article has touched on.
          <xref ref-type="bibr" rid="ref26">Colburn (2011)</xref>
          characterises adapted
preferences as those formed through covert influence, which
therefore undermine autonomy because users have not
consciously chosen them. As Russell puts it, for an AI to learn
preferences safely, it must be given some preferences over
the type of preference changes that are allowed. For this to
occur, the causes of any preference change need to be
understood.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>This article makes the observation that AI/ML
practitioners often make an implicit assumption that preferences are
static artefacts which can be learned without being affected.
Sometimes preferences are learned from stated-preference
data, but more often than not they are learned from data
concerning behaviour, i.e. by assuming some variant of
Revealed Preference Theory together with rational-behaviour
assumptions. The assumption of unchangeable preferences is at
odds with the behaviour-change field, whose founding principle
is that user behaviour can be manipulated. Systems that learn
user preferences are at best likely to affect them during the
process and at worst likely to manipulate them to suit their
own objective function, in a process called Auto-induced
Distributional Shift. Without an effort to record user
preferences over time, it is difficult to know whether a
particular system is doing its task well or altering user
preferences to make its task easier.</p>
      <p>
        A more considered approach to preference change in
computer science is emerging, born from concerns surrounding
Artificial General Intelligence (AGI) and value alignment.
These concerns are well-founded, since we have seen how user
manipulation has already been effected by very limited
algorithms. Theoretical and empirical research concerning the
impact of recommender systems does recognise
preference/behaviour change as a cause of problems such as
user polarisation. Companies are not incentivised to share
data on such a sensitive topic, so much of the research in
this area has necessarily relied on multi-agent simulations.
This type of research is not without its critics due to its
non-standardised approach
        <xref ref-type="bibr" rid="ref90">(Winecoff et al. 2021)</xref>
        and has open challenges
        <xref ref-type="bibr" rid="ref21">(Chaney 2021)</xref>
        . We believe the validity of the results produced by
simulations depends on the realism of their user-preference
change mechanisms. As a priority, a cross-disciplinary effort
grounded in empirical research is required to understand these
processes, as proposed by Franklin et al. (2022).
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Abdollahpouri</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Popularity Bias in Ranking and Recommendation</article-title>
          .
          <source>In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society</source>
          ,
<fpage>529</fpage>
-
<lpage>530</lpage>
          . Honolulu HI USA: ACM.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Abdollahpouri</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ; Burke, R.; and
          <string-name>
            <surname>Mobasher</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Controlling Popularity Bias in Learning-to-Rank Recommendation</article-title>
          .
          <source>In Proceedings of the Eleventh ACM Conference on Recommender Systems</source>
          ,
<fpage>42</fpage>
-
<lpage>46</lpage>
          . Como Italy: ACM.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Adomavicius</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Bockstedt</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Curley</surname>
            ,
            <given-names>S. P.</given-names>
          </string-name>
; and
<string-name>
  <surname>Zhang</surname>
  ,
  <given-names>J.</given-names>
</string-name>
          <year>2013</year>
          .
          <article-title>Do recommender systems manipulate consumer preferences? A study of anchoring effects</article-title>
          .
          <source>Information Systems Research</source>
          ,
          <volume>24</volume>
          (
          <issue>4</issue>
          ):
          <fpage>956</fpage>
          -
          <lpage>975</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
<string-name>
  <surname>Albarracín</surname>
  ,
  <given-names>D.</given-names>
</string-name>
; and
<string-name>
  <surname>Wyer Jr.</surname>
  ,
  <given-names>R. S.</given-names>
</string-name>
          <year>2000</year>
          .
          <article-title>The Cognitive Impact of Past Behavior: Influences on Beliefs, Attitudes, and Future Behavioral Decisions</article-title>
          .
          <source>Journal of Personality and Social Psychology</source>
          ,
          <volume>79</volume>
          (
          <issue>1</issue>
          ):
          <fpage>5</fpage>
          -
          <lpage>22</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
<string-name>
  <surname>Albarracín</surname>
  ,
  <given-names>D.</given-names>
</string-name>
; and
          <string-name>
            <surname>McNatt</surname>
            ,
            <given-names>P. S.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>Maintenance and Decay of Past Behavior Influences: Anchoring Attitudes on Beliefs Following Inconsistent Actions</article-title>
          .
          <source>Personality and Social Psychology Bulletin</source>
          ,
          <volume>31</volume>
          (
          <issue>6</issue>
          ):
          <fpage>719</fpage>
          -
          <lpage>733</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Alfano</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Carter</surname>
            ,
            <given-names>J. A.</given-names>
          </string-name>
          ; and Cheong,
          <string-name>
            <surname>M.</surname>
          </string-name>
          <year>2018</year>
          .
          <article-title>Technological Seduction and Self-Radicalization</article-title>
          .
          <source>Journal of the American Philosophical Association</source>
          ,
          <volume>4</volume>
          (
          <issue>3</issue>
          ):
          <fpage>298</fpage>
          -
          <lpage>322</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Alfano</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Fard</surname>
            ,
            <given-names>A. E.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Carter</surname>
            ,
            <given-names>J. A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Clutton</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Klein</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>Technologically scaffolded atypical cognition: the case of YouTube's recommender system</article-title>
          .
          <source>Synthese.</source>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Amin</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>Towards Resolving Unidentifiability in Inverse Reinforcement Learning</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
arXiv:1601.06569 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Amodei</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Olah</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Steinhardt</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
;
<string-name>
  <surname>Christiano</surname>
  ,
  <given-names>P.</given-names>
</string-name>
;
<string-name>
  <surname>Schulman</surname>
  ,
  <given-names>J.</given-names>
</string-name>
; and
<string-name>
  <surname>Mané</surname>
  ,
  <given-names>D.</given-names>
</string-name>
          <year>2016</year>
          .
          <article-title>Concrete Problems in AI Safety</article-title>
. arXiv:1606.06565 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Ariely</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
; and
<string-name>
  <surname>Norton</surname>
  ,
  <given-names>M. I.</given-names>
</string-name>
          <year>2008</year>
          .
          <article-title>How actions create - not just reveal - preferences</article-title>
          .
          <source>Trends in Cognitive Sciences</source>
          ,
          <volume>12</volume>
          (
          <issue>1</issue>
          ):
          <fpage>13</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Armstrong</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Levinstein</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Low Impact Artificial Intelligences</article-title>
. arXiv:1705.10720 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Armstrong</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Mindermann</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Occam's razor is insufficient to infer the preferences of irrational agents</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Atkins</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Francis</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ; Islam,
          <string-name>
            <surname>R.</surname>
          </string-name>
;
          <string-name>
            <surname>O'Connor</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Patey</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ivers</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Foy</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Duncan</surname>
            ,
            <given-names>E. M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Colquhoun</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Grimshaw</surname>
            ,
            <given-names>J. M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Lawton</surname>
          </string-name>
          , R.; and
          <string-name>
            <surname>Michie</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems</article-title>
.
<source>Implementation Science</source>
,
          <volume>12</volume>
          (
          <issue>1</issue>
          ):
          <fpage>77</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Beshears</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Choi</surname>
            ,
            <given-names>J. J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Laibson</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Madrian</surname>
            ,
            <given-names>B. C.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>How are preferences revealed?</article-title>
          NBER Working Paper Series.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Bleidorn</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
;
<string-name>
  <surname>Hopwood</surname>
  ,
  <given-names>C. J.</given-names>
</string-name>
; and
<string-name>
  <surname>Lucas</surname>
  ,
  <given-names>R. E.</given-names>
</string-name>
          <year>2018</year>
          .
          <article-title>Life Events and Personality Trait Change: Life Events and Trait Change</article-title>
          .
          <source>Journal of Personality</source>
          ,
          <volume>86</volume>
          (
          <issue>1</issue>
          ):
          <fpage>83</fpage>
          -
          <lpage>96</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Boutilier</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Brafman</surname>
            ,
            <given-names>R. I.</given-names>
          </string-name>
;
<string-name>
  <surname>Domshlak</surname>
  ,
  <given-names>C.</given-names>
</string-name>
;
<string-name>
  <surname>Hoos</surname>
  ,
  <given-names>H. H.</given-names>
</string-name>
; and
<string-name>
  <surname>Poole</surname>
  ,
  <given-names>D.</given-names>
</string-name>
          <year>2004</year>
          .
          <article-title>CP-nets: A Tool for Representing and Reasoning with Conditional Ceteris Paribus Preference Statements</article-title>
          .
          <source>Journal of Artificial Intelligence Research</source>
          ,
          <volume>21</volume>
          :
          <fpage>135</fpage>
          -
          <lpage>191</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Caplin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Dean</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2011</year>
.
<article-title>Search and Satisficing</article-title>
.
<source>American Economic Review</source>
,
          <volume>101</volume>
          (
          <issue>7</issue>
          ):
          <fpage>2899</fpage>
          -
          <lpage>2922</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Chan</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Hadfield-Menell</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Srinivasa</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Dragan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>The Assistive Multi-Armed Bandit</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
arXiv:1901.08654 [cs, stat].
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <surname>Chaney</surname>
            ,
            <given-names>A. J. B.</given-names>
          </string-name>
          <year>2021</year>
          .
          <article-title>Recommendation System Simulations: A Discussion of Two Key Challenges</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
arXiv:2109.02475 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <surname>Chaney</surname>
            ,
            <given-names>A. J. B.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Stewart</surname>
            ,
            <given-names>B. M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Engelhardt</surname>
            ,
            <given-names>B. E.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility</article-title>
          .
          <source>Proceedings of the 12th ACM Conference on Recommender Systems</source>
          ,
          <fpage>224</fpage>
          -
          <lpage>232</lpage>
          .
        </mixed-citation>
      </ref>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <surname>Christiano</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Leike</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
;
<string-name>
  <surname>Brown</surname>
  ,
  <given-names>T. B.</given-names>
</string-name>
;
          <string-name>
            <surname>Martic</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Legg</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Amodei</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Deep reinforcement learning from human preferences</article-title>
          .
arXiv:1706.03741 [cs, stat].
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
<string-name>
  <surname>Cialdini</surname>
  ,
  <given-names>R. B.</given-names>
</string-name>
; and
<string-name>
  <surname>Trost</surname>
  ,
  <given-names>M. R.</given-names>
</string-name>
<year>1998</year>
.
<article-title>Social influence: Social norms, conformity and compliance</article-title>
. In
<source>The handbook of social psychology</source>
. McGraw-Hill.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <string-name>
            <surname>Colburn</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
<year>2011</year>
.
<article-title>Autonomy and Adaptive Preferences</article-title>
.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
<source>Utilitas</source>
          ,
          <volume>23</volume>
          (
          <issue>1</issue>
          ):
          <fpage>52</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <string-name>
            <surname>Del Vicario</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Scala</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Caldarelli</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
;
<string-name>
  <surname>Stanley</surname>
  ,
  <given-names>H. E.</given-names>
</string-name>
; and
<string-name>
  <surname>Quattrociocchi</surname>
  ,
  <given-names>W.</given-names>
</string-name>
          <year>2017</year>
          .
          <article-title>Modeling confirmation bias and polarization</article-title>
          .
          <source>Scientific Reports</source>
          ,
          <volume>7</volume>
          (
          <year>December 2016</year>
          ):
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          <string-name>
            <surname>Dolan</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
; and
<string-name>
  <surname>Galizzi</surname>
  ,
  <given-names>M. M.</given-names>
</string-name>
          <year>2015</year>
          .
          <article-title>Like ripples on a pond: Behavioral spillovers and their implications for research and policy</article-title>
          .
          <source>Journal of Economic Psychology</source>
          ,
          <volume>47</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          <string-name>
            <surname>Elster</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2016</year>
          .
<source>Sour Grapes: Studies in the subversion of rationality</source>
. Cambridge: Cambridge University Press.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          <string-name>
            <surname>Evans</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Kasirzadeh</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2021</year>
          .
          <article-title>User Tampering in Reinforcement Learning Recommender Systems</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          <source>arXiv:2109</source>
          .04083 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          <string-name>
            <surname>Everitt</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Carey</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Langlois</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ortega</surname>
            ,
            <given-names>P. A.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Legg</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2021</year>
          .
          <article-title>Agent Incentives: A Causal Perspective</article-title>
          .
<source>In AAAI Conference on Artificial Intelligence.</source>
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          <string-name>
            <surname>Fang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
; and
<string-name>
  <surname>Ahluwalia</surname>
  ,
  <given-names>R.</given-names>
</string-name>
          <year>2007</year>
          .
          <article-title>An Examination of Different Explanations for the Mere Exposure Effect</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          <source>Journal of Consumer Research</source>
          ,
          <volume>34</volume>
          (
          <issue>1</issue>
          ):
          <fpage>97</fpage>
          -
          <lpage>103</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
<string-name>
  <surname>Franklin</surname>
  ,
  <given-names>M.</given-names>
</string-name>
; et al.
<year>2022</year>
.
<article-title>Recognising the importance of preference change: A call for a coordinated multidisciplinary research effort in the age of AI</article-title>
. AAAI-22 Workshop on AI For Behavior Change.
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          <string-name>
            <surname>Furnham</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
; and
<string-name>
  <surname>Boo</surname>
  ,
  <given-names>H. C.</given-names>
</string-name>
          <year>2011</year>
          .
          <article-title>A literature review of the anchoring effect</article-title>
          .
          <source>The Journal of Socio-Economics</source>
          ,
          <volume>40</volume>
          (
          <issue>1</issue>
          ):
          <fpage>35</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          <string-name>
            <surname>Goldstein</surname>
            ,
            <given-names>D. G.</given-names>
          </string-name>
; and
<string-name>
  <surname>Gigerenzer</surname>
  ,
  <given-names>G.</given-names>
</string-name>
          <year>2002</year>
          .
          <article-title>Models of ecological rationality: The recognition heuristic</article-title>
          .
          <source>Psychological Review</source>
          ,
          <volume>109</volume>
          (
          <issue>1</issue>
          ):
          <fpage>75</fpage>
          -
          <lpage>90</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          <string-name>
            <surname>Gui</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
<string-name>
  <surname>Shanahan</surname>
  ,
  <given-names>J.</given-names>
</string-name>
; and
          <string-name>
            <surname>Tsay-Vogel</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2021</year>
          .
          <article-title>Theorizing inconsistent media selection in the digital environment</article-title>
          .
          <source>The Information Society</source>
          ,
          <volume>37</volume>
          (
          <issue>4</issue>
          ):
          <fpage>247</fpage>
          -
          <lpage>261</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          <string-name>
            <surname>Hadfield-Menell</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Russell</surname>
            ,
            <given-names>S. J.</given-names>
          </string-name>
;
<string-name>
  <surname>Abbeel</surname>
  ,
  <given-names>P.</given-names>
</string-name>
; and
<string-name>
  <surname>Dragan</surname>
  ,
  <given-names>A.</given-names>
</string-name>
          <year>2016</year>
          .
          <article-title>Cooperative inverse reinforcement learning</article-title>
          .
          <source>Advances in neural information processing systems</source>
          ,
          <volume>29</volume>
          :
          <fpage>3909</fpage>
          -
          <lpage>3917</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          <string-name>
            <surname>Hall</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Johansson</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
; and
<string-name>
  <surname>Strandberg</surname>
  ,
  <given-names>T.</given-names>
</string-name>
          <year>2012</year>
          .
          <article-title>Lifting the Veil of Morality: Choice Blindness and Attitude Reversals on a Self-Transforming Survey</article-title>
          .
          <source>PLoS ONE</source>
          ,
          <volume>7</volume>
          (
          <issue>9</issue>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          <string-name>
            <surname>Hill</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Kusev</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
; and
<string-name>
  <surname>van Schaik</surname>
  ,
  <given-names>P.</given-names>
</string-name>
          <year>2019</year>
          .
          <article-title>Choice Under Risk: How Occupation Influences Preferences</article-title>
.
<source>Frontiers in Psychology</source>
,
          <volume>10</volume>
          :
<fpage>2003</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          <string-name>
            <surname>Jacobs</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>Accounting for Changing Tastes: Approaches to Explaining Unstable Individual Preferences</article-title>
          .
          <source>Review of Economics</source>
          ,
          <volume>67</volume>
          (
          <issue>2</issue>
          ):
          <fpage>121</fpage>
          -
          <lpage>183</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          <string-name>
            <surname>Jesse</surname>
            ,
            <given-names>M. W.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Jannach</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>Digital nudging with recommender systems: Survey and future directions</article-title>
          .
          <source>arXiv</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          <string-name>
            <surname>Jiang</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Chiappa</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Lattimore</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>György</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Kohli</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Degenerate feedback loops in recommender systems</article-title>
          .
          <source>AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society</source>
          ,
          <fpage>383</fpage>
          -
          <lpage>390</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          <string-name>
            <surname>Kozyreva</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Lewandowsky</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Hertwig</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>Citizens Versus the Internet: Confronting Digital Challenges With Cognitive Tools</article-title>
          .
          <source>Psychological Science in the Public Interest</source>
          ,
          <volume>21</volume>
          (
          <issue>3</issue>
          ):
          <fpage>103</fpage>
          -
          <lpage>156</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          <string-name>
            <surname>Kramer</surname>
            ,
            <given-names>A. D. I.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Guillory</surname>
            ,
            <given-names>J. E.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Hancock</surname>
            ,
            <given-names>J. T.</given-names>
          </string-name>
          <year>2014</year>
          .
          <article-title>Experimental evidence of massive-scale emotional contagion through social networks</article-title>
          .
          <source>Proceedings of the National Academy of Sciences of the United States of America</source>
          ,
          <volume>111</volume>
          (
          <issue>29</issue>
          ):
          <fpage>8788</fpage>
          -
          <lpage>8790</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          <string-name>
            <surname>Krueger</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Maharaj</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Leike</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>Hidden Incentives for Auto-Induced Distributional Shift</article-title>
          .
          <source>arXiv:2009.09153 [cs, stat]</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          <string-name>
            <surname>Kusev</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Purser</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Heilman</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Cooke</surname>
            ,
            <given-names>A. J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>van Schaik</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Baranova</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Ayton</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Understanding Risky Behavior: The Influence of Cognitive, Emotional and Hormonal Factors on Decision-Making under Risk</article-title>
          .
          <source>Frontiers in Psychology</source>
          ,
          <volume>8</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          <string-name>
            <surname>Lades</surname>
            ,
            <given-names>L. K.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Delaney</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>Nudge FORGOOD</article-title>
          .
          <source>Behavioural Public Policy</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>20</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          <string-name>
            <surname>Lehman</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Clune</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Misevic</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities</article-title>
          .
          <source>Artificial Life</source>
          ,
          <volume>26</volume>
          (
          <issue>2</issue>
          ):
          <fpage>274</fpage>
          -
          <lpage>306</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          <string-name>
            <surname>Lewis</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>McCormick</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>How an ex-YouTube insider investigated its secret algorithm</article-title>
          . The Guardian.
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>
          <string-name>
            <surname>Loreggia</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Mattei</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Rossi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Venable</surname>
            ,
            <given-names>K. B.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>A Notion of Distance Between CP-nets</article-title>
          .
          <source>In Proceedings of AAMAS, 7.</source>
        </mixed-citation>
      </ref>
      <ref id="ref58">
        <mixed-citation>
          <string-name>
            <surname>Mansoury</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Abdollahpouri</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Pechenizkiy</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Mobasher</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Burke</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>Feedback Loop and Bias Amplification in Recommender Systems</article-title>
          .
          <source>arXiv:2007.13019 [cs]</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref59">
        <mixed-citation>
          <string-name>
            <surname>Mathur</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Moschis</surname>
            ,
            <given-names>G. P.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <year>2003</year>
          .
          <article-title>Life events and brand preference changes</article-title>
          .
          <source>Journal of Consumer Behaviour</source>
          ,
          <volume>3</volume>
          (
          <issue>2</issue>
          ):
          <fpage>129</fpage>
          -
          <lpage>141</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref60">
        <mixed-citation>
          <string-name>
            <surname>Merrill</surname>
            ,
            <given-names>J. B.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Oremus</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          <year>2021</year>
          .
          <article-title>Five points for anger, one for 'like': How Facebook's formula fostered rage and misinformation</article-title>
          . The Washington Post.
        </mixed-citation>
      </ref>
      <ref id="ref61">
        <mixed-citation>
          <string-name>
            <surname>Ng</surname>
            ,
            <given-names>A. Y.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Russell</surname>
            ,
            <given-names>S. J.</given-names>
          </string-name>
          <year>2000</year>
          .
          <article-title>Algorithms for inverse reinforcement learning</article-title>
          .
          <source>ICML, 1.</source>
        </mixed-citation>
      </ref>
      <ref id="ref62">
        <mixed-citation>
          <string-name>
            <surname>Nishimura</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>The transitive core: Inference of welfare from nontransitive preference relations</article-title>
          .
          <source>Theoretical Economics</source>
          ,
          <volume>13</volume>
          (
          <issue>2</issue>
          ):
          <fpage>579</fpage>
          -
          <lpage>606</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref63">
        <mixed-citation>
          <string-name>
            <surname>OECD</surname>
          </string-name>
          <year>2017</year>
          .
          <article-title>Behavioural Insights and Public Policy: Lessons from Around the World</article-title>
          . OECD.
        </mixed-citation>
      </ref>
      <ref id="ref64">
        <mixed-citation>
          <string-name>
            <surname>Orlowski</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Coombe</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Curtis</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>The Social Dilemma</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref65">
        <mixed-citation>
          <string-name>
            <surname>Roozenbeek</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>van der Linden</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2021</year>
          .
          <article-title>Inoculation Theory and Misinformation</article-title>
          .
          <source>Technical report, NATO Strategic Communications Centre of Excellence.</source>
        </mixed-citation>
      </ref>
      <ref id="ref66">
        <mixed-citation>
          <string-name>
            <surname>Ruggeri</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Behavioral insights for public policy: concepts and cases</article-title>
          . Routledge.
        </mixed-citation>
      </ref>
      <ref id="ref67">
        <mixed-citation>
          <string-name>
            <surname>Russell</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2020</year>
          .
          <source>Human Compatible</source>
          . Penguin, 1st edition.
        </mixed-citation>
      </ref>
      <ref id="ref68">
        <mixed-citation>
          <string-name>
            <surname>Russell</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Dewey</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Tegmark</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>Research Priorities for Robust and Beneficial Artificial Intelligence</article-title>
          .
          <source>AI Magazine</source>
          ,
          <volume>36</volume>
          (
          <issue>4</issue>
          ):
          <fpage>105</fpage>
          -
          <lpage>114</lpage>
          .
        </mixed-citation>
      </ref>
      </ref>
      <ref id="ref70">
        <mixed-citation>
          <string-name>
            <surname>Sadigh</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Dragan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Sastry</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Seshia</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Active Preference-Based Learning of Reward Functions</article-title>
          .
          <source>In Robotics: Science and Systems XIII. Robotics: Science and Systems Foundation.</source>
        </mixed-citation>
      </ref>
      <ref id="ref71">
        <mixed-citation>
          <string-name>
            <surname>Salkind</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <year>2010</year>
          .
          <source>Encyclopedia of Research Design</source>
          . Thousand Oaks, California: SAGE Publications, Inc.
        </mixed-citation>
      </ref>
      <ref id="ref73">
        <mixed-citation>
          <string-name>
            <surname>Schneider</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Weinmann</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>vom Brocke</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Digital nudging: guiding online user choices through interface design</article-title>
          .
          <source>Communications of the ACM</source>
          ,
          <volume>61</volume>
          (
          <issue>7</issue>
          ):
          <fpage>67</fpage>
          -
          <lpage>73</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref75">
        <mixed-citation>
          <string-name>
            <surname>Soares</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>The Value Learning Problem</article-title>
          .
          <source>In Ethics for Artificial Intelligence Workshop at 25th International Joint Conference on Artificial Intelligence (IJCAI-2016).</source>
        </mixed-citation>
      </ref>
      <ref id="ref76">
        <mixed-citation>
          <string-name>
            <surname>Sunstein</surname>
            ,
            <given-names>C. R.</given-names>
          </string-name>
          <year>2014</year>
          .
          <article-title>Why Nudge: The Politics of Libertarian Paternalism</article-title>
          . Yale University Press.
        </mixed-citation>
      </ref>
      <ref id="ref77">
        <mixed-citation>
          <string-name>
            <surname>Sunstein</surname>
            ,
            <given-names>C. R.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>The ethics of influence: Government in the age of behavioral science</article-title>
          . Cambridge University Press.
        </mixed-citation>
      </ref>
      <ref id="ref78">
        <mixed-citation>
          <string-name>
            <surname>Sutherland</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Alchemy: The Surprising Power of Ideas that Don't Make Sense</article-title>
          . Random House.
        </mixed-citation>
      </ref>
      <ref id="ref79">
        <mixed-citation>
          <string-name>
            <surname>Teschl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Comim</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>Adaptive Preferences and Capabilities: Some Preliminary Conceptual Explorations</article-title>
          .
          <source>Review of Social Economy</source>
          ,
          <volume>63</volume>
          (
          <issue>2</issue>
          ):
          <fpage>229</fpage>
          -
          <lpage>247</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref81">
        <mixed-citation>
          <string-name>
            <surname>Thaler</surname>
            ,
            <given-names>R. H.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Sunstein</surname>
            ,
            <given-names>C. R.</given-names>
          </string-name>
          <year>2003</year>
          .
          <article-title>Libertarian Paternalism</article-title>
          .
          <source>The American Economic Review</source>
          ,
          <volume>93</volume>
          (
          <issue>2</issue>
          ):
          <fpage>175</fpage>
          -
          <lpage>179</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref82">
        <mixed-citation>
          <string-name>
            <surname>Thaler</surname>
            ,
            <given-names>R. H.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Sunstein</surname>
            ,
            <given-names>C. S.</given-names>
          </string-name>
          <year>2008</year>
          . Nudge. Yale University Press.
        </mixed-citation>
      </ref>
      <ref id="ref83">
        <mixed-citation>
          <string-name>
            <surname>Tversky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Kahneman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>1973</year>
          .
          <article-title>Availability: A heuristic for judging frequency and probability</article-title>
          .
          <source>Cognitive Psychology</source>
          ,
          <volume>5</volume>
          (
          <issue>2</issue>
          ):
          <fpage>207</fpage>
          -
          <lpage>232</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref84">
        <mixed-citation>
          <string-name>
            <surname>Tversky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Kahneman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>1985</year>
          .
          <article-title>The Framing of Decisions and the Psychology of Choice</article-title>
          . In Wright, G., ed.,
          <source>Behavioral Decision Making</source>
          ,
          <fpage>25</fpage>
          -
          <lpage>41</lpage>
          . Boston, MA: Springer US.
        </mixed-citation>
      </ref>
      <ref id="ref85">
        <mixed-citation>
          <string-name>
            <surname>van der Linden</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>The conspiracy-effect: Exposure to conspiracy theories (about global warming) decreases prosocial behavior and science acceptance</article-title>
          .
          <source>Personality and Individual Differences</source>
          ,
          <volume>87</volume>
          :
          <fpage>171</fpage>
          -
          <lpage>173</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref86">
        <mixed-citation>
          <string-name>
            <surname>van Prooijen</surname>
            ,
            <given-names>J.-W.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>van Vugt</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Conspiracy Theories: Evolved Functions and Psychological Mechanisms</article-title>
          .
          <source>Perspectives on Psychological Science</source>
          ,
          <volume>13</volume>
          (
          <issue>6</issue>
          ):
          <fpage>770</fpage>
          -
          <lpage>788</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref87">
        <mixed-citation>
          <string-name>
            <surname>Varian</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <year>2006</year>
          .
          <article-title>Revealed Preference</article-title>
          .
          <source>In Samuelsonian economics and the twenty first century</source>
          ,
          <fpage>99</fpage>
          -
          <lpage>115</lpage>
          . Oxford University Press.
        </mixed-citation>
      </ref>
      <ref id="ref88">
        <mixed-citation>
          <string-name>
            <surname>Verma</surname>
            ,
            <given-names>I. M.</given-names>
          </string-name>
          <year>2014</year>
          .
          <article-title>Editorial expression of concern: Experimental evidence of massive-scale emotional contagion through social networks</article-title>
          .
          <source>Proceedings of the National Academy of Sciences of the United States of America</source>
          ,
          <volume>111</volume>
          (
          <issue>29</issue>
          ):
          <fpage>10779</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref89">
        <mixed-citation>
          <string-name>
            <surname>Villasenor</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2021</year>
          .
          <article-title>Reining in overly broad interpretations of the Computer Fraud and Abuse Act</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref90">
        <mixed-citation>
          <string-name>
            <surname>Winecoff</surname>
            ,
            <given-names>A. A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Lucherini</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Narayanan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2021</year>
          .
          <article-title>Simulation as Experiment: An Empirical Critique of Simulation Research on Recommender Systems</article-title>
          .
          <source>arXiv:2107.14333 [cs]</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref92">
        <mixed-citation>
          <string-name>
            <surname>Wyer</surname>
            ,
            <given-names>R. S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>A. J.;</given-names>
          </string-name>
          and Shen,
          <string-name>
            <surname>H.</surname>
          </string-name>
          <year>2012</year>
          .
          <article-title>The Effects of Past Behavior on Future Goal-Directed Activity</article-title>
          .
          <source>In Advances in Experimental Social Psychology</source>
          , volume
          <volume>46</volume>
          ,
          <fpage>237</fpage>
          -
          <lpage>283</lpage>
          . Elsevier.
        </mixed-citation>
      </ref>
      <ref id="ref93">
        <mixed-citation>
          <string-name>
            <surname>Yudkowsky</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <year>2011</year>
          .
          <article-title>Complex Value Systems in Friendly AI</article-title>
          . In
          <string-name>
            <surname>Schmidhuber</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ; Thórisson, K. R.; and Looks, M., eds.,
          <source>Artificial General Intelligence</source>
          , volume
          <volume>6830</volume>
          ,
          <fpage>388</fpage>
          -
          <lpage>393</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>