<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Everyday-Life Information-Seeking With AI: How Insights From ELIS Can Help Design Trustworthy AI Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andrea Beretta</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rebecca Guerrini</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Information Science and Technologies (ISTI-CNR)</institution>
          ,
          <addr-line>56124 Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Padova</institution>
          ,
          <addr-line>35131 Padova</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>As artificial intelligence (AI) systems become an integral part of our everyday life, their role in shaping human information-seeking behaviors has become a crucial area of study. Over the years, the part of the Human-Computer Interaction (HCI) literature focused on AI has concentrated on the role of trust in mediating users' reliance on AI output. However, the experimental tasks used in these studies did not reflect the way people actually seek information in their everyday life, undermining the generalizability of the results. Here, we posit that the way people practice information-seeking in their daily lives can influence how they then decide to rely on the output of the AI. Therefore, we propose that the perspective offered by the Everyday Life Information-Seeking (ELIS) literature can help designers create more trustworthy AI systems that foster users' appropriate reliance.</p>
      </abstract>
      <kwd-group>
        <kwd>trustworthy AI</kwd>
        <kwd>appropriate reliance</kwd>
        <kwd>information-seeking</kwd>
        <kwd>ELIS</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Imagine it is 10 a.m. on a Sunday morning. You just woke up and, while having breakfast, you are planning the day ahead. It is not a workday, but you have to run some errands, maybe fix some small things you ignored all week, and you want to dedicate some time to sewing your cosplay costume for an upcoming comic convention. One thing that particularly bothers you is the squeaky bathroom door, which still opens fine but lets all your roommates know when you are going to the toilet. Not knowing how to fix the problem, you turn to ChatGPT for some quick solutions. You pick one of the answers it provides and decide to rely on it. Your choice can depend on the trust you have in ChatGPT, but also on variables like the time it will take to implement that solution, the tools needed, their availability, or how much you care about the success of the solution. Since it is Sunday morning and the door is not an important problem, you decide to rely on the given suggestions and apply just a little oil to the hinges to see if it works. Later, you want to continue working on the cosplay costume. It is your dream cosplay, and you want to wear it at a comic convention you are really excited about. You want to do a good job, and you really value the success of your work. So when your sewing machine gives you trouble, you approach ChatGPT's suggestions more cautiously and likely rely on them differently. And it is possible that, in the end, you will decide to call your grandparents (former tailors) for help.</p>
      <p>
        This imagination exercise was meant to highlight the connection between reliance on AI outputs and our information-seeking habits, specifically those that occur in our everyday lives. Some studies in the information-seeking literature have focused on this latter aspect: they investigated the ways individuals search for information for everyday purposes, a phenomenon described as Everyday Life Information-Seeking (ELIS) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. ELIS explores the search for information that supports users' personal interests and broader life responsibilities [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] [2]. This perspective can offer a valuable framework for understanding how users interact with AI-driven information systems in non-professional, real-world scenarios. However, the impact of AI on ELIS (and conversely, the impact of ELIS on AI use) has not been explored yet.
      </p>
      <p>While ELIS research provides insights into how individuals seek information for personal and everyday needs, another body of literature in Human-Computer Interaction (HCI) focuses on a different but related aspect: trust and reliance on AI outputs. These studies (e.g., [3] [4] [5]) have explored the topic by asking participants to evaluate AI outputs and decide whether to rely on them in order to solve imposed tasks in which they have neither knowledge nor interest.</p>
      <p>The fact that subjects do not encounter these tasks in their everyday life, or worse, would not choose to engage with them outside of the experimental setting, likely compromises the generalizability of the results to more naturalistic settings.</p>
      <p>This paper gives a brief overview of these two literatures and aims to bridge the existing gap by examining the intersection of ELIS and reliance on AI. Specifically, we posit that users' reliance behaviors change based on the type of information they are retrieving and the value they assign to it. According to this perspective, insights from the ELIS literature can benefit reliance research by providing a more comprehensive understanding of user-AI interactions, which can be used to design trustworthy AI systems that foster appropriate reliance.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Information-seeking and insights from ELIS</title>
      <sec id="sec-2-1">
        <title>2.1. Information-seeking: definition and influential frameworks</title>
        <p>By information-seeking we mean the activity of searching for information. This search is active and intentional, and it is usually directed at information that is salient for the seeker (e.g., information needed to solve tasks or make decisions) [6] [7].</p>
        <p>Information-seeking behaviors are varied: they include asking questions and looking up information in books or on the internet, but also querying AI systems (for example, Large Language Models, LLMs).</p>
        <p>Over the years, research from the fields of psychology and information science has extensively investigated the motives and antecedents of information-seeking, as well as the search strategies employed by users (particularly with search engines), the use of the retrieved information, its sharing, and the discrimination between factual information and misinformation [8].</p>
        <p>To this day, a great number of frameworks and models of information-seeking exist in both literatures, without agreement on which one best describes the process [9] [6] [10] [7].</p>
        <p>Other rich literatures related to information-seeking include that on curiosity, seen as an antecedent [11] [12] [8], and that on information-avoidance, the counterpart of information-seeking [13] [14].</p>
        <p>One model that has recently gained attention is the one proposed by Sharot and Sunstein [7]. The authors propose that, in deciding whether to seek information, people evaluate the instrumental, hedonic, and cognitive utility of the information. Instrumental utility concerns how much the information will allow the person to achieve a goal; hedonic utility corresponds to the amount of positive affect minus the amount of negative affect that the information would induce; cognitive utility regards the role of the information in strengthening the internal mental models of the person retrieving it.</p>
        <p>An estimate, which can be positive, negative, or zero, is assigned to each of the three dimensions. These evaluations are then integrated into a final estimate of the value of the information, which can trigger information-seeking (if positive), information-avoidance (if negative), or neither (if zero).</p>
        <p>In general, people will end up choosing to seek information that a) helps them select actions that lead to the best outcomes; b) elicits positive affective responses at the present time; and c) will strengthen their mental models or relates to concepts frequently activated and interconnected in those models [7]. The authors also posit a role for biases and individual differences [15] in the estimation of the three utilities. Figure 1 illustrates the authors' framework.</p>
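        <p>As a toy illustration of the integration step just described, the short sketch below sums hypothetical estimates of the three utilities into a single value of information and maps its sign onto the predicted behavior. It is not an implementation from Sharot and Sunstein [7]; the function names and numeric values are our own assumptions.</p>
        <preformat>
```python
def information_value(instrumental, hedonic, cognitive):
    # Toy integration: sum the three utility estimates (each of
    # which may be positive, negative, or zero) into a single
    # estimate of the value of the information.
    return instrumental + hedonic + cognitive

def seeking_decision(value):
    # A positive value triggers information-seeking, a negative
    # value triggers information-avoidance, and zero triggers neither.
    if value > 0:
        return "seek"
    if -value > 0:
        return "avoid"
    return "neither"

# Squeaky-door example: modest instrumental utility, slight hedonic
# gain, no cognitive gain; the integrated value is positive.
print(seeking_decision(information_value(0.4, 0.1, 0.0)))  # prints "seek"
```
        </preformat>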
      </sec>
      <sec id="sec-2-2">
        <title>2.2. ELIS literature</title>
        <p>
          One line of research has focused on what has been termed Everyday Life Information-Seeking (ELIS) [2]
[
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. ELIS has been described as a process of information-seeking that concerns knowledge useful for daily life needs, or for fulfilling roles and life responsibilities. ELIS practices include searching for information to solve specific problems, but also the passive receipt or monitoring of information through media and social networks [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
        <p>In one of his core papers on ELIS, Savolainen [2] introduces two captivating concepts related to the topic, namely the way of life and the mastery of life. The way of life (or the "order of things") refers to the preference (hence, the order) that an individual assigns to her daily life activities, which are not only job-related. The assigned order depends on both objective evaluations (like the length of the working day and the amount of free time) and subjective ones (the pleasantness experienced in doing certain activities). The mastery of life (or "keeping things in order"), on the other hand, refers to the effort that individuals make to maintain their meaningful order of activities. The core idea is that both the way of life and the mastery of life affect individuals' information-seeking practices.</p>
        <p>
          What really distinguishes the ELIS literature is its focus on a type of information-seeking that concerns areas of an individual's life beyond mere duties (work- or study-related), relating instead to personal interests, attitudes, and needs. Additionally, it can be seen as an approach that offers a more ecological and naturalistic view of the information-seeking process, one traditionally less explored. Finally, ELIS research takes into consideration the complexity and diversity of information-seeking behaviors: ELIS behaviors are shaped by many different factors, and also differ between people and contexts [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
        <p>
          One of the trends in ELIS research concerns how the rapid development of technologies, now part of people's everyday lives, is changing their information behaviors [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. However, this line of research has not yet explored the relationship between the use of AI systems and ELIS behaviors, suggesting that this is still a potential area for investigation.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. The impact of ELIS on reliance on AI systems</title>
      <p>The advent of AI has introduced a new type of interaction between user and system, which has consequently influenced information behaviors. In recent years this interaction has been studied with a strong focus on the concepts of trust and reliance on AI outputs, in experimental situations that are not ecologically valid. In the next paragraphs we give an overview of the literature on trust and reliance on AI, and explain why ELIS can offer some insights on the topic.</p>
      <sec id="sec-3-1">
        <title>3.1. Trust and reliance: some definitions</title>
        <p>According to the widely used definition proposed by Lee and See [16], trust is "the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability". Trust implies a relationship between a trustor, the one who trusts, and a trustee, the party being trusted. In the case of automation, the user is the trustor and the system is the trustee.</p>
        <p>The concept of trust in the context of HCI has been explored in different fields (e.g., e-commerce, e-health, e-government, digital information, human-robot interaction, computer-supported cooperative work, and so on), and the process of trust formation has been investigated through different theories [17]. Moreover, several frameworks on the role of trust in the interaction between humans and automation have been proposed over the years [16] [18] [19] [20] [3]. All of the proposed models share the idea that trust is a complex construct, influenced by components that are dispositional (related to the person's tendencies, background, and cognitive and emotional aspects) and external, that is, dependent on the context and the environment (cultural, organizational, etc.), including relational and social aspects [17].</p>
        <p>Trust has received so much attention in recent decades in the context of HCI because of its role in modulating reliance on automation. Reliance, unlike trust, is a behavior. In the context of AI, reliance is usually understood as the frequency with which a person accepts the AI's advice, that is, how much they "rely" on the suggestions of the machine. Behaviors are usually influenced by the individual's attitudes, but also by other factors. For example, according to the Theory of Planned Behavior (TPB), a well-known framework proposed by Ajzen [21] [22], the subjective norm concerning a behavior and the perceived behavioral control over it are as important as attitudes in shaping the intention to perform it and, if the individual acts on that intention, the behavior itself. This adds a layer of complexity to the analysis of reliance behaviors that, to our knowledge, has not been properly addressed in the literature.</p>
        <p>What is really important to keep in mind, then, is that reliance is influenced by trust, but also by other factors (like self-confidence or perceived risk). This makes reliance possible without trust, but also implies that trust does not always translate into reliance behaviors [3] [16].</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. The issue with overreliance</title>
        <p>Both trust and reliance play an important role in the interaction between humans and automation: a miscalibration of either can lead to overreliance or underreliance on the system.</p>
        <p>As Lee and See [16] defined it, calibration is "the correspondence between a person's trust in the automation and the automation's capabilities". So, we have calibrated trust when "trust matches system capabilities, leading to appropriate use" [16].</p>
        <p>To prevent errors in decision-making when using automated systems, what matters is not the level
of trust per se, but its calibration.</p>
        <p>When trust is not well calibrated, we incur overtrust or distrust: respectively, a situation in which "trust exceeds system capabilities" and one in which "trust falls short of the automation's capabilities" [16].</p>
        <p>As noted earlier, trust has a mediating role on reliance, so poor trust calibration can lead to two scenarios. In the context of human-AI cooperation, for instance, users can reject the AI's suggestion when it is actually correct: in this case we talk about underreliance. Alternatively, users can accept incorrect AI decisions, incurring overreliance [23]. This concept is visually explained in Table 1, equal in content to the one proposed in Vasconcelos et al. [23].</p>
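        <p>The four outcomes summarized in Table 1 can also be expressed as a simple classification over two binary variables: whether the AI's advice is correct, and whether the user relies on it. The sketch below is purely illustrative (the function name and labels are ours, not code from Vasconcelos et al. [23]).</p>
        <preformat>
```python
def reliance_outcome(ai_is_correct, user_relies):
    # Relying on correct advice, or rejecting incorrect advice,
    # counts as appropriate; the two mismatched cases are
    # overreliance and underreliance respectively.
    if user_relies and ai_is_correct:
        return "appropriate reliance"
    if user_relies and not ai_is_correct:
        return "overreliance"
    if ai_is_correct:
        return "underreliance"
    return "appropriate rejection"

# Accepting an incorrect AI suggestion is the overreliance case.
print(reliance_outcome(ai_is_correct=False, user_relies=True))  # prints "overreliance"
```
        </preformat>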
      </sec>
      <sec id="sec-3-3">
        <title>3.3. The problem with the study of reliance and where ELIS can help</title>
        <p>One issue with the literature on trust and reliance in HCI is the confusion around terminology and, consequently, around the measures employed to evaluate the constructs of trust and reliance.</p>
        <p>These two terms are often used interchangeably even though they are two different constructs (one is an attitude, a latent variable, and the other is a behavior, hence observable), and the boundaries between the two tend to vanish.</p>
        <p>This leads to a lack of agreement on the measures employed to evaluate them, which can translate into a lack of agreement on the interpretation of results.</p>
        <p>Another issue emerges from the experimental designs employed in the study of trust and reliance with AI. These studies (e.g., [3] [4] [5]) usually ask participants to solve tasks in which they have no interest or expertise. This happens because the task is imposed on the participants, who cannot choose it. Here, we posit that this task imposition can influence the participants' reliance behaviors (or trust behaviors, when the term is used improperly), potentially causing them to differ from those they would exhibit in everyday life situations, or for tasks they care about. In this scenario, the ELIS perspective can come in handy. By embracing the way in which people seek information with AI, we can better understand the motives that lead users to search for information with this tool and, consequently, how they decide to rely on the AI output. The results drawn from these observations will likely be more useful for understanding the dynamic interaction between user and AI and, therefore, for designing trustworthy interfaces.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Open questions and insights for future works</title>
      <p>With this new perspective in mind, illustrated in Figure 2, we propose here some research questions that can inspire future works:</p>
      <p>
        • RQ1: How does the use of AI systems influence the process of everyday life information-seeking?
Given the literature gap in ELIS research, it would be useful to explore how the introduction and use of AI systems are influencing users' information-seeking [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>• RQ2: How can the way in which people seek information in their everyday-life modulate reliance
behaviors on AI outputs?
As mentioned earlier, it is possible that the type of information that people choose to seek, together
with the personal values that they assign to them [7] [2], have an influence on the user decision to rely
or disregard the AI output. However, specific research on the topic are missing.</p>
      <p>• RQ3: Which strategies can be implemented to foster appropriate reliance on AI outputs when
engaging in everyday life information-seeking activities?
After we have investigated the users’ reliance behaviors when engaged in ELIS, strategies to foster
appropriate reliance should be designed, tested and implemented.</p>
      <p>One final issue concerns the methods that should be employed to investigate these research questions, and specifically how to measure information-seeking behavior.</p>
      <p>It could be measured using quantitative instruments like questionnaires [24] [25], or qualitative methods like think-aloud protocols [26] [27], diaries [28] [29], and interviews [30] [31] (structured and semi-structured). It remains to be clarified which of these is most suitable for measuring information-seeking, or whether a combination of these methodologies is needed.</p>
      <p>We hope that the insights presented in this section can guide future research and allow designers to create systems that support everyday information-seeking, tasks, and decisions.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>Now, let's get back for a moment to the imagination exercise we started at the beginning of this discussion. It is Sunday again, but the day is ending. As you crawl into bed, you think about the day that has just passed. Overall you had a nice day, and you took care of all the things you had neglected during the week. You encountered some problems along the way, but you actively searched for useful information to solve them. What tool did you use to search for that information? How did you decide whether to rely on it, based on the task you had to accomplish? Did the tools you used support you in the search? If not, how could they be better designed to support your search and your decision to rely on the information?</p>
      <p>In this paper we have briefly explored the literature on information-seeking and reliance on AI, in order to draw a connection between these two themes and delineate some useful research questions for future works. We suggested that studying how people seek information with AI in their everyday life will help design AI systems that better suit users and support them during the process of information-seeking. More importantly, the knowledge gained through the observation of this process will make it possible to create trustworthy AI systems that foster users' appropriate reliance.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work has been funded by PNRR - M4C2 - Investimento 1.3, Partenariato Esteso PE00000013 - “FAIR
- Future Artificial Intelligence Research” - Spoke 1 “Human-centered AI”.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-8">
      <title>References</title>
      <p>[2] R. Savolainen, Everyday life information seeking: Approaching information seeking in the context of "way of life", Library &amp; Information Science Research 17 (1995) 259–294.</p>
      <p>[3] M. Schemmer, N. Kuehl, C. Benz, A. Bartos, G. Satzger, Appropriate reliance on AI advice: Conceptualization and the effect of explanations, in: Proceedings of the 28th International Conference on Intelligent User Interfaces, 2023, pp. 410–422.</p>
      <p>[4] H. Xiang, J. Zhou, B. Xie, AI tools for debunking online spam reviews? Trust of younger and older adults in AI detection criteria, Behaviour &amp; Information Technology 42 (2023) 478–497.</p>
      <p>[5] G. Zhang, L. Chong, K. Kotovsky, J. Cagan, Trust in an AI versus a human teammate: The effects of teammate identity and performance on human-AI cooperation, Computers in Human Behavior 139 (2023) 107536.</p>
      <p>[6] C. Shah, E. M. Bender, Envisioning information access systems: What makes for good tools and a healthy web?, ACM Transactions on the Web 18 (2024) 1–24.</p>
      <p>[7] T. Sharot, C. R. Sunstein, How people decide what they want to know, Nature Human Behaviour 4 (2020) 14–19.</p>
      <p>[8] T. D. Wilson, Curiosity and information-seeking behaviour: A review of psychological research and a comparison with the information science literature, Journal of Documentation 80 (2024) 43–59.</p>
      <p>[9] H. Allam, M. Bliemel, N. Nassiri, S. Toze, L. M. Peet, R. Banerjee, A review of models of information seeking behavior, 2019 Sixth HCT Information Technology Trends (ITT) (2019) 147–153.</p>
      <p>[10] K. Murayama, A reward-learning framework of knowledge acquisition: An integrated account of curiosity, interest, and intrinsic–extrinsic rewards, Psychological Review 129 (2022) 175.</p>
      <p>[11] G. Loewenstein, The psychology of curiosity: A review and reinterpretation, Psychological Bulletin 116 (1994) 75.</p>
      <p>[12] L. L. van Lieshout, F. P. de Lange, R. Cools, Why so curious? Quantifying mechanisms of information seeking, Current Opinion in Behavioral Sciences 35 (2020) 112–117.</p>
      <p>[13] J. L. Foust, J. M. Taber, Information avoidance: Past perspectives and future directions, Perspectives on Psychological Science (2023) 17456916231197668.</p>
      <p>[14] R. Golman, D. Hagmann, G. Loewenstein, Information avoidance, Journal of Economic Literature 55 (2017) 96–135.</p>
      <p>[15] C. A. Kelly, T. Sharot, Individual differences in information-seeking, Nature Communications 12 (2021) 7062.</p>
      <p>[16] J. D. Lee, K. A. See, Trust in automation: Designing for appropriate reliance, Human Factors 46 (2004) 50–80.</p>
      <p>[17] S. Gulati, J. McDonagh, S. Sousa, D. Lamas, Trust models and theories in human–computer interaction: A systematic literature review, Computers in Human Behavior Reports (2024) 100495.</p>
      <p>[18] K. A. Hoff, M. Bashir, Trust in automation: Integrating empirical evidence on factors that influence trust, Human Factors 57 (2015) 407–434.</p>
      <p>[19] C. D. Wirz, J. L. Demuth, A. Bostrom, M. G. Cains, I. Ebert-Uphoff, D. J. Gagne II, A. Schumacher, A. McGovern, D. Madlambayan, (Re)conceptualizing trustworthy AI: A foundation for change, Artificial Intelligence (2025) 104309.</p>
      <p>[20] E. Glikson, A. W. Woolley, Human trust in artificial intelligence: Review of empirical research, Academy of Management Annals 14 (2020) 627–660.</p>
      <p>[21] I. Ajzen, The theory of planned behavior, Organizational Behavior and Human Decision Processes 50 (1991) 179–211.</p>
      <p>[22] I. Ajzen, The theory of planned behavior: Frequently asked questions, Human Behavior and Emerging Technologies 2 (2020) 314–324.</p>
      <p>[23] H. Vasconcelos, M. Jörke, M. Grunde-McLaughlin, T. Gerstenberg, M. S. Bernstein, R. Krishna, Explanations can reduce overreliance on AI systems during decision-making, Proceedings of the ACM on Human-Computer Interaction 7 (2023) 1–38.</p>
      <p>[24] I. B. Mun, A study of the impact of ChatGPT self-efficacy on the information seeking behaviors in ChatGPT: The mediating roles of ChatGPT characteristics and utility, Online Information Review 49 (2025) 373–394.</p>
      <p>[25] E. Basak, F. Calisir, An empirical study on factors affecting continuance intention of using Facebook, Computers in Human Behavior 48 (2015) 181–189.</p>
      <p>[26] S. Gupta, Y.-C. Chen, C. Tsai, Utilizing large language models in tribal emergency management, in: Companion Proceedings of the 29th International Conference on Intelligent User Interfaces, 2024, pp. 1–6.</p>
      <p>[27] M. Al-Moteri, Evidence-based information-seeking behaviors of nursing students: Concurrent think aloud technique, Heliyon 9 (2023).</p>
      <p>[28] A. L. Colosimo, G. Badia, Diaries of lifelong learners: Information seeking behaviors of older adults in peer-learning study groups at an academic institution, Library &amp; Information Science Research 43 (2021) 101102.</p>
      <p>[29] K. MacKay, C. Vogt, Information technology in everyday and vacation contexts, Annals of Tourism Research 39 (2012) 1380–1401.</p>
      <p>[30] J. Koman, K. Fauvelle, S. Schuck, N. Texier, A. Mebarki, Physicians' perceptions of the use of a chatbot for information seeking: Qualitative study, Journal of Medical Internet Research 22 (2020) e15185.</p>
      <p>[31] P. J. McKenzie, A model of information practices in accounts of everyday-life information seeking, Journal of Documentation 59 (2003) 19–40.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H</given-names>
            .
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Everyday life information seeking: A systematic review with bibliometric analysis</article-title>
          ,
          <source>Journal of Librarianship and Information Science</source>
          (
          <year>2024</year>
          )
          <fpage>09610006241285514</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>