<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Enriching human-AI teaming based on risk envelopment and teammates' inherent capabilities</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Antti Salovaara</string-name>
          <email>antti.salovaara@aalto.fi</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Virpi Roto</string-name>
          <email>virpi.roto@aalto.fi</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Design, Aalto University</institution>
          ,
          <country country="FI">Finland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>With the development of new technologies, humans have continuously been able to automate larger amounts of work. The recent developments in AI again offer new opportunities for such automation. This time, however, the situation is different, as it has become possible to consider technologies as teammates instead of only as tools. This calls for a conceptualization of such human-AI teaming and a better understanding of the ways in which risks and efficiency can be balanced, as well as of how the inherent strengths and weaknesses of the involved parties can be orchestrated. In this paper, we build on our earlier considerations of how this may be achieved in a way that ensures work enrichment for the human teammates, and thus keeps work meaningful and interesting for people even with highly skilled AI teammates.</p>
      </abstract>
      <kwd-group>
        <kwd>Automation</kwd>
        <kwd>Human-AI teaming</kwd>
        <kwd>Envelopment</kwd>
        <kwd>Work enrichment</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        While many models and proposals for human–AI interaction and teaming have been presented in
recent years (e.g., [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]), and many older ones (e.g., [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3, 4, 5</xref>
        ]) are still seen as valid, researchers
are still seeking a synthesizing picture of the best ways to describe successful teaming between
humans, automation, robots, and AI. Instead of a synthesis, the recent developments in AI have brought
about a new influx of richer ways to consider these relationships.
      </p>
      <p>In this paper, we wish to offer more views into this exciting and continuously developing
research area, with the wish of contributing to an emerging synthesis in the future. We are interested,
on the one hand, in task and responsibility allocations that would elevate human potential and meaningfulness
in teams comprising humans and AIs, and, on the other, in finding ways to use AIs to their best benefit while
offsetting the related dangers and risks. We thus seek to present tentative answers to the following
research question: What are the task/responsibility configurations in human–AI teaming that a) provide
meaningful experiences at work for humans and b) offer balancing principles for AI’s strengths and
weaknesses?</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>In this paper, we consider AI as a new, more competent form of automation – one with higher
agency, representational capacity, and ability for interactive cooperation. In the following sub-sections,
we first review what the literature has suggested about task allocations between humans and automation,
and then take a closer look at recent work on human–AI interaction.</p>
      <sec id="sec-2-1">
        <title>2.1. Humans’ and AI’s inherent weaknesses and strengths</title>
        <p>
          One of the first recommendations for task division between humans and automation dates from
the 1950s, when Fitts [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] presented the “humans are better at – machines are better at” (HABA-MABA) lists
in a report on air navigation and traffic control (see Table 1).
        </p>
        <p>
          A much more recent presentation of human–automation task division is by Parasuraman et al.
[
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], whose proposal has four information processing stages (information acquisition, information
analysis, decision/action selection, and action implementation), each of which can involve different levels
of automation (e.g., in decision/action selection, from a level where a human makes all decisions,
through automation-executed actions after human approval, to a level where the automation makes
all decisions and ignores humans). This work has been widely cited, especially in human factors research.
        </p>
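<p>As a rough illustration of this framework (our own sketch, not from [5]; the stage names follow the paper, but the numeric level scale and all code are hypothetical):</p>

```python
from enum import IntEnum

# Illustrative sketch: Parasuraman et al.'s four information-processing
# stages, each of which can be assigned an independent level of
# automation, here on a hypothetical 1 (fully manual) .. 10 (fully
# autonomous) scale.

class Stage(IntEnum):
    INFORMATION_ACQUISITION = 1
    INFORMATION_ANALYSIS = 2
    DECISION_SELECTION = 3
    ACTION_IMPLEMENTATION = 4

def describe(profile):
    """Render a human-readable summary of an automation profile."""
    return ", ".join(f"{s.name.lower()}: level {lvl}" for s, lvl in profile.items())

# Example: a system that gathers and analyses data largely autonomously
# but leaves decision selection to the human.
profile = {
    Stage.INFORMATION_ACQUISITION: 8,
    Stage.INFORMATION_ANALYSIS: 7,
    Stage.DECISION_SELECTION: 2,
    Stage.ACTION_IMPLEMENTATION: 5,
}
print(describe(profile))
```

The point of the sketch is that the levels are set per stage, not for the system as a whole.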
        <p>The increasing capacity of machines to assume tasks that have earlier been humans’ responsibility
(especially with the last decade’s breakthroughs in AI) has motivated researchers to suggest
orchestrations by which the new forms of agency in AI’s operation can be harnessed successfully but
safely.</p>
        <p>Salovaara et al. [6] presented a three-layered organisational model by which organisations in high-risk
environments (malware protection in the case of their study) can automate virus protection processes
in customers’ computers while also keeping them amenable to rapid updates and reconfigurations should
errors occur or new detection needs arise. The model builds on the differences between mindful
and mindless activities [7]. The lowest layer in their model is fully automated and takes care of “mindless”
activities: it analyses operations within the computer’s communications and internal processes, and
takes swift action if something suspicious is detected. The second layer is human-operated, mindful and
thus slow. It is a reactive layer that responds to new virus families and continuously updates the lowest
layer’s automations. The third layer is anticipatory and research-oriented; it tries to identify new
threats so that the second layer can respond to them before they become epidemics. This way,
automation’s performance potential can be harnessed maximally, yet its processes can be adapted to
changing needs.</p>
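<p>The layered division of labour can be caricatured in code (our own construction, not from [6]; the signature names and functions are invented for illustration):</p>

```python
# Illustrative sketch of the three-layer model in a malware-protection
# setting. Layer 1 is fast, automated and "mindless"; layer 2 is the
# slower, human-operated, "mindful" layer that updates layer 1's rules;
# layer 3 (anticipatory research) feeds new threat families to layer 2.

KNOWN_SIGNATURES = {"worm-a", "trojan-b"}  # layer 1's automated rules

def layer1_scan(process: str) -> bool:
    """Mindless, automated layer: swift action on known threats."""
    return process in KNOWN_SIGNATURES

def layer2_update(new_family: str) -> None:
    """Mindful, human-operated layer: reacts to a new virus family by
    updating the automated layer's rule set."""
    KNOWN_SIGNATURES.add(new_family)

# Layer 3 identifies an emerging threat ahead of time and hands it to
# layer 2, so layer 1 blocks it before it becomes an epidemic.
layer2_update("ransom-c")
print(layer1_scan("ransom-c"))
```

The key property is that the fast layer never changes itself; all adaptation flows down from the slower, human-mediated layers.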
        <p>Another way to maximise AI’s power while curbing its risks is to establish “envelopes” around AI’s
training data, training method, permitted outputs, and the tasks that it is recruited to solve [8]. The term
“envelope” is adopted from shopfloor automation, where robot arms in assembly lines have 3D no-go
areas around them to prevent fatal injuries [9]. With such envelopes, organisations may be able to
deploy powerful yet inscrutable black-box AI systems to tasks, knowing that these systems will not be used in
decisions that are too risky and will not suggest actions that would be severely erroneous.</p>
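<p>A minimal sketch of such a task-level envelope (our own construction, not from [8] or [9]; the criticality labels and field names are hypothetical):</p>

```python
from dataclasses import dataclass

# Illustrative sketch: an "envelope" that only lets an AI system act
# autonomously on tasks that are both low-risk and socially acceptable;
# everything else is escalated to a human teammate.

@dataclass
class Task:
    name: str
    error_criticality: str  # hypothetical labels: "low", "medium", "high"
    socially_acceptable: bool

def within_envelope(task: Task) -> bool:
    """Return True if the AI may handle this task autonomously."""
    return task.error_criticality == "low" and task.socially_acceptable

tasks = [
    Task("draft routine status report", "low", True),
    Task("approve medical treatment", "high", True),
]
for t in tasks:
    handler = "AI" if within_envelope(t) else "human"
    print(f"{t.name} -> {handler}")
```

The envelope itself stays simple and inspectable even when the enveloped model is a black box, which is what makes the arrangement auditable.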
        <p>
          Human–AI interactions have also been an active topic of research. Microsoft researchers’ guidelines
for designing AI-infused systems [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], for example, are a widely cited source. They present 18 guidelines
for 4 different interaction situations (“initially”, “during interaction”, “when wrong” and “over time”)
in human–AI interactions. The guidelines suggest, for example, that AI should remember earlier
interactions with its human teammates, so that it can adapt to them over time.
        </p>
        <p>What has remained less studied, however, are the needs that human workers will have in future
work with AI teammates. The focus has been heavily on the capabilities of AI, while the human
teammates’ psychological needs and best cognitive capabilities have been understudied. We therefore
review some of the far fewer works on these topics in the following sub-section.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Requirements inherent to humans</title>
        <p>As sentient teammates, humans need to be considered differently than machines. At minimum, they
should have decent work conditions and fair income. According to the International Labour Organization
(https://www.ilo.org/topics/decent-work), every worker should have security in the workplace, social
protection, better prospects for personal development and social integration, freedom to express their
concerns, to organize, and to participate in the decisions that affect their lives, as well as equal opportunities and
treatment for all women and men. AI colleagues influence many of these aspects, such as security,
social protection, personal development, social integration, and participation in decision making. Thus,
when designing ethical AI co-workers, these aspects should be addressed.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Enriched human–AI teaming</title>
      <p>
        We find that much of what Fitts [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] wrote about the relative strengths of humans and machines is still valid
today, even if automation nowadays includes systems that are able to learn autonomously. What has
admittedly changed, however, is AI’s ability to demonstrate behaviours (e.g., pattern recognition and
use of natural language) that were previously considered solely a territory that humans master.
      </p>
      <p>One of the underlying principles by which Fitts’s two lists can be distinguished from each other
seems to be the mindlessness of the actions that machines can perform, and the mindfulness that humans
are capable of (occasionally, at least). This distinction still holds today: even if machines can perform
cognitively superhuman tasks, they are not capable of the self-reflection that mindful actions
presuppose.</p>
      <p>In the absence of self-reflection and mindfulness, the above-presented envelopment-oriented
organization-level techniques [8] may also be applicable to team-level human–AI cooperation. At
the team level, too, “boundary envelopment” can be applied to AIs’ tasks so that they only perform tasks
where errors are non-critical and where their use is socially acceptable.</p>
      <p>In some cases, however, organisational envelopment is inadequate or insufficient for team-level
cooperation. Not every team trains its own models, so training-data envelopment, for example,
is unlikely to be applicable. Also, new cloud-based large language model AIs introduce data privacy and
hallucination/fabrication risks, calling for consideration of whether new forms of team-level envelopment
could be identified and operationalized. Currently, precautions against these two risks rest too heavily on the
shoulders of human teammates.</p>
      <p>The needs of human teammates also require more attention. For example, an AI-powered robotic
co-worker must be designed to address employees’ safety. Traditionally, robotic tools are placed in
isolated areas to avoid accidents, but when robots become colleagues, they should be able to work
safely among the other workers. Another requirement for decent work is the possibility for workers to
participate in decision making, especially when it relates to their own work. While it may be difficult
to interfere with the processes of deep neural networks, designers should allow people an opportunity to
influence the allocation of their own work tasks. Following the decent work requirements in AI design makes
AI a well-behaving co-worker.</p>
      <p>Decent work is the minimum requirement for designing human–AI teamwork. It prevents harm at
work but does not address the aspects that make work intrinsically motivating. According to Roto [10],
addressing the requirements of decent work is the first level of work enrichment. Work enrichment is
the counterforce to work simplification, which often happens with industrial AI. To avoid future work
becoming passive monitoring of automated processes, work enrichment introduces activities that address
employees’ timely pragmatic and/or psychological needs and make work worthwhile. People are
motivated by their basic psychological needs, the fulfillment of which makes life worthwhile
and interesting. This applies also at work, and we see it as the next level of work enrichment [10].</p>
      <p>Self-Determination Theory [11] states that people are motivated at work when the work meets
their needs of feeling competent when executing work tasks, being able to execute those tasks
autonomously, and having good social relations with colleagues. For example, to foster the feeling of
competence, AI colleagues should allow people to complete tasks and maintain their competence at work, even
by teaching them how to complete new, interesting tasks. AI teammates that respect the autonomy of
their human colleagues give them choices and ensure they are doing their work willingly. To address the
third psychological need, relatedness, an AI system could act socially, like a human teammate. A social AI
can act like a butler, foreseeing workers’ needs and helping them complete uninteresting tasks.</p>
      <p>Roto [10] further proposes that the third level of work enrichment is the eudaimonic design of AI. While
making work motivating for people is the goal of work enrichment on level two, the third level targets
a future where highly autonomous AI systems are widespread and people can choose how much they
want to work. In this context, people may find a lazy life meaningless and detrimental to their wellbeing
in the long term. Some people may prefer to go to work, where they can both develop themselves and
make themselves useful. The Aristotelian philosophy of eudaimonia guides people to live a good life and
to realize their true potential (e.g., [12]). When people come to work to improve the meaningfulness of
their lives, an AI colleague can be designed to act like a coach that helps them flourish and live a good
life. For example, AI can help workers in self-discovery and in recognizing the work roles
and tasks most suitable for them. When people are aware of their best potential, they can put it to use at work. AI
can guide them to find personally meaningful objectives and to make the best use of their talents. AI can
even guide people to workplaces that provide suitable tasks for the individual. Through such
AI teammates, we can move from work where humans are servants of AI to work where AI makes
human employees flourish.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <p>In this position paper, we have drafted two future pathways toward improved human–AI teaming. First,
we have suggested that the envelopment methods identified in organization-level AI deployments, where
they balance the opportunities of more powerful models against the risks that inscrutability introduces, could
be adapted to team-level concerns. Second, we have turned our attention to the human teammates’
psychological needs – a topic that has been overshadowed by the surging interest that the
expanding technological capabilities of AIs have created. We plan to investigate these two avenues in
the future, and hope that these viewpoints raise interesting discussions in this CHI workshop.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools in the preparation of this article.</p>
      <p>[6] A. Salovaara, K. Lyytinen, E. Penttinen, High reliability in digital organizing: Mindlessness, the frame problem, and digital operations, MIS Quarterly 43 (2019) 555–578. doi:10.25300/MISQ/2019/14577.</p>
      <p>[7] B. S. Butler, P. H. Gray, Reliability, mindfulness, and information systems, MIS Quarterly 30 (2006) 211–224. doi:10.2307/25148728.</p>
      <p>[8] A. Asatiani, P. Malo, P. R. Nagbøl, E. Penttinen, T. Rinta-Kahila, A. Salovaara, Sociotechnical envelopment of artificial intelligence: An approach to organizational deployment of inscrutable artificial intelligence systems, Journal of the Association for Information Systems 22 (2021). doi:10.17705/1jais.00664.</p>
      <p>[9] S. Robbins, AI and the path to envelopment: Knowledge as a first step towards the responsible regulation and use of AI-powered machines, AI &amp; Society (2019). doi:10.1007/s00146-019-00891-1.</p>
      <p>[10] V. Roto, Co-worker, butler, or coach? Designing automation for work enrichment, in: R. Rousi, C. von Koskull, V. Roto (Eds.), Humane Autonomous Technology: Re-thinking Experience with and in Intelligent Systems, Springer International Publishing, Cham, 2024, pp. 45–65. doi:10.1007/978-3-031-66528-8_3.</p>
      <p>[11] E. L. Deci, R. M. Ryan, Intrinsic Motivation and Self-Determination in Human Behavior, Springer, New York, NY, 1985. doi:10.1007/978-1-4899-2271-7.</p>
      <p>[12] C. D. Ryff, Happiness is everything, or is it? Explorations on the meaning of psychological well-being, Journal of Personality and Social Psychology 57 (1989) 1069–1081. doi:10.1037/0022-3514.57.6.1069.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] S. Amershi, D. Weld, M. Vorvoreanu, A. Fourney, B. Nushi, P. Collisson, J. Suh, S. Iqbal, P. N. Bennett, K. Inkpen, J. Teevan, R. Kikin-Gil, E. Horvitz, Guidelines for human-AI interaction, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI 2019), ACM Press, New York, NY, 2019, pp. 3:1–3:13. doi:10.1145/3290605.3300233.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] S. Berretta, A. Tausch, G. Ontrup, B. Gilles, C. Peifer, A. Kluge, Defining human-AI teaming the human-centered way: A scoping review and network analysis, Frontiers in Artificial Intelligence 6 (2023). doi:10.3389/frai.2023.1250725.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] P. Fitts, Human Engineering for an Effective Air Navigation and Traffic Control System, Technical Report, National Research Council, 1951.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] J. M. Bradshaw, V. Dignum, C. Jonker, M. Sierhuis, Human-agent-robot teamwork, IEEE Intelligent Systems 27 (2012) 8–13. doi:10.1109/MIS.2012.37.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] R. Parasuraman, T. B. Sheridan, C. D. Wickens, A model for types and levels of human interaction with automation, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 30 (2000) 286–297. doi:10.1109/3468.844354.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>