A Research Framework Focused on AI and Humans instead of AI
versus Humans
Gerhard Fischer 1
1
    Center for LifeLong Learning & Design (L3D), University of Colorado, Boulder, USA

                                    Abstract
                                    The arguments in this position paper are grounded in my professional career as a faculty
                                    member in Computer Science and Cognitive Science. For the last three decades, our research
                                    in the Center for Lifelong Learning & Design (L3D) has been centered on human-centered
                                    design, intelligence augmentation, and distributed cognition with a focus on how to transcend
                                    the unaided individual human mind with socio-technical environments. The theme of this
                                    workshop “AI for Humans or Humans for AI” does not have a simple answer. My arguments
                                    provide support for the “AI for Humans” perspective. Our research activities and my
                                    contributions to previous CoPDA workshops explored problems beneficial to the needs of
                                    people, societies, and humanity by postulating “Quality of Life” as an overarching design
                                    objective, enriching the discourse about “AI for Humans” beyond a discussion of efficiency
                                    and productivity.

                                    Keywords
                                    Humans for AI, AI for Humans, Quality of Life

1. Introduction

    The arguments in this position paper are grounded in my professional career as a faculty member in
Computer Science and Cognitive Science. For the last three decades, our research in the Center for
Lifelong Learning & Design (L3D) has been centered on human-centered design, intelligence
augmentation, and distributed cognition with a focus on how to transcend the unaided individual human
mind with socio-technical environments [1, 2].
    The theme of this workshop “AI for Humans or Humans for AI” does not have a simple answer [3].
My arguments support the “AI for Humans” perspective [4, 5]. Our research activities
[6] and my contributions to previous CoPDA workshops explored problems beneficial to the needs of
people, societies, and humanity by postulating “Quality of Life” as an overarching design objective [7,
8], enriching the discourse about “AI for Humans” beyond a discussion of efficiency and productivity.

2. AI: What is it?
2.1. Differentiating AI Approaches

    There is no generally accepted definition of AI, and there is no defined boundary separating “AI
systems” from “non-AI systems”. Despite this shortcoming, AI is currently considered world-wide as
a “deus ex machina” and is credited with miraculous abilities to solve all problems and exploit all
opportunities of the digital age. Figure 1 attempts to unpack the meaning of AI into more specific
research areas [6] by differentiating among
    • Artificial General Intelligence (AGI) is the envisioned objective to create intelligent agents that
        will match human capabilities for understanding and learning any intellectual task that a human
        being can. While some researchers consider AGI as the ultimate goal of AI, for others AGI

Proceedings of CoPDA2022 - Sixth International Workshop on Cultures of Participation in the Digital Age: AI for Humans or Humans for
AI? June 7, 2022, Frascati (RM), Italy
EMAIL: gerhard@colorado.edu (G. Fischer)
                                 © 2022 Copyright for this paper by its authors.
                                 Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
                                 ISSN 1613-0073 (http://ceur-ws.org)
                                 CEUR Workshop Proceedings (CEUR-WS.org)




        remains speculative, as no such system has been demonstrated yet. Opinions vary on whether
        and when AGI will arrive, if at all.
   •    AI for Specific Purposes (AISP) is an engineering discipline that addresses specific, well-
        defined problems for which AI systems perform better than human beings. Many successful
        contributions toward these objectives provide the basis for the current hype surrounding AI.
        Human involvement is not a relevant design criterion in these approaches.
   •    Human-Centered AI (HCAI) (closely related to intelligence augmentation [9, 3]) is focused on
        improving the quality of life of humans by creating AI systems that amplify, augment, and
        enhance human performance with systems that are reliable, safe, and trustworthy [5].


[Figure: Artificial Intelligence (AI) at the center, branching into three research areas. Artificial
General Intelligence (AGI, “Strong AI”): artificial intelligence identical to human intelligence. AI
for Specific Purposes (AISP): engineering disciplines for replacing human beings, including machine
learning, deep learning, big data, robotics, natural language processing, and predictive analysis.
Human-Centered AI (HCAI): socio-technical environments for empowering human beings, including
intelligence augmentation, explainable AI (XAI), democratizing AI, ethics and trust, shared
understanding, and common ground.]
Figure 1: Differentiating AI Approaches


2.2.    Contrasting “Humans for AI” versus “AI for Humans”

    While the growth of technology is certain, the inevitability of any particular future is not.
Contrasting “AI for Humans” with “Humans for AI” is an important step toward articulating design
guidelines for future technological developments.
   Frameworks centered on “Humans for AI” [10] are grounded in objectives such as
   • technological advances are more important than people;
   • requiring people to work on technology’s terms;
   •    using people as stopgaps to do the parts of a task that machines cannot yet do;
   •    restricting perspectives to “can we do it?” and ignoring challenges derived from the question
        “should we do it?” by insufficiently considering potential drawbacks such as (a) the loss of
        meaningful work, (b) the loss of personal control (if big data is watching us, how can we retain
        personal freedom?), and (c) an increase in the digital divide and inequality (those who own the
        data own the future).

   In contrast, frameworks centered on “AI for Humans” [4, 5] are grounded in objectives such as
   • humans and computers are different; the focus should therefore be on complementing rather than
        emulating and replacing human capabilities with computers;
   • human-centered design, where the work starts with understanding people’s needs and
        capabilities;
   • transcending the unaided individual human mind by exploring the potential of distributed
        cognition;
   • identifying situations in which autonomous, intelligent technology should be deployed, often in
        areas characterized by the “three D’s”: dull, dirty, and dangerous; and




   •    sparking design efforts for exploring a synthesis of humans and AI by integrating their strengths
        and reducing their weaknesses as identified by a design trade-off analysis.

3. “AI and Humans” and “AI versus Humans”

   Throughout history, there have always been two distinct forces at play: the substituting force, which
replaced human workers, and the complementing force, which empowered human beings [11].

3.1.    Distributed Cognition: AI and Humans

   A fundamental challenge for research in computer science, cognitive science, and the learning
sciences is to understand thinking, learning, working, and collaborating by exploiting the power of
omnipotent and omniscient technology. We need to understand what tasks should be reserved for
educated human minds and the collaboration among different human minds, and what tasks can and
should be taken over or aided by cognitive artifacts. In an information-rich world, the true power comes
not from more information, but from information that is personally meaningful, relevant to people’s
concerns, and relevant to the task at hand.
   Distributed cognition [12] is a fundamental framework by which to marry the intellectual power of
the human mind with appropriate technologies. People think in conjunction and partnership with others
and with the help of culturally provided tools [13]. Distributed cognition complements our biological
memory with external symbolic memory [14] and extends the individual mind with the social mind.
Distributed cognition transcends the individual, unaided human mind [15] but it comes at a cost:
external symbolic representations entail complex media that require extensive learning efforts by
humans.
   Many of our research efforts have addressed this challenge including:
   • domain-oriented design environments, focused on supporting human problem-domain
        interaction and not only human-computer interaction [16];
   • the Envisionment and Discovery Collaboratory, supporting communities of interest in
        Renaissance communities with boundary objects [1]; and
   • context-aware systems based on user and task models reducing information overload [17].
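
The kind of context-aware support mentioned in the last bullet can be reduced to a minimal sketch.
The user-model attributes and the scoring rule below are illustrative assumptions, not taken from [17]:

```python
# Minimal sketch of context-aware filtering: rank information items against
# simple user and task models instead of presenting everything.
# The model attributes and weights are invented for illustration.

def relevance(item, user_model, task_model):
    """Score an item by overlap with the user's interests and the current task."""
    score = len(item["topics"] & user_model["interests"])        # personally meaningful
    score += 2 * len(item["topics"] & task_model["keywords"])    # relevant to the task at hand
    return score

def filter_items(items, user_model, task_model, top_n=2):
    """Return the titles of the top_n most relevant items."""
    ranked = sorted(items, key=lambda i: relevance(i, user_model, task_model), reverse=True)
    return [i["title"] for i in ranked[:top_n]]

user = {"interests": {"design", "learning"}}
task = {"keywords": {"collaboration"}}
items = [
    {"title": "Sorting algorithms", "topics": {"algorithms"}},
    {"title": "Collaborative design", "topics": {"design", "collaboration"}},
    {"title": "Lifelong learning", "topics": {"learning"}},
]
assert filter_items(items, user, task) == ["Collaborative design", "Lifelong learning"]
```

The point of the sketch is the architectural one made in the text: the system delivers the information
that is relevant to the person and the task at hand, not all the information it has.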

    “AI and Humans” as a research strategy is focused on complementing and augmenting human
abilities with socio-technical systems for supporting more inclusive societies instead of increasing the
digital divide [8]. To be successful, the “AI and Humans” approach must overcome hurdles such as
(1) the lack of self-knowledge (i.e., these systems are unaware of what they know and do not know)
and (2) their black-box nature, which makes them incapable of explaining how they reach their
decisions in terms understandable to humans (e.g., their reasoning is based on correlations derived
from “Big Data” [18], whereas humans understand and argue based on causality).

3.2.    Automation: AI versus Humans

   Automation can be a double-edged sword:
   • at one extreme, it is a servant that (1) relieves humans of personally irrelevant tasks (such as
        checking the results of simple calculations or spelling corrections), (2) saves them from wasting
        time on low-level operations (e.g., programming in machine languages), (3) protects them from
        dangerous activities (e.g., using robots to find hidden bombs), and (4) frees them for higher
        cognitive functions (e.g., having cars with automatic transmissions);
   • at the other extreme, automation can reduce the status of humans to that of “button pushers” and
        can strip their work of its meaning and satisfaction. In personally meaningful activities, humans
        enjoy the process and not just the final product, and they want to take part in something [19].




    An early attempt leading to great expectations for AI systems replacing human beings was the
development of expert systems in the 1980s [20]. These developments provided the first phase of
broad-based enthusiasm for automating high-level human activities in the hope of substantial
economic advantages. The expectations did not materialize, and researchers subsequently identified
fundamental limitations of the expert systems approach [21] that led to the “AI-Winter” of the
following decade. An interesting question in today’s new phase of AI enthusiasm is whether we will
see another “AI-Winter” in the years to come.

4. Examples for Illustrating the Different Approaches
4.1.    Adaptive versus Adaptable Systems

    Adaptive systems are grounded in the “AI versus Humans” approach: they change their behavior
by themselves, driven by context-aware mechanisms including models of their users and specific task
contexts. Adaptable systems, in contrast, are examples of the “AI and Humans” approach, allowing
users to adjust, modify, and extend systems in order to capture unforeseen and missing aspects of
problems.
    Many research efforts have not clearly differentiated between adaptable and adaptive systems. Table
1 represents an initial effort to compare and differentiate the two approaches. Such a differentiation is
important and useful for identifying the design trade-offs between them, demonstrating the possibility
of a successful integration, and analyzing the impact of these developments.

Table 1
A Comparison and Differentiation between Adaptive and Adaptable Systems
                            Adaptive Systems                       Adaptable Systems
 Definition         modifications and suggestions               users actively change the functionality
                    generated by the system for specific        of the system
                    tasks and users

 Knowledge         contained in the system; projected in        knowledge is curated, modified, and
                   different ways                               extended by users

 Strengths         little (or no) effort by users; no special   users are in control; users know their
                   user knowledge is required; work for         tasks best; work with people
                   people

 Weaknesses         users lack control; common                   users must do substantial work;
                    understanding is reduced, resulting in       require a learning effort; create a tool
                    filter bubbles; lack of explainability       mastery burden; systems may become
                                                                 incompatible

 Mechanisms        models of users, tasks, and dialogs; big     meta-design environments supporting
 required          data resources; intelligent agents           modifiability, tailorability, and
                                                                evolution

 Application       active help systems, critiquing              open systems, co-designed systems,
 domains           systems, recommender systems                 end-user development

 Primary           automation grounded in Artificial            human involvement grounded in
 techniques        Intelligence (AI) approaches                 Intelligence Augmentation (IA)
                                                                approaches
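
The contrast in Table 1 can be illustrated with a minimal sketch; the menu example and its heuristics
are hypothetical, chosen only to make the design trade-off concrete:

```python
# Illustrative sketch: an adaptive menu (system-driven) versus an adaptable
# menu (user-driven). Names and heuristics are invented for illustration.

class AdaptiveMenu:
    """Adaptive: the system reorders itself from a usage model; the user does nothing."""
    def __init__(self, items):
        self.items = list(items)
        self.usage = {item: 0 for item in items}

    def select(self, item):
        self.usage[item] += 1
        # The system adapts on its own: most-used items float to the top.
        self.items.sort(key=lambda i: -self.usage[i])
        return item

class AdaptableMenu:
    """Adaptable: the user explicitly changes the system's functionality."""
    def __init__(self, items):
        self.items = list(items)

    def pin(self, item):
        # The user puts a command where they want it.
        self.items.remove(item)
        self.items.insert(0, item)

    def add_command(self, item):
        # The user extends the system with new functionality.
        self.items.append(item)

adaptive = AdaptiveMenu(["open", "save", "print"])
adaptive.select("print")
adaptive.select("print")
assert adaptive.items[0] == "print"   # reordered by the system, not the user

adaptable = AdaptableMenu(["open", "save", "print"])
adaptable.pin("save")                 # the user stays in control
assert adaptable.items[0] == "save"
```

The adaptive menu requires no effort from the user but takes control away from them; the adaptable
menu keeps the user in control but demands that they do the work, mirroring the strengths and
weaknesses rows of Table 1.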




4.2.    Learning Environments

    Making learning part of life is a necessity rather than a possibility or a luxury: it is required for
addressing the complex, systemic problems occurring in a world undergoing constant change.
   Different kinds of problems require different kinds of learning approaches and different socio-
technical environments supporting these approaches. Outside the classroom, much learning and
problem solving takes place as individuals explore personally meaningful problems and engage with
each other in collaborative activities, making extensive use of media and technologies.
   In classroom environments, instructionist approaches dominate and learning is conceptualized as
an isolated process of information transmission and absorption, whereas outside of schools learning
is a much more complex activity. From their early beginnings, computational environments have been
conceptualized and employed to support human learning in these two different settings, and two
fundamentally different approaches have emerged:
   • intelligent tutoring systems [22], in which the problem is given by the teacher or the system,
         and
   • interactive learning environments [23], in which tools are provided that allow learners to
         explore problems of their own choice.

    Intelligent tutoring systems can provide substantially more support because the designers of the
environments know (at design time) the types of problems the learners will work on (at use time). To
support learners in interest-driven, self-directed activities, interactive learning environments need to be
augmented with mechanisms (such as domain-oriented design environments, critiquing systems, and
context-awareness) that can offer help and support for learners who get stuck or who do not know how
to proceed.
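
A critiquing system of the kind mentioned above can be sketched minimally; the kitchen-design rules
below are invented for illustration and are not taken from any of the cited systems:

```python
# Minimal sketch of a critiquing system: the learner works on a problem of
# their own choice, and domain rules fire when the evolving artifact violates
# them. The kitchen-design rules are hypothetical examples.

def critique(artifact, rules):
    """Return the messages of all rules the artifact violates."""
    return [message for predicate, message in rules if predicate(artifact)]

rules = [
    (lambda a: a.get("stove_next_to_window", False),
     "A stove next to a window is a fire hazard."),
    (lambda a: a.get("work_triangle_m", 0) > 7,
     "The work triangle is too large for efficient cooking."),
]

design = {"stove_next_to_window": True, "work_triangle_m": 5}
assert critique(design, rules) == ["A stove next to a window is a fire hazard."]
```

Because the learner chooses the problem, the environment cannot anticipate it at design time; instead,
domain rules are evaluated against whatever artifact the learner constructs, offering help only when
it is needed.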

5. Research Challenges Associated with the “AI and Humans” Framework

   While our own research argues strongly for a framework grounded in the objective “AI and
Humans”, it should not be overlooked that this framework presents several important pitfalls [7] that
require careful attention and further exploration, including:

   •    overreliance: despite all the technological support for humans in a distributed cognition
        framework, which capabilities do humans need to learn to avoid overreliance on external tools?
        How can “tools for living” and “tools for learning” be differentiated in specific contexts?
   •    deskilling: will humans lose (1) basic mathematical capabilities by using hand-held
         calculators; (2) the ability to spell by using spelling correctors; (3) important geographical
         knowledge by using navigation systems; and (4) the motivation to learn a foreign language by
         using automated translation systems?
   •    learning demands associated with powerful and complex tools: will AI technologies that
        empower human beings in distributed cognition approaches require reasonable learning efforts
        for humans to understand the possibilities and the limitations of these tools?
   •    establishing different discourses: will discourses and investigations facilitated and supported
        by “AI and Humans” technologies provide opportunities for exploring motivation, control,
        ownership, autonomy, and quality of life?
   •    quality of life: will “AI and Humans” approaches provide us with more time, less stress, and
         more control, or will they lead to participatory-overload problems by requiring engagement in
         problems that individuals consider irrelevant to them?




Figure 2: A Comparison of Intelligent Tutoring Systems and Interactive Learning Environments

   For all these research issues there are no simple answers, only design trade-offs [7]. And because
there are no decontextualized sweet spots for analyzing these design trade-offs, the investigations must
be situated and explored in specific contexts.

6. The Past, the Present, and the Future of the CoPDA Workshops
   The AVI’2022 workshop is the 6th CoPDA workshop (see Figure 3). An important challenge for the
researchers getting together in the workshop this year may be to explore the foundational idea(s) that
these workshops have pursued and how they are related to each other. A particular objective of all
previous CoPDA workshops has been to collectively identify important and interesting themes for future
workshops and my hope is that this will happen again this year by exploring post-AI attitudes
prioritizing human well-being and quality of life as primary objectives.




[Figure: the CoPDA workshop series, centered on “CoPDA: Cultures of Participation in the Digital
Age” and its shared goal, the identification of fundamental challenges for the digital age:
   • IS-EUD’2013: Empowering End Users to Improve their Quality of Life
   • AVI’2014: Social Computing for Working, Learning, and Living
   • IS-EUD’2015: Coping with Information, Participation, and Collaboration Overload
   • NordiCHI’2016: From “Have to” to “Want to” Participate
   • AVI’2018: Design Trade-offs for an Inclusive Society
   • AVI’2022: AI for Humans or Humans for AI]
Figure 3: An Overview of the CoPDA Workshops


7. Conclusions

    We are in a period of major changes in technology, impacting almost all areas of human life. The
world-wide euphoria about AI based on increases in computational and communication power, the
advent of ubiquitous sensors supporting the Internet of Things, and powerful new software tools are
changing education, work, healthcare, transportation, industry, manufacturing, and entertainment.
    The impact of these changes upon people and society is both positive and negative. The positive
impacts should be celebrated, and the negative impacts should be avoided rather than treated as
unfortunate but unavoidable side effects. Future research needs to identify the positive and negative
effects and provide evidence for the success and failure of specific developments.
    We need new ways of thinking and new approaches in which we address the basic question
associated with the themes “AI and Humans” and “AI versus Humans”: (1) which tasks or components
of tasks are or should be reserved for educated human minds aided by cognitive artifacts (distributed
cognition), and (2) which tasks can and should be taken over by AI systems acting independently
(automation)?

8. References

[1] E. G. Arias, H. Eden, G. Fischer, The Envisionment and Discovery Collaboratory (EDC):
    Explorations in Human-Centered Informatics, Morgan & Claypool Publishers, San Rafael, CA,
    USA, 2016.
[2] E. G. Arias, H. Eden, G. Fischer, A. Gorman, E. Scharff, Transcending the Individual Human
    Mind—Creating Shared Understanding through Collaborative Design, in: J. M. Carroll (Ed.),
    Human-Computer Interaction in the New Millennium, ACM Press, New York, NY, USA, 2001,
    pp. 347-372.
[3] J. Markoff, Machines of Loving Grace (the Quest for Common Ground between Humans and
    Robots), HarperCollins, New York, NY, USA, 2016.
[4] G. Fischer, K. Nakakoji, Beyond the Macho Approach of Artificial Intelligence: Empower Human
    Designers - Do Not Replace Them, Knowledge-Based Systems Journal, Special Issue on AI in
    Design, 5.1 (1992) 15-30.


[5] B. Shneiderman, Human-Centered AI, Oxford University Press, Oxford, United Kingdom, 2022.
[6] G. Fischer, End-User Development: Empowering Stakeholders with Artificial Intelligence, Meta-
     Design, and Cultures of Participation, in: D. Fogli, D. Tetteroo, B. R. Barricelli, S. Borsci, P.
     Markopoulos, G. A. Papadopoulos (Eds.), IS-EUD 2021 Proceedings, Springer, LNCS 12724,
     2021, pp. 3–16.
[7] G. Fischer, Design Trade-Offs for Quality of Life, ACM Interactions 25.1 (2018) 26-33.
[8] D. Fogli, A. Piccinno, S. Carmien, G. Fischer, Exploring Design Trade-Offs for Achieving
      Social Inclusion in Multi-Tiered Design Problems, Behaviour & Information Technology, 39.1
      (2020) 27-46.
[9] D. C. Engelbart, Toward Augmenting the Human Intellect and Boosting Our Collective IQ,
      Communications of the ACM, 38.8 (1995) 30-33.
[10] R. Kurzweil, The Singularity Is Near, Penguin Books, London, United Kingdom, 2006.
[11] D. Susskind, A World without Work: Technology, Automation, and How We Should Respond,
     Metropolitan Books/Henry Holt & Company, New York, NY, USA, 2020.
[12] J. Hollan, E. Hutchins, D. Kirsch, Distributed Cognition: Toward a New Foundation for Human-
     Computer Interaction Research, in: J. M. Carroll (Ed.), Human-Computer Interaction in the New
     Millennium, ACM Press, New York, NY, USA, 2001, pp. 75-94.
[13] G. Salomon (Ed.), Distributed Cognitions: Psychological and Educational Considerations,
     Cambridge University Press, Cambridge, United Kingdom, 1993.
[14] J. Bruner, The Culture of Education, Harvard University Press, Cambridge, MA, USA, 1996.
[15] S. Sloman, P. Fernbach, The Knowledge Illusion — Why We Never Think Alone, Riverhead
     Books, New York, NY, USA, 2017.
[16] G. Fischer, Domain-Oriented Design Environments, Automated Software Engineering, 1.2 (1994)
     177-203.
[17] G. Fischer, Context-Aware Systems: The ‘Right’ Information, at the ‘Right’ Time, in the ‘Right’
     Place, in the ‘Right’ Way, to the ‘Right’ Person, in: G. Tortora, S. Levialdi, M. Tucci (Eds.),
     Proceedings of the Conference on Advanced Visual Interfaces (AVI), ACM, Capri, Italy, 2012,
     pp. 287-294.
[18] V. Mayer-Schönberger, K. Cukier, Big Data, Houghton Mifflin Harcourt, New York, NY, USA,
     2013.
[19] G. Fischer, J. Greenbaum, F. Nake, Return to the Garden of Eden? Learning, Working, and Living,
     The Journal of the Learning Sciences, 9.4, (2000) 505-513.
[20] B. G. Buchanan, E. H. Shortliffe (Eds.), Rule-Based Expert Systems: The Mycin Experiments of
     the Stanford Heuristic Programming Project, Addison-Wesley Publishing Company, Reading,
     MA, USA, 1984.
[21] T. Winograd, F. Flores, Understanding Computers and Cognition: A New Foundation for Design,
     Ablex Publishing Corporation, Norwood, NJ, USA, 1986.
[22] J. R. Anderson, A. T. Corbett, K. R. Koedinger, R. Pelletier, Cognitive Tutors: Lessons Learned,
      The Journal of the Learning Sciences 4.2 (1995) 167-207.
[23] S. Papert, Mindstorms: Children, Computers and Powerful Ideas, Basic Books, New York, NY,
     USA, 1980.



