<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>The Challenge of Incorporating End-User Values into Design: A Methodological Perspective of Using Provotypes</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marie Utterberg Modén</string-name>
          <email>marie.utterberg@ait.gu.se</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tiina Leino Lindell</string-name>
          <email>tiina.leino.lindell@ait.gu.se</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Johan Lundin</string-name>
          <email>johan.lundin@ait.gu.se</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marisa Ponti</string-name>
          <email>marisa.ponti@ait.gu.se</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Gothenburg, Department of Applied IT</institution>
          ,
          <country country="SE">Sweden</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
        <p>This study addresses the critical need for ethics and fairness in AI and machine learning, focusing on the often-overlooked inequalities behind biases in these technologies. Adopting a value-sensitive design approach, it investigates a design method that uses conflicts, through the introduction of 'provotypes', to enhance end-user agency in the AI tool design process. Concentrating on the educational sector and centering on teachers, this paper offers an in-depth perspective on both the application of these methods and the outcomes they produce, covering both methodological insights and findings related to end-users' values regarding AI in education.</p>
      </abstract>
      <kwd-group>
        <kwd>end user design</kwd>
        <kwd>value sensitive design</kwd>
        <kwd>artificial intelligence</kwd>
        <kwd>education</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        There is an increasing recognition of the importance of ethics and fairness in machine
learning and AI systems research. Despite this, much of the effort has been on examining
and addressing biases by implementing ‘fairness-aware’ algorithms, rather than on
understanding and handling the deeper, systemic inequalities that these biases may reflect
or reinforce [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. There is an identified gap and need for more proactive, inclusive
approaches, such as participatory methodologies, that involve stakeholders in
conceptualizing and designing AI tools [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. The active involvement of practitioners in the
design phase is critical for enhancing the quality of AI-tools and ensuring sustainable
implementation [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Moreover, engaging end-users and stakeholders from the beginning is
vital for safeguarding the freedom, values, and rights of those the AI-tool is designed for [
        <xref ref-type="bibr" rid="ref5 ref6 ref7">5,
6, 7</xref>
        ]. Fairness is a value often emphasized in the design and use of AI-tools [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], highlighting
the “ethical need to understand the historical and social contexts into which these systems
are being deployed” [8, p. 2]. To explore the fairness of AI tools, they should be examined
from both a broad societal perspective and within the specific context of their application. The notion of
fairness is not just about abstract principles but involves critically analyzing who stands to
determine what fairness means, who benefits from these definitions, and the power
dynamics influencing these determinations [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In this way, “many of the sources of
unfairness are not straightforward to identify but instead require thorough domain
knowledge” [9, p. 382]. However, articulating and meaningfully translating the values and
needs of stakeholders into the design process can be challenging [10]. Moreover, exploring
the values and needs of various stakeholders may uncover conflicts [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This study adopts a
value-sensitive design [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] perspective and explores a design method aimed at taking
advantage of conflicts and using them to provoke critical discussions and the exploration of
novel ideas. By doing so, it introduces ‘provocative prototypes’, or provotypes [11, 12], as a
tool that foregrounds diverse perspectives and value tensions as a catalyst for creativity
and innovation. The current study centers on teachers, who play a critical role in
implementing fairness within their daily activities in schools. It details an ongoing project
intending to empower secondary school teachers in the design of prototypes of AI-based
educational tools that uphold fairness. This paper offers an in-depth perspective on both the
application of these methods and the outcomes they produce, covering both methodological
insights and findings related to end-users’ perspectives on AI in education.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. The design process</title>
      <sec id="sec-2-1">
        <title>2.1. Participants</title>
        <p>Forty participants took part in this study, which included teachers, who were considered
direct users of AI tools, and principals, students, and pedagogical developers, who were
identified as indirect users, with 10 individuals from each category. They were purposefully
recruited based on their interest in exploring AI in education, representing secondary
schools across four municipalities in Sweden. We chose to engage with these specific
participants to gain insights from a group that generally does not have a say in the design of AI
tools [13], but in which teachers, principals, and pedagogical developers are responsible for
distributing fairness in their everyday work [14].</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Workshops design</title>
        <p>A series of two workshops and focus group discussions was conducted for each category
of participants, 40 in total. Both the first and second workshops lasted approximately
three hours. The first workshop was audio-recorded, while the second was video-recorded.
The focus group discussions, which lasted about one hour, involved smaller groups from
each category and were audio-recorded. All sessions took place at the university.</p>
        <p>Workshop 1 - End users’ perception of fairness. A week before the workshop, we
distributed a video to teachers, pedagogical developers and principals, providing them with
an introduction to the basics of AI. It has been highlighted that providing participants with
knowledge of advanced and complex technologies is necessary to bridge the gap between
nonprofessional and professional designers/researchers [15]. Thus, the first workshop
began with a lecture about AI, with a particular emphasis on understanding biases, both
broadly and within the specific area of education.</p>
        <p>The workshop consisted of two sessions. The first session aimed to highlight the
participants’ efforts in promoting fairness within their schools. It emphasized that teachers
and principals naturally promote equity through their daily practices. Meanwhile,
pedagogical developers are tasked with supporting schools, which includes advocating for
equitable education for all students. Participants were asked to reflect on the following
question: What specific actions do you take in your work to create a fair and equitable
school? Initially, they individually documented on post-it notes strategies they had
personally implemented. Subsequently, they formed small groups to collaboratively discuss
those strategies.</p>
        <p>The second session was designed to focus on the role of AI in education. Participants
were encouraged to pair up and create a storyboard illustrating a future scenario, a ‘sketch
of use’ [16]. Such scenarios highlight values by emphasizing the social and ethical
considerations of new technologies [17]. The storyboards focused on how AI could be used
to accomplish work tasks and other activities. The workshop concluded with a collective discussion where
participants shared and reflected on their work in relation to fairness.</p>
        <p>Workshop 2 - Provotypes illustrating conflicting values. In preparation for the second
workshop, our analysis of transcribed recordings from workshop 1 revealed tensions in
participants’ perspectives. We identified three primary areas of tension that reflect both the
participants’ values regarding fairness and their conceptualizations of using AI tools in
education. The first area of tension addresses the challenge of balancing personalized
education for each student against the need to frame the classroom as a space for
collaborative work and discussion. The second area points to the tension between the
efficiency of monitoring students through AI for time-saving analysis and the depth of
understanding that teachers achieve through direct interaction with their students. Lastly,
the third area of tension contrasts the benefits of obtaining data-driven insights on students
with the ethical imperative to respect students’ privacy and protect their personal
information.</p>
        <p>
          Although these tensions in participants’ values are widely recognized in the field of AI
and fairness research [18, 19], the persistent challenge is how to thoughtfully mitigate these
tensions and coherently integrate them into AI tool design [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. In design processes, tensions
stemming from conflicting values among stakeholder perspectives have traditionally been
addressed by devising strategies to foster consensus within or among stakeholder groups
[20, 21].
        </p>
        <p>Instead of circumventing these tensions, we have used them as a resource, creating
provocative prototypes (provotypes) that embody these very tensions [12]. From a
theoretical perspective, Activity Theory [22] can serve as a foundation for a provotyping
approach [11]. Activity theory is based on the concept that activities are inherently subject
to systemic contradictions, which act as catalysts for change processes, ultimately leading
to transformation of the activity [22]. This understanding has guided our development of
three provotypes designed to actively engage with such contradictions.</p>
        <p>The participants were divided into small groups and each group was given paper
provotypes. The participants were told that the provotypes illustrated ideas capturing their
varied imaginaries of AI-tools in education from previous discussions. Provotyping, as a
method, focuses on identifying and highlighting contradictions within a specific practice
[11]. Thus, by interacting with the provotypes, participants from different educational
backgrounds and schools were able to critically confront and reflect upon the varied and
sometimes conflicting ideas of AI in education. The provotypes aimed to act as catalysts,
stimulating creativity and encouraging new ideas by questioning norms and values while
designing for future practices [12] in education. In this way, provotyping was viewed as an
intermediary, linking the exploration of current concrete practices with the imagination of
future opportunities by uncovering values and intended to facilitate the transition from
analysis to design [12]. By exposing contradictions, provotyping aimed to address the
identified contradictions and inform design [11]. To do so, participants redesigned the
provotypes and created their own interface prototypes. They were equipped with plain
paper prototypes along with a selection of pens in various colors, sticky notes, scissors,
rulers, glue, and pre-made stencils of elements like buttons, icons, and form fields.</p>
        <p>Focus group discussions - Prototype-stimulated discussions. In preparation for the
focus group discussions, students were invited to engage with, and respond to, the three
provotypes and create prototypes. They were divided into small groups and encouraged to
reflect on the perceived strengths and weaknesses of the provotypes, their personal values
related to their use and functionality, and any ethical or practical concerns they identified.</p>
        <p>Together, teachers, principals, pedagogical developers, and students generated a wide
variety of detailed prototype designs in response to the provotypes. A selection of these
prototypes and video-recorded reflective discussions, which took place during their
creation, collectively served as ‘stakeholder prompts’ [17] in subsequent focus group
discussions. These prompts were intended to elicit the underlying values, guiding the
conversation and analysis in these groups, which were homogeneous, each consisting of
either teachers, principals, or pedagogical developers.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results and discussion</title>
      <p>Participants’ design ideas and values were reflected through a diversity of methods and
representations, including scenarios, prototypes, and stakeholder prompts, alongside
verbal presentations of the designs in relation to fairness. During the design process, we
recognized the necessity for participants to switch back and forth between concrete and
abstract thinking to uncover their values.</p>
      <p>Teachers value direct, personal interaction with students, viewing it as essential for
comprehensively understanding each student’s emotional, social, and practical needs. Thus,
the teacher’s role is seen as irreplaceable, with human insight and empathy being crucial
for supporting students. While integrating AI, many participants also highlight
‘conservation values’ [23], emphasizing the importance of preserving stability and
maintaining traditional educational practices, such as age-based classes and student
collaborative group work.</p>
      <p>At the same time, the participants were open to assigning a wide range of tasks to AI
tools, motivated by the desire to enhance student outcomes. They were willing to delegate
tasks such as automated individualization and progress monitoring (bordering on
surveillance) to prevent students from falling behind, along with other supportive
functions, to AI tools to augment teacher capabilities. This willingness could be interpreted
as placing the value of high-quality education for all students above concerns for students’
data privacy.</p>
      <p>Throughout the design process, a significant challenge has been inspiring participants to
broaden their understanding of AI tools. Encouraging them to envision how AI can innovate
and transform education necessitates a shift in perspective and moving beyond traditional
approaches to explore the novel possibilities and risks that AI introduces to the field. An
additional challenge has involved raising awareness about the potential consequences of AI
usage on the teaching profession, including the risk of deskilling and the need to preserve
elements of teaching that contribute to job satisfaction.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgements</title>
      <p>This work was supported by the Marianne and Marcus Wallenberg Foundation under Grant
MMW 2021.0030.</p>
      <p>[9] S. Feuerriegel, M. Dolata, and G. Schwabe, “Fair AI: Challenges and Opportunities,” Bus. Inf. Syst. Eng., vol. 62, no. 4, pp. 379–384, Aug. 2020, doi: 10.1007/s12599-020-00650-3.
[10] B. Friedman, “Value-sensitive design,” Interactions, vol. 3, no. 6, pp. 16–23, 1996. [Online]. Available: https://dl.acm.org/doi/pdf/10.1145/242485.242493
[11] L. Boer and J. Donovan, “Provotypes for participatory innovation,” in Proceedings of the Designing Interactive Systems Conference, Newcastle upon Tyne, United Kingdom: ACM, Jun. 2012, pp. 388–397, doi: 10.1145/2317956.2318014.
[12] P. Mogensen, “Challenging practice: An approach to cooperative analysis,” Doctoral dissertation, Aarhus University, Denmark, 1994.
[13] M. Utterberg Modén, M. Ponti, J. Lundin, and M. Tallvid, “When fairness is an abstraction: Equity and AI in Swedish compulsory education,” Nordic Journal of Educational Research, in preparation, 2024.
[14] “Education Act 2010:800,” Department of Education, Stockholm, Sweden, SFS 2010:800, 2010.
[15] T. Bratteteig and G. Verne, “Does AI make PD obsolete?: Exploring challenges from artificial intelligence to participatory design,” in Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial - Volume 2, Hasselt and Genk, Belgium: ACM, Aug. 2018, pp. 1–5, doi: 10.1145/3210604.3210646.
[16] M. B. Rosson and J. M. Carroll, Usability Engineering: Scenario-Based Development of Human-Computer Interaction, 1st ed., Morgan Kaufmann Series in Interactive Technologies. San Francisco: Academic Press, 2002.
[17] D. Yoo, A. Huldtgren, J. P. Woelfer, D. G. Hendry, and B. Friedman, “A value sensitive action-reflection model: Evolving a co-design space with stakeholder and designer prompts,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris, France: ACM, Apr. 2013, pp. 419–428, doi: 10.1145/2470654.2470715.
[18] M. Utterberg Modén, M. Tallvid, J. Lundin, and B. Lindström, “Intelligent Tutoring Systems: Why Teachers Abandoned a Technology Aimed at Automating Teaching Processes,” presented at the 54th Hawaii International Conference on System Sciences, 2021, pp. 1538–1547.
[19] N. Selwyn, “The future of AI and education: Some cautionary notes,” European Journal of Education, vol. 57, no. 4, pp. 620–631, Dec. 2022, doi: 10.1111/ejed.12532.
[20] E. Grönvall, L. Malmborg, and J. Messeter, “Negotiation of values as driver in community-based PD,” in Proceedings of the 14th Participatory Design Conference: Full Papers - Volume 1, Aarhus, Denmark: ACM, Aug. 2016, pp. 41–50, doi: 10.1145/2940299.2940308.
[21] R. M. Jonas and B. V. Hanrahan, “Designing for Shared Values: Exploring Ethical Dilemmas of Conducting Values Inclusive Design Research,” Proc. ACM Hum.-Comput. Interact., vol. 6, no. CSCW2, pp. 1–20, Nov. 2022, doi: 10.1145/3555182.
[22] Y. Engeström, Learning by Expanding: An Activity-Theoretical Approach to Developmental Research, 2nd ed. New York, NY: Cambridge University Press, 1987.
[23] C. Russo, F. Danioni, I. Zagrean, and D. Barni, “Changing personal values through value-manipulation tasks: A systematic literature review based on Schwartz’s theory of basic human values,” European Journal of Investigation in Health, Psychology and Education, vol. 12, no. 7, pp. 692–715, 2022.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A. D.</given-names>
            <surname>Selbst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Boyd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Friedler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Venkatasubramanian</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Vertesi</surname>
          </string-name>
          , “
          <source>Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency</source>
          , Atlanta GA USA: ACM, Jan.
          <year>2019</year>
          , pp.
          <fpage>59</fpage>
          -
          <lpage>68</lpage>
          . doi: 10.1145/3287560.3287598.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>I.</given-names>
            <surname>Gan</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Moussawi</surname>
          </string-name>
          , “
          <article-title>A Value Sensitive Design Perspective on AI Biases,” presented at the 55th</article-title>
          <source>Hawaii International Conference on System Sciences (HICSS)</source>
          ,
          <source>Hyatt Regency Maui</source>
          , Hawaii, USA: University of Hawai'i at Manoa, Hamilton Library, Jan.
          <year>2022</year>
          , pp.
          <fpage>5548</fpage>
          -
          <lpage>5557</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lundin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Utterberg</given-names>
            <surname>Modén</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. Leino</given-names>
            <surname>Lindell</surname>
          </string-name>
          , and G. Fischer, “
          <article-title>A Remedy to the Unfair Use of AI in Educational Settings,” IxD&amp;A</article-title>
          , no.
          <issue>59</issue>
          , pp.
          <fpage>62</fpage>
          -
          <lpage>78</lpage>
          , Dec.
          <year>2023</year>
          , doi: 10.55612/s-5002-059-002.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E. B.-N.</given-names>
            <surname>Sanders</surname>
          </string-name>
          and
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Stappers</surname>
          </string-name>
          , “
          <article-title>Co-creation and the new landscapes of design,” CoDesign</article-title>
          , vol.
          <volume>4</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>5</fpage>
          -
          <lpage>18</lpage>
          , Mar.
          <year>2008</year>
          , doi: 10.1080/15710880701875068.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bødker</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Kyng</surname>
          </string-name>
          , “
          <article-title>Participatory Design that Matters-Facing the Big Issues,”</article-title>
          <source>ACM Trans. Comput</source>
          .-Hum. Interact., vol.
          <volume>25</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>31</lpage>
          , Feb.
          <year>2018</year>
          , doi: 10.1145/3152421.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Friedman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kahn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Borning</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Huldtgren</surname>
          </string-name>
          , “
          <article-title>Value sensitive design and information systems</article-title>
          .,
          <article-title>” in Early engagement and new technologies: open up the laboratory</article-title>
          , N. Doorn,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schuurbiers</surname>
          </string-name>
          , I. van de Poel, and M. Gorman, Eds., Springer International Publishing,
          <year>2013</year>
          , pp.
          <fpage>55</fpage>
          -
          <lpage>95</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>G.</given-names>
            <surname>Fischer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fogli</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Piccinno</surname>
          </string-name>
          , “
          <article-title>Revisiting and Broadening the Meta-Design Framework for End-User Development,” in New Perspectives in End-User Development</article-title>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Paternò</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Wulf</surname>
          </string-name>
          , Eds., Cham: Springer International Publishing,
          <year>2017</year>
          , pp.
          <fpage>61</fpage>
          -
          <lpage>97</lpage>
          . doi: 10.1007/978-3-319-60291-2_4.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Silberg</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Manyika</surname>
          </string-name>
          , “
          <article-title>Notes from the AI frontier: Tackling bias in AI (and in humans</article-title>
          ),
          <source>” McKinsey Global Institute</source>
          , vol.
          <volume>6</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>31</lpage>
          , [Online]. Available: https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/Artificial%20Intelligence/Tackling%20bias%20in%20artificial%20intelligence%20and%20in%20humans/MGI-Tackling-bias-in-AI-June-2019.pdf
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>