<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tommaso Calò</string-name>
          <email>tommaso.calo@polito.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Politecnico di Torino, Corso Duca degli Abruzzi</institution>
          ,
          <addr-line>24, Torino, 10129</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <fpage>8</fpage>
      <lpage>14</lpage>
      <abstract>
<p>Artificial Intelligence (AI) has long been an active research area, but its adoption as a mainstream technology in the creative and design space is a relatively new phenomenon. Much of the research has been directed at creating systems capable of performing tasks, and while many powerful systems and applications have emerged, the user experience has not kept pace. In applications of AI systems to the design process, there has been a general lack of focus on designers' creativity. Most current approaches use Artificial Intelligence to replace repetitive work in the design process while ignoring the disruptive changes in thinking and methodology that AI can bring to designers as end users. This dissertation work explores the opportunities for AI to bring radical changes to human experiences during the design and creative processes, assist designers' creative ability, and provide new approaches to augment design cognition.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Context and Motivation</title>
      <p>
        Despite increasing levels of automation enabled by Artificial Intelligence — whether it is AI
driving our vehicles, designing our drugs, determining what news and information we see, and
even deciding how our money is invested — the common thread among these systems is the
human element. AI’s long-term success is contingent upon our acknowledgment that people
are critical in its design, operation, and use [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ].
      </p>
      <p>
        Human-Centered AI (HCAI) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] is an emerging discipline with the intent of creating AI
systems that amplify and augment rather than displace human abilities [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. HCAI seeks to
preserve human control in a way that ensures artificial intelligence meets our needs while also
operating transparently, delivering equitable outcomes, and respecting privacy [6].
      </p>
      <p>Advocates of this new synthesis seek to amplify, augment, and enhance human abilities
so as to empower people, build their self-efficacy, support creativity, recognize responsibility,
and promote social connections. Researchers, developers, business leaders, policymakers, and
others are expanding the technology-centered scope of AI to include HCAI ways of thinking.
This expansion from an algorithm-focused view to embrace a human-centered perspective
can shape the future of technology so as to better serve human needs. Educators, designers,
software engineers, product managers, evaluators, and government agency staffers can build on
AI-driven technologies to design products and services that make life better for their users [8].</p>
      <p>ORCID: 0000-0002-3200-2348 (T. Calò). © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings.</p>
      <p>AI creativity refers to the ability of humans and AI to co-exist and co-create, playing to
each other’s strengths to achieve more. AI should complement human intelligence: it
consolidates wisdom from human supervision, making collaboration across time possible. AI
can empower humans throughout the entire creative process and make creativity more accessible
and more inclusive than ever.</p>
      <p>AI has been shown to be well suited to repetitive and predictable problems, as well as to
complex and multi-tasking scenarios, while humans are more flexible and creative, and adept at
knowledge understanding and strategic thinking. Collaboration
between humans and AI varies across domains [9, 7]. Humans lead where tasks call for
creativity, strategy, or compassion, while AI leads where tasks are more about
routine or optimization and compassion is not needed (Fig. 1).</p>
      <p>My dissertation work aims to exploit recently discovered AI methods to enhance human
creativity in the design process, analyze existing approaches’ opportunities and limitations, and
develop, test, and evaluate new techniques and tools to enhance designers’ creativity.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background and Related Works</title>
      <p>
        Increasing numbers of researchers are exploring AI-assisted creativity in industries such as the
creative and arts sector [
        <xref ref-type="bibr" rid="ref3">11, 12, 13, 3, 14, 15, 16</xref>
        ], with a focus on data-driven design [17, 18, 19].
      </p>
      <p>Zeng et al. [10] utilized AI to augment typeface design creativity, leveraging a generative
network trained on standardized Chinese typefaces. Iterative application and retraining of the
network, guided by designer feedback, yielded designs matching their needs, thus enhancing
creative cognition.</p>
      <p>Sun et al. [20] developed an AI system that assists with digital icon design. The system
uses a generative model trained on a large icon database and allows user-guided generation of
icons. A user study validated its performance.</p>
      <p>Zhang et al. [21] presented a network that generates user-guided magazine layouts. Users
sketch the rough positions and sizes of elements, and the network generates the layout.</p>
      <p>
        Chen et al. [
        <xref ref-type="bibr" rid="ref6">22</xref>
        ] proposed a two-stage design system: a semantic ideation network and a
visual concepts synthesis network. Users can explore semantic connections and then generate
images synthesizing selected visual concepts using a generative network [
        <xref ref-type="bibr" rid="ref7">23</xref>
        ].
      </p>
      <p>Building on these promising findings, my work will explore under-investigated aspects of
the design process that could benefit from AI assistance, with the aim of enhancing creativity and
idea refinement.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Research Objective</title>
      <p>The overall goal of the dissertation work is to explore how to use artificial intelligence to
augment design creativity, develop new design methods and design forms, and improve the
quality of design. Specifically, the following research questions will be addressed:
1. RQ1: How might we embrace AI and apply it to fostering personal human creativity in
design?
2. RQ2: How does adjusting the AI control over the design decisions affect the human’s
creativity?
3. RQ3: What are the effects of AI on the designer’s perception of limitations and
frustration?
4. RQ4: What are the common characteristics and best practices that we can synthesize to
design systems able to improve designers’ creativity?</p>
      <p>The presented research questions are synthesized from the needs highlighted in the available
literature, and reflect my personal skills and research interests, as well as the long-term
goals of the research group.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Research approach</title>
      <p>The objective of the research is to propose and evaluate approaches that can empower designers’
creativity. The current research plan envisions three main phases:
1. Exploring opportunities and limitations of existing approaches to enhance
designers’ creativity. An analysis of the literature is carried out with the goal of identifying
the potential need for alternatives or improved integration of the existing approaches
(RQ1).
2. Development of new techniques or tools to enhance designers’ creativity. We aim
to design, implement, and evaluate novel HCAI approaches that enhance the creative
cognition of designers, with specific regard to systems that potentially
allow both high user control and high automation (RQ2). We will evaluate the effect of the
tools on the quality and quantity of the outcomes, as well as on the cognitive
aspects of usability (RQ3).
3. Guidelines elicitation. Starting from the outcomes of the previous phases, we will explicitly
identify guidelines for the design of HCAI tools that enhance designers’ creativity (RQ4),
and extensively evaluate the usefulness, usability, and pleasure of use of tools that
abide by such guidelines.</p>
      <p>The methods include identifying relevant dimensions (in terms of quality of the outcome
and support for the designers), converting such dimensions into measurable variables, and
conducting user testing to establish an empirical basis for the approaches being offered and for
making comparisons between them.</p>
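      <p>As a toy illustration of converting such dimensions into measurable variables, a testing session could be scored along metrics such as fluency (number of distinct ideas) and novelty (dissimilarity among idea descriptions). The metric definitions and sample data below are illustrative assumptions, not the instruments actually used in this work:</p>

```python
# Hypothetical sketch: turning two evaluation dimensions -- fluency
# (quantity of ideas) and novelty (diversity of ideas) -- into
# measurable variables for a user test. The metric definitions and
# the sample session are illustrative assumptions only.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    # Word-overlap similarity between two idea descriptions.
    return len(a & b) / len(a | b) if a | b else 0.0

def fluency(ideas: list) -> int:
    # Fluency: how many distinct ideas a participant produced.
    return len(set(ideas))

def novelty(ideas: list) -> float:
    # Novelty: 1 minus the mean pairwise word overlap
    # (higher means the idea set is more diverse).
    bags = [set(i.lower().split()) for i in ideas]
    pairs = list(combinations(bags, 2))
    if not pairs:
        return 0.0
    return 1.0 - sum(jaccard(a, b) for a, b in pairs) / len(pairs)

session = ["dark navigation bar", "floating action button", "card grid layout"]
print(fluency(session))   # 3
print(novelty(session))   # 1.0 (no shared words between the ideas)
```

      <p>Variables of this kind can then be compared across conditions (for example, with and without AI assistance) to establish the empirical basis mentioned above.</p>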
    </sec>
    <sec id="sec-5">
      <title>5. Results and contributions to date</title>
      <p>
        Our research group is exploring the integration of AI into design support tools, paying special
attention to human-computer interaction [
        <xref ref-type="bibr" rid="ref8">24</xref>
        ]. The sketch-based approach to UI design is prevalent, with sketches
serving as “hints” for the AI network (Walker 2002, Suleri 2019).
      </p>
      <p>
        Several projects, like Pix2code [
        <xref ref-type="bibr" rid="ref9">25</xref>
        ] and Sketch2code [
        <xref ref-type="bibr" rid="ref10">26</xref>
        ], have aimed to automate
sketch-to-code translation. Our approach [
        <xref ref-type="bibr" rid="ref11 ref12">27, 28</xref>
        ] is unique in that it translates a sketch into the related code while
letting the designer choose element styles from a reference image. We divide style selection into
color and text style and have trialed our method on a navigation bar. Our results indicate
effective style selection from the reference image.
      </p>
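      <p>To give a flavor of the color half of this style selection, the following simplified sketch (a stand-in for illustration, not our published pipeline; the pixel data, bucket size, and CSS template are assumptions) picks the dominant color of a reference image and injects it into the stylesheet of a generated navigation bar:</p>

```python
# Illustrative sketch only: select a dominant color from a reference
# image's pixels and apply it to a generated navigation bar's CSS.
# The pixel list, 32-value quantization, and CSS template are
# hypothetical stand-ins for the real sketch-to-code output.
from collections import Counter

def dominant_color(pixels):
    # Quantize each RGB channel into 32-value buckets and return the
    # center of the most frequent bucket as a hex color.
    buckets = Counter((r // 32, g // 32, b // 32) for r, g, b in pixels)
    (rq, gq, bq), _ = buckets.most_common(1)[0]
    return "#{:02x}{:02x}{:02x}".format(rq * 32 + 16, gq * 32 + 16, bq * 32 + 16)

def navbar_css(accent):
    # Inject the selected style into the generated element's stylesheet.
    return f".navbar {{ background: {accent}; color: #ffffff; }}"

# A mostly-blue "reference image", with a few red outlier pixels.
reference = [(20, 40, 200)] * 50 + [(220, 30, 30)] * 5
print(navbar_css(dominant_color(reference)))
# .navbar { background: #1030d0; color: #ffffff; }
```

      <p>In the actual tool, the accent color comes from the reference image chosen by the designer rather than from a hard-coded pixel list.</p>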
    </sec>
    <sec id="sec-6">
      <title>6. Dissertation Status and long term goals</title>
      <p>I am currently at the beginning of the second year of the National Ph.D. program in Artificial
Intelligence at Politecnico di Torino under the supervision of Professor Luigi De Russis.</p>
      <p>In my first year, I explored the scientific literature, seeking opportunities for
enhancing design creativity with Artificial Intelligence as the main approach. I
sought to better understand the relationship between creative and design processes and explored
opportunities for, as well as obstacles to, using AI methods in website design (Phase 1).</p>
      <p>In my second year, I am validating the obtained results with a pool of designers, and I will
design, implement, and evaluate a tool that realizes the proposed approaches (Phase 2). I plan
to expand my research to a broader set of AI applications for enhancing creativity, such as
image and text generation. Specifically, I will focus on overcoming AI limitations to fulfill
designers’ needs and on analyzing in depth the aspects of creative processes that can be enhanced by
the more advanced AI methods.</p>
      <p>The results will be used, during the third year, to elicit and refine guidelines for designing systems
that can empirically empower designers’ creativity (Phase 3).</p>
      <p>I expect my contribution to influence and bring consistent improvement to the creative
process of designers, and to bring novelty and advances in knowledge at the intersection of
HCI and AI.</p>
      <p>[6] R. A. Fiebrink, Real-time human interaction with supervised learning algorithms for music composition and performance, 2011. AAI3445567.</p>
      <p>[7] Z. Wu, D. Ji, K. Yu, X. Zeng, D. Wu, M. Shidujaman, AI Creativity and the Human-AI Co-creation Model, 2021, pp. 171–190. doi:10.1007/978-3-030-78462-1_13.</p>
      <p>[8] M. Ware, E. Frank, G. Holmes, M. Hall, I. H. Witten, Interactive machine learning: letting users build classifiers, International Journal of Human-Computer Studies 55 (2001) 281–292. URL: https://www.sciencedirect.com/science/article/pii/S1071581901904999. doi:10.1006/ijhc.2001.0499.</p>
      <p>[9] S. Amershi, D. Weld, M. Vorvoreanu, A. Fourney, B. Nushi, P. Collisson, J. Suh, S. Iqbal, P. N. Bennett, K. Inkpen, J. Teevan, R. Kikin-Gil, E. Horvitz, Guidelines for human-AI interaction, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 1–13. URL: https://doi.org/10.1145/3290605.3300233. doi:10.1145/3290605.3300233.</p>
      <p>[10] Z. Zeng, X. Sun, X. Liao, Artificial intelligence augments design creativity: A typeface family design experiment, in: A. Marcus, W. Wang (Eds.), Design, User Experience, and Usability. User Experience in Advanced Technological Environments, Springer International Publishing, Cham, 2019, pp. 400–411.</p>
      <p>[11] A. Miller, The Artist in the Machine: The World of AI-Powered Creativity, 2019. doi:10.7551/mitpress/11585.001.0001.</p>
      <p>[12] G. Fischer, K. Nakakoji, Amplifying Designers’ Creativity with Domain-Oriented Design Environments, Springer Netherlands, Dordrecht, 1994, pp. 343–364.</p>
      <p>[13] S. Colton, G. Wiggins, Computational creativity: The final frontier?, Frontiers in Artificial Intelligence and Applications 242 (2012) 21–26. doi:10.3233/978-1-61499-098-7-21.</p>
      <p>[14] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, P. Abbeel, InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets, in: D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, R. Garnett (Eds.), Advances in Neural Information Processing Systems, volume 29, Curran Associates, Inc., 2016. URL: https://proceedings.neurips.cc/paper/2016/file/7c9d0b1f96aebd7b5eca8c3edaa19ebb-Paper.pdf.</p>
      <p>[15] N. Anantrasirichai, D. Bull, Artificial intelligence in the creative industries: a review, Artificial Intelligence Review 55 (2022). doi:10.1007/s10462-021-10039-7.</p>
      <p>[16] W. Chen, M. Shidujaman, T. Xuelin, AiArt: Towards artificial intelligence art, 2020.</p>
      <p>[17] R. King, E. F. Churchill, C. Tan, Designing with data: Improving the user experience with A/B testing, 2017.</p>
      <p>[18] G. Dove, Codesign with data, 2015.</p>
      <p>[19] D. Ha, D. Eck, A neural representation of sketch drawings, in: 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 – May 3, 2018, Conference Track Proceedings, OpenReview.net, 2018. URL: https://openreview.net/forum?id=Hy6GHpkCW.</p>
      <p>[20] T.-H. Sun, C.-H. Lai, S.-K. Wong, Y.-S. Wang, Adversarial colorization of icons based on contour and color conditions, in: Proceedings of the 27th ACM International Conference on Multimedia, MM ’19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 683–691. URL: https://doi.org/10.1145/3343031.3351041. doi:10.1145/3343031.3351041.</p>
      <p>[21] J. Li, J. Yang, A. Hertzmann, J. Zhang, T. Xu, LayoutGAN: Generating graphic layouts with wireframe discriminators (2019).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Dove</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Halskov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Forlizzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zimmerman</surname>
          </string-name>
          ,
          <article-title>Ux design innovation: Challenges for working with machine learning as a design material</article-title>
          ,
          <year>2017</year>
          . doi:10.1145/3025453.3025739.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>Machine learning as a ux design material: How can we imagine beyond automation, recommenders</article-title>
          , and reminders?,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Carter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nielsen</surname>
          </string-name>
          ,
          <article-title>Using artificial intelligence to augment human intelligence</article-title>
          ,
          <source>Distill</source>
          (
          <year>2017</year>
          ). doi:10.23915/distill.00009, https://distill.pub/2017/aia.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Shneiderman</surname>
          </string-name>
          ,
          <article-title>Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered ai systems</article-title>
          ,
          <source>ACM Trans. Interact. Intell. Syst</source>
          .
          <volume>10</volume>
          (
          <year>2020</year>
          ). URL: https://doi.org/10.1145/3419764. doi:10.1145/3419764.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Engelbart</surname>
          </string-name>
          ,
          <article-title>Augmenting human intellect: A conceptual framework</article-title>
          , https://www. bibsonomy.org/bibtex/298050040b80f71891383fcd83d7c7100/fcerutti,
          <year>1962</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. R.</given-names>
            <surname>Childs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <article-title>An artificial intelligence based data-driven approach for design ideation</article-title>
          ,
          <source>Journal of Visual Communication and Image Representation</source>
          <volume>61</volume>
          (
          <year>2019</year>
          )
          <fpage>10</fpage>
          -
          <lpage>22</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S1047320319300604. doi:10.1016/j.jvcir.2019.02.009.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pouget-Abadie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mirza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Warde-Farley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ozair</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Courville</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <article-title>Generative adversarial nets</article-title>
          , in: Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, K. Weinberger (Eds.),
          <source>Advances in Neural Information Processing Systems</source>
          , volume
          <volume>27</volume>
          ,
          Curran Associates, Inc.,
          <year>2014</year>
          . URL: https://proceedings.neurips.cc/paper/2014/ file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>R. T.</given-names>
            <surname>Hughes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhu</surname>
          </string-name>
          , T. Bednarz,
          <article-title>Generative adversarial networks-enabled human-artificial intelligence collaborative applications for creative and design industries: A systematic review of current approaches and trends</article-title>
          ,
          <source>Frontiers in Artificial Intelligence</source>
          <volume>4</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>T.</given-names>
            <surname>Beltramelli</surname>
          </string-name>
          ,
          <article-title>Pix2code: Generating code from a graphical user interface screenshot</article-title>
          ,
          <source>in: Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems, EICS '18</source>
          ,
          Association for Computing Machinery, New York, NY, USA,
          <year>2018</year>
          . doi:10.1145/3220134.3220135.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>A.</given-names>
            <surname>Robinson</surname>
          </string-name>
          ,
          <article-title>Sketch2code: Generating a website from a paper mockup</article-title>
          ,
          <year>2019</year>
          . arXiv:1905.13750.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>T.</given-names>
            <surname>Calò</surname>
          </string-name>
          , L. De Russis,
          <article-title>Style-aware sketch-to-code conversion for the web</article-title>
          ,
          <source>in: Companion of the 2022 ACM SIGCHI Symposium on Engineering Interactive Computing Systems, EICS '22 Companion</source>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2022</year>
          , p.
          <fpage>44</fpage>
          -
          <lpage>47</lpage>
          . URL: https://doi.org/10.1145/3531706.3536462. doi:10.1145/3531706.3536462.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>T.</given-names>
            <surname>Calò</surname>
          </string-name>
          , L. De Russis,
          <article-title>Creating dynamic prototypes from web page sketches</article-title>
          ,
          <source>in: Proceedings of the 1st ACM SIGPLAN International Workshop on Programming Abstractions and Interactive Notations, Tools, and Environments</source>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2022</year>
          . URL: https://doi.org/10.1145/3563836.3568724. doi:10.1145/3563836.3568724.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>