=Paper=
{{Paper
|id=Vol-2827/KBS-Paper_6
|storemode=property
|title=Experiments in Algorithmic Design of Web Pages
|pdfUrl=https://ceur-ws.org/Vol-2827/KBS-Paper_6.pdf
|volume=Vol-2827
|authors=José N. Rebelo,Sérgio Rebelo,Artur Rebelo
}}
==Experiments in Algorithmic Design of Web Pages==
José N. Rebelo, Sérgio Rebelo and Artur Rebelo
University of Coimbra, CISUC, DEI
jarebelo@student.dei.uc.pt (J. N. Rebelo); srebelo@dei.uc.pt (S. Rebelo); arturr@dei.uc.pt (A. Rebelo)
ORCID: 0000-0002-7276-8727 (S. Rebelo); 0000-0001-8741-078X (A. Rebelo)

Joint Proceedings of the ICCC 2020 Workshops (ICCC-WS 2020), September 7–11 2020, Coimbra (PT) / Online. © 2020 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.

Abstract: Web Design and Web Development have been in a state of never-ending evolution since the first web page was made publicly available, and these technologies keep enabling new ways of interacting and communicating with people. In this paper, we present a computational design system that explores algorithmic design processes for web page generation. The system, which is available at http://awd3.dei.uc.pt/, automatically generates experimental web pages that reflect the semantic meaning of their content. The content is gathered from the Wikipedia API through a textual input from the user. The system employs a Natural Language Understanding classifier and lexicon-based approaches to recognise the sentiments, emotions, and colours related to the content. Users may also fine-tune the generated output, in a parametric way, according to their preferences and taste.

Keywords: Algorithmic Design, Data-Driven Design, Graphic Design, Web Design, Web Development

1. Introduction

The public emergence of the Internet imposed a paradigm shift on our society [1]. Nowadays, anyone who opens a web browser and searches for something has access to an immeasurable quantity of knowledge and resources [2]. Web pages therefore play an important role in the contemporary world, since they are the main interface through which people access the data available online. Since the publishing of the first web page in the early 1990s, the web environment has been in a state of never-ending evolution, and Web Design (WD) has not been left behind. Recent innovations in Web Development (WDEV) and, consequently, in WD have produced deep changes in the way that people interact and communicate online.

Thus, we believe that the current web environment presents fertile ground for creative explorations where computational design technologies, especially Artificial Intelligence (AI), will enable the development of innovative data-driven and generative web designs. In the earlier days of the World Wide Web, most designers faced harsh aesthetic limitations on their designs. Most of those restrictions were due to small screen sizes and narrow typographic choices, since the only accessible fonts were those available in all operating systems (i.e. web-safe fonts). At the time, some designers therefore predicted a collapse in the quality of Graphic Design standards, for two main reasons: (i) the limitations of the programming languages of that period; and (ii) the ease with which anyone could access these new technologies and create web pages [2]. These predictions failed: in the mid-1990s, several designers began to design web pages showing that the restrictions and limitations of the web could be overcome. Nowadays, web pages have become somewhat repetitive and tiresome, since most of them are designed under the same rules and developed using the same frameworks. In recent years, however, we have begun to observe further exploration of new layouts and interactions. Most of the time, these experimental works explore the web medium (i.e.
the design of the page) in such a way that it is as important as, or more important than, the information transmitted on the page; i.e., the web pages influence the way the information on them is transmitted [3]. Algorithmic and data-driven approaches are often the key tools employed, giving designers the possibility to manipulate the visuals and the content in a dynamic way.

In this paper, we present a work-in-progress system that explores computational design technologies in the context of WD. Briefly, the system automatically generates web pages from a text input by the user. The generation process is as follows. First, the user inputs a text (i.e. a search term) and the system searches for this term through the Wikipedia API. Next, the returned content is analysed by a Natural Language Understanding (NLU) classifier and by lexicon-based approaches, with the aim of recognising the emotions, sentiments, and colours associated with it. Based on the results of that analysis, the system defines the visual characteristics of the page and generates a web page. This way, the visual characteristics of the generated web pages convey, as much as possible, the semantic meaning of their content. During the generation process, users may change the displayed content as well as fine-tune the visual characteristics of the generated outputs, in a parametric way, according to their preferences and taste. You may experiment with this system at http://awd3.dei.uc.pt/.

The key technical contributions presented in this paper are: (i) a computational design system capable of algorithmically generating web pages based on a search term, regardless of the length and purpose of the content; (ii) a parametric design approach that enables the user to fine-tune the generated outputs; and (iii) a method to extract sentiments, emotions, and colours from the content, combining an NLU classifier with lexicon-based approaches.
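The generation process described above can be sketched as a simple pipeline. The sketch below is purely illustrative: every function name and return value is a hypothetical stand-in for the system's modules, not the authors' implementation.

```javascript
// Illustrative sketch of the generation pipeline: search term -> content ->
// semantic analysis -> visual variables. All functions are stand-ins.

function fetchContent(term) {
  // Stand-in for the Wikipedia API query performed by Data Processing.
  return { title: term, text: `${term} is a placeholder article.` };
}

function analyseContent(text) {
  // Stand-in for the NLU + lexicon analysis (sentiment, emotions, colours).
  return { sentiment: 0.2, emotions: ['joy'], colours: ['blue'] };
}

function styleContent(analysis) {
  // Stand-in for mapping the analysis onto visual variables.
  return { fontWeight: 400, background: analysis.colours[0] };
}

function generatePage(term) {
  const content = fetchContent(term);
  const analysis = analyseContent(content.text);
  const styles = styleContent(analysis);
  return { content, analysis, styles };
}
```

In the real system, each stage corresponds to one of the four modules detailed in Section 3, and the user can intervene between the first two stages to edit the gathered content.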
The remainder of this paper is organised as follows: Section 2 reviews related work on algorithmic web design; Section 3 describes the proposed system; and, finally, Section 4 draws conclusions and points out directions for future work.

2. Related Work

The use of algorithmic design processes to generate visual artefacts dates back to the early second half of the 20th century [4]. However, it was with the introduction of the personal computer, and the subsequent release of creative coding frameworks (such as Max, VVVV, Processing, or openFrameworks), that graphic designers began to routinely include these processes in their workflows. Nowadays, several designers explore computer programming, which allows them to solve graphic problems in a flexible, participatory, and customised way. Muriel Cooper and her students at the Visible Language Workshop, as well as John Maeda, were among the pioneers who used tailor-made software to generate visual artefacts (see, e.g., [5] and [6]). Reas et al. [7] and Richardson [8] present good overviews of the work in this field. Also, in the early days of the Internet, some designers understood the potential of the Internet as a new medium for visual exploration. They began to design web pages that enriched navigation and aesthetics. The website of the Discovery Channel, designed by Jessica Helfand in 1994, and the web page for MoMA's exhibition Mutant Materials in Contemporary Design, designed by Paola Antonelli in 1995, are good examples of early work in this field [1][2]. The use of parametric and algorithmic technologies to generate web designs is, as far as we know, a recent and little-explored field. Nevertheless, some related work may be pointed out. In 1999, Monmarché et al. [9] developed Imagine, an Evolutionary Computation (EC) system that interactively evolves CSS style sheets through Interactive Evolutionary Computation (IEC). Subsequently, in 2002, Oliver et al.
[10] extended this work, including the evolution of the position of each object in the page layout. In 2007, Park [11] developed the Evo-Web system, which evolves websites and CSS files using IEC. In the same year, Quiroz et al. [12] presented a semi-automatic system for evolving user interface designs, evaluating the outputs through a combination of IEC and a hardwired evaluation based on user interface design guidelines. In the evaluation process, the user only needs to pick two candidate solutions (the best and the worst) from a presented subset, every certain number of generations instead of every single generation. In 2013, Sorn and Rimcharoen [13] used IEC to evolve HTML and CSS files for predefined content. In this approach, the outputs are evolved at the section level instead of as a whole. The Grid application [14], first launched in 2015, presented an AI system to generate websites from user-input content and some design preferences. Soon after, in 2016, WIX released its Advanced Design Intelligence system [15], which also generates tailor-made websites based on user preferences and on related information gathered online. More recently, similar online services with distinct levels of automation have also been launched (e.g. Firedrop [16], BookMark [17], or Huula [18]). In 2016, Schulz presented a tool to interpolate website components in a parametric way [19]. In the same year, Gold presented René [20], a declarative and permutational design tool that creates multiple combinations of web components from a set of visual features defined by the user. In 2017, Huula released HuulaTypesetter [21], a web tool that automatically defines the font sizes for a web page, taking into consideration factors such as the style of the text, the style of its container, its siblings, etc.
In 2018, Pelzer presented Temper [22], a template-based website generator that enables the user to dynamically insert content and define its styles according to a predefined set. In the same year, Orsi presented the Whole Web Catalog [23], a web system that generates a web page from a user-given term. The generated web page presents several pieces of data related to this term, gathered using multiple popular web APIs (e.g. The New York Times, Wikipedia, YouTube, etc.). More recently, in 2019, Otander and Morse launched Components AI [24], a repository of generative web components that also includes parametric web page themes. Soon thereafter, Aukia presented Uibot [25], a web app that generates style and layout variations of a dashboard. Moreover, there is increasing interest in employing deep learning approaches to automatically generate web pages from wireframes and preliminary designs (e.g. [26] or [27]).

Figure 1: Schematic of the system's architecture.

3. The System

The present system automatically generates web pages with content gathered dynamically, through the Wikipedia API, from a search term input by the user. The final outputs are designed and structured in such a way that they resemble a web page created by a human designer. The main motivation behind this system comes from the need to experiment with algorithmic design in order to create diversity and variation in web designs in a dynamic and effortless way. The system generates web pages employing four main modules: (i) Data Processing; (ii) Content Analysis; (iii) Content Styling; and (iv) Placement and Design. Figure 1 presents a schematic overview of the system's workflow. The generation process begins when the user inputs a search term in a specific search form. After the user clicks the "search" button, the Data Processing module performs a search through the Wikipedia API to obtain data related to the search term.
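The search step can be illustrated with the public MediaWiki API. The endpoint and parameters below follow MediaWiki's documented `action=query` interface; the exact request issued by the system is not described in the paper, so this is a sketch of one plausible way to build it.

```javascript
// Build a MediaWiki API request URL that asks for the page extract of a
// given search term. The parameter set is a minimal illustrative choice.
function buildWikipediaQuery(term) {
  const params = new URLSearchParams({
    action: 'query',     // MediaWiki query module
    prop: 'extracts',    // return the article extract
    titles: term,        // the user-input search term
    format: 'json',
    origin: '*',         // needed for cross-origin requests from a browser
  });
  return `https://en.wikipedia.org/w/api.php?${params.toString()}`;
}
```

In a browser or Node front end, the resulting URL would be passed to `fetch()` and the JSON response forwarded to the rest of the pipeline.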
If the returned data satisfies the user, they send the information to the Content Analysis module by clicking the "analyse" button. This module analyses the content to recognise emotions, sentiments, and colours related to it. After that, the Content Styling module visually styles the content based on the previous analysis. Finally, the Placement and Design module creates the page according to the styles defined beforehand. After the page has been generated, users may fine-tune it by redefining some visual variables, in a parametric way, through a specific interface (see Figure 2). This way, users may adjust some visual properties of the outputs according to their preferences and taste.

3.1. Data Processing

The Data Processing module is responsible for gathering the data from the Wikipedia API, based on a term input by the user. This module also removes all the HTML tags returned with the content (e.g. around text, images, hyperlinks, section marks, inline styles, etc.) so that the content may be properly analysed by the Content Analysis module. Its workflow is as follows. First, the user inputs a text (e.g. a sentence or a word) in a text box and clicks the "search" button in a specific search form. A search for the term is then performed through the API. Once the data is returned, the module splits it into two copies. One copy goes directly to the front-end interface of the system and the other goes to the server. This way, the user may review the information (i.e. the first copy) and change it if necessary. The second copy is a copy of the first one that still includes the unnecessary content given by the API. Whenever the user changes the first copy, the second copy is also updated. This copy is formatted in such a way that it can only be read by a browser; in this state, it cannot yet be analysed by the Content Analysis module.
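The tag-removal step mentioned above can be sketched with a simple regex-based function. This is an illustrative approach only (a regex is not robust for arbitrary HTML, and the paper does not specify how the system strips the markup):

```javascript
// Remove HTML tags so the text can be passed to the Content Analysis
// module as plain text. Tags are replaced with a space and whitespace
// is collapsed, so words separated only by markup do not merge.
function stripHtml(html) {
  return html
    .replace(/<[^>]*>/g, ' ')  // drop every <...> tag
    .replace(/\s+/g, ' ')      // collapse runs of whitespace
    .trim();
}
```

A production system would more likely parse the DOM (e.g. via the browser's own parser) and read `textContent`, but the effect is the same: only the textual content reaches the analysis stage.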
This way, when the user clicks the "analyse" button, all the unnecessary content of this second copy is removed and the content is automatically sent to the Content Analysis module.

Figure 2: Refinement interface. After the generation, the user may parametrically refine the visual characteristics of the generated outputs.

3.2. Content Analysis

The Content Analysis module is responsible for analysing the content to recognise the sentiments, emotions, and colours associated with it. The module begins the analysis by counting the words in the content and the frequency of each word. It also counts the HTML tags present in the content and their frequency. It then employs an NLU classifier and lexicon-based analyses. First, the module simplifies the content so that the analysis results are more trustworthy. This process is as follows. First, it transforms contractions (e.g. I'm, you're) into their uncontracted forms (e.g. I am, you are). Following this, it converts all the words to lowercase. Next, it removes non-alphabetical and special characters from the gathered text. Subsequently, it finds and corrects spelling mistakes that may occur within the text. Finally, it removes stop words (e.g. but, a, or, what).

Afterwards, the content is analysed. The sentiments are recognised by a global analysis that employs an NLU classifier to recognise the sentiment of the text as a whole, on a positive–negative axis. Emotions, on the other hand, are recognised through a lexicon-based approach at the local level, i.e. by analysing all the words of the content. After tokenising the text, each word is looked up in a word–emotion association lexicon. The lexicon used was developed by Mohammad and Turney [28] and enables the recognition of 8 basic and prototypical emotions (i.e. anger, anticipation, disgust, fear, joy, sadness, surprise, and trust).
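The simplification pipeline and the lexicon look-up described above can be sketched as follows. The contraction map, stop-word list, and emotion lexicon here are tiny illustrative samples, not the actual resources used by the system (NLP.js and the word–emotion lexicon cited above), and the spelling-correction step is omitted.

```javascript
// Toy resources for illustration only.
const CONTRACTIONS = { "i'm": 'i am', "you're": 'you are' };
const STOP_WORDS = new Set(['i', 'am', 'a', 'but', 'or', 'what', 'the']);
const EMOTION_LEXICON = { happy: 'joy', afraid: 'fear', angry: 'anger' };

// Simplify the text: expand contractions, lowercase, strip non-alphabetic
// characters, and drop stop words, returning the remaining tokens.
function simplify(text) {
  let t = text.toLowerCase();
  for (const [short, full] of Object.entries(CONTRACTIONS)) {
    t = t.split(short).join(full);       // expand contractions
  }
  t = t.replace(/[^a-z\s]/g, ' ');       // keep alphabetic characters only
  return t.split(/\s+/).filter(w => w && !STOP_WORDS.has(w));
}

// Count how often each emotion is triggered by the tokens, by looking
// each word up in the word-emotion association lexicon.
function countEmotions(tokens) {
  const counts = {};
  for (const word of tokens) {
    const emotion = EMOTION_LEXICON[word];
    if (emotion) counts[emotion] = (counts[emotion] || 0) + 1;
  }
  return counts;
}
```

The resulting emotion counts are the kind of per-word, local-level signal that the module aggregates alongside the global sentiment score.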
This module also analyses the content to recognise colours associated with it, using a word–colour association lexicon by Mohammad [29]. The lexicon holds data about the relation of several words to 11 colours: black, blue, brown, green, grey, orange, purple, pink, red, white, and yellow. In the end, this analysis creates an annotated map that describes the intensity of the relationship between these colours and the content. The intensity is calculated based on the scores of the word–colour associations and on the number of times each colour is associated with a word in the content. The data returned from this module is compiled as a JSON file and later used by the Content Styling module to define the appearance of the output. At the end of this analysis, we have the following data: (i) the words present in the content and their frequency; (ii) the HTML tags used in the content and their frequency; (iii) the emotions recognised in the content's words; (iv) the sentiment transmitted by the text; and (v) the colours associated with the content and the intensity of their associations. This module implements the methods and the NLU classifier available in the NLP.js library [30].

3.3. Content Styling

The Content Styling module visually styles the content based on the results of the analysis described in the subsection above. In this process, the module explores OpenType Variable Fonts technology, which enables certain design attributes of a typeface to be adjusted parametrically. We used the font Recursive, developed and designed by the ArrowType studio [31], and we dynamically define its attributes based on the semantic analysis of the content. The parametric attributes of this typeface are: (i) the weight (i.e. from light to extra black); (ii) the monospace (i.e. from a natural-width sans serif font to a monospaced font); (iii) the casual (i.e. from a linear to a casual type design); (iv) the slant (i.e. from 0° to −15°); and (v) the cursive (i.e.
selecting between roman, cursive, and an automatic selection). The weight of each word is defined based on the word count (e.g. more frequent words are rendered with more weight). Each of the other attributes is defined based on the average sentiment score of the content, multiplied by a random number in the range 0 to 7; the resulting value is then normalised to the scale of the corresponding typeface attribute. Regarding which colours are rendered, the previous module analysed which colours are most associated with the content. Using these results, the system randomly picks one of the 5 colours most associated with the content to use in the background. It also uses another colour associated with the content to colour the typography, provided the combination of these colours does not violate the legibility contrast ratio of the web standards [32]. Finally, the module defines the size, the flow, and the patterns of the HTML elements where the content will be placed. The process is as follows. First, the width of the main container is defined based on the number of words in the text. Next, the margins for each section of the gathered text are defined based on the number of emotions recognised in that section.

3.4. Placement and Design

The Placement and Design module is responsible for employing the necessary means to render the content. This module has a set of predefined base layouts in which to place the content, each designed to convey a specific sentiment and emotion. We implemented 16 variable layouts, each conveying one of the 8 emotions and one of the 2 sentiments.

Figure 3: Outputs generated by the system. The outputs at the top were generated using the word Joker; those in the middle using the word Stoic; and those at the bottom using the word Mercury.
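The legibility check mentioned in the Content Styling step can be grounded in the WCAG 2.0 contrast formula: Success Criterion 1.4.3 asks for a contrast ratio of at least 4.5:1 for normal text. Whether the system uses exactly this computation is not stated in the paper, so the sketch below simply follows the published WCAG definitions of relative luminance and contrast ratio.

```javascript
// Relative luminance of an sRGB colour [r, g, b] with channels in 0-255,
// per the WCAG 2.0 definition.
function relativeLuminance([r, g, b]) {
  const channel = v => {
    const c = v / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05).
function contrastRatio(fg, bg) {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  const [hi, lo] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

// SC 1.4.3 (AA) threshold for normal-sized text.
function isLegible(fg, bg) {
  return contrastRatio(fg, bg) >= 4.5;
}
```

A colour-picking loop could keep drawing typography colours from the association map until `isLegible` accepts the pair against the chosen background.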
This way, the module randomly selects one layout that conveys the recognised sentiment and one of the 3 emotions most present in the words of the content. The base layouts were designed by us through empirical exploration. Finally, the style of each element of the layout (i.e. background colour, typeface style, colour, margins, etc.) is determined based on the values defined in the previous module, and the output is presented to the user. Figure 3 presents some typical outputs generated by the system. After the generation, the user may visually fine-tune the output using the refinement interface, according to their necessities and taste (see Figure 2).

4. Discussion and Conclusions

We have presented a system that automatically generates web pages using content dynamically gathered online. The content is gathered based on a search term input by the user. The outputs are generated through four main modules: (i) Data Processing; (ii) Content Analysis; (iii) Content Styling; and (iv) Placement and Design. Briefly, the system works as follows. From a user-input term, the Data Processing module performs a search through the Wikipedia API to obtain the content. If the returned content satisfies the user, the information is processed by the Content Analysis module, where the content is analysed, employing an NLU classifier and lexicon-based approaches, to recognise the sentiments, emotions, and colours related to it. Afterwards, the Content Styling module analyses the results and defines a set of visual variables according to the semantic meaning of the content. Finally, the Placement and Design module employs the necessary means to render the output. If users desire, they may refine the generated output by adapting some values, in a parametric way, in a specific interface. This way, they may adjust the output according to their taste and export the results. The presented system is still a work in progress.
However, it is already able to automatically generate outputs that achieve a high level of diversity. Besides this, the system also has the potential to be a functional co-creativity tool. Users are invited by the system to take part in the design process, being able to choose the content that will be presented and to adjust some visual properties of the generated output. This way, we believe that this is a useful tool for enhancing user creativity, especially for web designers in the exploratory stages of their projects. At the same time, we believe that the system may be used by the general public to more easily generate web pages that present information about a topic. The system also shows how recent advances in WDEV and AI may expand the available tools and automate some processes in WD, promoting a novel and dynamic way of communicating with people. Future work on this system will focus on (i) exploring more experimental ways of presenting the content; (ii) increasing the number of variables that can be driven by the results returned by the Content Analysis module; (iii) exploring the use of EC to automatically evolve some generated outputs; and, finally, (iv) evaluating the quality of the obtained outputs with users.

Acknowledgments

The second author is funded by FCT under the grant SFRH/BD/132728/2017. This work is partially supported by national funds through the Foundation for Science and Technology (FCT), Portugal, within the scope of the project UID/CEC/00326/2019.

References

[1] H. Armstrong, Giving Form to The Future, in: H. Armstrong (Ed.), Digital Design Theory: Readings From the Field, Princeton Architectural Press, New York, NY, 2016, pp. 9–20.
[2] P. B. Meggs, A. W. Purvis, Meggs' History of Graphic Design, 6th ed., John Wiley & Sons, New York, NY, 2016.
[3] M. McLuhan, Understanding Media: The Extensions of Man, MIT Press, Cambridge, MA, 1994.
[4] R. Leavitt, Artist and Computer, Harmony Books, New York, NY, 1976.
[5] M.
Cooper, Computers and design, Design Quarterly 1 (1989) 1–31.
[6] J. Maeda, Maeda@media, Thames & Hudson, London, United Kingdom, 2000.
[7] C. Reas, C. McWilliams, LUST, Form and Code: In Design, Art and Architecture, A Guide to Computational Aesthetics, Princeton Architectural Press, New York, NY, 2010.
[8] A. Richardson, Data-Driven Graphic Design: Creative Coding for Visual Communication, Bloomsbury Publishing, London, United Kingdom, 2016.
[9] N. Monmarché, G. Nocent, M. Slimane, G. Venturini, P. Santini, Imagine: a tool for generating HTML style sheets with an interactive genetic algorithm based on genes frequencies, in: IEEE SMC'99 Conference Proceedings. 1999 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No. 99CH37028), volume 3, IEEE, New York, NY, 1999, pp. 640–645.
[10] A. Oliver, M. N, G. Venturini, Interactive Design Of Web Sites With A Genetic Algorithm, in: Proceedings of the IADIS International Conference WWW/INTERNET, 2002, pp. 355–362.
[11] S. Park, Webpage design optimization using genetic algorithm driven CSS, Ph.D. thesis, Iowa State University, 2007.
[12] J. C. Quiroz, S. J. Louis, A. Shankar, S. M. Dascalu, Interactive genetic algorithms for user interface design, in: 2007 IEEE Congress on Evolutionary Computation, IEEE, New York, NY, 2007, pp. 1366–1373.
[13] D. Sorn, S. Rimcharoen, Web page template design using interactive genetic algorithm, in: 2013 International Computer Science and Engineering Conference (ICSEC), 2013, pp. 201–206.
[14] The Grid Homepage, 2015 (Last accessed 28 May 2020). URL: https://thegrid.io.
[15] WIX, The future of website creation: Wix artificial design intelligence, 2016 (Last accessed 28 May 2020). URL: https://www.wix.com/blog/2016/06/wix-artificial-design-intelligence/.
[16] Firedrop Homepage, 2015 (Last accessed 28 May 2020). URL: https://firedrop.ai/.
[17] BookMark Homepage, n.d. (Last accessed 28 May 2020). URL: https://www.bookmark.com/.
[18] Huula Homepage, 2019 (Last accessed 28 May 2020). URL: https://huu.la/.
[19] F. Schulz, Designing with intent, in: Florian Schulz's Medium, 2016 (Last accessed 28 May 2020). URL: https://medium.com/@getflourish/designing-with-intent-be6664b10ac.
[20] J. Gold, Declarative design tools, 2016 (Last accessed 28 May 2020). URL: https://jon.gold/2016/06/declarative-design-tools/.
[21] Huula, HuulaTypesetter: A bot that suggests font sizes for web pages, 2017 (Last accessed 28 May 2020). URL: https://huu.la/ai/typesetter.
[22] J. Pelzer, Temper, 2020 (Last accessed 28 May 2020). URL: https://temper.one/.
[23] S. Orsi, Whole web catalog: Mash-up tools, in: A. Rangel, L. Ribas, M. Verdicchio, M. Carvalhais (Eds.), Proceedings of the 6th Conference on Computation, Communication, Aesthetics & X (xCoAx 2018), University of Porto, Porto, Portugal, 2018, pp. 206–210.
[24] J. Otander, A. Morse, Components AI, 2018 (Last accessed 28 May 2020). URL: https://components.ai/theme-ui.
[25] J. Aukia, Uibot, 2019 (Last accessed 28 May 2020). URL: https://www.uibot.app/.
[26] T. Beltramelli, Pix2code: Generating code from a graphical user interface screenshot, in: Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems, EICS '18, Association for Computing Machinery, New York, NY, USA, 2018. Article no. 3.
[27] A. Robinson, Sketch2code: Generating a Website From a Paper Mockup, Master's thesis, University of Bristol, 2018.
[28] S. M. Mohammad, P. D. Turney, Crowdsourcing a Word–Emotion Association Lexicon, Computational Intelligence 29 (2012) 436–465.
[29] S. Mohammad, Colourful language: Measuring word–colour associations, in: Proceedings of the Second Workshop on Cognitive Modeling and Computational Linguistics, Association for Computational Linguistics, 2011, pp. 97–106.
[30] AXA Group Operations Spain S.A., NLP.js, 2020 (Last accessed 9 April 2020). URL: https://github.com/axa-group/nlp.js/.
[31] ArrowType Studio, Recursive typeface, 2019 (Last accessed 28 May 2020). URL: https://www.recursive.design/.
[32] W3.org, Contrast (minimum): Understanding SC 1.4.3, n.d. (Last accessed 28 May 2020). URL: https://www.w3.org/TR/UNDERSTANDING-WCAG20/visual-audio-contrast-contrast.html.