Using Image Schemas in the Visual Representation of Concepts

João M. CUNHA and Pedro MARTINS and Penousal MACHADO
CISUC, Department of Informatics Engineering, University of Coimbra
{jmacunha,pjmm,machado}@dei.uc.pt

Abstract. Current computational systems that visually represent concepts mostly focus on perceptual characteristics and overlook conceptual ones (e.g. affordances). In this paper, we propose an approach to include affordance-related features in such systems by using image schemas. We first deconstruct often-used icons to show the role of image schemas, then use examples to illustrate how visual representations can be produced using image schemas and discuss existing issues. This approach has great application potential, especially in existing Visual Blending systems from the domain of Computational Creativity.

Keywords. Image Schema, Visual Blending, Visual Representation, Computational Creativity

1. Introduction

Two types of categorisation processes can be said to take place in concept formation: perceptual and conceptual [1]. Perceptual categorisation has to do with perceptual features (what objects look like), whereas conceptual categorisation is related to purpose and usage (affordances) [2]. When defining a concept (e.g. house), the perceptual features (i.e. what a house looks like) are not enough and conceptual aspects should also be considered (i.e. what it can be used for), as pointed out by Hedblom and Kutz [2]. Despite this, little importance has been given to conceptual processes in the domain of visual representation of concepts. Systems that produce visual representations for concepts (e.g. [3–5]) mostly focus on perceptual features (e.g. shapes, colours, etc.).

Kuhn [6] explored the idea that affordances can be modelled using image schemas – learned spatio-temporal relations that can be seen as conceptual building blocks (e.g. CONTAINMENT, SUPPORT, etc.). This notion has been used in the computational modelling of concept invention and conceptual blending (e.g. [7]).

Even though image schemas are not visual by nature, several authors have used visualisations in order to make their ideas clearer to the reader. Some examples are: SOURCE PATH GOAL and EQUILIBRIUM [8] (Fig. 1); eight different visualisations for CONTAINMENT [9]; the PATH-FOLLOWING image schema family [10]; CONTACT, SUPPORT, VERTICALITY and ATTRACTION [11]; and MOVEMENT-ALONG-PATH [12].

These visualisations of image schemas are aligned with spatial relations used by authors addressing visual blending (e.g. inside(x, y) [4] or above(x, y) [13]). However, in the work of such authors, spatial relations are mostly used as an aid for element positioning.

Figure 1. Visual representation of SOURCE PATH GOAL (left) and EQUILIBRIUM (right), adapted from [8]

Figure 2. Pictograms for escalator, luggage trolley and ferryboat

Confalonieri et al. [13] blended computer icons, which were composed of signs (e.g. a magnifying glass) and spatial relations between them. Different meanings were attained depending on the combination of sign and relation (i.e. a downwards-pointing arrow could lead to either download X or download-to X, depending on the relation used). Cunha et al. [4] focused on perceptual aspects and tried to produce visual blends by identifying the prototypical parts of concepts [14] and using previously defined spatial relations.
We propose that, in addition to perceptual features (e.g. prototypical parts), affordances should be considered in systems for the visual representation of concepts. These can be modelled using image schemas, as suggested by Kuhn [6]. As such, the concept house can be represented using its prototypical parts (e.g. walls and roof) but also by focusing on its affordance of being used as shelter – i.e. to offer protection. The idea of using image schemas in the visual representation of concepts is also addressed by Falomir and Plaza [15]¹, who propose an approach to computationally model the understanding of conceptual blends by a receiver agent. Their approach is based on the disintegration and decompression of input visual representations of novel concepts (e.g. blended icons) and the consequent recreation of the blends, using qualitative spatial descriptors and image schemas. Despite the alignment with our work, marked by the proposal of image schema integration in processes related to the visual representation of concepts, the goal of [15] is different from ours. Whereas they address understanding (a process from form to content or meaning), we focus on generation (from meaning to form). As already mentioned, previous work on the generation of visual representations of concepts (e.g. [4, 13]) does not consider image schemas.

The main contributions of this paper are (i) the outline of an approach to include affordances in a system for visual blending, (ii) the analysis of a set of illustrative examples and (iii) the identification of implementation issues that should be addressed in the future.

The remainder of this paper is organised as follows: section 2 describes our approach and provides an analysis of illustrative examples; section 3 identifies implementation issues; and section 4 presents our conclusions and directions for future work.

¹ The authors came across this research work (published online February 2019) upon producing the final version of this paper (March 2019), after its presentation at TriCoLore 2018 (December 2018).

2. Approach

In addition to perceptual features of concepts, their affordances can also be observed in pictograms of signage systems. For example, the potential use of an escalator is represented using an arrow (Fig. 2) and the idea of SUPPORT from a luggage trolley or a ferryboat is illustrated through the inclusion of the entity that they "support" – a suitcase and a car, respectively (Fig. 2). Moreover, other communication systems – e.g. Blissymbols [16], a system composed of several hundred ideographs – also make use of image schemas for the representation of concepts' affordances, as identified in [17].

Taking these examples as inspiration, we present an approach for the integration of affordances (using image schemas) in systems for the visual representation of concepts through visual blending. We believe that image schemas can be used to guide the process and validate the results, minimising the number of "nonsense" solutions [18]. In this section, we first explain the approach and then give some illustrative examples to show the potential of considering conceptual aspects in visual blending.

2.1. Implementation Strategy

The proposed approach uses the following 4-step pipeline:

1. Identification of the concept. The first step consists in identifying the concept to be visually represented. Computational approaches to the visual representation of concepts often allow the user to freely introduce concepts.
In the context of this paper and based on the work conducted in [5], we consider the representation of single-word (e.g. bank) and double-word concepts (e.g. mother ship). The representation of more complex concepts is also possible but we decided not to address it.

2. Identification of image schemas. This step consists in identifying image schemas related to the concepts, which is a challenging task. As our main goal is to present an approach for using image schemas in the visual representation of concepts, we give examples of methodologies for the identification of image schemas but avoid going into much detail on the topic. The methodology presented by Kuhn [6] uses WordNet glosses to extract image schematic structures for concepts (e.g. identifying CONTAINMENT for house). The gathering and analysis of example sentences for each concept would allow the identification of possible image schemas related to them – matching human habitation and living quarters with the idea of "containing humans" (see the house descriptions in Fig. 3, retrieved from the Oxford Dictionaries² and WordNet³). Other approaches focus on the extraction of spatial descriptions from text. One example is the Generalised Upper Model ontology (GUM) [19], which facilitates mappings between natural language spatial expressions and spatial calculi – the preposition on indicates SUPPORT (e.g. "the suitcase is on the luggage trolley" or "the car is on the ferryboat") and the preposition in indicates CONTAINMENT (e.g. "the suitcase is in the car" or "the car is in the garage"). In this model, SUPPORT and CONTAINMENT are seen as subconcepts of "Control", which is itself a subconcept of "FunctionalSpatialModality".

² en.oxforddictionaries.com, retrieved 2018
³ wordnet.princeton.edu, retrieved 2018

Figure 3. Identification of image schemas for house, bank and elevator, using examples retrieved from Oxford Dictionaries and WordNet. The different types of underline identify the steps taken in each example (best viewed in colour).

3. Gathering input visual representations. In the case of house, two input visual representations would be needed – the pictograms for building and person (as shown in step 2 of the house example in Fig. 3). Following the same strategy as the one used by Cunha et al. [5], a dataset of visual representations and corresponding semantic information could be used. Such a dataset allows matching concepts to visual representations (e.g. the word baby leads to the automatic retrieval of the baby icon shown in Fig. 4, using the system described in [5, 20]).

4. Production of visual representations. The last step concerns the use of the gathered visual representations (e.g. building and person) in combination with the identified image schema(s) (e.g. CONTAINMENT) to generate visual representations of the concept (e.g. house). This process of generation has several implementation issues of considerable complexity (positioning of elements, image schema activation, etc.), which will be further detailed in section 3.
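As an illustration, the four steps could be wired together roughly as sketched below. This is only a minimal sketch under simplifying assumptions: the schema lexicon, the icon dataset and all function names are hypothetical placeholders, not components of an implemented system such as [5, 20].

    # Minimal sketch of the 4-step pipeline (illustrative; all names are hypothetical).
    SCHEMA_LEXICON = {  # step 2: cue words that suggest an image schema
        "CONTAINMENT": ["habitation", "living quarters", "deposit", "inside"],
        "SUPPORT": ["carries", "platform"],
        "VERTICALITY": ["raising", "lowering", "raised", "lowered"],
    }

    ICON_DATASET = {  # step 3: words mapped to available visual representations
        "building": "building.svg",
        "person": "person.svg",
    }

    def identify_schemas(descriptions):
        """Step 2: match dictionary glosses / example sentences against cue words."""
        text = " ".join(descriptions).lower()
        return {schema for schema, cues in SCHEMA_LEXICON.items()
                if any(cue in text for cue in cues)}

    def gather_icons(entities):
        """Step 3: retrieve an input visual representation for each entity."""
        return {e: ICON_DATASET[e] for e in entities if e in ICON_DATASET}

    def build_specification(concept, descriptions, entities):
        """Steps 1-3: produce a specification that a rendering step (4) could realise."""
        return {"concept": concept,                         # step 1
                "schemas": identify_schemas(descriptions),  # step 2
                "icons": gather_icons(entities)}            # step 3

    # Example: house -> CONTAINMENT, with building (container) and person (contained).
    spec = build_specification(
        "house",
        ["A building for human habitation",
         "a dwelling that serves as living quarters for one or more families"],
        ["building", "person"])

A renderer for step 4 would then position the gathered icons according to the identified schema(s), which is where the issues discussed in section 3 arise.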
2.2. Illustrative examples

In order to show the potential of considering image schemas in systems for the visual representation of concepts, we start by presenting three examples of icons often seen in signage systems that show how image schemas are used in icon design (see Fig. 3).

The first example is the icon for the concept house. The concept house can be represented using only perceptual features (e.g. the icon shown in step 1 only represents the roof and the walls of a house, see Fig. 3). However, it can also be represented using the affordance of serving as a shelter. In this sense, it is important to mention that the roof shape may also be seen as affordance-indicating and not purely perceptual. By considering the affordance of serving as a shelter, one may relate it to the CONTAINMENT image schema [6] – identified in the example descriptions ("human habitation" or "living quarters"). The CONTAINMENT schema implies a container entity and a contained entity, which can be respectively linked to "building" or "dwelling", and "human", based on the descriptions provided. This can result in a person sign placed inside a building (see step 2 of the house example in Fig. 3).

If we consider the concept bank, we reach a situation similar to house. Based on the action of "depositing" from the descriptions, we can also establish a connection to the CONTAINMENT image schema. This connection is further reinforced if we take into consideration other examples, such as the sentence "a bank account may contain funds, and if it is empty we can put some additional funds into the account and take them out again later" presented in [18]. As we already mentioned, the CONTAINMENT schema associates a container with something that it contains – in the case of bank and based on the descriptions, these two entities can be respectively matched with "establishment" or "institution", and "money". As such, a possible representation can be a building sign that has a dollar sign inside (see the bank icon in Fig. 3).

A third example is the concept elevator, which is more complex as it deals with a combination of two different image schemas. The representation of complex abstract concepts using a combination of several image schemas is also addressed by Kuhn [6]. The main idea behind an elevator is its capability of moving upwards and downwards – based on the descriptions "for raising and lowering" or "raised and lowered mechanically" from Fig. 3. This can be translated into the VERTICALITY image schema, which is associated with movement. When dealing with static images, it can be represented using signs such as arrows (see elevator step 2 in Fig. 3). However, VERTICALITY is not the only image schema that can be associated with elevator – consider the question "what exactly does an elevator raise / lower?". Similarly to what happens with house and bank, elevator is also related to CONTAINMENT. From the descriptions in Fig. 3, one can identify that the contained entity for elevator is related to "people or things", which justifies the construction of the icon often used to represent elevator (see step 3).
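Read in terms of the pipeline of section 2.1, the three deconstructions above could be written down as declarative specifications for step 4. The encoding below is only a schematic illustration (the entity names are hypothetical placeholders), not the format of an implemented system.

    # Schematic encodings of the three example icons from Fig. 3 (illustrative only).
    ICON_SPECS = {
        "house": {"schemas": ["CONTAINMENT"],
                  "container": "building", "contained": ["person"]},
        "bank": {"schemas": ["CONTAINMENT"],
                 "container": "building", "contained": ["dollar_sign"]},
        "elevator": {"schemas": ["VERTICALITY", "CONTAINMENT"],  # two schemas combined
                     "container": "cabin", "contained": ["person"],
                     "extra_signs": ["up_down_arrows"]},  # static stand-in for movement
    }

Note that elevator already combines two schemas, a first hint of the combination issues discussed in section 3.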
3. Discussion

The examples analysed in section 2.2 serve to show that there is potential in considering image schemas in the visual representation of concepts. Despite this, there are several issues regarding the implementation of the proposed approach. Moreover, the examples already presented (house, bank and elevator) are based on existing icons and, as such, they were analysed using a deconstruction method, which was performed at a very superficial level and avoided most of the existing issues. As a matter of fact, using image schemas to generate novel visual representations is much more complex than portrayed in the given examples.

In this section, we identify issues that have to be considered when using image schemas in a system for the visual representation of concepts. The majority of the concepts used in the examples were collected from existing research work. In addition, our analysis and interpretation of the visual representations is conducted at a high level. Nonetheless, it is important to mention that decomposing visual representations into meaningful elements is, in terms of visual perception, a complex process. For further reading on the topic, we refer the reader to [21–24].

3.1. Image Schemas: Identification

Regarding image schema identification, one of the issues is that not all concepts can be associated with image schemas and, as such, this approach will not work in every situation. In fact, for some concepts, the perceptual features are much more important for their visual representation (e.g. dog). Moreover, the actual identification of an image schema from text is complex and a subject of study in itself – e.g. words related to CONTAINMENT [9] and the extraction of spatial descriptions from text [19]. Several approaches can be explored to identify image schemas, e.g. the use of metaphors associated with the concept being represented (we use this approach in the examples given in the following sections).

3.2. Image Schemas: Visual Representation

Putting aside the identification of image schemas and focusing on their usage, there are some questions that need to be addressed. First, using image schemas in the visual representation of concepts assumes that image schemas have a visual representation themselves. Despite this being true for some – which are easy to represent (e.g. SOURCE PATH GOAL) and even aligned with spatial relations used in visual blending systems (e.g. CONTAINMENT) [4, 13] – others are much more complex and may not be so straightforward in terms of visual representation (e.g. EQUILIBRIUM in Fig. 1). As such, further study is required to identify the image schemas most suitable for visual representation.

In addition, schemas that can be considered simple may end up having an application more complex than initially expected. For example, CONTAINMENT only requires two entities which are combined using an inclusion relationship. Despite this, issues may arise when combining these entities – this example will be further detailed in a later section using the concept mother ship.
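One way to make this distinction operational would be to keep an explicit mapping from image schemas to visual composition templates and to treat schemas without a template as unsupported. The structure below is a hypothetical sketch of such a mapping, not a claim about which schemas are in fact representable.

    # Hypothetical mapping from image schemas to visual composition templates.
    # A None entry marks a schema with no obvious static template (cf. EQUILIBRIUM, Fig. 1).
    SCHEMA_TEMPLATES = {
        "CONTAINMENT": "place the contained entity inside the container entity",
        "SOURCE_PATH_GOAL": "draw the source entity, an arrow, then the goal entity",
        "SUPPORT": "place the supported entity on top of the supporting entity",
        "VERTICALITY": "add upward / downward arrows to the entity",
        "EQUILIBRIUM": None,
    }

    def is_visually_representable(schema):
        """A schema is usable here only if a composition template is available."""
        return SCHEMA_TEMPLATES.get(schema) is not None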
3.3. Image Schemas: Entities

Other image schemas regarded as simple may require extra signs in addition to the entities in order to be fully represented. The SOURCE PATH GOAL image schema, for example, can be visually represented using two entities (A and B) connected by an arrow, which indicates a transition between two points (see Fig. 1). To use this image schema in the visual representation of concepts, two entities need to be identified – A, the source, and B, the goal. This identification is not always easy and may lead to different meanings, depending on the entities chosen.

Consider, for example, the three representations for the concept life based on the metaphor "life is a journey" [10], as shown in Fig. 4. First, the metaphor associates life with the image schema SOURCE PATH GOAL. As such, the two entities need to be identified and several possibilities exist. The first one (solution a in Fig. 4) consists in considering the SOURCE as the initial stage of life (infancy, represented by a baby) and the GOAL as the last (old age, represented by an old person). Despite being a possible solution, if we consider the GOAL as the end of life, it is more correct to choose an entity that represents death (portrayed using a skull in solution b). Similarly, and in order to be exact, the beginning of life is when the baby is still inside the mother's womb, which can be represented by assigning a pregnant woman icon to the SOURCE (solution c). This example shows that for the same concept, based on the same metaphor, and using the same image schema, several possibilities exist in terms of representation. However, this variety may also lead to different meanings – solutions a and b represent the development of the baby, whereas solution c can instead be interpreted as the progression of the mother towards death.

Figure 4. Three representations for life based on the metaphor life as a journey [10], using different entities for SOURCE (baby or baby in womb) and GOAL (old person or death)

Figure 5. Four representations for love based on the metaphor love as journey [25], using different examples

3.4. Image Schemas: Concepts and Descriptions

On the other hand, the application necessities of one image schema may change depending on the concept being represented. For example, changing the concept from life to love but maintaining the metaphor "as a journey" [25] leads to the same image schema (SOURCE PATH GOAL). Representation a of Fig. 5 follows the same procedure as the one used for life and consists in the SOURCE being two people separate and the GOAL two people holding hands. However, if we consider that the emphasis of journey is the path of each individual towards a state in which they are together, it might make more sense to represent the individual paths – b in Fig. 5 – which is different in terms of representation. Moreover, representations a and b are based on the assumption that the journey is the path towards being together – which might be based on the description "look how far we've come" – but using a different description (e.g. "I don't think this relationship is going anywhere") may lead to the exact opposite – as seen in c of Fig. 5. There is even the possibility of using the two descriptions together, which represents the "journey" of two people from being separate to being together and ending up going separate ways again (d in Fig. 5). In this last example, a middle point is added to "the journey", increasing the complexity of the image schema application. These examples serve to show that the application requirements may vary, even using the same image schema and the same concept.
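The variation just described can be summarised by treating SOURCE PATH GOAL as a parameterised template whose meaning shifts with the entities (and optional middle points) chosen. The snippet below is a minimal sketch of that idea; the entity labels and the textual layout are placeholders, not generated icons.

    # SOURCE PATH GOAL as a parameterised template (illustrative).
    def source_path_goal(source, goal, waypoints=()):
        """Lay out a left-to-right path: source -> optional middle points -> goal."""
        return " -> ".join([source, *waypoints, goal])

    # life as a journey (Fig. 4): same schema, different entity choices.
    a = source_path_goal("baby", "old_person")
    b = source_path_goal("baby", "skull")
    c = source_path_goal("pregnant_woman", "skull")

    # love as a journey (Fig. 5, d): a middle point is added, so the same schema
    # now needs three entities instead of two.
    d = source_path_goal("two_people_apart", "two_people_apart",
                         waypoints=("two_people_together",))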
The use of different descriptions for the same concept may also lead to different image schemas, which change the visual representation completely. For example, love can also be represented using the metaphor "as unity" [25]. This metaphor implies that there are two parts that make a whole, which leads to the PART-WHOLE image schema (see the examples in Fig. 6). This image schema has a visual representation entirely different from SOURCE PATH GOAL – two entities are now seen as parts of a whole. In SOURCE PATH GOAL the visual representation was more or less intuitive, whereas in PART-WHOLE it is not so obvious. One possible way to represent PART-WHOLE consists of the following procedure: (i) identify the entities and gather their visual representations (middle of Fig. 6); (ii) conduct a visual transformation to make them be seen as "parts" (e.g. cutting them in half); and then the parts can be put together to make one single entity (right side of Fig. 6). However, the transformation used may not work in every situation and the final result might not even have an easy interpretation.

Figure 6. Visually representing love using the metaphor love as unity [25] – examples used (left), initial visual representations (middle) and visual representation for love (right)

3.5. Blending: Image Schema Activation

The blending process aims to represent the meaning of the concept, which requires (i) the correct usage and activation of the image schema(s), achieved by (ii) a correct combination of the input visual representations. In the previous example (love as unity), we already addressed issues that concern how image schemas can be activated in visual blending – transforming the input visual representations (e.g. cutting them in half) and afterwards merging them into a single element in order to activate the PART-WHOLE image schema.

Even using a simple image schema, e.g. CONTAINMENT, its activation may prove to be problematic. The CONTAINMENT image schema can be represented by one of the entities being placed inside the other. Consider, for example, the visual representation of two concepts – (1) "being inside of a boat" and (2) "being inside of a car" – using input visual representations (a person, a boat, and several versions of car, see Fig. 7). One initial attempt to represent the two concepts might be to use the bounding box of the container entity's visual representation for the placement of the contained entity (row A, Fig. 7). However, this approach is not guaranteed to work and may lead to unwanted and even opposite meanings – "swimming / drowning" (boat), "being outside of / next to a car" (car 1 and car 3 activate the IN-OUT image schema), and "being run over" (car 2). Another approach may be to only consider part of the visual representation (e.g. only considering the boat and excluding the water), use its bounding box for placement and apply the necessary transformations (e.g. rotation or scaling) in order to place the contained entity inside it (see row B, Fig. 7). In addition to being dependent on context knowledge (knowing which parts to use), this approach only works in some cases (car 3) and may lead to incorrect solutions in others – car 1 and car 2. A solution for "being inside of a boat" can be achieved by placing the person on top of the boat (boat C). Although this works for boat, using it with the car will not activate the CONTAINMENT image schema (car 1 and car 2) and may even activate other image schemas (e.g. UP-DOWN). In the case of the concept "being inside of a car" using the input visual representations car 1 and car 2, the CONTAINMENT image schema is only activated by considering visibility aspects, carefully adjusting the layer order and placing the person behind the car structure (see row D). Such adjustments are, however, complex to implement in an automatic computational system, as they require context knowledge of the concept and depend on the input visual representations.

Figure 7. Experiments with the CONTAINMENT image schema using different versions of boat and car

The subject of image schema activation is studied by other authors. Hurtienne [25], for example, highlights the importance of "functional geometry" (combining the "appropriate" objects in the spatial scene) in image schema activation.
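The placement strategies compared above (rows A to D of Fig. 7) can be condensed into a single, admittedly naive, bounding-box procedure. The sketch below assumes that icons carry an overall bounding box and, optionally, annotated sub-parts and a layer index; these annotations are assumptions made for illustration and are precisely the kind of context knowledge that real icon sets rarely provide.

    # Naive CONTAINMENT activation by bounding-box placement (illustrative).
    from dataclasses import dataclass, field

    @dataclass
    class Icon:
        name: str
        bbox: tuple                                # (x, y, width, height)
        parts: dict = field(default_factory=dict)  # optional named sub-part bboxes
        layer: int = 0                             # drawing order (higher = on top)

    def place_inside(contained, container, part=None, behind=False):
        """Centre `contained` in the container's bbox (row A), in one of its
        annotated parts (row B), or behind the container structure (row D)."""
        x, y, w, h = container.parts.get(part, container.bbox)
        scale = 0.5 * min(w / contained.bbox[2], h / contained.bbox[3])
        cw, ch = contained.bbox[2] * scale, contained.bbox[3] * scale
        placed_bbox = (x + (w - cw) / 2, y + (h - ch) / 2, cw, ch)
        layer = container.layer - 1 if behind else container.layer + 1
        return Icon(contained.name, placed_bbox, contained.parts, layer)

    person = Icon("person", (0, 0, 20, 40))
    boat = Icon("boat", (0, 0, 100, 60), parts={"hull": (20, 35, 60, 20)})
    car = Icon("car", (0, 0, 120, 50))

    row_a = place_inside(person, boat)               # ends up "in the water"
    row_b = place_inside(person, boat, part="hull")  # needs the hull annotation
    row_d = place_inside(person, car, behind=True)   # person drawn behind the car

Even this small sketch makes the dependence on context knowledge visible: row B only works if the hull is annotated, and row D only works if the renderer respects layer order.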
3.6. Blending: Combination

Having addressed the issue of activating an image schema, we now focus on the combination of entities to achieve a given meaning. Consider, for example, the concept mother ship, addressed in [7], which is highly related to the CONTAINMENT image schema. However, the combination process is far from simple, even assuming that, for the visual representation of a given concept (e.g. mother ship), the adequate image schema is identified (e.g. CONTAINMENT), suitable entities are chosen (e.g. mother, baby and ship, see Fig. 8) and the system has knowledge of how to correctly apply the image schema (e.g. placing the contained entity inside the container entity). The first issue concerns the assignment of the container and the contained entities. For mother ship this is not trivial, as both mother and ship can be seen as containers (mother may "contain" a baby and ship may "contain" cargo). As such, each one of these possible interpretations can lead to a solution (a and b in Fig. 8) but these may not be considered valid – a can be considered nonsense and b may lead to other meanings (e.g. the ship that carried Superman to Earth when he was still a baby).

Figure 8. Input visual representations for mother, baby and ship (left) and three visual representations for mother ship (right)

The solutions a and b were produced in a process of visual blending that only considered conceptual aspects of the individual entities (mother and ship), using the spatial relation inside to represent CONTAINMENT. In these solutions, the combination was performed without regarding conceptual aspects of the concept (mother ship), resulting in two "nonsense" blends (a ship baby inside a human mother and a human baby inside a ship mother). Although this may lead to possible solutions in certain situations, mother ship can be seen as a conceptual blend between the input spaces mother and ship and its visual representation should take this aspect into consideration. The idea behind the concept mother is not only one of CONTAINMENT but of CONTAINMENT of individuals of the same class – the mapping between mother and ship means that a mother ship is a ship that contains other ships (c in Fig. 8).
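The role-assignment problem can be stated compactly: from the inputs alone, every entity that can act as a container licenses a reading. The enumeration below is only a schematic restatement of solutions a to c in Fig. 8 (the predicate and labels are hypothetical), not an implemented blending procedure.

    # Role assignments for CONTAINMENT over the inputs of "mother ship" (schematic).
    def candidate_assignments(entities, can_contain):
        """Every (container, contained) pair licensed by the individual inputs."""
        return [(c, o) for c in entities for o in entities
                if c != o and can_contain(c)]

    entities = ["mother", "ship"]
    can_contain = lambda e: e in {"mother", "ship"}  # both inputs afford CONTAINMENT

    print(candidate_assignments(entities, can_contain))
    # [('mother', 'ship'), ('ship', 'mother')] -> roughly readings a and b in Fig. 8.
    # Reading c (a ship containing other, smaller ships) does not follow from the
    # individual inputs; it requires treating "mother ship" itself as a blend in
    # which CONTAINMENT holds between individuals of the same class.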
4. Conclusion

Computational systems that generate visual representations of concepts mostly focus on perceptual features. The main goal of this work was to highlight the potential of considering affordances in such systems. We described an approach for their integration in systems such as the ones presented in [5, 26], based on the detection of image schemas behind the concepts being represented. We presented several examples to show the importance of image schemas in the design of visual representations (e.g. icons), identified issues that need to be addressed when implementing the proposed approach and compared solutions in terms of validity.

The main issues concern: (i) the identification of the image schema, (ii) the visual representation of the image schema, (iii) the choice of the adequate entities, (iv) the meaning variation triggered by using different examples, (v) the image schema activation and (vi) the suitable combination of elements in the blending process.

Despite all these issues (and others that may also exist), we believe that there is potential in the proposed approach for improving existing systems that visually represent concepts. Future work will focus on the implementation of the proposed approach, addressing the identified issues and assessing the system performance in comparison with existing approaches.

5. Acknowledgments

João M. Cunha is partially funded by Fundação para a Ciência e Tecnologia (FCT), Portugal, under the grant SFRH/BD/120905/2016. The authors would like to thank the reviewers and the workshop participants for their comments, which unquestionably helped to improve the paper.

References

[1] J.M. Mandler, Perceptual and conceptual processes in infancy, Journal of Cognition and Development 1(1) (2000), 3–36.
[2] M.M. Hedblom and O. Kutz, Shape up, baby! Perception, Image Schemas, and Shapes in Concept Formation, in: Proceedings of the Third Interdisciplinary Workshop SHAPES 3.0 – The Shape of Things, 2015.
[3] P. Xiao and S. Linkola, Vismantic: Meaning-making with Images, in: Proceedings of the Sixth International Conference on Computational Creativity (ICCC-15), 2015.
[4] J.M. Cunha, J. Gonçalves, P. Martins, P. Machado and A. Cardoso, A Pig, an Angel and a Cactus Walk Into a Blender: A Descriptive Approach to Visual Blending, in: Proceedings of the Eighth International Conference on Computational Creativity (ICCC-17), 2017.
[5] J.M. Cunha, P. Martins and P. Machado, How Shell and Horn make a Unicorn: Experimenting with Visual Blending in Emoji, in: Proceedings of the Ninth International Conference on Computational Creativity (ICCC-18), 2018.
[6] W. Kuhn, An image-schematic account of spatial categories, in: International Conference on Spatial Information Theory, Springer, 2007, pp. 152–168.
[7] M.M. Hedblom, O. Kutz and F. Neuhaus, Image schemas in computational conceptual blending, Cognitive Systems Research 39 (2016), 42–57.
[8] M. Johnson, The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason, University of Chicago Press, 1987.
[9] B. Bennett and C. Cialone, Corpus Guided Sense Cluster Analysis: a methodology for ontology development (with examples from the spatial domain), in: FOIS, 2014, pp. 213–226.
[10] M.M. Hedblom, O. Kutz and F. Neuhaus, Choosing the right path: image schema theory as a foundation for concept invention, Journal of Artificial General Intelligence 6(1) (2015), 21–54.
[11] M.M. Hedblom, O. Kutz, T. Mossakowski and F. Neuhaus, Between Contact and Support: Introducing a Logic for Image Schemas and Directed Movement, in: Conference of the Italian Association for Artificial Intelligence, Springer, 2017, pp. 256–268.
[12] T.R. Besold, M.M. Hedblom and O. Kutz, A narrative in three acts: Using combinations of image schemas to model events, Biologically Inspired Cognitive Architectures 19 (2017), 10–20.
[13] R. Confalonieri, J. Corneli, A. Pease, E. Plaza and M. Schorlemmer, Using argumentation to evaluate concept blends in combinatorial creativity, in: Proceedings of the Sixth International Conference on Computational Creativity, 2015, pp. 174–181.
[14] R. Johnson, Prototype theory, cognitive linguistics and pedagogical grammar, Working Papers in Linguistics and Language Training 8 (1985), 12–24.
[15] Z. Falomir and E. Plaza, Towards a model of creative understanding: deconstructing and recreating conceptual blends using image schemas and qualitative spatial descriptors, Annals of Mathematics and Artificial Intelligence (2019).
[16] C.K. Bliss, Semantography (Blissymbolics): A Logical Writing for an Illogical World, Semantography Blissymbolics Publ., 1965.
[17] J.M. Cunha, P. Martins, A. Cardoso and P. Machado, Generation of Concept-Representative Symbols, in: Workshop Proceedings of the 23rd International Conference on Case-Based Reasoning (ICCBR-WS 2015), CEUR, 2015.
[18] M.M. Hedblom, O. Kutz and F. Neuhaus, Image Schemas and Concept Invention, in: Concept Invention, Springer, 2018, pp. 99–132.
[19] J.A. Bateman, J. Hois, R. Ross and T. Tenbrink, A linguistic ontology of space for natural language processing, Artificial Intelligence 174(14) (2010), 1027–1071.
[20] J.M. Cunha, P. Martins and P. Machado, Emojinating: Representing Concepts Using Emoji, in: Workshop Proceedings from the 26th International Conference on Case-Based Reasoning (ICCBR 2018), Stockholm, Sweden, 2018, p. 185.
[21] E.R. Tufte, Visual Explanations: Images and Quantities, Evidence and Narrative, Graphics Press, Cheshire, Connecticut.
[22] J. von Engelhardt, The Language of Graphics: A Framework for the Analysis of Syntax and Meaning in Maps, Charts and Diagrams, Yuri Engelhardt, 2002.
[23] J. Bateman, J. Wildfeuer and T. Hiippala, Multimodality: Foundations, Research and Analysis – A Problem-Oriented Introduction, Walter de Gruyter GmbH & Co KG, 2017.
[24] A. Black, P. Luna, O. Lund and S. Walker, Information Design: Research and Practice, Taylor & Francis, 2017.
[25] J. Hurtienne, Image schemas and design for intuitive use: New guidance for user interface design, PhD thesis, Technische Universität Berlin, 2009.
[26] J.M. Cunha, N. Lourenço, J. Correia, P. Martins and P. Machado, Emojinating: Evolving Emoji Blends, in: Proceedings of the 8th International Conference on Computational Intelligence in Music, Sound, Art and Design (to appear), Springer, 2019.