                                The EASY-AI Symbology
                                Alexis L. Ellis1,∗ , Cogan Shimizu1
1 Wright State University


                                                                         Abstract
                                                                         As artificial intelligence (AI) surges into the forefront of research and the lives of everyday people,
                                                                         challenges in understanding and communicating how these systems operate are becoming more prevalent.
                                                                         The need for a common language for AI systems that allows for multidisciplinary understanding and
                                                                         communication is a prevalent topic within the field. In this work, we take the visual framework EASY-AI
                                                                         and create a symbolic system that overlays the framework’s ontology to facilitate such communication
and understanding.

                                                                         Keywords
                                                                         Explainable AI, Human Machine Interaction, Human Factors Engineering, Symbology, Education




                                1. Introduction
As artificial intelligence (AI) moves into the spotlight, both as a major research interest and through its rapid implementation in our daily lives, challenges in understanding and communicating these systems are becoming more prevalent. These challenges will only grow in complexity as the state of the art [1] continues to evolve. Constructing a common language that facilitates inclusive collaboration among all AI users, a fundamental understanding of systems, and a means of explainability is therefore becoming necessary. There are many ways to tackle the interpretability, communication, and explainability of AI; one is a visual framework that uses symbols to represent AI systems. Using symbols to convey information is not a new idea: symbols can be traced from our earliest recorded history [2] to their present-day use as emojis, capable of conveying entire thoughts and feelings [3]. Through a visual framework built on symbols, we can break down AI systems ranging from the simplest to the highly complex systems we will encounter in the future. Our work on sEmantic And compoSable glYphs to represent artificial intelligence systems (EASY-AI) [4] aims to provide such a framework, combining an ontology that facilitates in-depth understanding with a symbol-driven visual component that creates a more digestible representation of AI systems.




                                Posters, Demos, and Industry Tracks at ISWC 2024, November 13–15, 2024, Baltimore, USA
∗ Cogan Shimizu.
ellis.177@wright.edu (A. L. Ellis); cogan.shimizu@wright.edu (C. Shimizu)
ORCID: 0009-0000-8098-4262 (A. L. Ellis); 0000-0003-4283-8701 (C. Shimizu)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073




2. Related Work
Work on visual frameworks that provide explainability, multidisciplinary collaboration, and transparency for AI is beginning to surface in the field, notably the Boxology framework [5]. The Boxology takes a broad approach, using simple geometric shapes to represent the components and algorithms of an AI system, with arrows indicating the flow of data from one component to another. Our framework, EASY-AI, extends the Boxology by providing a deeper level of explainability through an underlying ontology. Initially, the ontology standardized and constrained how the Boxology's shapes could interact, using axioms derived from [6, 7], as shown in the top-right corner of figure 2. However, when we attempted to fit the EASY-AI framework to a complex system using only the Boxology as the visual representation, problems in the organization and presentation of systems became apparent. We have therefore extended EASY-AI with a symbology that condenses the visual representation of both simple and complex systems, helping to overcome this challenge.
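The kind of constraint such an ontology imposes can be illustrated with a small sketch. The component names and the connection rules below are illustrative only, not the actual EASY-AI axioms (which are derived from [6, 7]):

```python
# Hypothetical sketch of ontology-style connection constraints between
# Boxology-like components. The allowed pairs play the role of axioms
# restricting how shapes may interact; they are not the EASY-AI axioms.
ALLOWED = {
    ("input-type-data", "process-type-train"),
    ("process-type-train", "model-type-statistical"),
    ("model-type-statistical", "process-type-deduce"),
    ("process-type-deduce", "output"),
}

def validate(edges):
    """Return the edges that are not licensed by any connection axiom."""
    return [e for e in edges if e not in ALLOWED]

system = [
    ("input-type-data", "process-type-train"),
    ("process-type-train", "model-type-statistical"),
    ("input-type-data", "output"),  # not licensed by any axiom
]
print(validate(system))  # → [('input-type-data', 'output')]
```

A checker of this shape is what lets the framework reject ill-formed diagrams before they are rendered, rather than leaving correctness to the diagram author.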


3. The EASY-AI Glyphs
The symbology of EASY-AI is inspired by the drag-and-drop symbolic widget interface of the Orange Data Mining tool [8]. EASY-AI's symbolic widgets represent the instances of data, processes, and models that the Boxology framework distinguishes, but these new symbols can connect and collapse to lower the user's visual load, reducing the chance of inattentional blindness that a high visual load can cause [9]. Figure 1 shows each of the symbols used in the use case of Section 3.1. Beginning at the top left and moving right, they are input-type-data, process-type-train, model-type-statistical, and process-type-deduce; the next row, again from left to right, contains model-type-semantic, process-type-reinforced-learning, input-type-symbol, and output.

3.1. Use Case
Figure 1: The EASY-AI Glyphs.

To demonstrate how the glyphs from figure 1 work in the framework, we applied them to the Roomba use case [10], which originally used the initial EASY-AI framework with the Boxology as the visual component, depicted in the bottom right of figure 2. The Roomba use case is not a very complex system to represent; even so, from this visualization it is not difficult to imagine how chaotic and confusing the framework could become if it attempted to represent a highly complex AI system. This initial layer is the foundational layer, offering a more technical representation of AI systems. Using the new symbology, we swapped the Boxology representation of the Roomba, seen in the bottom depiction of figure 2, for the symbology reconstruction on the bottom left of figure 2.

Figure 2: The EASY-AI Roomba Use Case.

To reduce the amount of visual information that the user ingests, the framework can then be collapsed if necessary, as demonstrated in the top-left depiction of figure 2. The framework is meant to expand or collapse as the user's needs vary and according to how deeply they wish to understand the system.
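The expand/collapse behavior described above can be sketched as a simple tree of glyph nodes. This is a minimal illustration under our own naming; it is not the EASY-AI implementation:

```python
# Minimal sketch of collapsible glyph composition: a collapsed node hides
# its children, reducing the number of glyphs the user must take in.
class Glyph:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.collapsed = False

    def visible(self):
        """Glyph names currently shown, honoring collapsed subtrees."""
        if self.collapsed or not self.children:
            return [self.name]
        shown = []
        for child in self.children:
            shown.extend(child.visible())
        return shown

# Illustrative grouping only; glyph names follow Figure 1.
roomba = Glyph("roomba-system", [
    Glyph("input-type-data"),
    Glyph("training", [Glyph("process-type-train"),
                       Glyph("model-type-statistical")]),
    Glyph("output"),
])

print(len(roomba.visible()))  # expanded: 4 leaf glyphs
roomba.collapsed = True
print(roomba.visible())       # collapsed: ['roomba-system']
```

Expanding or collapsing any subtree changes only what is rendered, not the underlying system description, which mirrors how the framework lets the user choose their depth of understanding.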


4. Conclusion
With the rapid advancements in artificial intelligence (AI), a way to explain, understand, and communicate these systems is becoming a prevalent challenge, not only for domain experts but also for casual users. The EASY-AI framework is set to overcome these challenges by providing a common language. Paired with EASY-AI's underlying ontology for explainability, we have now provided a simple and easily digestible way to communicate systems to one another by adapting a symbology to the EASY-AI framework. While these symbols have not yet undergone user testing, we hope to conduct such testing in the near future. Additionally, we will implement the symbolic nesting for SNOOP-AI [11] in CoModIDE [12].


Acknowledgments
This work was partially funded by DAGSI/AFRL under award RX23-8, The Advanced Air
Mobility (AAM) grant, and NSF Award #2333532.
References
 [1] A. Teije, F. Harmelen, Chapter 3. Architectural Patterns for Neuro-Symbolic AI, 2023.
     doi:10.3233/FAIA230135.
 [2] C. Mühlenbeck, T. Jacobsen, On the origin of visual symbols, Journal of Comparative
     Psychology 134 (2020) 435–452. URL: https://doi.apa.org/doi/10.1037/com0000229.
     doi:10.1037/com0000229.
 [3] M. A. Riordan, Emojis as Tools for Emotion Work: Communicating Affect in Text Messages,
     Journal of Language and Social Psychology 36 (2017) 549–567. URL: https://doi.org/10.
     1177/0261927X17704238. doi:10.1177/0261927X17704238. Publisher: SAGE Publications Inc.
 [4] A. Ellis, B. Dave, H. Salehi, S. Ganapathy, C. Shimizu, Semantic and composable glyphs to
     represent AI systems, Hybrid Human Artificial Intelligence, 2024.
 [5] M. van Bekkum, M. de Boer, F. van Harmelen, A. Meyer-Vitali, A. t. Teije, Modular design
     patterns for hybrid learning and reasoning systems: a taxonomy, patterns and use cases,
     Applied Intelligence 51 (2021) 6528–6546. Publisher: Springer.
 [6] A. Eberhart, C. Shimizu, S. Chowdhury, M. K. Sarker, P. Hitzler, Expressibility of OWL Ax-
     ioms with Patterns, in: R. Verborgh, K. Hose, H. Paulheim, P.-A. Champin, M. Maleshkova,
     O. Corcho, P. Ristoski, M. Alam (Eds.), The Semantic Web, volume 12731, Springer In-
     ternational Publishing, Cham, 2021, pp. 230–245. URL: https://link.springer.com/10.1007/
     978-3-030-77385-4_14. doi:10.1007/978-3-030-77385-4_14. Series: Lecture Notes in
     Computer Science.
 [7] C. Shimizu, P. Hitzler, Q. Hirt, D. Rehberger, S. G. Estrecha, C. Foley, A. M. Sheill,
     W. Hawthorne, J. Mixter, E. Watrall, The enslaved ontology: Peoples of the historic
     slave trade, Journal of Web Semantics 63 (2020) 100567. URL:
     https://www.sciencedirect.com/science/article/pii/S1570826820300135. Publisher: Elsevier.
 [8] Bioinformatics Laboratory, University of Ljubljana, Orange data mining - data mining
     fruitful & fun, https://orangedatamining.com/, 2023. Version 3.33.0.
 [9] C. M. Greene, G. Murphy, J. Januszewski, Under High Perceptual Load, Observers Look but
     Do Not See, Applied Cognitive Psychology 31 (2017) 431–437. URL: https://onlinelibrary.
     wiley.com/doi/abs/10.1002/acp.3335. doi:10.1002/acp.3335.
[10] T. D. Hawkes, T. J. Bihl, Symbols to represent AI systems, in: NAECON 2021 - IEEE National
     Aerospace and Electronics Conference, IEEE, Dayton, OH, USA, 2021, pp. 61–68. URL: https:
     //ieeexplore.ieee.org/document/9696419/. doi:10.1109/NAECON49338.2021.9696419.
[11] A. Ellis, B. Dave, H. Salehi, S. Ganapathy, C. Shimizu, Implementing SNOOP-AI in CoModIDE,
     National Aerospace and Electronics Conference, 2024.
[12] C. Shimizu, K. Hammar, CoModIDE - the comprehensive modular ontology engineering
     IDE, in: ISWC 2019 Satellite Tracks (Posters & Demonstrations, Industry, and Outrageous
     Ideas) co-located with 18th International Semantic Web Conference (ISWC 2019), Auckland,
     New Zealand, October 26-30, 2019, volume 2456, CEUR-WS, 2019, pp. 249–252. URL:
     https://www.diva-portal.org/smash/get/diva2:1355737/FULLTEXT01.pdf.