

Preface of the First Workshop “Models in AI”

Ulrich Reimer,1 Dominik Bork,2 Peter Fettke,3 Marina Tropmann-Frick4




Preface

With the increasing availability of large amounts of data in practically all application areas,
the field of artificial intelligence (AI) has been attracting increasing attention for some
time now. Earlier approaches to AI were primarily associated with the knowledge-based
paradigm where systems include domain-specific knowledge bases that provide the required
(background) knowledge. The construction and maintenance of such domain models have to
be done largely manually and therefore require a great deal of time and money, which is
why these approaches scale poorly.
The new paradigm of data-driven AI, i.e. learning domain models and keeping them
up-to-date by using data mining techniques, can help overcome these disadvantages. The
models learned may be either symbolic (e.g. rules, decision trees) or sub-symbolic (neural
networks). Classical data mining, however, also involves considerable manual effort, in
particular for feature engineering. The rise of deep learning approaches, which embed
the identification of relevant features into the learning processes themselves, may well
contribute further to automating the creation of AI systems. In these cases, the generated
models are of a sub-symbolic nature.
Data-driven AI is based on models derived from data, which usually cannot be inspected
and understood by a human being: Models of a symbolic nature tend to be too complex,
whereas sub-symbolic models do not contain structural elements that can be understood
by humans. In many application scenarios, however, it is important that recommendations
or diagnoses suggested by an AI system can be understood and assessed by humans. The
ability of an AI system to explain individual decisions, as well as the possibility to inspect
and thus comprehend the underlying models, is therefore of central importance for the
future use of such systems.
The workshop “Models in AI” focuses on the topics mentioned above, with the papers highlighting
various aspects of the role of models in data-driven AI systems. The workshop is set up to
1 Fachhochschule St. Gallen, ulrich.reimer@fhsg.ch
2 Universität Wien, Fakultät für Informatik, Währinger Strasse 29, 1090 Wien, dominik.bork@univie.ac.at
3 German Research Center for Artificial Intelligence (DFKI) and Saarland University, peter.fettke@dfki.de
4 Hochschule für Angewandte Wissenschaften Hamburg, Berliner Tor 7, 20099 Hamburg, Germany, Marina.Tropmann-Frick@haw-hamburg.de


Copyright © 2020 for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

allow for ample time for discussion in order to consider a range of approaches and identify
the current gaps as well as needs for further research. The outcomes of the workshop should
therefore lay the foundation for future workshops on this topic.
Program Committee: Klaus-Dieter Althoff (DFKI), Kerstin Bach (Norwegian University of
Science and Technology), Ralph Bergmann (Universität Trier), Michael Fellmann (Universität
Rostock), Michael Guckert (Technische Hochschule Mittelhessen), Udo Hahn (Universität
Jena), Siegfried Handschuh (Universität St. Gallen), Knut Hinkelmann (Fachhochschule
Nordwestschweiz), Dimitris Karagiannis (Universität Wien), Mirjam Minor (Universität
Frankfurt), York Sure (Karlsruhe Institute of Technology), Bernhard Thalheim (Universität
Kiel), Mathias Weske (Universität Potsdam, HPI), Stefan Wrobel (Fraunhofer IAIS)