=Paper=
{{Paper
|id=Vol-3215/preface
|storemode=property
|title=None
|pdfUrl=https://ceur-ws.org/Vol-3215/preface.pdf
|volume=Vol-3215
}}
==None==
The IJCAI-ECAI-22 Workshop on Artificial Intelligence Safety
(AISafety2022)
Gabriel Pedroza1, Xin Cynthia Chen2, José Hernández-Orallo3, Xiaowei Huang4, Huascar
Espinoza5, Richard Mallah6, John McDermid7, Mauricio Castillo-Effen8
1 CEA LIST, France, gabriel.pedroza@cea.fr
2 University of Hong Kong, China, cyn0531@connect.hku.hk
3 Universitat Politècnica de València, Spain, jorallo@upv.es
4 University of Liverpool, Liverpool, United Kingdom, xiaowei.huang@liverpool.ac.uk
5 KDT JU, Belgium, Huascar.Espinoza@ecsel.europa.eu
6 Future of Life Institute, USA, richard@futureoflife.org
7 University of York, United Kingdom, john.mcdermid@york.ac.uk
8 Lockheed Martin, Advanced Technology Laboratories, Arlington, VA, USA, mauricio.castillo-effen@lmco.com
Abstract

We summarize the IJCAI-ECAI-22 Workshop on Artificial Intelligence Safety (AISafety 2022)1, held at the 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI-22) on July 24-25, 2022 in Vienna, Austria.

Introduction

Safety in Artificial Intelligence (AI) is increasingly becoming a substantial part of AI research, deeply intertwined with the ethical, legal and societal issues associated with AI systems. Even if AI safety is considered a design principle, there are varying levels of safety, diverse sets of ethical standards and values, and varying degrees of liability, for which we need to deal with trade-offs or alternative solutions. These choices can only be analyzed holistically if we integrate technological and ethical perspectives into the engineering problem, and consider both the theoretical and practical challenges for AI safety. This view must cover a wide range of AI paradigms, considering systems that are specific to a particular application as well as those that are more general, which may lead to unanticipated risks. We must bridge short-term with long-term perspectives, idealistic goals with pragmatic solutions, operational with policy issues, and industry with academia, in order to build, evaluate, deploy, operate and maintain AI-based systems that are truly safe.

1 Workshop series website: https://www.aisafetyw.org/
Copyright © 2022 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

The IJCAI-ECAI-22 Workshop on Artificial Intelligence Safety (AISafety 2022) seeks to explore new ideas in AI
safety with a particular focus on addressing the following questions:

● What is the status of existing approaches for ensuring AI and Machine Learning (ML) safety and what are the gaps?
● How can we engineer trustworthy AI software architectures?
● How can we make AI-based systems more ethically aligned?
● What safety engineering considerations are required to develop safe human-machine interaction?
● What AI safety considerations and experiences are relevant from industry?
● How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
● How can we develop solid technical visions and new paradigms about AI safety?
● How do metrics of capability and generality, and trade-offs with performance, affect safety?

These are the main topics of the series of AISafety workshops. They aim to achieve a holistic view of AI and safety engineering, taking ethical and legal issues into account, in order to build trustworthy intelligent autonomous machines. The first edition of AISafety was held on August 10-12, 2019, in Macao (China) as part of the 28th International Joint Conference on Artificial Intelligence (IJCAI-19), and the second edition was held virtually on January 7-8, 2021, also as part of IJCAI. This fourth edition was held in Vienna at the 31st International Joint Conference on Artificial Intelligence (IJCAI-ECAI-22) on July 24-25th.

Program

The Program Committee (PC) received 26 submissions. Each paper was peer-reviewed by at least two PC members, following a single-blind reviewing process. The committee decided to accept 13 full papers and 6 short presentations, resulting in a full-paper acceptance rate of 50% and an overall acceptance rate of 73%.
The AISafety 2022 program was organized into six thematic sessions, one (invited) special session, two keynotes and four (invited) talks. The special session was given flexibility to structure its program and format.
The thematic sessions followed a highly interactive format. They were structured into short pitches and a group debate panel slot to discuss both individual paper contributions and shared topic issues. Three specific roles were part of this format: session chairs, presenters and session discussants.

● Session Chairs introduced sessions and participants. The Chair moderated sessions and plenary discussions, monitored time, and moderated questions and discussions from the audience.
● Presenters gave a 10-minute paper talk and participated in the debate slot. Short presentations were given 5 minutes per paper.
● Session Discussants gave a critical review of the session papers, and participated in the plenary debate.

Presentations and papers were grouped by topic as follows:

Session 1: AI Ethics: Fairness, Bias, and Accountability
● Let it RAIN for Social Good, Mattias Brännström, Andreas Theodorou and Virginia Dignum.
● Accountability and Responsibility of Artificial Intelligence Decision-making Models in Indian Policy Landscape, Palak Malhotra and Amita Misra.
● Assessing Demographic Bias Transfer from Dataset to Model: A Case Study in Facial Expression Recognition, Iris Dominguez-Catena, Daniel Paternain and Mikel Galar.

Session 2: Short Presentations - Safety Assessment of AI-enabled Systems
● A Hierarchical HAZOP-Like Safety Analysis for Learning-Enabled Systems, Yi Qi, Philippa Ryan Conmy, Wei Huang, Xingyu Zhao and Xiaowei Huang.
● Increasingly Autonomous CPS: Taming Emerging Behaviors from an Architectural Perspective, Jerome Hugues and Daniela Cancila.
● CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness, Julien Girard-Satabin, Michele Alberti, François Bobot, Zakaria Chihani and Augustin Lemesle.

Session 3: Machine Learning for Safety-Critical AI
● Revisiting the Evaluation of Deep Neural Networks for Pedestrian Detection, Patrick Feifel, Benedikt Franke, Arne Raulf, Friedhelm Schwenker, Frank Bonarens and Frank Köster.
● Improvement of Rejection for AI Safety through Loss-Based Monitoring, Daniel Scholz, Florian Hauer, Klaus Knobloch and Christian Mayr.

Special Session: TAILOR - Towards Trustworthy AI
● Foundations of Trustworthy AI*, Francesca Pratesi.
● Panel on Trustworthy AI*, Fosca Giannotti, Philipp Slusallek, Giuseppe De Giacomo, Hector Geffner, Holger Hoos.

*Presentations without papers.
Session 4: Short Presentations - ML Robustness, Criticality and Uncertainty
● Utilizing Class Separation Distance for the Evaluation of Corruption Robustness of Machine Learning Classifiers, Georg Siedel, Silvia Vock, Andrey Morozov and Stefan Voß.
● Safety-aware Active Learning with Perceptual Ambiguity and Criticality Assessment, Prajit T Rajendran, Guillaume Ollier, Huascar Espinoza, Morayo Adedjouma, Agnes Delaborde and Chokri Mraidha.
● Understanding Adversarial Examples Through Deep Neural Network's Classification Boundary and Uncertainty Regions, Juan Shu, Bowei Xi and Charles Kamhoua.

Session 5: AI Robustness, Generative Models and Adversarial Learning
● Leveraging generative models to characterize the failure conditions of image classifiers, Adrien Le Coz, Stéphane Herbin and Faouzi Adjed.
● Feasibility of Inconspicuous GAN-generated Adversarial Patches against Object Detection, Svetlana Pavlitskaya, Bianca-Marina Codău and J. Marius Zöllner.
● Privacy Safe Representation Learning via Frequency Filtering Encoder, Jonghu Jeong, Minyong Cho, Philipp Benz, Jinwoo Hwang, Jeewook Kim, Seungkwan Lee and Tae-hoon Kim.
● Benchmarking and deeper analysis of adversarial patch attack on object detectors, Pol Labarbarie, Adrien Chan Hon Tong, Stéphane Herbin and Milad Leyli-Abadi.

Session 6: AI Accuracy, Diversity, Causality and Optimization
● The impact of averaging logits over probabilities on ensembles of neural networks, Cedrique Rovile Njieutcheu Tassi, Jakob Gawlikowski, Auliya Unnisa Fitri and Rudolph Triebel.
● Exploring Diversity in Neural Architectures for Safety, Michał Filipiuk and Vasu Singh.
● Constrained Policy Optimization for Controlled Contextual Bandit Exploration, Mohammad Kachuee and Sungjin Lee.
● A causal perspective on AI deception in games, Francis Rhys Ward, Francesco Belardinelli and Francesca Toni.

AISafety was pleased to have several additional inspirational researchers as invited speakers:

Keynotes
● Gary Marcus, Towards a Proper Foundation for Robust Artificial Intelligence
● Thomas A. Henzinger, Formal Methods meet Neural Networks: A Selection

Invited Talks
● Elizabeth Adams, Leadership of Responsible AI – Representation Matters
● Luis Aranda, Enabling AI governance: OECD's work on moving from Principles to practice
● Simos Gerasimou, SESAME: Secure and Safe AI-Enabled Robotics Systems
● Zakaria Chihani, A selected view of AI trustworthiness methods: How far can we go?

Acknowledgements

We thank all researchers who submitted papers to AISafety 2022 and congratulate the authors whose papers were selected for inclusion into the workshop program and proceedings.
We especially thank our distinguished PC members for reviewing the submissions and providing useful feedback to the authors:

● Simos Gerasimou, University of York, UK
● Jonas Nilson, NVIDIA, USA
● Morayo Adedjouma, CEA LIST, France
● Brent Harrison, University of Kentucky, USA
● Alessio R. Lomuscio, Imperial College London, UK
● Brian Tse, Affiliate at University of Oxford, China
● Michael Paulitsch, Intel, Germany
● Ganesh Pai, NASA Ames Research Center, USA
● Rob Alexander, University of York, UK
● Vahid Behzadan, University of New Haven, USA
● Chokri Mraidha, CEA LIST, France
● Ke Pei, Huawei, China
● Orlando Avila-García, Arquimea Research Center, Spain
● I-Jeng Wang, Johns Hopkins University, USA
● Chris Allsopp, Frazer-Nash Consultancy, UK
● Andrea Orlandini, ISTC-CNR, Italy
● Agnes Delaborde, LNE, France
● Rasmus Adler, Fraunhofer IESE, Germany
● Roel Dobbe, TU Delft, The Netherlands
● Vahid Hashemi, Audi, Germany
● Juliette Mattioli, Thales, France
● Bonnie W. Johnson, Naval Postgraduate School, USA
● Roman V. Yampolskiy, University of Louisville, USA
● Jan Reich, Fraunhofer IESE, Germany
● Fateh Kaakai, Thales, France
● Francesca Rossi, IBM and University of Padova, USA
● Javier Ibañez-Guzman, Renault, France
● Jérémie Guiochet, LAAS-CNRS, France
● Raja Chatila, Sorbonne University, France
● François Terrier, CEA LIST, France
● Mehrdad Saadatmand, RISE Research Institutes of
Sweden, Sweden
● Alec Banks, Defence Science and Technology
Laboratory, UK
● Roman Nagy, Argo AI, Germany
● Nathalie Baracaldo, IBM Research, USA
● Toshihiro Nakae, DENSO Corporation, Japan
● Gereon Weiss, Fraunhofer ESK, Germany
● Philippa Ryan Conmy, Adelard, UK
● Stefan Kugele, Technische Hochschule Ingolstadt,
Germany
● Colin Paterson, University of York, UK
● Davide Bacciu, Università di Pisa, Italy
● Timo Sämann, Valeo, Germany
● Sylvie Putot, Ecole Polytechnique, France
● John Burden, University of Cambridge, UK
● Sandeep Neema, DARPA, USA
● Fredrik Heintz, Linköping University, Sweden
● Simon Fürst, BMW Group, Germany
● Mario Gleirscher, University of Bremen, Germany
● Mandar Pitale, NVIDIA, USA
● Leon Kester, TNO, The Netherlands
● Gabriel Pedroza, CEA LIST, France
● Huáscar Espinoza, KDT JU, Belgium
● Xiaowei Huang, University of Liverpool, UK
● José Hernández-Orallo, Universitat Politècnica de
València, Spain
● Mauricio Castillo-Effen, Lockheed Martin, USA
● Xin Cynthia Chen, University of Hong Kong, China
● Richard Mallah, Future of Life Institute, USA
● John McDermid, University of York, United Kingdom
We thank Gary Marcus, Thomas A. Henzinger, Elizabeth
Adams, Luis Aranda, Simos Gerasimou, and Zakaria
Chihani for their inspiring talks.
Finally, we thank the IJCAI-ECAI-22 organization for providing an excellent framework for AISafety 2022.