<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The IJCAI-ECAI-22 Workshop on Artificial Intelligence Safety (AISafety2022)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gabriel Pedroza</string-name>
          <email>gabriel.pedroza@cea.fr</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xin Cynthia Chen</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>José Hernández-Orallo</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xiaowei Huang</string-name>
          <email>xiaowei.huang@liverpool.ac.uk</email>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Huáscar Espinoza</string-name>
          <email>Huascar.Espinoza@ecsel.europa.eu</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Richard Mallah</string-name>
          <email>richard@futureoflife.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>John McDermid</string-name>
          <email>john.mcdermid@york.ac.uk</email>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mauricio Castillo-Effen</string-name>
          <email>mauricio.castillo-effen@lmco.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Future of Life Institute</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Lockheed Martin, Advanced Technology Laboratories</institution>
          ,
          <addr-line>Arlington, VA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Universitat Politècnica de València</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Hong Kong</institution>
          ,
          <country country="CN">China</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of Liverpool</institution>
          ,
          <addr-line>Liverpool</addr-line>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>University of York</institution>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>We summarize the IJCAI-ECAI-22 Workshop on Artificial Intelligence Safety (AISafety 2022), held at the 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI-22) on July 24-25, 2022, in Vienna, Austria.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>Introduction</title>
      <p>Safety in Artificial Intelligence (AI) is increasingly
becoming a substantial part of AI research, deeply
intertwined with the ethical, legal, and societal issues
associated with AI systems. Even if AI safety is considered
a design principle, there are varying levels of safety,
diverse sets of ethical standards and values, and varying
degrees of liability, for which we need to deal with
trade-offs or alternative solutions. These choices can only
be analyzed holistically if we integrate technological and
ethical perspectives into the engineering problem, and
consider both the theoretical and practical challenges for
AI safety. This view must cover a wide range of AI
paradigms, including systems that are specific to a
particular application as well as more general ones,
which may lead to unanticipated risks. We must bridge
short-term with long-term perspectives, idealistic goals
with pragmatic solutions, operational with policy issues,
and industry with academia, in order to build, evaluate,
deploy, operate, and maintain AI-based systems that are
truly safe.</p>
      <p>Workshop series website: https://www.aisafetyw.org/
Copyright © 2022 for this paper by its authors. Use
permitted under Creative Commons License Attribution 4.0
International (CC BY 4.0).</p>
      <p>The IJCAI-ECAI-22 Workshop on Artificial Intelligence
Safety (AISafety 2022) seeks to explore new ideas in AI
safety with a particular focus on addressing the following
questions:
● What is the status of existing approaches for ensuring
AI and Machine Learning (ML) safety and what are the
gaps?
● How can we engineer trustworthy AI software
architectures?
● How can we make AI-based systems more ethically
aligned?
● What safety engineering considerations are required to
develop safe human-machine interaction?
● What AI safety considerations and experiences are
relevant from industry?
● How can we characterize or evaluate AI systems
according to their potential risks and vulnerabilities?
● How can we develop solid technical visions and new
paradigms about AI safety?
● How do metrics of capability and generality, and
trade-offs with performance, affect safety?
These are the main topics of the series of AISafety
workshops. They aim to achieve a holistic view of AI and
safety engineering, taking ethical and legal issues into
account, in order to build trustworthy intelligent
autonomous machines. The first edition of AISafety was
held on August 10-12, 2019, in Macao (China) as part of
the 28th International Joint Conference on Artificial
Intelligence (IJCAI-19), and the second edition was held
virtually on January 7-8, 2021, also as part of IJCAI. This
fourth edition was held in Vienna on July 24-25, 2022, as
part of the 31st International Joint Conference on Artificial
Intelligence and the 25th European Conference on
Artificial Intelligence (IJCAI-ECAI-22).</p>
    </sec>
    <sec id="sec-3">
      <title>Program</title>
      <p>The Program Committee (PC) received 26 submissions.
Each paper was peer-reviewed by at least two PC
members, following a single-blind reviewing process.
The committee decided to accept 13 full papers and 6 short
presentations, resulting in a full-paper acceptance rate of
50% and an overall acceptance rate of 73%.</p>
      <p>The AISafety 2022 program was organized into six
thematic sessions, one invited special session, two
keynotes, and four invited talks. The special session was
given flexibility to structure its own program and format.</p>
      <p>The thematic sessions followed a highly interactive
format. They were structured into short pitches and a group
debate panel slot to discuss both individual paper
contributions and shared topic issues. Three specific roles
were part of this format: session chairs, presenters, and
session discussants.
● Session Chairs introduced sessions and participants,
moderated sessions and plenary discussions, monitored
time, and fielded questions and discussion from the
audience.
● Presenters gave a 10-minute paper talk and participated
in the debate slot; short presentations were given 5
minutes per paper.
● Session Discussants gave a critical review of the
session papers and participated in the plenary debate.</p>
      <p>Presentations and papers were grouped by topic as follows:</p>
      <sec id="sec-3-1">
        <title>Session 1: AI Ethics: Fairness, Bias, and Accountability</title>
        <p>● Let it RAIN for Social Good, Mattias Brännström,
Andreas Theodorou and Virginia Dignum.
● Accountability and Responsibility of Artificial
Intelligence Decision-making Models in Indian Policy
Landscape, Palak Malhotra and Amita Misra.
● Assessing Demographic Bias Transfer from Dataset to
Model: A Case Study in Facial Expression Recognition,
Iris Dominguez-Catena, Daniel Paternain and Mikel
Galar.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Session 2: Short Presentations - Safety Assessment of AI-enabled Systems</title>
        <p>● A Hierarchical HAZOP-Like Safety Analysis for
Learning-Enabled Systems, Yi Qi, Philippa Ryan
Conmy, Wei Huang, Xingyu Zhao and Xiaowei Huang.
● Increasingly Autonomous CPS: Taming Emerging
Behaviors from an Architectural Perspective, Jerome
Hugues and Daniela Cancila.
● CAISAR: A platform for Characterizing Artificial
Intelligence Safety and Robustness, Julien
Girard-Satabin, Michele Alberti, François Bobot,
Zakaria Chihani and Augustin Lemesle.</p>
      </sec>
      <sec id="sec-3-4">
        <title>Session 3: Machine learning for safety-critical AI</title>
        <p>● Revisiting the Evaluation of Deep Neural Networks for
Pedestrian Detection, Patrick Feifel, Benedikt Franke,
Arne Raulf, Friedhelm Schwenker, Frank Bonarens and
Frank Köster.
● Improvement of Rejection for AI Safety through
Loss-Based Monitoring, Daniel Scholz, Florian
Hauer, Klaus Knobloch and Christian Mayr.</p>
      </sec>
      <sec id="sec-3-5">
        <title>Special Session: TAILOR - Towards Trustworthy AI</title>
        <p>● Foundations of Trustworthy AI*, Francesca Pratesi.
● Panel on Trustworthy AI*, Fosca Giannotti, Philipp
Slusallek, Giuseppe De Giacomo, Hector Geffner,
Holger Hoos.
*Presentations without papers.</p>
      </sec>
      <sec id="sec-3-6">
        <title>Session 4: Short Presentations - ML Robustness, Criticality and Uncertainty</title>
        <p>● Utilizing Class Separation Distance for the Evaluation
of Corruption Robustness of Machine Learning
Classifiers, Georg Siedel, Silvia Vock, Andrey
Morozov and Stefan Voß.
● Safety-aware Active Learning with Perceptual
Ambiguity and Criticality Assessment, Prajit T
Rajendran, Guillaume Ollier, Huascar Espinoza,
Morayo Adedjouma, Agnes Delaborde and Chokri
Mraidha.
● Understanding Adversarial Examples Through Deep
Neural Network's Classification Boundary and
Uncertainty Regions, Juan Shu, Bowei Xi and Charles
Kamhoua.</p>
      </sec>
      <sec id="sec-3-8">
        <title>Session 5: AI Robustness, Generative Models and Adversarial Learning</title>
        <p>● Leveraging generative models to characterize the
failure conditions of image classifiers, Adrien Le
Coz, Stéphane Herbin and Faouzi Adjed.
● Feasibility of Inconspicuous GAN-generated
Adversarial Patches against Object Detection, Svetlana
Pavlitskaya, Bianca-Marina Codău and J. Marius
Zöllner.
● Privacy Safe Representation Learning via Frequency
Filtering Encoder, Jonghu Jeong, Minyong Cho,
Philipp Benz, Jinwoo Hwang, Jeewook Kim,
Seungkwan Lee and Tae-hoon Kim.
● Benchmarking and deeper analysis of adversarial patch
attack on object detectors, Pol Labarbarie, Adrien Chan
Hon Tong, Stéphane Herbin and Milad Leyli-Abadi.</p>
      </sec>
      <sec id="sec-3-9">
        <title>Session 6: AI Accuracy, Diversity, Causality and Optimization</title>
        <p>● The impact of averaging logits over probabilities on
ensembles of neural networks, Cedrique Rovile
Njieutcheu Tassi, Jakob Gawlikowski, Auliya Unnisa
Fitri and Rudolph Triebel.
● Exploring Diversity in Neural Architectures for Safety,
Michał Filipiuk and Vasu Singh.
● Constrained Policy Optimization for Controlled
Contextual Bandit Exploration, Mohammad
Kachuee and Sungjin Lee.
● A causal perspective on AI deception in games,
Francis Rhys Ward, Francesco Belardinelli and
Francesca Toni.</p>
        <p>AISafety was pleased to host several inspiring
researchers as keynote and invited speakers:</p>
      </sec>
      <sec id="sec-3-11">
        <title>Keynotes</title>
        <p>● Gary Marcus, Towards a Proper Foundation for Robust
Artificial Intelligence
● Thomas A. Henzinger, Formal Methods meet Neural
Networks: A Selection</p>
      </sec>
      <sec id="sec-3-12">
        <title>Invited Talks</title>
        <p>● Elizabeth Adams, Leadership of Responsible AI –
Representation Matters
● Luis Aranda, Enabling AI governance: OECD’s work
on moving from Principles to practice
● Simos Gerasimou, SESAME: Secure and Safe
AI-Enabled Robotics Systems
● Zakaria Chihani, A selected view of AI trustworthiness
methods: How far can we go?</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgements</title>
      <p>We thank all researchers who submitted papers to AISafety
2022 and congratulate the authors whose papers were
selected for inclusion into the workshop program and
proceedings.</p>
      <p>We especially thank our distinguished PC members for
reviewing the submissions and providing useful feedback
to the authors:
● Simos Gerasimou, University of York, UK
● Jonas Nilson, NVIDIA, USA
● Morayo Adedjouma, CEA LIST, France
● Brent Harrison, University of Kentucky, USA
● Alessio R. Lomuscio, Imperial College London, UK
● Brian Tse, Affiliate at University of Oxford, China
● Michael Paulitsch, Intel, Germany
● Ganesh Pai, NASA Ames Research Center, USA
● Rob Alexander, University of York, UK
● Vahid Behzadan, University of New Haven, USA
● Chokri Mraidha, CEA LIST, France
● Ke Pei, Huawei, China
● Orlando Avila-García, Arquimea Research Center, Spain
● I-Jeng Wang, Johns Hopkins University, USA
● Chris Allsopp, Frazer-Nash Consultancy, UK
● Andrea Orlandini, ISTC-CNR, Italy
● Agnes Delaborde, LNE, France
● Rasmus Adler, Fraunhofer IESE, Germany
● Roel Dobbe, TU Delft, The Netherlands
● Vahid Hashemi, Audi, Germany
● Juliette Mattioli, Thales, France
● Bonnie W. Johnson, Naval Postgraduate School, USA
● Roman V. Yampolskiy, University of Louisville, USA
● Jan Reich, Fraunhofer IESE, Germany
● Fateh Kaakai, Thales, France
● Francesca Rossi, IBM and University of Padova, USA
● Javier Ibañez-Guzman, Renault, France
● Jérémie Guiochet, LAAS-CNRS, France
● Raja Chatila, Sorbonne University, France
● François Terrier, CEA LIST, France
● Mehrdad Saadatmand, RISE Research Institutes of Sweden, Sweden
● Alec Banks, Defence Science and Technology Laboratory, UK
● Roman Nagy, Argo AI, Germany
● Nathalie Baracaldo, IBM Research, USA
● Toshihiro Nakae, DENSO Corporation, Japan
● Gereon Weiss, Fraunhofer ESK, Germany
● Philippa Ryan Conmy, Adelard, UK
● Stefan Kugele, Technische Hochschule Ingolstadt, Germany
● Colin Paterson, University of York, UK
● Davide Bacciu, Università di Pisa, Italy
● Timo Sämann, Valeo, Germany
● Sylvie Putot, Ecole Polytechnique, France
● John Burden, University of Cambridge, UK
● Sandeep Neema, DARPA, USA
● Fredrik Heintz, Linköping University, Sweden
● Simon Fürst, BMW Group, Germany
● Mario Gleirscher, University of Bremen, Germany
● Mandar Pitale, NVIDIA, USA
● Leon Kester, TNO, The Netherlands
● Gabriel Pedroza, CEA LIST, France
● Huáscar Espinoza, KDT JU, Belgium
● Xiaowei Huang, University of Liverpool, UK
● José Hernández-Orallo, Universitat Politècnica de València, Spain
● Mauricio Castillo-Effen, Lockheed Martin, USA
● Xin Cynthia Chen, University of Hong Kong, China
● Richard Mallah, Future of Life Institute, USA
● John McDermid, University of York, UK</p>
      <p>We thank Gary Marcus, Thomas A. Henzinger, Elizabeth
Adams, Luis Aranda, Simos Gerasimou, and Zakaria
Chihani for their inspiring talks.</p>
      <p>Finally we thank the IJCAI-ECAI-22 organization for
providing an excellent framework for AISafety 2022.</p>
    </sec>
  </body>
  <back />
</article>