<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The IJCAI-21 Workshop on Artificial Intelligence Safety (AISafety2021)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Huáscar Espinoza</string-name>
          <email>Huascar.Espinoza@ecsel.europa.eu</email>
          <xref ref-type="aff" rid="aff7">7</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gabriel Pedroza</string-name>
          <email>gabriel.pedroza@cea.fr</email>
          <xref ref-type="aff" rid="aff8">8</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>José Hernández-Orallo</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xin Cynthia Chen</string-name>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Seán S. ÓhÉigeartaigh</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xiaowei Huang</string-name>
          <email>xiaowei.huang@liverpool.ac.uk</email>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mauricio Castillo-Effen</string-name>
          <email>mauricio.castillo-effen@lmco.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Richard Mallah</string-name>
          <email>richard@futureoflife.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>John McDermid</string-name>
          <email>john.mcdermid@york.ac.uk</email>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <aff id="aff7">
          <label>7</label>
          <institution>ECSEL JU</institution>
          ,
          <country country="BE">Belgium</country>
        </aff>
        <aff id="aff8">
          <label>8</label>
          <institution>CEA LIST</institution>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff0">
          <label>0</label>
          <institution>Future of Life Institute</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Lockheed Martin, Advanced Technology Laboratories</institution>
          ,
          <addr-line>Arlington, VA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Universitat Politècnica de València</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Cambridge</institution>
          ,
          <addr-line>Cambridge</addr-line>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of Hong Kong</institution>
          ,
          <country country="CN">China</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>University of Liverpool</institution>
          ,
          <addr-line>Liverpool</addr-line>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
        <aff id="aff6">
          <label>6</label>
          <institution>University of York</institution>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>We summarize the IJCAI-21 Workshop on Artificial Intelligence Safety (AISafety 2021), virtually held at the 30th International Joint Conference on Artificial Intelligence (IJCAI-21) on August 19-20, 2021.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Safety in Artificial Intelligence (AI) is increasingly becoming a substantial part of AI research, deeply intertwined with the ethical, legal and societal issues associated with AI systems. Even if AI safety is considered a design principle, there are varying levels of safety, diverse sets of ethical standards and values, and varying degrees of liability, for which we need to deal with trade-offs or alternative solutions. These choices can only be analyzed holistically if we integrate technological and ethical perspectives into the engineering problem, and consider both the theoretical and practical challenges for AI safety. This view must cover a wide range of AI paradigms, considering systems that are specific to a particular application as well as those that are more general, which may lead to unanticipated risks. We must bridge short-term with long-term perspectives, idealistic goals with pragmatic solutions, operational with policy issues, and industry with academia, in order to build, evaluate, deploy, operate and maintain AI-based systems that are truly safe.</p>
      <p>Workshop series website: https://www.aisafetyw.org/. Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>The IJCAI-21 Workshop on Artificial Intelligence Safety
(AISafety 2021) seeks to explore new ideas in AI safety
with a particular focus on addressing the following
questions:</p>
      <p>● What is the status of existing approaches for ensuring AI and Machine Learning (ML) safety, and what are the gaps?
● How can we engineer trustworthy AI software architectures?
● How can we make AI-based systems more ethically aligned?
● What safety engineering considerations are required to develop safe human-machine interaction?
● What AI safety considerations and experiences are relevant from industry?
● How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
● How can we develop solid technical visions and new paradigms about AI safety?
● How do metrics of capability and generality, and trade-offs with performance, affect safety?</p>
      <p>These are the main topics of the AISafety workshop series, which aims to achieve a holistic view of AI and safety engineering, taking ethical and legal issues into account, in order to build trustworthy intelligent autonomous machines. The first edition of AISafety was held on August 10-12, 2019, in Macao (China) as part of the 28th International Joint Conference on Artificial Intelligence (IJCAI-19), and the second edition was held virtually on January 7-8, 2021, also as part of IJCAI. This third edition was held online (because of the COVID-19 situation) at the 30th International Joint Conference on Artificial Intelligence (IJCAI-21) on August 19-20, 2021.</p>
    </sec>
    <sec id="sec-2">
      <title>Program</title>
      <p>The Program Committee (PC) received 25 submissions. Each paper was peer-reviewed by at least two PC members, following a single-blind reviewing process. The committee decided to accept 11 full papers and 7 posters, resulting in a full-paper acceptance rate of 44% and an overall acceptance rate of 72%.</p>
      <p>The AISafety 2021 program was organized into four thematic sessions, two keynotes and two invited talks.</p>
      <p>The thematic sessions followed a highly interactive format. They were structured into short pitches and a group debate panel slot to discuss both individual paper contributions and shared topic issues. Three specific roles were part of this format: session chairs, presenters and session discussants.
● Session Chairs introduced the sessions and participants, moderated the sessions and plenary discussions, monitored time, and moderated questions and discussions from the audience.
● Presenters gave a 10-minute paper talk and participated in the debate slot.
● Session Discussants gave a critical review of the session papers and participated in the plenary debate.</p>
      <p>Papers were grouped by topic as follows:</p>
      <sec id="sec-2-1">
        <title>Session 1: Trustworthiness of Knowledge-Based AI</title>
        <p>● Applying Strategic Reasoning for Accountability Ascription in Multiagent Teams, Vahid Yazdanpanah, Sebastian Stein, Enrico Gerding and Nicholas R. Jennings.</p>
        <p>● Impossibility of Unambiguous Communication as a Source of Failure in AI Systems, William Howe and Roman Yampolskiy.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Session 2: Approaches to Robustness of Machine Learning</title>
        <p>● Assessing the Reliability of Deep Learning Classifiers Through Robustness Evaluation and Operational Profiles, Xingyu Zhao, Wei Huang, Alec Banks, Victoria Cox, David Flynn, Sven Schewe and Xiaowei Huang.</p>
        <p>● Towards Robust Perception Using Topological Invariants, Romie Banerjee, Feng Liu and Pei Ke.</p>
        <p>● Measuring Ensemble Diversity and Its Effects on Model Robustness, Lena Heidemann, Adrian Schwaiger and Karsten Roscher.</p>
      </sec>
      <sec id="sec-2-7">
        <title>Session 3: Perception and Adversarial Attacks</title>
        <p>● Deep Neural Network Loses Attention to Adversarial Images, Shashank Kotyan and Danilo Vasconcellos Vargas.</p>
        <p>● An Adversarial Attacker for Neural Networks in Regression Problems, Kavya Gupta, Jean-Christophe Pesquet, Beatrice Pesquet-Popescu, Fateh Kaakai and Fragkiskos Malliaros.</p>
        <p>● Coyote: A Dataset of Challenging Scenarios in Visual Perception for Autonomous Vehicles, Suruchi Gupta, Ihsan Ullah and Michael Madden.</p>
      </sec>
      <sec id="sec-2-8">
        <title>Session 4: Qualification / Certification of AI-Based Systems</title>
        <p>● Towards a Safety Case for Hardware Fault Tolerance in Convolutional Neural Networks Using Activation Range Supervision, Florian Geissler, Syed Qutub, Sayanta Roychowdhury, Ali Asgari, Yang Peng, Akash Dhamasia, Ralf Graefe, Karthik Pattabiraman and Michael Paulitsch.</p>
        <p>● Artificial Intelligence for Future Skies: On-going Standardization Activities to Build the Next Certification/Approval Framework for Airborne and Ground Aeronautic Products, Christophe Gabreau, Béatrice Pesquet-Popescu, Fateh Kaakai and Baptiste Lefevre.</p>
        <p>● Using Complementary Risk Acceptance Criteria to Structure Assurance Cases for Safety-Critical AI Components, Michael Klaes, Rasmus Adler, Lisa Jöckel, Janek Groß and Jan Reich.</p>
        <p>AISafety was pleased to have several additional
inspirational researchers as invited speakers:</p>
      </sec>
      <sec id="sec-2-10">
        <title>Keynotes</title>
        <p>● Emily Dinan (Facebook AI Research, USA), Safety for E2E Conversational AI</p>
        <p>● Simon Burton (Fraunhofer IKS, Germany), Safety, Complexity, AI and Automated Driving - Holistic Perspectives on Safety Assurance</p>
      </sec>
      <sec id="sec-2-11">
        <title>Invited Talks</title>
        <p>● The Anh Han (Teesside University, UK), Modelling and Regulating Safety Compliance: Game Theory Lessons from AI Development Races Analyses</p>
        <p>● Umut Durak (German Aerospace Center - DLR, Germany), Simulation Qualification for Safety Critical AI-Based Systems</p>
        <p>Posters were presented with 3-minute pitches. Most posters have also been included as short papers within this volume.</p>
      </sec>
      <sec id="sec-2-12">
        <title>Posters</title>
        <p>● Uncontrollability of Artificial Intelligence, Roman Yampolskiy.</p>
        <p>● Domain Shifts in Reinforcement Learning: Identifying Disturbances in Environments, Tom Haider, Felippe Schmoeller Roza, Dirk Eilers, Karsten Roscher and Stephan Günnemann.</p>
        <p>● Chess as a Testing Grounds for the Oracle Approach to AI Safety, James Miller, Roman Yampolskiy, Olle Häggström and Stuart Armstrong.</p>
        <p>● Socio-technical co-Design for Accountable Autonomous Software, Ayan Banerjee, Imane Lamrani, Katina Michael, Diana Bowman and Sandeep Gupta.</p>
        <p>● Epistemic Defenses against Scientific and Empirical Adversarial AI Attacks, Nadisha-Marie Aliman and Leon Kester.</p>
        <p>● On the Differences between Human and Machine Intelligence, Roman Yampolskiy.</p>
        <p>● A Mixed Integer Programming Approach for Verifying Properties of Binarized Neural Networks, Christopher Lazarus and Mykel Kochenderfer.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Acknowledgements</title>
      <p>We thank all researchers who submitted papers to AISafety
2021 and congratulate the authors whose papers and
posters were selected for inclusion into the workshop
program and proceedings.</p>
      <p>We especially thank our distinguished PC members for reviewing the submissions and providing useful feedback to the authors:
● Stuart Russell, UC Berkeley, USA
● Emmanuel Arbaretier, Apsys-Airbus, France
● Ann Nowé, Vrije Universiteit Brussel, Belgium
● Simos Gerasimou, University of York, UK
● Gereon Weiss, Fraunhofer ESK, Germany
● Jonas Nilson, NVIDIA, USA
● Morayo Adedjouma, CEA LIST, France
● Brent Harrison, University of Kentucky, USA
● Alessio R. Lomuscio, Imperial College London, UK
● Brian Tse, Affiliate at University of Oxford, China
● Michael Paulitsch, Intel, Germany
● Ganesh Pai, NASA Ames Research Center, USA
● Hélène Waeselynck, CNRS LAAS, France
● Rob Alexander, University of York, UK
● Vahid Behzadan, Kansas State University, USA
● Chokri Mraidha, CEA LIST, France
● Ke Pei, Huawei, China
● Orlando Avila-García, Arquimea Research Center, Spain
● Rob Ashmore, Defence Science and Technology Laboratory, UK
● I-Jeng Wang, Johns Hopkins University, USA
● Chris Allsopp, Frazer-Nash Consultancy, UK
● Andrea Orlandini, ISTC-CNR, Italy
● Rasmus Adler, Fraunhofer IESE, Germany
● Roel Dobbe, TU Delft, The Netherlands
● Vahid Hashemi, Audi, Germany
● Feng Liu, Huawei Munich Research Center, Germany
● Yogananda Jeppu, Honeywell Technology Solutions, India
● Francesca Rossi, IBM and University of Padova, USA
● Ramana Kumar, Google DeepMind, UK
● Javier Ibañez-Guzman, Renault, France
● Jérémie Guiochet, LAAS-CNRS, France
● Raja Chatila, Sorbonne University, France
● François Terrier, CEA LIST, France
● Mehrdad Saadatmand, RISE Research Institutes of Sweden, Sweden
● Alec Banks, Defence Science and Technology Laboratory, UK
● Gopal Sarma, Broad Institute of MIT and Harvard, USA
● Roman Nagy, Argo AI, Germany
● Nathalie Baracaldo, IBM Research, USA
● Toshihiro Nakae, DENSO Corporation, Japan
● Richard Cheng, California Institute of Technology, USA
● Ramya Ramakrishnan, Massachusetts Institute of Technology, USA
● Douglas Lange, Space and Naval Warfare Systems Center Pacific, USA
● Philippa Ryan Conmy, Adelard, UK
● Stefan Kugele, Technische Hochschule Ingolstadt, Germany
● Colin Paterson, University of York, UK
● Javier Garcia, Universidad Carlos III de Madrid, Spain
● Davide Bacciu, Università di Pisa, Italy
● Timo Sämann, Valeo, Germany
● Vincent Aravantinos, Argo AI, Germany
● Mohamed Ibn Khedher, IRT SystemX, France
● Umut Durak, German Aerospace Center (DLR), Germany
● Huáscar Espinoza, ECSEL JU, Belgium
● Seán Ó hÉigeartaigh, University of Cambridge, UK
● Xiaowei Huang, University of Liverpool, UK
● José Hernández-Orallo, Universitat Politècnica de València, Spain
● Mauricio Castillo-Effen, Lockheed Martin, USA
● Xin Cynthia Chen, University of Hong Kong, China
● Richard Mallah, Future of Life Institute, USA
● John McDermid, University of York, UK
● Gabriel Pedroza, CEA LIST, France</p>
      <p>As well as the additional reviewers:
● Fabio Arnez, CEA LIST, France
● Emmanouil Seferis, Audi, Germany
● Joris Guerin, LAAS CNRS, France</p>
      <p>We thank Emily Dinan, Simon Burton, The Anh Han and
Umut Durak for their inspiring talks.</p>
      <p>We would like to specially thank our sponsor, Partnership
on AI, which funded the Best Paper Award.</p>
      <p>Finally, we thank the IJCAI-21 organization for providing an excellent framework for AISafety 2021.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>