    The IJCAI-PRICAI-20 Workshop on Artificial Intelligence Safety (AISafety 2020)

           Huáscar Espinoza1, John McDermid2, Xiaowei Huang3, Mauricio Castillo-Effen4,
       Xin Cynthia Chen5, José Hernández-Orallo6, Seán Ó hÉigeartaigh7, and Richard Mallah8
1 Commissariat à l'Energie Atomique, France
2 University of York, UK
3 University of Liverpool, UK
4 Lockheed Martin, USA
5 University of Hong Kong, China
6 Universitat Politècnica de València, Spain
7 University of Cambridge, UK
8 Future of Life Institute, USA

Abstract

This preface introduces the Second Workshop on Artificial Intelligence Safety (AISafety 2020), held at the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI) in January 2021, Japan.

1   Introduction

In the last decade, there has been growing concern about the risks of Artificial Intelligence (AI). Safety is becoming increasingly relevant as humans are progressively side-lined from the decision/control loop of intelligent and learning-enabled machines. In particular, the technical foundations and assumptions on which traditional safety engineering principles are based are inadequate for systems in which AI algorithms, and in particular Machine Learning (ML) algorithms, interact with people and/or the environment at increasingly higher levels of autonomy. We must also consider the connection between the safety challenges posed by present-day AI systems and more forward-looking research focused on more capable future AI systems, up to and including Artificial General Intelligence (AGI).

The IJCAI-PRICAI-20 Workshop on Artificial Intelligence Safety (AISafety 2020) seeks to explore new ideas on AI safety, with a particular focus on addressing the following questions:

• How can we engineer trustworthy AI system architectures?
• Do we need to specify and use bounded morality in engineering more ethically-aligned AI-based systems?
• What is the status of existing approaches to ensuring AI and ML safety, and what are the gaps?
• How can we evaluate AI safety?
• What AI safety considerations and experiences are relevant from industry?
• What safety engineering considerations are required to develop safe human-machine interaction in automated decision-making systems?
• How can we characterise or evaluate AI systems according to their potential risks and vulnerabilities?
• How can we develop solid technical visions and paradigm-shift articles about AI Safety?
• How do metrics of capability and generality affect the level of risk of a system, and how can trade-offs with performance be found?
• How do AI system features, for example ethics, explainability, transparency, and accountability, relate to, or contribute to, a system's safety?

The main interest of AISafety 2020 is to look holistically at AI and safety engineering, jointly with the ethical and legal issues, to build trustable intelligent autonomous machines. The second edition of AISafety will be held in January 2021, in Japan, as part of the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI). The AISafety workshop is organized as a “sister workshop” to two other workshops: WAISE1 and SafeAI2.

Copyright © 2020 for the individual papers by the papers' authors. Copyright © 2020 for the volume as a collection by its editors. This volume and its papers are published under the Creative Commons License Attribution 4.0 International (CC BY 4.0).

1 https://www.waise.org
2 http://www.safeaiw.org
As part of this workshop, we also host discussions related to the AI Safety Landscape initiative3. This initiative aims to define an AI safety landscape that provides a “view” of the current needs, challenges, and state of the art and practice in this field.

2   Programme

The Programme Committee (PC) received 25 submissions in the following categories:

• Short position papers – 4 submissions.
• Full scientific contributions – 20 submissions.
• Proposals of technical talks – 1 submission.

Each of the papers was peer-reviewed by at least three PC members, following a single-blind reviewing process. The committee decided to accept 11 papers and 1 talk, resulting in an overall acceptance rate of 48%. We additionally accepted 6 submissions as poster presentations, 5 of which are included in these proceedings as poster papers.

AISafety 2020 has been planned as a two-day workshop, with general AI Safety topics on the first day and AI Safety Landscape talks and discussions during the second day. Since the workshop has been delayed, together with IJCAI-PRICAI-20, from July 2020 to January 2021 due to the COVID-19 pandemic, we do not yet have the full list of invited talks for the first day, and no specific talks have yet been allocated to the second day. The exact date and format (in-person or virtual conference) are still under discussion by the IJCAI-PRICAI organizers at the date of publication of the AISafety 2020 proceedings.

2.1. First Workshop Day

The AISafety 2020 programme will be organized in four thematic sessions, one keynote, and at least three invited talks.

The thematic sessions will be structured into short talks and a common panel slot to discuss both individual paper contributions and shared topic issues. Three specific roles are part of this format: session chairs, presenters, and session discussants.

• Session Chairs introduce sessions and participants. The Chair moderates session and plenary discussions, keeps time, and gives the floor to speakers in the audience during discussions.
• Presenters give a paper talk in 10 minutes and then participate in the debate slot.
• Session Discussants prepare the discussion of individual papers and the plenary debate. The discussant gives a critical review of the session papers.

The mixture of topics has been carefully balanced, as follows:

Session 1: Adversarial Machine Learning

• Understanding the One Pixel Attack: Propagation Maps and Locality Analysis. Danilo Vasconcellos Vargas and Jiawei Su.
• Error-Silenced Quantization: Bridging Robustness and Compactness. Zhicong Tang, Yinpeng Dong and Hang Su.
• Evolving Robust Neural Architectures to Defend from Adversarial Attacks. Shashank Kotyan and Danilo Vasconcellos Vargas.

Session 2: AI Safety Landscape

• Update Report: AI Safety Landscape Initiative, by Landscape Chairs [without paper].
• Safety of Artificial Intelligence: A Collaborative Model. John McDermid and Yan Jia.

Session 3: Safe and Value-Aligned Learning in Decision Making

• Choice Set Misspecification in Reward Inference. Rachel Freedman, Rohin Shah and Anca Dragan.
• Safety Augmentation in Decision Trees. Sumanta Dey, Pallab Dasgupta and Briti Gangopadhyay.
• Aligning with Heterogenous Preferences for Kidney Exchange. Rachel Freedman.

Session 4: DNN Testing and Runtime Monitoring

• Is Uncertainty Quantification in Deep Learning Sufficient for Out-of-Distribution Detection? Adrian Schwaiger, Poulami Sinhamahapatra, Jens Gansloser and Karsten Roscher.
• DeepSmartFuzzer: Reward Guided Test Generation For Deep Learning. Samet Demir, Hasan Ferit Eniser and Alper Sen.
• A Comparison of Uncertainty Estimation Approaches in Deep Learning Components for Autonomous Vehicle Applications. Fabio Arnez, Huascar Espinoza, Ansgar Radermacher and François Terrier.
• Increasing the Trustworthiness of Deep Neural Networks via Accuracy Monitoring. Zhihui Shao, Jianyi Yang and Shaolei Ren.

Additionally, AISafety has currently allocated one invited talk, and plans to invite one keynote speaker and at least two additional invited talks.

Invited Talk

• Nathalie Baracaldo (IBM Research). Security and Privacy Challenges in Federated Learning.

3 https://www.ai-safety.org
Six posters will be presented in 2-minute pitches. Five of them will also be part of this volume as poster papers.

Posters

• Robustness as inherent property of datapoints. Catalin-Andrei Ilie, Marius Popescu and Alin Stefanescu.
• An Efficient Adversarial Attack on Graph Structured Data. Zhengyi Wang and Hang Su [without paper].
• Towards Safe and Reliable Robot Task Planning. Snehasis Banerjee.
• Extracting Money from Causal Decision Theorists. Caspar Oesterheld and Vincent Conitzer.
• Ethically Compliant Planning in Moral Autonomous Systems. Justin Svegliato, Samer Nashed and Shlomo Zilberstein.
• Bayesian Model for Trustworthiness Analysis of Deep Learning Classifiers. Andrey Morozov, Emil Valiev, Michael Beyer, Kai Ding, Lydia Gauerhof and Christoph Schorn.

2.2. Second Workshop Day: Landscape

The second-day workshop (AI Safety Landscape) sessions will be organized into by-invitation talks and panels with structured discussions. The by-invitation talks will focus on diverse topics that contribute to understanding the scientific and technical challenges of the AI Safety Landscape, its industrial and academic opportunities, as well as gaps and pitfalls.

One important ambition of this initiative is to align and synchronize the proposed activities and outcomes with other related initiatives. The AI Safety Landscape work will be followed up in future meetings and workshops.

3   Acknowledgements

We thank all those who submitted papers to AISafety 2020 and congratulate the authors whose papers and posters were selected for inclusion in the workshop programme and proceedings.

We especially thank our distinguished PC members for reviewing the submissions and providing useful feedback to the authors:

• Stuart Russell, UC Berkeley, USA
• Simos Gerasimou, University of York, UK
• Jonas Nilson, NVIDIA, USA
• Brent Harrison, University of Kentucky, USA
• Siddartha Khastgir, University of Warwick, UK
• Carroll Wainwright, Partnership on AI, USA
• Martin Vechev, ETH Zurich, Switzerland
• Sandhya Saisubramanian, University of Massachusetts Amherst, USA
• Alessio R. Lomuscio, Imperial College London, UK
• Rachel Freedman, UC Berkeley, USA
• Brian Tse, Affiliate at University of Oxford, China
• Michael Paulitsch, Intel, Germany
• Rick Salay, University of Toronto, Canada
• Ganesh Pai, NASA Ames Research Center, USA
• Hélène Waeselynck, CNRS LAAS, France
• Rob Alexander, University of York, UK
• Vahid Behzadan, Kansas State University, USA
• Simon Fürst, BMW, Germany
• Chokri Mraidha, CEA LIST, France
• Orlando Avila-García, Atos, Spain
• Rob Ashmore, Defence Science and Technology Laboratory, UK
• I-Jeng Wang, Johns Hopkins University, USA
• Chris Allsopp, Frazer-Nash Consultancy, UK
• Francesca Rossi, IBM and University of Padova, Italy
• Ramana Kumar, Google DeepMind, UK
• Javier Ibañez-Guzman, Renault, France
• Jérémie Guiochet, LAAS-CNRS, France
• Raja Chatila, Sorbonne University, France
• Hang Su, Tsinghua University, China
• François Terrier, CEA LIST, France
• Mehrdad Saadatmand, RISE SICS, Sweden
• Alec Banks, Defence Science and Technology Laboratory, UK
• Gopal Sarma, Broad Institute of MIT and Harvard, USA
• Philip Koopman, Carnegie Mellon University, USA
• Roman Nagy, Autonomous Intelligent Driving, Germany
• Nathalie Baracaldo, IBM Research, USA
• Toshihiro Nakae, DENSO Corporation, Japan
• Peter Flach, University of Bristol, UK
• Richard Cheng, California Institute of Technology, USA
• José M. Faria, Safe Perspective, UK
• Ramya Ramakrishnan, Massachusetts Institute of Technology, USA
• Gereon Weiss, Fraunhofer ESK, Germany
• Douglas Lange, Space and Naval Warfare Systems Center Pacific, USA
• Philippa Ryan Conmy, Adelard, UK
• Stefan Kugele, Technical University of Munich, Germany
• Colin Paterson, University of York, UK
• Ashley Llorens, Johns Hopkins University, USA
• Huáscar Espinoza, Commissariat à l'Energie Atomique, France
• John McDermid, University of York, UK
• Xiaowei Huang, University of Liverpool, UK
• Mauricio Castillo-Effen, Lockheed Martin, USA
• Xin Cynthia Chen, University of Hong Kong, China
• José Hernández-Orallo, Universitat Politècnica de València, Spain
• Seán Ó hÉigeartaigh, University of Cambridge, UK
• Richard Mallah, Future of Life Institute, USA

Finally, and importantly, we thank the IJCAI-PRICAI-20 organization for providing an excellent framework for AISafety 2020.