Safety Properties of Inductive Logic Programming

Gavin Leech,*1 Nandi Schoots,*2 Joar Skalse3

1 University of Bristol
2 King’s College London and Imperial College London
3 University of Oxford
* Equal contribution
g.leech@bristol.ac.uk

Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).


Abstract

This paper investigates the safety properties of inductive logic programming (ILP), particularly as compared to deep learning systems. We consider the following properties: ease of model specification; robustness to input change; control over inductive bias; verification of specifications; post-hoc model editing; and interpretability. We find that ILP could satisfy many of these properties in its home domains. Lastly, we propose a hybrid system using ILP as a preprocessor to generate specifications for other ML systems.

Introduction

Symbolic approaches to AI are sometimes considered safer than neural approaches (Condry 2016; Anderson et al. 2020). We investigate this by analysing how one symbolic approach, inductive logic programming (ILP), fares on specific safety properties.

ILP is a declarative subfield of ML for learning from examples and encoded “background knowledge” (predicates and constraints), using logic programs to represent both these inputs and the output model (Muggleton 1999). (We use ‘output model’ and ‘ILP hypothesis’ interchangeably.)

We find ILP to have potential to satisfy an array of safety properties. To arrive at this, we survey existing work in ILP and deep learning in light of the safety properties defined in the framework of Ortega and Maini (2018). We also formalise robustness to input change and model editing. We suggest a hybrid system in which ILP is used as a preprocessing step to generate specifications for other ML systems.

To our knowledge, this is the first analysis of ILP’s safety potential, and of ILP’s differences from deep learning. Related work includes Cropper, Dumančić, and Muggleton (2020)’s recent survey of ILP, the interpretability work of Muggleton et al. (2018b), and Powell and Thévenod-Fosse (2002)’s study of rule-based safety-critical systems.

Consider a machine learning system ‘safe’ when the system’s goals are specified correctly, when it acts robustly according to those goals, and when we are assured about these two properties (Ortega and Maini 2018), such that the risk of harm from deploying the system is greatly reduced. ILP may be a natural fit for the assurance side of safety: often, not just the output model but also the learning process takes place at a relatively high level (that is, at the level of symbolic inference). Similarly, ILP plausibly satisfies multiple important specification and robustness properties. We assess ILP on: Specification properties (ease of model specification and value loading; ease of adjusting the learned model to satisfy specifications; and control over inductive bias); Robustness properties (robustness to input change and to post-training model edits); and Assurance properties (interpretability and explainability; verification of specifications; and control over the inductive bias).

Many safety properties await formalisation, preventing quantitative comparisons. Where a formal metric is lacking, we qualitatively compare ILP to deep learning (DL).

In the following we refer to ‘ILP’ as if it were monolithic, but ILP systems differ widely in search strategy, exactness, completeness, target logic (e.g. Prolog, Datalog, ASP), noise-handling, ability to invent predicates, and the order of the output theory (Boytcheva 2002). This diversity limits the general statements we can make, but some remain.

Safety properties of ILP

Model Specification

The specification of an ML system serves to define its purpose. When this purpose is successfully implemented in hard constraints, we may obtain guarantees about the system’s behaviour. A defining feature of ILP systems is user-specified background knowledge. This provides a natural way to impose specifications on ILP systems. An ILP problem specification is a set of positive examples, negative examples, and background knowledge.

Consider two important properties of classical ILP. Given a background B, an output model M, and positive examples E+, the model M is
• weakly consistent if: B ∧ M ⊭ False; and
• strongly consistent if: B ∧ M ∧ E+ ⊭ False.

Weak consistency forbids the generation of models that contradict any clause in B (Muggleton 1999). In general, ILP algorithms must satisfy weak consistency (Muggleton 1999), though probabilistic systems allow its violation; see below. Hence, to guarantee that the learned model M satisfies some specification s, all we need to do is encode s in first-order logic (FOL) and add it to the background B.
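As a concrete toy illustration of this specification route, the following sketch (ours, not the interface of any particular ILP system; the predicates, the single-variable grounding, and the candidate model are all illustrative) encodes the specification as an integrity constraint in the background and checks weak consistency of a candidate model in a ground, Datalog-style setting:

```python
# Minimal sketch of an ILP problem specification in a ground (Datalog-style)
# setting. Predicate names and the candidate model are illustrative only.

# Background knowledge B: ground facts plus a safety specification s,
# encoded as an integrity constraint: False :- grasp(X), fragile(X).
background_facts = {("fragile", "vase"), ("heavy", "anvil")}
constraints = [("False", [("grasp", "X"), ("fragile", "X")])]

# Examples: positive and negative ground atoms for the target predicate.
pos_examples = {("grasp", "anvil")}
neg_examples = {("grasp", "vase")}

# Candidate output model M, simulated here by the ground atoms it would derive.
# (A real ILP system induces first-order clauses instead.)
model_derives = {("grasp", "anvil")}

def weakly_consistent(facts, derived, constraints):
    """B ∧ M ⊭ False: no integrity constraint fires on the derived atoms."""
    world = facts | derived
    objects = {a for (_, a) in world}
    for _head, body in constraints:
        for obj in objects:                       # ground the single variable X
            if all((pred, obj) in world for pred, _ in body):
                return False
    return True

def covers(derived, pos, neg):
    """M should entail every positive example and no negative example."""
    return pos <= derived and not (neg & derived)

print(weakly_consistent(background_facts, model_derives, constraints))  # True
print(covers(model_derives, pos_examples, neg_examples))                # True
```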
However, there are still some specification challenges for ILP.

Not all systems respect strong consistency. Many modern implementations of ILP are designed to handle noise in the example set (Srinivasan 2006; Muggleton et al. 2018a). For specifications encoded in the example set, noise handling means that the system is only nudged in the direction of the specification. Furthermore, probabilistic ILP systems can specify the background as probabilistic facts (De Raedt et al. 2015). This means that even weak consistency can be violated. As such, these systems may not offer specification guarantees.

Incompleteness. Even though a model satisfying our specification exists, an incomplete ILP algorithm might not find it. Some leading implementations of ILP are incomplete, i.e. a solution may exist even though the system does not find one (Cropper and Tourret 2018).

Specifications may be hard to encode as FOL formulae. In computer vision, a long tradition of manually encoding visual concepts was rapidly outperformed by learned representations (Goodfellow, Bengio, and Courville 2016): it proved possible to learn these improved concepts, but intractable to hand-code them. Insofar as ILP backgrounds must at present be manually encoded (as opposed to learned via predicate invention), we infer that some specifications are not practically possible to impose on ILP.

Human values are hard to encode as FOL formulae. Particularly interesting are specifications that concern norms or values, i.e. those that aim to ensure that the output respects ethical considerations. There is precedent for formalising norms and moral obligations using logic: deontic logic is an area of philosophical logic that aims to formalise and deduce moral claims (McNamara 2019). This has been used to partially formalise some ethical frameworks (Kroy 1976; Peterson 2014). However, encoding general normative requirements in formal logic is an open problem. Further, we do not have a complete articulation of all such requirements in any formalism. It seems unlikely that in the near future we will obtain a complete encoding, owing to deep inconsistencies across people and the contextual nature of value (Yudkowsky 2011). Furthermore, it may be impossible to learn a representation of these preferences in the absence of a strong model of human error (Armstrong and Mindermann 2018).

Model specification in DL. Methods exist for limited model specification in DL (Platt and Barr 1988), many of which focus on specific domains (Kashinath, Marcus et al. 2019; Zhang et al. 2020). However, if we interpret a specification as a hard constraint on outputs, then most current DL methods do not allow specification. Instead they impose soft constraints, modifying the loss to discourage out-of-specification behaviour. Imposing hard constraints in DL amounts to imposing a linear set of constraints on the output of the model. Soft constraints in the form of subtle alterations to the loss function or learning algorithm are harder to specify than e.g. a linear set of hard constraints (Pathak, Krähenbühl, and Darrell 2015). Soft constraints are pervasive due to the computational expense of hard constraints in neural networks: since networks can have millions of adaptive parameters, it is not practical to use ordinary constrained optimisation methods to impose them (Márquez-Neila, Salzmann, and Fua 2017).

Robustness to Input Change

Robustness concerns smooth output change: if we change the input slightly, will the output (of the learning algorithm or of the learned model) change only slightly? To formalise this, we define similarity of inputs and output hypotheses.

In DL input datasets, the problem description is usually very correlated with the semantics of the problem. For example, Gaussian noise usually does not affect the semantics of the problem. DL models are often insensitive to small changes in the description of the input. However, adversarial changes induce large changes in output, despite the input changes being trivial to the human eye (Szegedy et al. 2014).

For Horn clauses (a typical form in ILP output hypotheses), one distance measure is the ‘rewrite distance’ (the minimum number of syntactic edits that transform one clause into another) (Edelmann and Kunčak 2019). For our purposes, this is inappropriate, since it neglects the semantic distances we are targeting: a negation of the whole clause would count as a rewrite distance of 1, despite being maximally semantically different.

Definition 1 (Similarity of Datasets) Given two datasets D1, D2, let H1 and H2 be the sets of hypotheses compatible with D1 and D2 respectively. Let the weight of a set of hypotheses H be defined as a weighted sum of the hypotheses in H, where more complex hypotheses are given lower weight (so that hypothesis h has weight 0.5^{2c(h)}, where c(h) is the complexity of h). We then say that D1 and D2 are similar if H1 ∩ H2 has a large weight.

Definition 2 (Similarity of Hypotheses) We say that two hypotheses h1 and h2 are similar if the probability that they will agree on an instance x sampled from the underlying data distribution is high.

Definition 3 (Robustness to Input Change) Let L: D → M be a learning algorithm. We say that L is robust to input change if it is the case that L(D1) and L(D2) are similar whenever D1 and D2 are similar. More specifically, we say that L has robustness parameters rD, rM if: for any D1 and D2 such that they have similarity rD or higher, the similarity between L(D1) and L(D2) is at least rM.
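The following toy sketch (ours; the finite hypothesis space, the complexity values, and the uniform instance distribution are illustrative assumptions, not part of the formalism above) instantiates Definitions 1 and 2 so that the weighted-overlap and agreement-probability notions can be computed directly:

```python
# Toy sketch of Definitions 1 and 2 over a finite hypothesis space.
# The hypothesis space, complexity measure, and data distribution are
# illustrative choices, not part of the paper's formalism.
import random

# Hypotheses: boolean functions over integer instances, with a crude complexity.
HYPOTHESES = {
    "even":      (lambda x: x % 2 == 0, 1),
    "div4":      (lambda x: x % 4 == 0, 2),
    "even_or_3": (lambda x: x % 2 == 0 or x % 3 == 0, 3),
    "positive":  (lambda x: x > 0, 1),
}

def compatible(dataset):
    """Hypotheses that label every example in the dataset correctly."""
    return {name for name, (h, _) in HYPOTHESES.items()
            if all(h(x) == y for x, y in dataset)}

def weight(names):
    """Weighted size of a hypothesis set: hypothesis h gets weight 0.5^(2*c(h))."""
    return sum(0.5 ** (2 * HYPOTHESES[name][1]) for name in names)

def dataset_similarity(d1, d2):
    # Definition 1: weight of the hypotheses compatible with both datasets.
    return weight(compatible(d1) & compatible(d2))

def hypothesis_similarity(h1, h2, sample=10_000):
    # Definition 2: estimated probability that h1 and h2 agree on a random instance.
    xs = [random.randint(-50, 50) for _ in range(sample)]
    return sum(h1(x) == h2(x) for x in xs) / sample

D1 = [(2, True), (3, False), (4, True)]
D2 = [(2, True), (3, False), (8, True)]
print(dataset_similarity(D1, D2))                                        # 0.25
print(hypothesis_similarity(HYPOTHESES["even"][0], HYPOTHESES["div4"][0]))
```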
We note that, for this notion of similarity between datasets, the distance between two ILP problems may be very large even if their descriptions are almost the same. For example, adding a negation somewhere in the description of D1 may completely change its distance to D2.

ILP is robust to syntactic input change. ILP is largely invariant to how the input problem is represented (in the sense of symbol renaming or syntactic substitutions, which do not affect the semantics). Two semantically equivalent problems have identical sets of compatible output hypotheses.

Examples of trivial syntactic changes to a problem include: renaming atoms or predicates; substituting a ground term for a variable; or substituting in a different variable. An ILP problem statement is parsed as an ordered set of logical sentences, and substitutions within these sentences do not affect the semantics of the individual examples. Absent complicating implementation details, they thus do not affect the semantics of the output. Another syntactic change to a problem is adding or removing copies of examples; these changes do not have any effect on what hypothesis is output.

Changing the order of examples could (depending on the search algorithm) change the chosen output hypothesis. Even though the set of consistent output hypotheses does not change when the order of examples changes, the hypothesis that comes up first in the search may change. For example, Metagol depends on the order (Cropper and Morel 2020). This order dependence is a property of some clause-level search methods (Srinivasan 2006).

Robustness to semantic input change. Naturally, semantic changes to the problem can completely change the output hypothesis. For example, negating a single example can preclude finding any appropriate hypothesis.

Suppose D1 and D2 are two datasets, with corresponding hypothesis spaces respectively H1 and H2. ILP has a fixed order (which depends on the inductive biases) of traversing the total set of potential hypotheses for a solution. Say ILP outputs hypothesis h1 for problem D1 and hypothesis h2 for D2. Even if H1 ≠ H2, h1 and h2 may be the same. When h1 ≠ h2, we would like to assess their similarity.

Given an output model. If we change one input example, then we may be able to check whether this input example is consistent with the output model. We may not be able to completely visualise the coverage, but may be able to predict whether the output model will be different.

Empirically assessing robustness to input change. Potentially, sampling can inform us about the robustness to input change of ILP and deep learning. An experiment could work as follows: generate ILP problems such that we (approximately) know the distance between the datasets. Then run ILP on each problem and store their output hypotheses. We then select a distance measure and assess the distance between each of the output hypotheses. This allows us to (approximately) evaluate the robustness to input change of ILP. A similar sampling process can be used for other learning algorithms to compare the robustness of different algorithms.
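A runnable stand-in for this experiment is sketched below (ours; the learner is a one-dimensional threshold rule rather than an ILP system, and the perturbation model is label noise, both purely illustrative). It perturbs a base problem by a controlled amount, learns on each copy, and reports the mean disagreement of the output hypotheses:

```python
# Runnable toy version of the sampling experiment described above.
# The "learner" is a stand-in (a 1-D threshold rule fitted to labelled integers),
# not an ILP system; the point is the experimental loop, not the learner.
import random
import statistics

def make_problem(rng, flip_fraction):
    """A labelled dataset for the concept x >= 0, with a controlled fraction of label noise."""
    data = [(x, x >= 0) for x in range(-20, 21)]
    flips = rng.sample(range(len(data)), int(flip_fraction * len(data)))
    return [(x, (not y) if i in flips else y) for i, (x, y) in enumerate(data)]

def learn(problem):
    """Pick the threshold rule 'x >= t' with the fewest misclassified examples."""
    best_t = min(range(-20, 22),
                 key=lambda t: sum((x >= t) != y for x, y in problem))
    return lambda x: x >= best_t

def hypothesis_distance(h1, h2, xs):
    """Disagreement rate on held-out instances (1 minus Definition 2's similarity)."""
    return sum(h1(x) != h2(x) for x in xs) / len(xs)

def estimate_robustness(flip_fraction, trials=200, seed=0):
    """Mean output-hypothesis distance between learners run on two perturbed copies."""
    rng = random.Random(seed)
    xs = range(-20, 21)
    ds = [hypothesis_distance(learn(make_problem(rng, flip_fraction)),
                              learn(make_problem(rng, flip_fraction)), xs)
          for _ in range(trials)]
    return statistics.mean(ds)

for frac in (0.0, 0.1, 0.3):
    print(frac, estimate_robustness(frac))  # mean distance grows as the inputs diverge more
```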
                                                                     User-supplied constraints can pertain to (Muggleton and
Control over Inductive Bias                                      de Raedt 1994) among other things
The inductive bias of a learning algorithm is the set of (of-    • Syntax, e.g. second-order schema or bounded term depth;
ten implicit) assumptions used to generalise a finite input      • Semantics (on the level of terms), e.g. hard-coding
set to a complete output model (Mitchell 1980; Hüllermeier,         the types of predicate arguments, or using determinate
Fober, and Mernberger 2013). If several hypotheses fit the           clauses; and
training data, the inductive bias of the learning algorithm      • Bounds on the length of the output model.
determines which is selected. Correct behaviour is generally         Two elementary ways to order an ILP search over the set
under-determined by the training data, so selecting a model      of possible output models are top-down (‘from general to
with the right behaviour demands inductive bias. It is thus      specific’) or bottom-up (‘from specific to general’). At each
desirable to adapt the training algorithm through fine con-      step of ILP learning, we need a way to score multiple com-
trol over the inductive bias.                                    peting hypotheses. This can be done via computing the infor-
   Informally, a learning algorithm has a low Vap-               mation gain of the change to the hypothesis (Quinlan 1990)
nik–Chervonenkis (VC) dimension if it can only express           or through probabilistic scoring (Muggleton and de Raedt
simple models. If a learning algorithm has a low VC-             1994). A further source of search bias involves specifying
dimension then it can be shown that it is likely to generalise   the order in which we prune candidate hypotheses.
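The toy learner below (ours; the predicate vocabulary, the length bound, and the alphabetical tie-breaking are illustrative) makes both components concrete: the language bias admits only conjunctions of at most max_len predicates from a fixed vocabulary, and the general-to-specific search order determines which of several consistent hypotheses is returned:

```python
# Toy sketch of the two components of inductive bias in an ILP-style learner:
# the hypothesis space is restricted to conjunctions of at most `max_len`
# predicates from a fixed vocabulary (language bias), and the search proceeds
# top-down, from general to specific, returning the first consistent hypothesis
# (a search bias that doubles as a simplicity bias). Predicates are illustrative.
from itertools import combinations

PREDICATES = {
    "has_wings": lambda x: x["wings"] > 0,
    "lays_eggs": lambda x: x["eggs"],
    "can_fly":   lambda x: x["fly"],
}

def consistent(body, pos, neg):
    """A conjunction covers all positive and no negative examples."""
    covers = lambda x: all(PREDICATES[p](x) for p in body)
    return all(covers(x) for x in pos) and not any(covers(x) for x in neg)

def learn(pos, neg, max_len=2):
    """General-to-specific search: try shorter (more general) bodies first."""
    for length in range(1, max_len + 1):           # simplicity bias: shortest first
        for body in combinations(sorted(PREDICATES), length):
            if consistent(body, pos, neg):
                return body                        # first hit in the search order wins
    return None

pos = [{"wings": 2, "eggs": True, "fly": True}]    # e.g. a sparrow
neg = [{"wings": 0, "eggs": True, "fly": False}]   # e.g. a crocodile
# Both ('can_fly',) and ('has_wings',) are consistent; the (alphabetical) search
# order decides which is returned, here ('can_fly',).
print(learn(pos, neg))
```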
Comparing ILP with DL. In Table 1 we compare control over inductive bias in ILP and DL. We consider the following, from Witten et al. (2017): language bias (hypothesis space restriction), search bias (how the search through the hypothesis space is ordered), and simplicity bias (how overfitting is prevented).

Bias         ILP                                DL
Simplicity   Bound on program length            Not well understood (besides regularisers, e.g. dropout and LR decay)
Language     User constraints, target logic     NN architecture
Search       Search order, hypothesis scoring   Local gradient search

Table 1: Realisations of types of inductive bias

When is control over inductive bias actually hand-coding solutions? The more inductive biases are customised, the more the learning method resembles explicit programming of a solution class. For example, when doing reinforcement learning it is possible to include information about how the task should be solved in the reward function. As more information is included, designing the reward function resembles specifying a solution (Sutton and Barto 2018). In ILP, task-specific language biases are often unavoidable for performance reasons, but they risk pruning unexpected solutions, involve a good deal of expert human labour, and can lead to brittle systems which may not learn the problem structure so much as they are passed it to begin with (Payani and Fekri 2019). This problem could be mitigated by progress in automating inductive bias choice in ILP.

Verification of Specifications

In cases where model specification does not give hard guarantees about model behaviour, post-hoc verification is needed. That is, determining, given program M and specification s, whether M’s behaviour satisfies s.

Definition 4 (Specification) Let L: D → M be a learning algorithm. A specification s is a property such that, for all models M ∈ M, the model either satisfies the property or does not.

The problem of verifying whether a model satisfies a specification is NP-hard (Clarke and Emerson 1981), both for a neural network (Katz et al. 2017) and for logic programs (Madelaine and Martin 2018).

Verification in ILP. In practice, verifying properties of an output hypothesis is often easy. Suppose you have a propositional theory and want to verify whether it satisfies the specification False. This is equivalent to solving satisfiability, and so is at least NP-hard. We can verify whether an ILP model M satisfies an arbitrary Datalog specification s by running resolution on M ∪ {¬s} to see if it derives False. In fact, this can be done in some cases where s is not in Datalog. For example, this could be done as long as s is in the Bernays–Schönfinkel fragment, albeit in double-exponential time (Piskac, de Moura, and Bjørner 2008). The proof search can be attempted with arbitrary Prolog specifications, but may not terminate.
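A minimal propositional illustration of this refutation check is sketched below (ours; real Datalog or first-order verification additionally needs grounding or first-order resolution, and the example model and specification are toy choices). To test whether M entails s, saturate M ∪ {¬s} under resolution and look for the empty clause:

```python
# Minimal propositional-resolution sketch of the check described above:
# to test whether a model M entails a specification s, refute M ∪ {¬s}.
# Clauses are frozensets of literals; a literal is ("p", True) or ("p", False).
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses (resolving on one complementary pair)."""
    out = []
    for atom, sign in c1:
        if (atom, not sign) in c2:
            out.append((c1 - {(atom, sign)}) | (c2 - {(atom, not sign)}))
    return out

def unsatisfiable(clauses):
    """Saturate under resolution; deriving the empty clause means 'derives False'."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True
                new.add(frozenset(r))
        if new <= clauses:
            return False
        clauses |= new

def entails(model, spec_negated):
    """M |= s  iff  M ∪ {¬s} is unsatisfiable."""
    return unsatisfiable(set(model) | set(spec_negated))

# M: {p, p -> q} in clause form; s: q, so ¬s is the unit clause {¬q}.
M = [frozenset({("p", True)}), frozenset({("p", False), ("q", True)})]
not_s = [frozenset({("q", False)})]
print(entails(M, not_s))  # True: M derives q, so M ∪ {¬q} derives False
```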
Verification in DL. To quote Katz et al. (2017), “Deep neural networks are large, non-linear, and non-convex, and verifying even simple properties about them is an NP-complete problem”. In practice, complete solvers can verify properties of networks with thousands of nodes, but time out for larger networks (Wang et al. 2018). Incomplete methods can verify properties of networks with ~100,000 ReLU nodes (Singh et al. 2019; Botoeva et al. 2020). Note that the smallest networks that achieve decent results on CIFAR-10 have ~50k nodes. Networks can be trained such that they are easier to verify (Xiao et al. 2019).

Post-hoc Model Editing

Definition 5 (Model Editing) Let L: D → M be a learning algorithm. Let M ∈ M be a learned model. Let s be a specification. We apply model editing to M on specification s if we find a model M′ ∈ M that has property s without re-applying the learning algorithm L.

Let d be a distance metric on M. We say that we successfully edit M to fit specification s with respect to distance d if we find a model M′ ∈ M that has property s and, out of all models with property s, has minimal distance from M.

Model Editing in ILP. The symbolic representation could make ILP models easier to manipulate than DL models. ILP output models are very interpretable and it is relatively easy for humans to write logical sentences, which should in some cases make it possible to apply model editing.

The output model of ILP is a conjunction of logical clauses. The model can easily be edited by removing or adding individual clauses. If we simply add clause s to M, then we get a new model M′ = M ∪ {s}, which satisfies s and has minimal distance to M with respect to the ‘rewrite distance’. When adding clauses, one needs to ensure the new model is still consistent.
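The clause-level edit described above can be sketched as follows (ours; the clause strings, the stub consistency check, and the greedy minimal-distance-first strategy are illustrative, not a prescribed procedure):

```python
# Toy sketch of post-hoc model editing on a clause-set model: add the
# specification clause, check consistency, and report the rewrite distance
# (number of clauses added or removed). The consistency check is a placeholder;
# in practice it would call the same machinery used for verification.
def rewrite_distance(m1, m2):
    """Number of clauses present in one model but not the other."""
    return len(set(m1) ^ set(m2))

def edit_with_clause(model, spec_clause, consistent):
    """Return an edited model containing spec_clause, dropping the fewest
    existing clauses needed to stay consistent (greedy, smallest edits first)."""
    candidate = set(model) | {spec_clause}
    if consistent(candidate):
        return candidate                  # distance 1: just add the clause
    for clause in sorted(model):          # otherwise try dropping one clause at a time
        reduced = candidate - {clause}
        if consistent(reduced):
            return reduced                # distance 2: one addition, one removal
    return None                           # give up; a larger edit is needed

# Illustrative model and spec, with clauses as strings and a stub consistency check.
M = {"grasp(X) :- light(X)", "grasp(X) :- near(X)"}
spec = ":- grasp(X), fragile(X)"          # never grasp fragile objects
consistent = lambda clauses: True         # stand-in; assume the edit is consistent
M_edited = edit_with_clause(M, spec, consistent)
print(M_edited, rewrite_distance(M, M_edited))  # edited model and distance 1
```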
A form of post-hoc model editing has been applied to large neural networks, though only by automating the edits. The OpenAI Five agent was trained across several different architectures, with an automatic process for discovering weights to copy (Raiman, Zhang, and Dennison 2019).

Model Editing in DL. After training a large neural network, we (practically speaking) obtain a black-box model. This black box is not easy to manipulate, owing to the number of parameters and the distributed nature of its representation. Through active learning or incremental learning we can update the model: we could add a module that deals with exceptions, or fine-tune on extra training data for low-performing subgroups. However, these do not give us much control over exactly how the black box changes (Settles 2009).

Because the black box is difficult to interpret, we do not fully comprehend what function the network has learned and so are not able to enhance it. Researchers can override the output with a different learned module, but there is no low-level interactive combination of model and human insight.

Transparency

We consider the transparency of learned DL and ILP models, and the transparency of the learning algorithms.

Transparency of the learned model. In contrast to DL models, ILP outputs are represented in an explicit high-level language, making them more transparent. We distinguish between (Lipton 2018): a globally transparent or ‘simulatable’ model; and a locally transparent or ‘decomposable’ model.

Decomposability. A decomposable model is one in which each part of the model (each input, parameter, and computation) admits an intuitive explanation, independent of other instances of the model part (Lipton 2018). The many parameters of a neural network form a distributed representation of a nonlinear function, and as such it is unhelpful to reason about individual parameters.

An ILP output model is a conjunction of predicates and literals. When the background is human-specified, each individual predicate will admit an intuitive explanation. When predicates are invented by the ILP system, the results can be counter-intuitive or long; however, they are still themselves encoded as decomposable clauses of intuitive features.

Simulatability. A user can simulate a model if they can take input and work out what the model would output. More precisely, a model M is simulatable in proportion to the mean accuracy with which, after brief study of M, a population of users can reproduce the output of M on new data.

A small usability study (n=16) found that access to an ILP program did allow users to simulate a concept they were unable to infer themselves (Muggleton et al. 2018b). It also found, as expected, that increasing the complexity reduced simulatability.

Explainability of the learned model. Since ILP models are relatively transparent, explanations are redundant (except in very large programs). DL explainability is a highly active field of research (Gilpin et al. 2018) and has produced many post-hoc tools, making use of: visualisation, text explanations, feature relevance, explanations by example, model simplification and local explanations (Arrieta et al. 2020).

Transparency of the learning algorithm. In DL the learning algorithm optimises weights in a model that already has the same architecture as the output model, such that during training we see many intermediate models. This implies that many of the transparency properties of DL are relevant for its accessibility as well. On the other hand, in ILP we have a distinct training algorithm and output model.

Individual model updates during learning. In DL, backpropagation attributes changes in the loss to individual weights. However, backpropagation can lead to local minima, and so sometimes weights are updated in a direction opposite from the ideal direction. This, along with their (humanly) incomprehensible representation, implies that individual updates are not interpretable.

This contrasts with ILP, where each step of the learning algorithm occurs on a symbolic level (for instance, generalising a candidate hypothesis by dropping one literal). In principle a human user could step through ILP learning and understand the concept represented at each step, the complete effect of each change on the model coverage, and the particular data points that constrain the change (though in practice learning can involve many thousands of steps, and so this can take an impractically long time).

Attributing the solution to individual training inputs. Given an ILP output model and an input example, a human can usually assess whether they are consistent. So in ILP it is relatively clear which example or background predicate is causing the ILP algorithm to output a given model.

In DL, however, it would be very difficult to (for instance) assess whether an image was a member of the training set of a given model. That is, it is difficult to attribute aspects of the output model to individual inputs.

Inductive bias towards interpretability. User-supplied program constraints and bounds on program length mean that we only generate programs of a certain form, which can be interpretable by construction. Moreover, control over inductive bias itself can be seen as a form of accessibility.

Discussion

We have argued that ILP has a number of safety properties that make it attractive compared to DL:
1. ILP is convenient for specification, insofar as it is intuitive to encode examples and properties of correct behaviour;
2. ILP is robust to most syntactic changes in inputs;
3. Program templates and bounds on program length give control over the inductive bias in ILP;
4. We can verify whether an ILP model satisfies an arbitrary Datalog specification by running resolution;
5. We can edit an ILP model by adding or removing clauses;
6. ILP models are interpretable, as they are quite transparent and are reasonably accessible.

Competitiveness of ILP. It is unlikely that the AI community will adopt ILP if its performance is not competitive. Consider chemistry applications (Srinivasan et al. 1997): ILP continues to be applied (Kaalia et al. 2016), but DL efforts are now more extensive (Cova and Pais 2019). One benchmark is suggestive: ILP found success in the early years of the Critical Assessment of protein Structure Prediction (Cootes, Muggleton, and Sternberg 2003), but most submissions now use DL (Senior et al. 2019). These results may not be indicative of ILP’s current potential, as far less research is being invested in ILP than in DL. As a suggestive bound on the ratio of investment, compare the 130 researchers (worldwide) listed on the ILP community hub (Siebers 2019) to the 420 researchers at a single DL lab, Berkeley AI Research, or to the 13,000 attendees of a single DL conference, NeurIPS. This relative neglect might allow for performance gains from research into ILP and hybrid ILP-DL systems.

A deeper concern is the limited domains in which ILP offers its benefits. ILP generates logic programs, where DL approximates continuous functions. We have argued that logic programs are more human-interpretable, especially when the predicates used in the program represent concepts we know and use. Our discussion of ILP’s transparency only applies to domains where the data is already available on a symbolic level. Moreover, a major theme in recent AI, cognitive science, and linguistics is that rule-based approaches are insufficient to express or learn most human-level concepts, where continuous features and similarity to exemplars appear necessary (Rouder and Ratcliff 2006; Spivey 2008; Norvig 2012).

Both of the above suggest a need to unify connectionist and symbolic methods. Recent attempts implement relational or logical reasoning on neural networks (Garnelo and Shanahan 2019; Evans and Grefenstette 2018). However, from a safety perspective, these unifications lose desirable properties. We hope that future versions not only increase in performance, but also retain their safety potential.

ILP as specification module. The above suggests a fruitful role for ILP: as a specification generator in a mixed AI system. We may not be able to directly specify safety properties, but may be able to give positive and negative examples of safe behaviour. If it is natural to formulate these examples in natural language or logic, then ILP can generate hypotheses based on these partial specifications. Since ILP output models are easy to interpret, we may be able to verify whether they meet our preferences, and perhaps edit them to account for noisy or missing features. In certain cases (e.g. Datalog), it may be possible to formally verify the hypothesis’ correctness. The specification can then be transferred losslessly to any other learning system that can handle logical expressions (e.g. graph neural networks).
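A schematic of this proposed pipeline is sketched below; every component is a placeholder of ours (the induced clause, the review step, and the export format are not an implemented system), intended only to show how the pieces would fit together:

```python
# Schematic of the proposed pipeline; all components are illustrative stubs.
# An ILP-style step proposes specification clauses from examples of safe and
# unsafe behaviour, a review step keeps only the clauses a human (or verifier)
# approves, and the result is exported as constraints for a downstream learner
# that accepts logical expressions.
def induce_spec(pos_examples, neg_examples):
    """Stand-in for an ILP run over examples of safe/unsafe behaviour."""
    return [":- grasp(X), fragile(X)"]          # e.g. induced from unsafe grasps

def review(clauses, approved):
    """Interpretable clauses can be inspected, edited, or rejected before use."""
    return [c for c in clauses if c in approved]

def export_constraints(clauses):
    """Hand the accepted clauses to any learner that can handle logical constraints."""
    return {"hard_constraints": clauses}

candidates = induce_spec(pos_examples=["grasp(anvil)"], neg_examples=["grasp(vase)"])
accepted = review(candidates, approved={":- grasp(X), fragile(X)"})
print(export_constraints(accepted))             # {'hard_constraints': [':- grasp(X), fragile(X)']}
```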
ILP’s differences from DL suggest solutions to DL’s safety shortcomings. We are hopeful that hybrid systems can provide safety guarantees.

Acknowledgements. Supported by UKRI studentships EP/S022937/1 and EP/S023356/1 and the AI Safety Research Programme. Thanks to Alex Jackson for suggesting ILP has potential as a specification generator.
References

Anderson, G.; Verma, A.; Dillig, I.; and Chaudhuri, S. 2020. Neurosymbolic Reinforcement Learning with Formally Verified Exploration. In NeurIPS.

Armstrong, S.; and Mindermann, S. 2018. Occam’s razor is insufficient to infer the preferences of irrational agents. In NeurIPS.

Arrieta, A. B.; Rodríguez, N. D.; Ser, J. D.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; Chatila, R.; and Herrera, F. 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58.

Botoeva, E.; Kouvaros, P.; Kronqvist, J.; Lomuscio, A.; and Misener, R. 2020. Efficient Verification of ReLU-based Neural Networks via Dependency Analysis. In AAAI.

Boytcheva, S. 2002. Overview of ILP Systems. Cybernetics and Information Technologies.

Clarke, E. M.; and Emerson, E. A. 1981. The design and synthesis of synchronization skeletons using temporal logic. In Workshop on Logics of Programs.

Condry, N. 2016. Meaningful Models: Utilizing Conceptual Structure to Improve Machine Learning Interpretability. arXiv:1607.00279.

Cootes, A. P.; Muggleton, S.; and Sternberg, M. J. 2003. The Automatic Discovery of Structural Principles Describing Protein Fold Space. J. Mol. Biol.

Cova, T. F.; and Pais, A. A. 2019. Deep Learning for Deep Chemistry: Optimizing the Prediction of Chemical Patterns. Frontiers in Chemistry 7.

Cropper, A.; Dumančić, S.; and Muggleton, S. 2020. Turning 30: New Ideas in Inductive Logic Programming. In IJCAI.

Cropper, A.; and Morel, R. 2020. Learning programs by learning from failures. arXiv:2005.02259.

Cropper, A.; and Tourret, S. 2018. Derivation reduction of metarules in meta-interpretive learning. In International Conference on Inductive Logic Programming.

De Raedt, L.; Dries, A.; Thon, I.; Van den Broeck, G.; and Verbeke, M. 2015. Inducing probabilistic relational rules from probabilistic examples. In IJCAI.

Edelmann, R.; and Kunčak, V. 2019. Neural-Network Guided Expression Transformation. arXiv:1902.02194.

Evans, R.; and Grefenstette, E. 2018. Learning Explanatory Rules from Noisy Data. JAIR.

Garnelo, M.; and Shanahan, M. 2019. Reconciling deep learning with symbolic artificial intelligence: representing objects and relations. Current Opinion in Behavioral Sciences.

Gilpin, L.; Bau, D.; Yuan, B.; Bajwa, A.; Specter, M.; and Kagal, L. 2018. Explaining Explanations: An Overview of Interpretability of Machine Learning. In IEEE DSAA.

Goodfellow, I.; Bengio, Y.; and Courville, A. 2016. Deep Learning. MIT Press. http://www.deeplearningbook.org.

Hüllermeier, E.; Fober, T.; and Mernberger, M. 2013. Inductive Bias. Encyclopedia of Systems Biology.

Kaalia, R.; Srinivasan, A.; Kumar, A.; and Ghosh, I. 2016. ILP-assisted de novo drug design. Machine Learning 103.

Kashinath, K.; Marcus, P.; et al. 2019. Enforcing Physical Constraints in Neural Networks through Differentiable PDE Layer. In ICLR DeepDiffEq workshop.

Katz, G.; Barrett, C.; Dill, D.; Julian, K.; and Kochenderfer, M. 2017. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. Computer Aided Verification. Lecture Notes in Computer Science 10426.

Kroy, M. 1976. A Partial Formalization of Kant’s Categorical Imperative. An Application of Deontic Logic to Classical Moral Philosophy. Kant-Studien 67.

Lipton, Z. C. 2018. The mythos of model interpretability. Queue 16.

Madelaine, F.; and Martin, B. 2018. On the complexity of the model checking problem. SIAM J. Comput. 47.

Márquez-Neila, P.; Salzmann, M.; and Fua, P. 2017. Imposing hard constraints on deep networks: Promises and limitations. arXiv:1706.02025.

McNamara, P. 2019. Deontic Logic. https://plato.stanford.edu/archives/sum2019/entries/logic-deontic/.

Mitchell, T. M. 1980. The need for biases in learning generalizations. Department of Computer Science, Laboratory for Computer Science Research.

Muggleton, S. 1999. Inductive Logic Programming: Issues, results and the challenge of Learning Language in Logic. Artificial Intelligence 114.

Muggleton, S.; Dai, W.-Z.; Sammut, C.; Tamaddoni-Nezhad, A.; Wen, J.; and Zhou, Z.-H. 2018a. Meta-interpretive learning from noisy images. Machine Learning 107.

Muggleton, S.; and de Raedt, L. 1994. Inductive Logic Programming: Theory and methods. The Journal of Logic Programming 19/20.

Muggleton, S.; Schmid, U.; Zeller, C.; Tamaddoni-Nezhad, A.; and Besold, T. 2018b. Ultra-Strong Machine Learning: comprehensibility of programs learned with ILP. Machine Learning 107.

Norvig, P. 2012. Chomsky and the two cultures of statistical learning. Significance 9.

Ortega, P. A.; and Maini, V. 2018. Building safe artificial intelligence: specification, robustness, and assurance. https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1.

Pathak, D.; Krähenbühl, P.; and Darrell, T. 2015. Constrained Convolutional Neural Networks for Weakly Supervised Segmentation. In IEEE ICCV.

Payani, A.; and Fekri, F. 2019. Inductive Logic Programming via Differentiable Deep Neural Logic Networks. CoRR abs/1906.03523.

Peterson, C. 2014. The categorical imperative: Category theory as a foundation for deontic logic. Applied Logic 12.

Piskac, R.; de Moura, L.; and Bjørner, N. 2008. Deciding effectively propositional logic with equality. Technical Report MSR-TR-2008-181, Microsoft Research.

Platt, J. C.; and Barr, A. H. 1988. Constrained differential optimization. In NeurIPS.

Poggio, T.; Liao, Q.; and Banburski, A. 2020. Complexity control by gradient descent in deep networks. Nature Communications 11.

Powell, D.; and Thévenod-Fosse, P. 2002. Dependability issues in AI-based autonomous systems for space applications. In IARP-IEEE/RAS joint workshop.

Quinlan, J. R. 1990. Learning Logical Definitions from Relations. Machine Learning 5.

Raiman, J.; Zhang, S.; and Dennison, C. 2019. Neural Network Surgery with Sets. arXiv preprint.

Rouder, J. N.; and Ratcliff, R. 2006. Comparing Exemplar- and Rule-Based Theories of Categorization. Current Directions in Psychological Science 15.

Senior, A. W.; Evans, R.; Jumper, J.; Kirkpatrick, J.; Sifre, L.; Green, T.; Qin, C.; Žídek, A.; Nelson, A. W. R.; Bridgland, A.; Penedones, H.; Petersen, S.; Simonyan, K.; Crossan, S.; Kohli, P.; Jones, D. T.; Silver, D.; Kavukcuoglu, K.; and Hassabis, D. 2019. Improved protein structure prediction using potentials from deep learning. Nature 577.

Settles, B. 2009. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences.

Siebers, M. 2019. inductive-programming community site. https://inductive-programming.org/people.html.

Singh, G.; Gehr, T.; Püschel, M.; and Vechev, M. 2019. An Abstract Domain for Certifying Neural Networks. Proc. ACM Program. Lang. 3.

Spivey, M. 2008. The continuity of mind. Oxford University Press.

Srinivasan, A. 2006. The Aleph System. https://www.cs.ox.ac.uk/activities/programinduction/Aleph/aleph.html.

Srinivasan, A.; King, R. D.; Muggleton, S.; and Sternberg, M. J. E. 1997. Carcinogenesis Predictions Using ILP. In Inductive Logic Programming, 7th International Workshop.

Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR 15.

Sutton, R. S.; and Barto, A. G. 2018. Reinforcement learning: An introduction. MIT Press.

Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2014. Intriguing properties of neural networks. In ICLR.

Szymanski, L.; McCane, B.; and Albert, M. 2018. The effect of the choice of neural network depth and breadth on the size of its hypothesis space. arXiv:1806.02460.

Tausend, B. 1994. Representing biases for inductive logic programming. In ECML.

Vapnik, V.; and Chervonenkis, A. 2015. On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities. Springer.

Wang, S.; Pei, K.; Whitehouse, J.; Yang, J.; and Jana, S. 2018. Efficient Formal Safety Analysis of Neural Networks. In NeurIPS.

Witten, I.; Frank, E.; Hall, M.; and Pal, C. 2017. Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann.

Xiao, K. Y.; Tjeng, V.; Shafiullah, N. M.; and Madry, A. 2019. Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability. In ICLR.

Yudkowsky, E. 2011. Complex Value Systems are Required to Realize Valuable Futures. The Singularity Institute, San Francisco, CA.

Zhang, C.; Bengio, S.; Hardt, M.; Recht, B.; and Vinyals, O. 2017. Understanding deep learning requires rethinking generalization. In ICLR.

Zhang, H.; Chen, H.; Xiao, C.; Gowal, S.; Stanforth, R.; Li, B.; Boning, D.; and Hsieh, C.-J. 2020. Towards Stable and Efficient Training of Verifiably Robust Neural Networks. In ICLR.