<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Interactively Learning Moral Norms via Analogy</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Joseph Blass</string-name>
          <email>joeblass@u.northwestern.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ph.D. Candidate, Qualitative Reasoning Group, Northwestern University</institution>
          ,
          <addr-line>Evanston, IL</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <fpage>256</fpage>
      <lpage>258</lpage>
      <abstract>
        <p>Autonomous systems must consider the moral ramifications of their actions. Moral norms vary among people, posing a challenge for encoding them explicitly in a system. This paper proposes to enable autonomous agents to use analogical reasoning techniques to interactively learn an individual's morals. Keywords: MoralDM, Structure-Mapping, the Companions Architecture.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>Introduction: Challenge and Research Goals</title>
      <p>
Should a self-driving car put its passengers at risk and swerve to avoid a jaywalker, or
protect its passengers and hit him? To participate in our society, computers need to
share our ethics. As these systems become more autonomous, they must consider the
moral ramifications of their actions. I intend to build an AI moral-reasoning system that
strives for good, but that can also choose among exclusively bad options, by acquiring and
applying human morals. This system will learn moral norms through natural-language
interaction with humans and analogical generalization, and will apply those norms by analogy.</p>
      <p>
        The diversity of moral norms and concerns makes hand-encoding an individual’s
moral sense, or providing case-by-case instructions, impossible. Natural interaction will
be key, since users may lack both the technical skill and a sufficiently explicit grasp of their
own morals to encode them themselves. Also, since human morals likely do not depend
on first-principles reasoning (FPR)
        <xref ref-type="bibr" rid="ref6">(Haidt, 2001)</xref>
        , and since moral rules contradict and
trade off against each other, I intend to minimize FPR in the system. A pure FPR moral-reasoning
system would need rules for all possible trade-offs, would have to ignore
certain morals (a bad idea), or would freeze when moral obligations conflict. Analogical
reasoning can avoid these problems when provided a good analogue.
      </p>
      <p>
        MoralDM
        <xref ref-type="bibr" rid="ref2">(Dehghani et al. 2009)</xref>
        is a computer model of moral reasoning that takes in
a moral dilemma in natural language, uses a natural language understanding (NLU)
system to generate Research-Cyc-derived predicate-logic representations of the
dilemma, and uses analogy over resolved cases and FPR over explicit moral rules to make
moral decisions consistent with humans’. MoralDM is the starting point for my work.
      </p>
      <p>
        The Structure Mapping Engine (SME), based on
        <xref ref-type="bibr" rid="ref4">Gentner’s (1983</xref>
        ) Structure
Mapping Theory of analogy, constructs an alignment between two relational cases and
draws inferences from it. SME can apply norms by analogy from stories
        <xref ref-type="bibr" rid="ref2">(Dehghani et
al. 2009)</xref>
        . Analogy is a good fit for moral decision-making because both are guided by
structure, not features. Consider the following examples. 1) A bomb will kill nine
people in a room, but you can toss it outside, where it will kill one person. 2) A bomb will
kill nine people, but you can toss someone onto it to absorb the blast and save the nine.
Most say tossing the bomb, but not the person, is morally acceptable. These scenarios
differ only structurally, in what fills which role; the entities and action types themselves
are shared. The classic trolley problem (a trolley will hit five people unless it is diverted
to a side track, where it will hit one person), in contrast, has different features from, but the
same structure as, the first bomb case. Humans see these two cases as morally alike.
      </p>
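      <p>As a toy illustration of this structural point (this is not SME itself; the predicate and entity names below are invented), one can abstract away entity names and compare only the relational skeletons of two cases. Under that abstraction, the trolley case and the first bomb case become identical.</p>
      <preformat>
```python
# Toy illustration of structural (not featural) similarity.
# All predicate and entity names are hypothetical.

def skeleton(case):
    """Replace entity names with positional variables, keeping only
    the relational structure of the case."""
    entities = []
    def var(entity):
        if entity not in entities:
            entities.append(entity)
        return "?x%d" % entities.index(entity)
    return {(pred,) + tuple(var(a) for a in args) for pred, *args in case}

# First bomb case: redirect the threat away from nine people, onto one.
bomb = [("threatens", "bomb", "nine-people"),
        ("can-redirect", "agent", "bomb", "outside"),
        ("threatens-after", "bomb", "one-person")]

# Trolley case: different entities, same role structure.
trolley = [("threatens", "trolley", "five-people"),
           ("can-redirect", "agent", "trolley", "side-track"),
           ("threatens-after", "trolley", "one-person")]

print(skeleton(bomb) == skeleton(trolley))  # True: the skeletons coincide
```
      </preformat>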
      <p>
        The Sequential Analogical Generalization Engine (SAGE) builds case
generalizations that emphasize shared, and deprecate case-specific, structures. SAGE uses a case
library of generalizations and exemplars. Generalizations contain facts from constituent
cases: non-identical corresponding entities are replaced by abstract ones; probabilities
indicate the proportion of assimilated cases each fact is present in. Given a probe,
SAGE uses SME to find the most similar case in its case library. If the match is strong
enough, the case is assimilated; if not, it is added as an exemplar. SAGE can use
near-misses to determine defining characteristics of category members
        <xref ref-type="bibr" rid="ref7">(McLure et al., 2015)</xref>
        .
      </p>
      <p>
        The Companion Cognitive Architecture emphasizes the ubiquity of qualitative
representations and analogical reasoning in human cognition. Companion systems are
designed to work alongside and interactively with humans
        <xref ref-type="bibr" rid="ref3">(Forbus &amp; Hinrichs, 2006)</xref>
        .
      </p>
    </sec>
    <sec id="sec-3">
      <title>Proposed Research and Progress</title>
      <p>I propose to extend MoralDM in the Companion Architecture to learn to model a human
user’s morals. The system will learn to recognize and extract moral norms through the
generalization process. It will get moral stories in natural language from the user,
generate qualitative representations of those stories, generalize over those representations,
and use SME to apply morals from the generalizations. I will extend MoralDM’s
analogical reasoning, integrate emotional appraisal, and extend its NLU with a moral lexicon.</p>
      <p>Previously MoralDM’s analogical reasoning module exhaustively matched over
resolved cases, which is computationally expensive and cognitively implausible. SME
over ungeneralized cases also sees feature-similar but morally different cases (e.g., the
bomb scenarios) as a good match, due to how much they have in common.</p>
      <p>MAC/FAC is a two-step model of analogical retrieval. MAC efficiently computes
dot-products between the content vectors of the probe and each case in memory (a
coarse similarity measure). FAC then performs SME mappings on the most similar
cases. MAC sees cases concerning mostly the same entities as the probe as good
potential matches, even if the structures differ. Using MAC/FAC over generalizations rather
than exemplars solves this problem, since generalizations emphasize defining structure.
Abstract generalizations applied by analogy can therefore function as moral rules.</p>
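      <p>The two stages can be sketched as follows, with simple predicate-count vectors standing in for MAC’s content vectors and a caller-supplied scorer standing in for the full SME mapping (all names here are illustrative):</p>
      <preformat>
```python
# Sketch of MAC/FAC two-stage retrieval: a cheap dot-product filter (MAC)
# followed by an expensive structural match (FAC) over the few survivors.

def content_vector(case, vocabulary):
    """Count how often each predicate occurs in the case."""
    return [sum(1 for fact in case if fact[0] == term) for term in vocabulary]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def mac_fac(probe, memory, vocabulary, structural_score, k=3):
    # MAC: rank every case by coarse vector similarity to the probe.
    pv = content_vector(probe, vocabulary)
    ranked = sorted(memory, key=lambda c: dot(pv, content_vector(c, vocabulary)),
                    reverse=True)
    # FAC: run the expensive structural matcher only on the top-k survivors.
    return max(ranked[:k], key=lambda c: structural_score(probe, c))

probe = [("threatens", "trolley", "five"), ("redirect", "trolley", "side")]
mem1  = [("threatens", "bomb", "nine"), ("redirect", "bomb", "out")]
mem2  = [("threatens", "bomb", "nine"), ("absorb", "person", "bomb")]

# A trivial structural scorer: count shared predicates.
overlap = lambda p, c: len({f[0] for f in p}.intersection({f[0] for f in c}))
best = mac_fac(probe, [mem1, mem2], ["threatens", "redirect", "absorb"], overlap)
print(best == mem1)  # True: the structurally matching case is retrieved
```
      </preformat>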
      <p>
        We have found that reasoning by analogy over generalizations led to more
humanlike judgments than using ungeneralized cases
        <xref ref-type="bibr" rid="ref1">(Blass &amp; Forbus, 2015)</xref>
        . Reasoning can
be further improved using
        <xref ref-type="bibr" rid="ref7">McLure et al.’s (2015)</xref>
        work on near-misses to illustrate
category boundaries and the conditions for membership or exclusion. MoralDM also
still reasons using FPR about facts relevant to moral judgment, such as directness of
harm. These are not explicitly stated, though we recognize them easily; MoralDM uses
them in a consistency check to ensure the quality of retrieved analogues. Near-misses
would let MoralDM use analogy, not FPR, to find the facts for the consistency check.
      </p>
      <p>
        We want to expand the range and provenance of stories for MoralDM to learn from.
One option is to crowd-source moral stories to present to a user for endorsement or
rejection, rather than force the user to provide them all. QRG’s NLU system, EA NLU,
generates qualitative representations from English input, but its moral vocabulary is
currently limited. The Moral Foundations Dictionary
        <xref ref-type="bibr" rid="ref5">(Graham et al., 2009)</xref>
        is a moral
lexicon; to enable EA NLU to understand moral stories, I will ensure lexical and
ontological support for this vocabulary. Another NLU challenge is how to infer information
implicit in the text. Work has been done at QRG on inferring narrative information,
including about moral responsibility
        <xref ref-type="bibr" rid="ref8">(Tomai &amp; Forbus, 2008)</xref>
        . I will extend EA NLU’s
abductive reasoning as needed to support moral narrative understanding. Finally, I will
integrate emotional appraisal
        <xref ref-type="bibr" rid="ref9">(Wilson et al. 2013)</xref>
        into MoralDM. Emotional appraisal
can help recognize moral violations and enforce moral decisions.
      </p>
      <p>My goal is to have a Companion running MoralDM with the above extensions
interact with a human and build a model of their moral system. MoralDM could not
previously do this, since it required all moral norms to be explicitly encoded and modeled a
society’s aggregate judgments rather than an individual’s. The new system will have the human
tell it a moral story, crowd-source thematically similar stories, and ask the human which
illustrate the same moral principle (the others are near-misses). For each story, the
system would predict the moral value of actions and compare its predictions to the human’s
moral labels. When the core facts of the generalization stop changing and the system’s
labels consistently match the human’s, the system has mastered that moral domain.</p>
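      <p>A hypothetical sketch of that stopping criterion (all names, thresholds, and the window size are assumptions): mastery requires both that the generalization’s core facts have stopped changing across recent snapshots and that the system’s labels agree with the human’s.</p>
      <preformat>
```python
# Sketch of a mastery check for one moral domain. "Core facts" are those
# present in nearly all of a generalization's assimilated cases.

def core_facts(gen, cutoff=0.9):
    """Facts appearing in at least `cutoff` of the assimilated cases;
    gen maps each fact to its count, with "n" total cases."""
    return {f for f, k in gen["counts"].items() if k / gen["n"] >= cutoff}

def mastered(core_history, predictions, labels, window=3):
    """core_history: successive snapshots of the core-fact set."""
    stable = (len(core_history) >= window
              and all(h == core_history[-1] for h in core_history[-window:]))
    agrees = bool(predictions) and all(p == l for p, l in zip(predictions, labels))
    return stable and agrees

snapshot = core_facts({"n": 4, "counts": {"harm(act)": 4, "bomb(obj)": 2}})
print(snapshot)  # {'harm(act)'}: the case-specific fact drops out
print(mastered([snapshot, snapshot, snapshot],
               ["wrong", "okay"], ["wrong", "okay"]))  # True
```
      </preformat>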
      <p>This project brings challenges. How much FPR will remain necessary? How must
EA NLU be extended to understand moral narratives? What narrative inferences should
be made about implicit information? Nonetheless, I believe I can build a system that
interactively learns to model an individual’s morality.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Blass</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Forbus</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Moral Decision-Making by Analogy: Generalizations vs. Exemplars</article-title>
          .
          <source>Proceedings of the 29th AAAI Conference on Artificial Intelligence</source>
          , Austin, TX.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Dehghani</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sachdeva</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ekhtiari</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gentner</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Forbus</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>The role of cultural narratives in moral decision making</article-title>
          .
          <source>In Proceedings of the 31st Annual Conference of the Cognitive Science Society.</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Forbus</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Hinrichs</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Companion Cognitive Systems: A Step Towards Human-Level AI</article-title>
          .
          <source>AI Magazine</source>
          <volume>27</volume>
          (
          <issue>2</issue>
          ),
          <fpage>83</fpage>
          -
          <lpage>95</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Gentner</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>1983</year>
          ).
          <article-title>Structure-Mapping: A Theoretical Framework for Analogy</article-title>
          .
          <source>Cognitive Science</source>
          ,
          <volume>7</volume>
          (
          <issue>2</issue>
          ),
          <fpage>155</fpage>
          -
          <lpage>170</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Graham</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haidt</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Nosek</surname>
            ,
            <given-names>B. A.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Liberals and conservatives rely on different sets of moral foundations</article-title>
          .
          <source>Journal of Personality and Social Psychology</source>
          ,
          <volume>96</volume>
          (
          <issue>5</issue>
          ),
          <fpage>1029</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Haidt</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>The Emotional Dog and its Rational Tail: A Social Intuitionist Approach to Moral Judgment</article-title>
          .
          <source>Psychological Review</source>
          ,
          <volume>108</volume>
          (
          <issue>4</issue>
          ),
          <fpage>814</fpage>
          -
          <lpage>834</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>McLure</surname>
            ,
            <given-names>M.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Friedman</surname>
            ,
            <given-names>S. E.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Forbus</surname>
            ,
            <given-names>K. D.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Extending Analogical Generalization with Near-Misses</article-title>
          .
          <source>Proceedings of the 29th AAAI Conference on Artificial Intelligence</source>
          , Austin, TX.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Tomai</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Forbus</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>Using Qualitative Reasoning for the Attribution of Moral Responsibility</article-title>
          .
          <source>In Proceedings of the 30th Annual Conference of the Cognitive Science Society</source>
          . Washington, D.C.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Wilson</surname>
            ,
            <given-names>J. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Forbus</surname>
            ,
            <given-names>K. D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>McLure</surname>
            ,
            <given-names>M. D.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Am I Really Scared? A Multi-phase Computational Model of Emotions</article-title>
          .
          <source>In Proceedings of the Second Annual Conference on Advances in Cognitive Systems.</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>