<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Unifying Framework for Managing Conflict-laden Content</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Max Rapp</string-name>
          <aff>FAU Erlangen/Nürnberg</aff>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <abstract>
        <p>Human-Computer Interaction (HCI) increasingly takes place on the behavioural level: human behaviour is tracked and aggregated on an immense scale and used to elicit desired machine behaviour through training. On the human side, such behaviour is characterised by its low level of consciousness and reflection, the short-term desires and whims that drive it, and the small-scale decisions that constitute it. Crucially, users who feel anonymous but are in fact highly observable do not intend their exhibited behaviour to shape their online experience. On the other side, machine behaviour is regarded as desirable if it succeeds in predicting and influencing human behaviour. This is achieved through learning algorithms whose output is highly opaque. The result is systems that, while highly effective at giving users what they “want”, do so by exploiting and feeding the biases of “System 1” [<xref ref-type="bibr" rid="ref2">2</xref>], creating a feedback loop that manipulates and disempowers users by compromising their self-control and their understanding of the systems with which they interact. This thesis project adheres to the belief that Artificial Intelligence (AI) should instead put users in the driver's seat. Thus AI systems that interact with humans should be based on principles decided upon by users through a conscious, reflected, high-level process. The inner workings of such systems should be transparent, and the reasons for their decisions and actions should be explainable to the users. The way humans deliberate on and justify their actions, or rationalise their behaviour, is through arguments.</p>
        <p>Machines that achieve the goals stated above will therefore require argumentation capabilities: they need to engage users' conscious focus on the options at hand by offering arguments as entry points into reflective processes; they need to assist users in this process through argumentative support and elicit the users' stance on the high-level principles that should guide the HCI; finally, they need to motivate and rationalise their actions through explanations that weigh the arguments for and against each option, giving a veracious yet human-understandable representation of the system's actual reasoning process. The content that needs to be represented, processed and created in these argumentative interactions is highly conflict-laden: not only do arguments frequently support contradicting conclusions (rebuttal), they may also attack other arguments' premises (undercut), the mode of inference they employ, or even meta-content such as their utterers. A plethora of argumentation theories has been devised to represent and reason with such content. They comprise frameworks of different levels of formality, intertheoretical integration and implementation. Likewise, the range and depth of application domains for these frameworks is growing rapidly, confronting theory with new and diverse requirements. This thesis project seeks to develop a unifying knowledge representation framework for argumentation that enables the rapid implementation of use-case-adapted formalisms; the creation, hosting, sharing, processing and visualization of conflict-laden content across formalisms and application domains; the dynamic updating of such content through dialogical processes; and ultimately the representation of human agents' minds as such conflict-laden content.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <p>To gather requirements for such a framework and to put its usefulness to the test, a range of applications
is planned that roughly corresponds to the system features delineated above: the creation of an atlas of
argumentation theories; the formalization of and reasoning on legal texts; a dynamic error- and conflict-handling
system; and an argumentation-based recommender system for educational content.</p>
      <p>
        As a first step towards these goals the MMT language and system [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] is extended with argumentation capabilities:
theory graphs are extended by a defined attack relation, yielding what we call context graphs. We submit the
ALMANAC hypothesis: “Any argumentation system A can be refactored into a classical object language L
and a context graph scheme G such that A is isomorphic to ⟨L, G⟩.” To test the hypothesis, a logic atlas of the
existing argumentation theories and their interrelations will be built. MMT’s proof checking, and in the future
its proof assistant capabilities, in combination with an already partially implemented argumentation-semantics
computation and visualization tool suite for MMT, should immediately furnish a prototype implementation for
many of the formalisms in the atlas.
      </p>
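      <p>For illustration only, the following Python sketch (entirely outside MMT; arguments and attacks are hypothetical toy data) shows the kind of computation an argumentation-semantics tool performs: a Dung-style framework given by an attack relation, with its grounded extension obtained by iterating the characteristic function F(S) = {a | every attacker of a is attacked by some member of S} to its least fixed point.</p>

```python
# Illustrative sketch, not part of the MMT implementation: a minimal
# Dung-style abstract argumentation framework and its grounded extension.

def grounded_extension(arguments, attacks):
    """arguments: set of labels; attacks: set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(s):
        # Arguments all of whose attackers are themselves attacked from s.
        return {a for a in arguments
                if all(any((b, c) in attacks for b in s) for c in attackers[a])}

    extension = set()
    while True:  # iterate F from the empty set; monotone, so it terminates
        nxt = defended(extension)
        if nxt == extension:
            return extension
        extension = nxt

# Toy example: a rebuts b, b rebuts c; a is unattacked and defends c.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

      <p>In a mutual-attack cycle with no unattacked argument, the same function returns the empty extension, reflecting the sceptical character of grounded semantics.</p>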
      <p>
        The tools developed in the aforementioned extension of MMT are put to use in a project on the formalization
of legal texts (JLogic). Here, proof checking will be employed to assess the correctness of legal arguments found
in the text. The created content will be hosted on the MathHub [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] platform.
      </p>
      <p>Later work will add dynamics to furnish an error- and conflict-handling system: automated
generation of conflict graphs, together with MMT’s type checking, will yield the capability to provide automated,
illuminating error messages and to alert users to the presence of conflicting content in a theory graph.</p>
      <p>Finally, we will explore how far argumentation goes in equipping AI with a theory of mind, through a dialogical
recommender system for educational content and code documentation: based on students’/users’ queries, the
system will construct arguments regarding their current knowledge. Likewise, it will attempt to infer
the arguments that led them to arrive at erroneous conceptions of the domain. It will use these
representations to suggest precisely targeted learning or documentation items to them.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Mihnea</given-names>
            <surname>Iancu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Constantin</given-names>
            <surname>Jucovschi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Michael</given-names>
            <surname>Kohlhase</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Tom</given-names>
            <surname>Wiesing</surname>
          </string-name>
          .
          <article-title>System description: MathHub.info</article-title>
          . In
          <string-name>
            <given-names>Stephen M.</given-names>
            <surname>Watt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>James H.</given-names>
            <surname>Davenport</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Alan P.</given-names>
            <surname>Sexton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Petr</given-names>
            <surname>Sojka</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Josef</given-names>
            <surname>Urban</surname>
          </string-name>
          , editors,
          <source>Intelligent Computer Mathematics</source>
          , pages
          <fpage>431</fpage>
          -
          <lpage>434</lpage>
          , Cham,
          <year>2014</year>
          . Springer International Publishing.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Daniel</given-names>
            <surname>Kahneman</surname>
          </string-name>
          .
          <article-title>Thinking, fast and slow</article-title>
          . Farrar, Straus and Giroux, New York,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>F.</given-names>
            <surname>Rabe</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Kohlhase</surname>
          </string-name>
          .
          <article-title>A Scalable Module System</article-title>
          .
          <source>Information and Computation</source>
          ,
          <volume>230</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>54</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>