<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Causal Knowledge Graph for Scene Understanding in Autonomous Driving</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Utkarshani Jaimini</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cory Henson</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Amit Sheth</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Artificial Intelligence Institute, University of South Carolina</institution>
          ,
          <addr-line>Columbia, SC</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Bosch Center for Artificial Intelligence</institution>
          ,
          <addr-line>Pittsburgh, PA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The current approaches to autonomous driving focus on learning from observational or simulated data. These approaches are based on correlations rather than causation. For safety-critical applications, like autonomous driving, it is important to represent causal dependencies among variables in addition to the domain knowledge expressed in a knowledge graph. This allows for a better understanding of causation in scenarios that have not been observed, such as malfunctions or accidents. The causal knowledge graph, coupled with domain knowledge, demonstrates how autonomous driving scenes can be represented, learned, and explained using counterfactual and intervention reasoning to infer and understand the behavior of entities in the scene.</p>
      </abstract>
      <kwd-group>
        <kwd>Causality</kwd>
        <kwd>causal knowledge graph</kwd>
        <kwd>intervention</kwd>
        <kwd>counterfactual</kwd>
        <kwd>autonomous driving</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>
        element, like a stop line marking, affects the behavior of the vehicle? Or predicting the vehicle's
response if a pedestrian is jaywalking? What would be the impact on the vehicle's behavior
concerning a pedestrian if the vehicle fails to identify the stop line marking?
A Causal Knowledge Graph (CausalKG) incorporates causal knowledge into a KG, including
causal domain knowledge encoded within a causal Bayesian network (CBN), and automates
causal inference tasks [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. It leverages the strengths of CBNs, causal ontology, and KGs to
deliver robust and explainable insights. The primary benefit of building a CausalKG lies in
integrating causal knowledge into reasoning and prediction processes, which is crucial for
safety-critical applications<sup>2</sup>. This integration not only boosts the accuracy and reliability of
current AI algorithms but also provides improved explainability of outcomes, thereby enhancing
trust and confidence in the system. In the context of scene understanding for autonomous
driving, a real-world AD dataset, Pandaset<sup>3</sup>, was used to build a causal knowledge graph<sup>4</sup>. The
CausalKG contains causal relations and causal effect weights estimated using the data from
Pandaset and a derived CBN. The causal effect weights quantify the effects of interventions
on one or more variables in the dataset. When queried, the CausalKG provided insights into
intervention and counterfactual reasoning, demonstrating its relevance and applicability for
scene understanding. It was observed that a stop line marking (STL) has a higher causal effect on
a pedestrian walking with an object, such as a stroller, backpack, or umbrella. Pedestrians with
objects appear to be more responsible, following traffic rules while crossing the street. If
a pedestrian with an object is jaywalking (walking in a scene with no STL), there is a positive
causal effect on the stopping of a vehicle. Jaywalking pedestrians with an object have a higher
causal effect on stopping a vehicle than jaywalking pedestrians without an object, as vehicles or
drivers tend to be more alert to pedestrians walking with an object. Similarly, if a pedestrian is
standing at an STL in a scene but the vehicle fails to identify the STL, the vehicle will continue
to move. The causal effects estimated using the CBN and the AD dataset, incorporated into a KG,
provide a better explanation and understanding of interactions between the entities in the
driving scene, shedding light on its complex dynamics. CausalKGs can
be used in the future to predict new causal entities in the driving scene. Acknowledgments:
NSF Awards #2335967 and #2119654.
<sup>2</sup>https://tinyurl.com/m5ukmn8m
<sup>3</sup>https://scale.com/open-av-datasets/pandaset
<sup>4</sup>CausalKG for autonomous driving: https://github.com/utkarshani/CausalKG-Pandaset
      </p>
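      <p>The intervention reasoning described above can be sketched in a few lines of Python. The snippet below estimates an average causal effect by enumeration on a toy causal Bayesian network; the graph structure (STL influences whether a pedestrian crosses at the line, which influences whether the vehicle stops), the variable names, and all probabilities are illustrative assumptions for this sketch, not values from the Pandaset-derived CBN or the CausalKG.</p>

```python
# Toy CBN, assumed for illustration: STL -> CrossesAtLine -> VehicleStops.
# Probabilities below are made up; they are NOT from the Pandaset CBN.
P_CROSS_GIVEN_STL = {True: 0.9, False: 0.4}    # P(crosses at line | STL present?)
P_STOP_GIVEN_CROSS = {True: 0.95, False: 0.6}  # P(vehicle stops | crosses at line?)

def p_vehicle_stops(do_stl: bool) -> float:
    """P(VehicleStops = 1 | do(STL = do_stl)), computed by enumeration.

    STL has no parents in this toy graph, so intervening on it coincides
    with conditioning; the do() phrasing just makes the query explicit.
    """
    p_cross = P_CROSS_GIVEN_STL[do_stl]
    total = 0.0
    for crosses in (True, False):
        p_c = p_cross if crosses else 1.0 - p_cross
        total += p_c * P_STOP_GIVEN_CROSS[crosses]
    return total

# Average causal effect (ACE) of the stop line marking on the vehicle stopping:
ace = p_vehicle_stops(True) - p_vehicle_stops(False)
print(p_vehicle_stops(True), p_vehicle_stops(False), ace)
```

      <p>In a CausalKG such causal effect weights, once estimated, are stored as edge annotations in the graph so that intervention queries can be answered by lookup rather than re-estimation.</p>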
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pearl</surname>
          </string-name>
          , Causality, Cambridge University Press,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>U.</given-names>
            <surname>Jaimini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sheth</surname>
          </string-name>
          , CausalKG:
          <article-title>Causal knowledge graph explainability using interventional and counterfactual reasoning</article-title>
          ,
          <source>IEEE Internet Computing</source>
          <volume>26</volume>
          (
          <year>2022</year>
          )
          <fpage>43</fpage>
          -
          <lpage>50</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>