<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Knowledge-based Question Answering for DIYers</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Doo Soon Kim</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhe Feng</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lin Zhao</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Bosch Research</institution>
          ,
          <addr-line>Sunnyvale, CA 94085</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>DIY (Do-It-Yourself) projects require extensive knowledge, such as the usage of tools, the properties of materials, and the procedures of activities. Most DIYers use online search to find information, but it is usually time-consuming and challenging for novice DIYers to understand the retrieved results and then apply them to their individual DIY tasks. In this work, we present a Question Answering (QA) system that addresses DIYers' specific needs. Its core component is a knowledge base (KB) containing a vast amount of domain knowledge encoded in a knowledge graph. The system can also explain how its answers are derived by exposing its reasoning process. Our user study shows that the QA system addresses DIYers' needs more effectively than web search.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Problem</title>
      <table-wrap id="tbl-1">
        <label>Table 1</label>
        <caption><p>Question types supported by the system, with example questions.</p></caption>
        <table>
          <thead>
            <tr><th>Question Type</th><th>Sub-question type</th><th>Example</th></tr>
          </thead>
          <tbody>
            <tr><td rowspan="7">Project Question</td><td rowspan="2">required entities / properties</td><td>What power tools do I need in the project?</td></tr>
            <tr><td>What is the length of the drill bit needed in the project?</td></tr>
            <tr><td>alternative entities</td><td>Can I use a circular saw instead of a jigsaw in step 2?</td></tr>
            <tr><td>time / difficulty / costs</td><td>How long does the project take?</td></tr>
            <tr><td>explanation of actions</td><td>Can you explain the sawing step in more detail?</td></tr>
            <tr><td>specific location of actions</td><td>Where should it be screwed?</td></tr>
            <tr><td>alternative actions</td><td>Are there other options instead of pre-drilling?</td></tr>
            <tr><td rowspan="5">Domain Question</td><td>definition of tool / accessory / material</td><td>What is a jigsaw?</td></tr>
            <tr><td>related action</td><td>What can I do with a jigsaw?</td></tr>
            <tr><td>tips</td><td>Is there any safety tip for using a jigsaw?</td></tr>
            <tr><td>structural info</td><td>What does a jigsaw look like?</td></tr>
            <tr><td>comparison</td><td>How does a jigsaw differ from a circular saw?</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <sec id="sec-1-1">
        <title>Domain KR</title>
        <p>[Figure: KB architecture; the Domain KR includes a taxonomy of concepts.]</p>
      </sec>
      <sec id="sec-1-2">
        <title>Project KR</title>
      </sec>
      <sec id="sec-1-3">
        <title>Product KR</title>
      </sec>
      <sec id="sec-1-4">
        <title>User / Context KR</title>
        <p>The KB is then used by the reasoner to derive an answer. For most question types,
our system converts the question into a SPARQL query using in-house NLP solutions
and executes it against the knowledge graph. For some complex question types (e.g.,
’alternative entities’ in Table 1), we use advanced AI reasoning techniques.</p>
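        <p>The question-to-SPARQL conversion above can be illustrated with a minimal, hypothetical template-filling sketch. The schema below (the diy: prefix, diy:requiresTool, diy:definition) and the question-type labels are illustrative assumptions, not the system's actual ontology or NLP pipeline:</p>
        <preformat>
```python
# A minimal, hypothetical sketch of the question-to-SPARQL step. We assume
# the in-house NLP pipeline has already detected the question type and
# extracted the entity slots; this function only fills a per-type template.
# All schema names (diy: prefix, diy:requiresTool, diy:definition) are
# illustrative assumptions.

QUERY_TEMPLATES = {
    # e.g., "What power tools do I need in the project?"
    "required_entities": "SELECT ?tool WHERE {{ diy:{project} diy:requiresTool ?tool }}",
    # e.g., "What is a jigsaw?"
    "definition": "SELECT ?definition WHERE {{ diy:{concept} diy:definition ?definition }}",
}

def question_to_sparql(question_type, **slots):
    """Fill the query template for the detected question type with the
    entity slots extracted from the question."""
    return QUERY_TEMPLATES[question_type].format(**slots)

print(question_to_sparql("required_entities", project="Bookshelf"))
# SELECT ?tool WHERE { diy:Bookshelf diy:requiresTool ?tool }
```
        </preformat>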
      </sec>
    </sec>
    <sec id="sec-2">
      <title>Evaluation and Discussion</title>
      <p>In a pilot study, we compared our system against online search, a common method
for information seeking. Specifically, we conducted a user study in which 20 users were
given a sample DIY project along with 5 questions and were asked to find the answers
using online search and our QA system in separate sessions. With online search, the
average time to find an answer was 3.8 minutes, while our QA system provided an answer
instantly. The users’ satisfaction rate with our system was also found to be
significantly higher than that with online search. In the talk, we also want to share the
lessons we learned from this project: pipelined vs. end-to-end architectures, the
importance of explainability, and the knowledge acquisition bottleneck.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>