<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Platform for Commonsense Knowledge Acquisition Using Crowdsourcing</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Christos T. Rodosthenous</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Loizos Michael</string-name>
          <email>loizosg@ouc.ac.cy</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Open University of Cyprus</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Research Center on Interactive Media</institution>
          ,
          <addr-line>Smart Systems, and Emerging Technologies P.O. Box 12794, 2252, Nicosia</addr-line>
          ,
          <country country="CY">Cyprus</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <fpage>24</fpage>
      <lpage>25</lpage>
      <abstract>
        <p>In this article, we present our work on developing and using a crowdsourcing platform for acquiring commonsense knowledge, with the aim of creating machines that are able to understand stories. More specifically, we present a platform that has been used in the development of a crowdsourcing application and two Games With A Purpose. The platform's specifications and features are presented, along with examples of applying them in the development of the aforementioned applications. The article concludes with pointers on how the crowdsourcing platform can be utilized for language learning, referencing relevant work on developing a prototype application for a vocabulary trainer.</p>
      </abstract>
      <kwd-group>
        <kwd>Games With A Purpose</kwd>
        <kwd>Crowdsourcing</kwd>
        <kwd>cloze tests</kwd>
        <kwd>commonsense knowledge</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Human computation
        <xref ref-type="bibr" rid="ref8">(Law and von Ahn, 2011)</xref>
        or
crowdsourcing
        <xref ref-type="bibr" rid="ref23">(von Ahn and Dabbish, 2008)</xref>
        is applied in cases
where machines are not able to perform as well as
humans can. In this paper, we focus on our work on
developing a platform that utilizes crowdsourcing for acquiring
knowledge about our world, i.e., commonsense knowledge.
This platform was used to develop crowdsourcing
applications, including Games With A Purpose (GWAPs) for
acquiring commonsense knowledge suitable for
understanding stories. More specifically, we present how the various
platform features were used for the creation of two GWAPs:
“Knowledge Coder”
        <xref ref-type="bibr" rid="ref16">(Rodosthenous and Michael, 2014)</xref>
        and “Robot Trainer”
        <xref ref-type="bibr" rid="ref17">(Rodosthenous and Michael, 2016)</xref>
        and a crowdsourcing application for acquiring knowledge
that can be used in solving cloze tests, i.e., an exercise
where a word from a passage or a sentence is removed and
readers are asked to fill the gap.
      </p>
      <p>
        The two games were designed to help the acquisition of
commonsense knowledge in the form of rules. The first
game implements a four-step methodology, i.e., acquiring,
encoding, and generalizing knowledge, and verifying its
applicability in domains other than the one used to generate
it. The second game uses a hybrid methodology, where
both human players and an automated reasoning system,
based on the STory comprehension through
ARgumentation (STAR) system
        <xref ref-type="bibr" rid="ref3">(Diakidoy et al., 2015)</xref>
        , are combined
to identify and verify the contributed knowledge.
Knowledge gathered is tested on answering questions on new
unseen stories using the STAR system. Both games use a
number of ready-made gamification elements from the
platform to increase player contribution and interest in the task.
Furthermore, the crowdsourcing platform’s back-end
interface was employed for real-time monitoring of the
acquisition process and presentation of metrics and statistics in an
intuitive dashboard.
      </p>
      <p>For the crowdsourcing application, a three-step
methodology was used, where contributors first find the missing
word in a story, then they identify the words that lead to
selecting the missing word and finally they verify the
applicability of the contributed knowledge on filling a gap in
a story where similar words are present. The process is
repeated using a story which contains the previously
identified words but with the missing word not explicitly present
in the text. This application can also find use in language
learning, since generated cloze tests can be delivered to
language learners, while crowdsourcing the answers.
In the following sections, we present the developed
crowdsourcing platform and its features, along with examples of
how the platform was used in real scenarios for
acquiring commonsense knowledge. In the penultimate section,
we present related work in using crowdsourcing and
discuss the differences with our approach for acquiring
commonsense knowledge. In the final section, we give an
overview of our work, provide insights on future directions
and present a relevant extension of the crowdsourcing
application in developing a vocabulary trainer for language
learning.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The Crowdsourcing Platform</title>
      <p>Following our vision for acquiring commonsense
knowledge using crowdsourcing, we designed a platform that
offers features and services to facilitate commonsense
knowledge gathering through a number of paradigms, such
as games, crowdsourcing tasks and mini applications. Most
of the platform's specifications are common to the majority
of crowdsourcing platforms and applications, while some
are specific to the task of acquiring commonsense knowledge.</p>
    </sec>
    <sec id="sec-3">
      <title>2.1. Platform Specifications</title>
      <p>For developing the platform, we considered the following
key design options: 1. the selection of a suitable technology
for delivering task-based applications and GWAPs, 2. the
handling of contributors’ profiles, and 3. the representation
of knowledge in a structured form that can be reused and
verified. The platform should also allow monitoring of the
acquisition process both in terms of contributors and
acquired knowledge.
Furthermore, the platform should be able to offer a number
of design elements needed in games and educational
applications. These include but are not limited to: 1. leaderboards,
2. contributors' rankings, 3. medals and awards,
4. progress bars, 5. live feedback with notifications (both
synchronous and asynchronous) for events, and other
gamification elements needed to provide the user with a
pleasant experience while contributing.</p>
      <p>On the back-end, the platform should be able to provide
tools for designing a crowdsourcing application and
managing contributors. These tools should give developers
the ability to easily change parameters of the application
(e.g., the number of raters required for acquired knowledge
to be valid), to dynamically load and change datasets (testing
and validation), and to export statistics on system usage.
We chose to develop a web-based system using the Joomla1
content management system (CMS) framework. This
CMS inherently covers many of the aforementioned
features in its core and has a plethora of extensions for
users to install, such as a community-building component
for creating multi-user sites with blogs, forums and social
network connectivity. Additionally, the CMS provides a
powerful component development engine that allows
developers to deploy additional elements that can be reused in
multi-domain applications.</p>
      <p>
        There are many cases where crowdsourcing applications
require functionality from other systems or knowledge bases,
e.g., automated reasoning engines, datasets and natural
language processing systems. For the crowdsourcing
platform we constructed an Application Programming
Interface (API) to the Web-STAR system
        <xref ref-type="bibr" rid="ref18">(Rodosthenous and
Michael, 2018)</xref>
        for story-understanding-related processing,
and we offer direct integration with the Stanford CoreNLP
        <xref ref-type="bibr" rid="ref12">(Manning et al., 2014)</xref>
        system. The platform is also able to retrieve and
process factual knowledge from ConceptNet
        <xref ref-type="bibr" rid="ref21">(Speer et al.,
2016)</xref>
        , YAGO
        <xref ref-type="bibr" rid="ref22">(Suchanek et al., 2007)</xref>
        and WordNet
        <xref ref-type="bibr" rid="ref4">(Fellbaum, 2010)</xref>
        . Developers can integrate other
SPARQL-based
        <xref ref-type="bibr" rid="ref15">(Quilitz and Leser, 2008)</xref>
        knowledge bases, since the
methodology used is generic.
      </p>
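      <p>As an illustration of how such a generic integration might look, the following Python sketch builds a request for a SPARQL endpoint; the endpoint URL and resource IRI are hypothetical, and the snippet does not reflect the platform's actual code.</p>

```python
from urllib.parse import urlencode

# Hypothetical sketch of querying a generic SPARQL-based knowledge base;
# the endpoint URL and resource IRI below are made up for illustration.

def build_sparql_request(endpoint, subject_iri, limit=10):
    """Return a GET URL asking for all (relation, object) pairs of a subject."""
    query = (
        "SELECT ?relation ?object WHERE { "
        f"<{subject_iri}> ?relation ?object . "
        f"}} LIMIT {limit}"
    )
    return endpoint + "?" + urlencode({"query": query, "format": "json"})

url = build_sparql_request("https://example.org/sparql",
                           "http://example.org/resource/Bird")
print(url.startswith("https://example.org/sparql?query="))  # True
```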
      <p>The crowdsourcing platform offers a number of features for
promoting the application to groups of users, either in
social media or user forums. Contributors can share their
contribution status/points/awards to social media groups. This
tactic can increase user retention in the application.
Moreover, developers can enable the “invitations” functionality,
where contributors gain extra points when they invite other
people to contribute.</p>
    </sec>
    <sec id="sec-5">
      <title>2.2. Steps for Designing a Crowdsourcing Application Using the Platform</title>
      <p>In this section, we showcase the steps needed for a
developer to design and deploy a crowdsourcing application.
These steps are also depicted in Figure 1. First, a template
must be selected to match the application domain. There
are a number of templates available to match a number of
crowdsourcing paradigms (e.g., GWAPs, language
learning applications) which can be customized according to the
specific needs of the task.
Developers need to prepare the main functionality of their
system by coding it in PHP (or any other language),
encapsulate its executable in the platform, and deliver the
result using HTML, CSS and JavaScript. During this process,
they need to prepare a list of parameters that can be used
in the experiments and encode it in XML format. These
parameters can be incorporated in the code and control how
various elements are displayed (e.g., display/hide web tour
and guidance, choose what knowledge is presented for
verification, etc.).</p>
      <p>The next steps involve the selection of knowledge
acquisition tasks. Developers can select among acquisition,
verification and knowledge preference identification tasks and
map the methodology steps to application screens or game
missions (depending on the chosen paradigm). The
knowledge preference selection task involves the ability of a
human contributor to choose pieces of knowledge that are
used in a given situation and discard the ones that are not.
For example, when reading a story about birds, readers can
infer that birds can fly. From a similar story, where it is
explicitly mentioned that the birds are penguins, readers can infer
that penguins cannot fly.</p>
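      <p>The preference idea above can be sketched in code. The following Python snippet is purely illustrative, not the platform's actual implementation: a rule with more specific conditions overrides a more general one.</p>

```python
# Illustrative sketch of knowledge preference: a rule whose conditions are
# more specific overrides a more general one. Not the platform's actual code.

def infer(facts, rules):
    """Apply every rule whose conditions hold in the given facts;
    more specific rules (more conditions) overwrite general conclusions."""
    conclusions = {}
    for conditions, (predicate, value) in sorted(rules, key=lambda r: len(r[0])):
        if conditions <= facts:
            conclusions[predicate] = value
    return conclusions

# General rule: birds fly. Specific rule: penguins (which are birds) do not.
RULES = [
    (frozenset({"bird"}), ("can_fly", True)),
    (frozenset({"bird", "penguin"}), ("can_fly", False)),
]

print(infer({"bird"}, RULES))             # {'can_fly': True}
print(infer({"bird", "penguin"}, RULES))  # {'can_fly': False}
```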
      <p>
        For each task, a data stream is required. The data stream
can be anything from text inserted directly by
contributors, i.e., via a dedicated task in the application, to a pre-selected
dataset such as Triangle-COPA
        <xref ref-type="bibr" rid="ref13">(Maslan et al., 2015)</xref>
        or
ROCStories
        <xref ref-type="bibr" rid="ref14">(Mostafazadeh et al., 2016)</xref>
        , or the outcome
of another task.
      </p>
      <p>Developers are free to design and code the logic behind
each task as they see fit to achieve their goals. The platform
has a number of pre-defined functions for storing
commonsense knowledge in the form of rules or facts, both in
natural language and in a logic-based format, e.g., hug(X,Y)
implies like(X,Y), where X and Y are arguments, which
intuitively means that if a person X hugs a person Y then person X
likes person Y.</p>
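      <p>The dual representation of such a rule could be sketched as follows; the class and field names are illustrative assumptions, not the platform's actual storage schema.</p>

```python
# Hypothetical sketch of a rule stored both in a logic-based format and in
# natural language; names are illustrative, not the platform's actual schema.

from dataclasses import dataclass

@dataclass
class Rule:
    body: tuple   # predicates in the rule body, e.g. ("hug(X,Y)",)
    head: str     # implied predicate, e.g. "like(X,Y)"
    gloss: str    # natural-language reading of the rule

    def to_logic(self) -> str:
        return " and ".join(self.body) + " implies " + self.head

rule = Rule(body=("hug(X,Y)",), head="like(X,Y)",
            gloss="if a person X hugs a person Y then person X likes person Y")
print(rule.to_logic())  # hug(X,Y) implies like(X,Y)
```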
      <p>Moreover, the platform incorporates a number of
visualization libraries (e.g., d3.js2, Cytoscape.js3, chart.js4) to
provide live feedback to the contributor.</p>
      <p>For each application, developers need to choose how
contributed knowledge is selected and what the criteria are for
storing this knowledge in the accepted knowledge pool.
Developers can choose among a number of strategies, or
a combination of them, such as selecting knowledge that
was contributed by at least n persons,
knowledge that is simple (e.g., rules with at most n predicates in
their body), knowledge that is evaluated/rated by at least n
raters, and knowledge that is evaluated by an automated
reasoning engine. Depending on the type of application,
developers also need to choose a marking scheme that fits the
logic behind the application and rewards contributors, e.g., with
points and medals for games.</p>
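      <p>A combination of such acceptance strategies can be sketched as a simple conjunction of criteria; the thresholds and field names below are assumptions for illustration, not the platform's actual parameters.</p>

```python
# Sketch of combining acceptance strategies for contributed knowledge;
# thresholds and field names are assumptions, not the platform's parameters.

def accept(rule, min_contributors=3, max_body_predicates=2, min_raters=2):
    """A rule enters the accepted knowledge pool only if every criterion holds."""
    return (rule["contributors"] >= min_contributors
            and len(rule["body"]) <= max_body_predicates
            and rule["positive_ratings"] >= min_raters)

candidate = {"body": ["hug(X,Y)"], "contributors": 4, "positive_ratings": 3}
print(accept(candidate))  # True

too_complex = {"body": ["a(X)", "b(X)", "c(X)"], "contributors": 4,
               "positive_ratings": 3}
print(accept(too_complex))  # False
```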
      <p>When the design of the various tasks is completed, the
developer needs to choose how contributors will have access
to the platform (e.g., anonymously, through registration or
social networks) and what details need to be filled in their
profiles.</p>
      <sec id="sec-5-1">
        <title>2https://d3js.org/</title>
      </sec>
      <sec id="sec-5-2">
        <title>3http://js.cytoscape.org/</title>
      </sec>
      <sec id="sec-5-3">
        <title>4https://www.chartjs.org/</title>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>2.3. Technological Infrastructure</title>
      <p>In terms of technological infrastructure, the platform
relies on a web-server with Linux-Apache-MariaDB-PHP
(LAMP) stack and on the Joomla framework. The platform
also utilizes the JQuery5 and Bootstrap frameworks, both
for designing elements and for application functionality.
The platform employs the Joomla Model-View-Controller
(MVC)6 framework that allows the development of
components by separating the data manipulation functions from
the view controls. The controller is responsible for
examining the request and determining which processes will be
needed to satisfy the request and which view should be used
to return the results back to the user. This architecture
allows the usage of both internal (e.g., database) and external
data sources (e.g., APIs, files), and delivers these
services through an abstraction layer that can be used by other
applications.</p>
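      <p>The dispatch logic described above can be illustrated with a generic, language-agnostic sketch; this is not Joomla's actual API, only a minimal rendering of the controller-model-view split.</p>

```python
# Generic sketch of MVC dispatch (illustrative, not Joomla's actual API):
# the controller inspects the request, runs the needed model process,
# and picks the view that renders the result back to the user.

def controller(request, models, views):
    task = request.get("task", "list")
    data = models[task]()                  # data manipulation (model)
    view = views.get(task, views["list"])  # presentation (view)
    return view(data)

models = {"list": lambda: ["rule1", "rule2"]}
views = {"list": lambda data: "items: " + ", ".join(data)}
print(controller({"task": "list"}, models, views))  # items: rule1, rule2
```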
      <p>For user authentication, both the Joomla internal
mechanisms and the OAuth7 authentication methods are used,
permitting the seamless integration of social network
authentication with the platform.</p>
    </sec>
    <sec id="sec-7">
      <title>2.4. Data Visualization</title>
      <p>It is important for application developers to be able to
visualize acquired knowledge, to better understand what was
contributed and how users behaved during the crowdsourcing experiment.
Figure 2 presents an example of a Sankey-type graph
for the Robot Trainer game, where results for both the
contributors and the acquired knowledge are depicted on the
same diagram. This type of functionality is made possible by
using the D3.js library, with data fed from the database, and
Cytoscape.js, a graph theory (network) library for visualization
and analysis. The latter was also used for representing
and contributing commonsense knowledge rules in
a graphical manner in Web-STAR, and was evaluated positively
by novice users in conjunction with using a text-based
editor for the same task.</p>
      <sec id="sec-7-1">
        <title>5https://jquery.com/</title>
      </sec>
      <sec id="sec-7-2">
        <title>6https://docs.joomla.org/Model-View-Controller</title>
      </sec>
      <sec id="sec-7-3">
        <title>7https://oauth.net/2/</title>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>3. An Example of Developing a Crowdsourcing Application</title>
      <p>
        In its current state, the platform was used to develop two
GWAPs and a crowdsourcing application. There is an
extensive presentation of the two GWAPs in our previous
work
        <xref ref-type="bibr" rid="ref16 ref17">(Rodosthenous and Michael, 2014; Rodosthenous
and Michael, 2016)</xref>
        and readers are directed there to learn
more about the design, the various elements employed
and the experiments performed to acquire commonsense
knowledge.
      </p>
      <p>In this section, we focus on how the platform was used for
the task of acquiring knowledge in the form of natural
language rules for solving cloze tests. For running this task,
first we retrieved stories from the ROCStories dataset in a
tabular format and loaded them in the platform’s database
table. Then, we parsed each story sentence through the
CoreNLP system and got the Part-Of-Speech (POS) tag for
each word, along with its base form (lemma). These
were stored in a database table. For each story, a noun
was obscured, and more than 1000 cloze tests were created.
For each test at least 5 possible answers were generated and
stored, including synonyms and antonyms retrieved from
Wordnet. This workflow was developed in the back-end by
reusing components from the two GWAPs and by adding
new functionality specifically used for that workflow.
The task was separated into three subtasks and, for the
front-end design, each of these subtasks is presented on a
separate screen (see Figure 3). Each screen comprises an
instruction area on top, the active task area below that on the
left, and the visual representation area on the right. The
visual representation area is dynamically updated with every
user action. Directly below these two areas are the task controls.
This template, based on Bootstrap, was chosen for its
simplicity, since we wanted to avoid having users pay attention to
unnecessary elements.</p>
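      <p>The cloze-test generation step described above can be sketched as follows; the POS tags and distractor lists are hard-coded stand-ins for what the platform retrieves from CoreNLP and WordNet, and the function names are illustrative.</p>

```python
import random

# Sketch of the cloze-test generation step: obscure one noun in a sentence
# and attach candidate answers. POS tags and distractors are hard-coded
# stand-ins for output the platform gets from CoreNLP and WordNet.

def make_cloze(sentence, pos_tags, distractors, gap="____"):
    words = sentence.split()
    noun_positions = [i for i, t in enumerate(pos_tags) if t.startswith("NN")]
    target = random.choice(noun_positions)
    answer = words[target]
    words[target] = gap
    options = distractors[answer] + [answer]
    random.shuffle(options)
    return " ".join(words), answer, options

sentence = "The player scored twice for his team"
pos_tags = ["DT", "NN", "VBD", "RB", "IN", "PRP$", "NN"]
distractors = {"player": ["coach", "fan", "referee", "singer", "driver"],
               "team": ["school", "crowd", "league", "stadium", "season"]}

test, answer, options = make_cloze(sentence, pos_tags, distractors)
print(test)  # e.g. "The player scored twice for his ____"
```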
      <p>To start contributing, users need to create an account using
either the registration form or one of the social media
connected account methods inherently present in the platform.
After entering their credentials, contributors are first
presented with a test (see Figure 2a), which they solve while stating
how confident they are in solving it, on a scale of 0 to 100%.
Second, the contributors are asked to highlight the words
in the text that helped them decide on the missing word (see
Figure 2b), and third, they are presented with a new test
where both the correct answer selected in the 1st step and
the highlighted words selected in the 2nd step are present
(see Figure 2c). The contributor is asked to verify whether the
highlighted words can be used to find the missing word.
Finally, a new test appears which includes the highlighted
words from the 2nd step but not the selected missing word
from the 1st step. Contributors are asked if the missing
word is implied in the story. Each contributor is also
presented with a task to verify whether the words selected by
another contributor are useful for solving the cloze test (see
Figure 2d).</p>
      <p>Each test is retrieved randomly from the database; for
the verification task, tests are selected randomly at first and
then by prioritizing tests that have at least one
contribution. That way, we give priority to verifying
contributions. This is set, before running the experiment, in the
parameters screen on the back-end. All user contributions are
recorded and stored in a database table, capturing both the
task data (e.g., missing word, highlighted words,
verification response) and metadata (e.g., response time).
Recording is possible using the JQuery AJAX libraries and APIs,
which allow dynamic update of the content without
refreshing the browser webpage and without making the contributor
lose focus on the task.</p>
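      <p>The test-selection policy can be sketched as follows; this is assumed logic for illustration, not the platform's actual database query.</p>

```python
import random

# Sketch of the test-selection policy: prefer cloze tests that already have
# at least one contribution, so earlier answers get verified first.
# Assumed logic, not the platform's actual query.

def pick_verification_test(tests):
    contributed = [t for t in tests if t["contributions"] > 0]
    pool = contributed if contributed else tests  # fall back to any test
    return random.choice(pool)

tests = [{"id": 1, "contributions": 0},
         {"id": 2, "contributions": 3},
         {"id": 3, "contributions": 1}]
print(pick_verification_test(tests)["id"])  # 2 or 3
```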
      <p>Through these tasks, we are able to acquire knowledge both
for cases where the word is explicitly stated in the text and
for cases that it is implied. The crowdsourcing
application was tested with a small crowd and initial experiments
showed that acquired rules can be used both for solving
cloze tests and for generating inferences from a story. For
example, the following two rules were generated and
verified on unseen stories:
when the words (or their lemmas) friends and high
exist in a story, then the missing word is probably
school;
when the words (or their lemmas) player and scored
exist in a story, then the missing word is probably
team.</p>
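      <p>The application of such acquired rules to a new cloze test can be sketched as a simple trigger check; the code below is purely illustrative and not the platform's actual matching logic.</p>

```python
# Sketch of applying acquired rules like the two above to a new cloze test:
# if all of a rule's trigger lemmas appear in the story, propose the rule's
# predicted word as the missing one. Purely illustrative.

RULES = [({"friends", "high"}, "school"),
         ({"player", "scored"}, "team")]

def predict_missing_word(story_lemmas, rules=RULES):
    for trigger_lemmas, prediction in rules:
        if trigger_lemmas <= story_lemmas:
            return prediction
    return None

print(predict_missing_word({"the", "player", "scored", "twice"}))  # team
print(predict_missing_word({"no", "match", "here"}))               # None
```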
    </sec>
    <sec id="sec-10">
      <title>4. Related Work</title>
      <p>
        Currently, there are many attempts to harness the power of
the crowd for several tasks such as image tagging,
knowledge gathering, text recognition, etc. The motives for
people to contribute are categorized as intrinsic and
extrinsic
        <xref ref-type="bibr" rid="ref6">(Kaufmann et al., 2011)</xref>
        . Intrinsic motivation
includes enjoyment and community-based contributions, while
extrinsic motivation includes immediate and delayed payoffs and
social motivation.
      </p>
      <p>
        For the purpose of acquiring commonsense knowledge
there are examples of games and frameworks such as
Verbosity
        <xref ref-type="bibr" rid="ref24">(von Ahn et al., 2006)</xref>
        , i.e., a GWAP for
collecting commonsense knowledge facts, Common Consensus
        <xref ref-type="bibr" rid="ref10">(Lieberman et al., 2007)</xref>
        , i.e., a GWAP for gathering
commonsense goals, GECKA
        <xref ref-type="bibr" rid="ref2">(Cambria et al., 2015)</xref>
        , i.e., a
game engine for commonsense knowledge acquisition, the
Concept Game
        <xref ref-type="bibr" rid="ref5">(Herdagdelen and Baroni, 2010)</xref>
        , i.e., a
GWAP for verifying commonsense knowledge assertions,
the FACTory Game
        <xref ref-type="bibr" rid="ref9">(Lenat and Guha, 1990)</xref>
        where players
are asked to verify facts from Cyc, the Virtual Pet and the
Rapport games
        <xref ref-type="bibr" rid="ref7">(Kuo et al., 2009)</xref>
        for commonsense data
collection, and many others.
      </p>
      <p>
        There are also approaches where contributors are
motivated by money such as the Amazon Mechanical Turk
        <xref ref-type="bibr" rid="ref1">(Buhrmester et al., 2011)</xref>
        and Figure Eight8, and others
where motivation is geared towards contributing to science
or other noble causes. The latter approaches rely on citizen
science frameworks for crafting crowdsourcing tasks, such as
Pybossa9 and psiTurk10.
      </p>
      <p>The aforementioned systems and games are very
interesting and provide a lot of features, but their design is focused
on targeting a single task, rather than a series of chained
tasks. Furthermore, the majority of systems are
limited to the templates and standard workflow processes
offered, in order to accommodate the most common and most
popular crowdsourcing tasks. The task of commonsense
knowledge acquisition is more complex and requires more
complex workflows, e.g., contribution followed by
verification.</p>
      <p>There are cases where crowdsourcing alone is not the best
option for acquiring a specific type of knowledge, and
hybrid solutions, i.e., solutions that employ both human
contributors and machine processing, should be used
instead. Such an example is the acquisition of
commonsense knowledge in the form of rules, where we
compared a pure crowdsourcing approach (the “Knowledge Coder”
game) with a hybrid one (the “Robot Trainer” game). The
results suggest that the hybrid approach is more appropriate
for gathering general commonsense knowledge rules that
can be used for question answering. This is one of the
reasons we chose to develop a custom-made platform, in order
to have more flexibility in developing such tasks.
Ready-made templates offered by the mainstream platforms
cannot offer this flexibility, since they target a broader set
of experimenters. Of course, this comes at the cost that
some development must be done by the experimenter.
The crowdsourcing platform has internal mechanisms for
knowledge representation in the form of rules, which can
be reused in many different applications that serve a similar
purpose. Using one of the mainstream platforms requires
handling the knowledge rule representation using external
tools that need to be developed beforehand. The
crowdsourcing platform also incorporates natural language processing
tools for preprocessing datasets, before requesting crowd workers
to process them. There are also modules for direct
integration with knowledge bases (e.g., ConceptNet, YAGO)
that can be used in conjunction with the crowd tasks, either
for knowledge verification or to reduce ambiguities in
language. The aforementioned features cannot be found in
platforms such as PYBOSSA or psiTurk, which concentrate
on designing crowdsourcing experiments, nor in GECKA,
which is focused on designing GWAPs.</p>
      <sec id="sec-10-1">
        <title>8https://www.figure-eight.com/</title>
      </sec>
      <sec id="sec-10-2">
        <title>9https://pybossa.com/ 10https://psiturk.org/</title>
        <p>Figure captions: (a) Screenshot of the 1st step, where contributors select a
word to fill the gap from one of the possible answers.
(b) Screenshot of the 2nd step, where contributors highlight the
words that led to selecting the missing word.
(c) Screenshot of the 3rd step, where contributors verify whether the
highlighted words from the 2nd step can be used to identify the
same missing word as in the 1st step on a new cloze test.
(d) Screenshot of the verification step, where contributors verify
other contributors' highlighted words used for solving a cloze test.</p>
      </sec>
    </sec>
    <sec id="sec-11">
      <title>5. Discussion and Future Work</title>
      <p>In this article, we presented an overview of the
crowdsourcing platform developed to facilitate the development of
crowdsourcing applications and GWAPs focused on
acquiring commonsense knowledge. Examples of how the
platform was used to acquire commonsense knowledge were
presented, along with how the various platform elements
were used to achieve the goal of the application.
The key features of the crowdsourcing platform include the
ability to design complex workflows for acquiring
commonsense knowledge, a storage and handling mechanism
for acquired knowledge and numerous tools for dataset
processing and integration with large semantic knowledge
bases and reasoning engines. Moreover, the platform offers
a wide range of visualizations and analytics to the
experimenters that can be customized to facilitate the monitoring
and reporting needed during crowd experiments.
In terms of results, from the first GWAP we implemented,
i.e., the “Knowledge Coder” game, we gathered 93 knowledge
rules from 5 contributors. These rules were too specific to
the story that was used to generate them and did not
offer any value for understanding or answering questions on
other stories. When the crowdsourcing platform was used
for the “Robot Trainer” game we were able to recruit 800
players from Facebook and some popular game forums in
a period of 153 days. These players contributed 1847
commonsense knowledge rules (893 unique). Contributed rules
were general enough to be used in other domains, e.g., the
symbolic rule hit(X,Y) IMPLIES angry(X),
meaning that if a person X hits a person Y then person X
is angry. Through the game, 1501 commonsense
knowledge rule evaluations were gathered; interestingly,
players gave “Positive” evaluations to simple rules
rather than to more complex ones.</p>
      <p>
        We are currently investigating how this work can be used
in the context of language learning by using commonsense
knowledge databases for creating exercises such as cloze
tests, “find synonyms, antonyms, etc.” and delivering them
to students. The platform can be used to deliver
vocabulary exercises, generated from commonsense knowledge
databases and ontologies such as ConceptNet. The
responses can be used to expand the knowledge bases that
the exercises originated from. A prototype implementation
of this
        <xref ref-type="bibr" rid="ref19 ref20">(Rodosthenous et al., 2019)</xref>
        , was developed during
the CrowdFest organized by the European Network for
Combining Language Learning with Crowdsourcing Techniques
(EnetCollect) COST Action
        <xref ref-type="bibr" rid="ref11">(Lyding et al., 2018)</xref>
        .
The crowdsourcing platform can also be used in our
research on identifying the geographic focus of a story. We
have developed a system called GeoMantis
        <xref ref-type="bibr" rid="ref19 ref20">(Rodosthenous
and Michael, 2019)</xref>
        that reads a story and returns the
possible countries of focus for that story. GeoMantis uses
commonsense knowledge from ConceptNet and YAGO to
perform this task. We plan to launch a crowdsourcing
task where users will be presented with knowledge about a
country, e.g., parthenon atLocation Greece, and
will be asked to evaluate whether it is a good argument for
identifying the geographic focus of a story as that specific country,
aiming to add weights to each argument and test whether the system
yields better results.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Buhrmester</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kwang</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Gosling</surname>
            ,
            <given-names>S. D.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Amazon's Mechanical Turk</article-title>
          .
          <source>Perspectives on Psychological Science</source>
          ,
          <volume>6</volume>
          (
          <issue>1</issue>
          ):
          <fpage>3</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Cambria</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rajagopal</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kwok</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Sepulveda</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>GECKA: Game Engine for Commonsense Knowledge Acquisition</article-title>
          .
          <source>In Proceedings of the 28th International Flairs Conference</source>
          , pages
          <fpage>282</fpage>
          -
          <lpage>287</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Diakidoy</surname>
            ,
            <given-names>I.-A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kakas</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Michael</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>STAR: A System of Argumentation for Story Comprehension and Beyond</article-title>
          .
          <source>In Working Notes of the 12th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense 2015)</source>
          , pages
          <fpage>64</fpage>
          -
          <lpage>70</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Fellbaum</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2010</year>
          ).
          <source>Theory and Applications of Ontology: Computer Applications</source>
          . Springer Netherlands, Dordrecht.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Herdagdelen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Baroni</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2010</year>
          ).
          <article-title>The Concept Game: Better Commonsense Knowledge Extraction by Combining Text Mining and a Game with a Purpose</article-title>
          .
          <source>In AAAI Fall Symposium on Commonsense Knowledge</source>
          , pages
          <fpage>52</fpage>
          -
          <lpage>57</lpage>
          , Arlington.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Kaufmann</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schulze</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Veit</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>More than fun and money. Worker Motivation in Crowdsourcing-A Study on Mechanical Turk</article-title>
          .
          <source>In AMCIS</source>
          , volume
          <volume>11</volume>
          , pages
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Kuo</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chiang</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Community-based Game Design: Experiments on Social Games for Commonsense Data Collection</article-title>
          .
          <source>In Proceedings of the 1st ACM SIGKDD Workshop on Human Computation (HCOMP 2009)</source>
          , pages
          <fpage>15</fpage>
          -
          <lpage>22</lpage>
          , Paris, France.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Law</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          and
          <string-name>
            <surname>von Ahn</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <source>Human Computation</source>
          . Morgan &amp; Claypool Publishers, 1st edition.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Lenat</surname>
            ,
            <given-names>D. B.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Guha</surname>
            ,
            <given-names>R. V.</given-names>
          </string-name>
          (
          <year>1990</year>
          ).
          <source>Building Large Knowledge-based Systems: Representation and Inference in the Cyc Project</source>
          . Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1st edition.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Lieberman</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>D. A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Teeters</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>Common Consensus: A Web-Based Game for Collecting Commonsense Goals</article-title>
          .
          <source>In Proceedings of the Workshop on Common Sense and Intelligent User Interfaces</source>
          , Honolulu, Hawaii, USA.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Lyding</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nicolas</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bédi</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Fort</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Introducing the European NETwork for COmbining Language LEarning and Crowdsourcing Techniques (enetCollect)</article-title>
          , pages
          <fpage>176</fpage>
          -
          <lpage>181</lpage>
          . Research-publishing.net.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Manning</surname>
            ,
            <given-names>C. D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bauer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Finkel</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bethard</surname>
            ,
            <given-names>S. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Surdeanu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>McClosky</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>The Stanford CoreNLP Natural Language Processing Toolkit</article-title>
          .
          <source>In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations</source>
          , pages
          <fpage>55</fpage>
          -
          <lpage>60</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Maslan</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roemmele</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Gordon</surname>
            ,
            <given-names>A. S.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>One Hundred Challenge Problems for Logical Formalizations of Commonsense Psychology</article-title>
          .
          <source>In Proceedings of the 12th International Symposium on Logical Formalizations of Commonsense Reasoning</source>
          , Stanford, California, USA.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Mostafazadeh</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chambers</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>He</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parikh</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Batra</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vanderwende</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kohli</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Allen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories</article-title>
          .
          <source>In Proceedings of the 2016 North American Chapter of the ACL (NAACL HLT)</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Quilitz</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Leser</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>Querying Distributed RDF Data Sources with SPARQL</article-title>
          , pages
          <fpage>524</fpage>
          -
          <lpage>538</lpage>
          . Springer Berlin Heidelberg, Berlin, Heidelberg.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Rodosthenous</surname>
            ,
            <given-names>C. T.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Michael</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Gathering Background Knowledge for Story Understanding through Crowdsourcing</article-title>
          .
          <source>In Proceedings of the 5th Workshop on Computational Models of Narrative (CMN 2014)</source>
          , volume
          <volume>41</volume>
          , pages
          <fpage>154</fpage>
          -
          <lpage>163</lpage>
          , Quebec, Canada. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Rodosthenous</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Michael</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>A Hybrid Approach to Commonsense Knowledge Acquisition</article-title>
          .
          <source>In Proceedings of the 8th European Starting AI Researcher Symposium</source>
          , pages
          <fpage>111</fpage>
          -
          <lpage>122</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Rodosthenous</surname>
            ,
            <given-names>C. T.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Michael</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Web-STAR: A Visual Web-based IDE for a Story Comprehension System</article-title>
          .
          <source>Theory and Practice of Logic Programming</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>43</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Rodosthenous</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Michael</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Using Generic Ontologies to Infer the Geographic Focus of Text</article-title>
          . In Jaap van den Herik et al., editors,
          <source>Agents and Artificial Intelligence</source>
          , pages
          <fpage>223</fpage>
          -
          <lpage>246</lpage>
          , Cham. Springer International Publishing.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Rodosthenous</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lyding</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>König</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horbacauskiene</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Katinskaia</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>ul Hassan</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Isaak</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sangati</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Nicolas</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Designing a Prototype Architecture for Crowdsourcing Language Resources</article-title>
          .
          <source>In Proceedings of the 2nd Language, Data and Knowledge (LDK) Conference (to appear)</source>
          . CEUR-WS.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <surname>Speer</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Havasi</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>ConceptNet 5.5: An Open Multilingual Graph of General Knowledge</article-title>
          .
          <source>In Proceedings of the 31st AAAI Conference on Artificial Intelligence</source>
          , pages
          <fpage>4444</fpage>
          -
          <lpage>4451</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <surname>Suchanek</surname>
            ,
            <given-names>F. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kasneci</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Weikum</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>Yago: A Core of Semantic Knowledge</article-title>
          .
          <source>In Proceedings of the 16th International Conference on World Wide Web</source>
          , pages
          <fpage>697</fpage>
          -
          <lpage>706</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <surname>von Ahn</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Dabbish</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>Designing Games With a Purpose</article-title>
          .
          <source>Communications of the ACM</source>
          ,
          <volume>51</volume>
          (
          <issue>8</issue>
          ):
          <fpage>57</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <surname>von Ahn</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kedia</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Blum</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Verbosity: A Game for Collecting Common-Sense Facts</article-title>
          .
          <source>In Proceedings of the 25th SIGCHI Conference on Human Factors in Computing Systems (CHI 2006)</source>
          , page
          <fpage>75</fpage>
          , Montréal, Québec. ACM.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>