=Paper=
{{Paper
|id=Vol-2774/xpreface
|storemode=property
|title=None
|pdfUrl=https://ceur-ws.org/Vol-2774/preface.pdf
|volume=Vol-2774
}}
==Preface - SMART 2020==
SMART 2020 [1] was the first edition of the SeMantic AnsweR Type prediction task (SMART), which was part of the ISWC 2020 Semantic Web Challenge. It was co-located with the 19th International Semantic Web Conference (ISWC 2020)¹. Given a question in natural language, the task of the SMART challenge is to predict the answer type using a target ontology. The challenge had two tracks, one using the DBpedia ontology and the other using the Wikidata ontology. Eight teams participated in the DBpedia track and three teams in the Wikidata track. This volume contains the peer-reviewed system description papers of all the systems that participated in the challenge. More details about the challenge can be found at https://smart-task.github.io/.
==Challenge Description==
This challenge focuses on answer type prediction, which plays an important role in Question Answering systems. Given a natural language question, the task is to produce a ranked list of answer types from a given target ontology. Previous answer type classifications in the literature were performed as short-text classification tasks using a set of coarse-grained types, for instance, either six types [2, 3, 4, 5] or 50 types [6] in the TREC QA task². We propose a more granular answer type classification using popular Semantic Web ontologies such as DBpedia and Wikidata.
Table 1 illustrates some examples. The participating systems can be either supervised (training data is provided) or unsupervised, and they can utilise a wide range of approaches, from rule-based to neural.
{| class="wikitable"
|+ Table 1: Example questions and answer types.
! Question !! DBpedia !! Wikidata
|-
| Give me all actors starring in movies directed by and starring William Shatner. || dbo:Actor || wd:Q33999
|-
| Which programming languages were influenced by Perl? || dbo:ProgrammingLanguage || wd:Q9143
|-
| Who is the heaviest player of the Chicago Bulls? || dbo:BasketballPlayer || wd:Q3665646
|-
| How many employees does Google have? || xsd:integer || xsd:integer
|}
¹ https://iswc2020.semanticweb.org/
² https://trec.nist.gov/data/qamain.html
==Presentations==
Eight teams competed in SMART 2020 and presented their systems at the ISWC
2020 conference. Table 2 shows their presentation titles along with the authors.
{| class="wikitable"
|+ Table 2: Presentation schedule for the participating systems.
! Slot !! Title / Authors
|-
! colspan="2" | Session 6A: Thursday, 5th November 2020
|-
| 09:00 - 09:15 || Augmentation-based Answer Type Classification of the SMART dataset<br>Aleksandr Perevalov and Andreas Both
|-
| 09:15 - 09:30 || Semantic Answer Type Prediction Using BERT<br>Vinay Setty and Krisztian Balog
|-
| 09:30 - 09:45 || Two-stage Semantic Answer Type Prediction for QA using BERT and Class-Specificity Rewarding<br>Christos Nikas, Pavlos Fafalios and Yannis Tzitzikas
|-
| 09:45 - 10:00 || COALA – A Rule-Based Approach to Answer Type Prediction<br>Nadine Steinmetz and Kai-Uwe Sattler
|-
! colspan="2" | Session 8A: Thursday, 5th November 2020
|-
| 12:00 - 12:15 || A Methodology for Hierarchical Classification of Semantic Answer Types of Questions<br>Ammar Ammar, Shervin Mehryar, and Remzi Celebi
|-
| 12:15 - 12:30 || Hierarchical Contextualized Representation Models for Answer Type Prediction<br>Natthawut Kertkeidkachorn, Rungsiman Nararatwong, Phuc Nguyen, Ikuya Yamada, Hideaki Takeda, and Ryutaro Ichise
|-
| 12:30 - 12:45 || Fine and Ultra-Fine Type Embeddings for Question Answering<br>Sai Vallurupalli, Jennifer Sleeman, and Tim Finin
|-
| 12:45 - 13:00 || Question Embeddings for Semantic Answer Type Prediction<br>Eleanor Bill and Ernesto Jiménez-Ruiz
|}
==Leaderboards==
For each natural language question in the test set, the participating systems are expected to provide two predictions: the answer category and the answer type. The answer category can be either ‘resource’, ‘literal’ or ‘boolean’. If the answer category is ‘resource’, the answer type should be an ontology class (DBpedia or Wikidata, depending on the dataset), and the systems could predict a ranked list of classes from the corresponding ontology. If the answer category is ‘literal’, the answer type can be either ‘number’, ‘date’ or ‘string’.
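To make the expected output concrete, a single prediction can be sketched as a record like the following. The field names mirror the JSON convention used in the SMART training data, but should be treated as an assumption; the `id` values are invented for illustration (the questions are taken from Table 1).

```python
# Illustrative prediction records. Field names ("id", "question",
# "category", "type") follow the SMART dataset's JSON convention
# (an assumption here); the "id" values are made up.
resource_prediction = {
    "id": "dbpedia_1",  # hypothetical question id
    "question": "Who is the heaviest player of the Chicago Bulls?",
    "category": "resource",
    # ranked list of ontology classes, best candidate first
    "type": ["dbo:BasketballPlayer", "dbo:Athlete", "dbo:Person"],
}

literal_prediction = {
    "id": "dbpedia_2",  # hypothetical question id
    "question": "How many employees does Google have?",
    "category": "literal",
    "type": ["number"],  # literal answers use "number", "date" or "string"
}
```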
===DBpedia Dataset===
Category prediction is treated as a multi-class classification problem, with accuracy as the metric. Since DBpedia classes follow the DBpedia ontology, for type prediction we use the lenient NDCG@k metric with a linear decay, adopted from Balog & Neumayer [7].
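As a rough illustration of how such a lenient metric can be computed, the sketch below assigns each predicted type a gain that decays linearly with its distance in the type hierarchy to the nearest gold type, and plugs those gains into the usual NDCG formula. The `distance` function and `max_depth` normalisation are assumptions standing in for the actual DBpedia hierarchy lookup; see Balog & Neumayer [7] for the exact definition.

```python
import math

def linear_gain(pred_type, gold_types, distance, max_depth):
    """Gain in [0, 1] that decays linearly with the hierarchy distance
    from the predicted type to the nearest gold type. `distance` is a
    stand-in for a real ontology lookup (0 for an exact match)."""
    d = min(distance(pred_type, g) for g in gold_types)
    return max(0.0, 1.0 - d / max_depth)

def lenient_ndcg_at_k(predicted, gold_types, distance, max_depth, k=5):
    """NDCG@k where the ideal ranking places the gold types (gain 1.0) first."""
    gains = [linear_gain(t, gold_types, distance, max_depth)
             for t in predicted[:k]]
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains))
    idcg = sum(1.0 / math.log2(i + 2)
               for i in range(min(k, len(gold_types))))
    return dcg / idcg if idcg > 0 else 0.0
```

With a toy distance function, a system that ranks the gold class first scores 1.0, while a near-miss such as a sibling or parent class still receives partial credit instead of zero, which is the point of the lenient variant.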
{| class="wikitable"
|+ Table 3: Leaderboard for the DBpedia dataset.
! System !! Accuracy !! NDCG@5 !! NDCG@10
|-
| Setty et al. || 0.98 || 0.80 || 0.79
|-
| Nikas et al. || 0.96 || 0.78 || 0.76
|-
| Perevalov et al. || 0.98 || 0.76 || 0.73
|-
| Kertkeidkachorn et al. || 0.96 || 0.75 || 0.72
|-
| Ammar et al. || 0.94 || 0.62 || 0.61
|-
| Vallurupalli et al. || 0.88 || 0.54 || 0.52
|-
| Steinmetz et al. || 0.74 || 0.54 || 0.52
|-
| Bill et al. || 0.79 || 0.31 || 0.30
|}
===Wikidata Dataset===
Here again, category prediction is treated as a multi-class classification problem, with accuracy as the metric. Wikidata does not follow a strict ontology for its classes; it has a very large and rather flat set of classes and subclasses. Thus, for type prediction we use a mean reciprocal rank (MRR) based scoring system [8], where the expected type prediction is a list.
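For a single question, the reciprocal rank is 1/r, where r is the rank of the first predicted type that appears in the gold type list; MRR averages this over all questions. A minimal sketch (the exact matching rules against the gold list are an assumption; see [8] for the original formulation):

```python
def reciprocal_rank(predicted, gold_types):
    """1/rank of the first predicted type found in the gold set; 0 if none."""
    for rank, t in enumerate(predicted, start=1):
        if t in gold_types:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(predictions, gold):
    """Average reciprocal rank over all questions in the test set."""
    return sum(reciprocal_rank(p, g)
               for p, g in zip(predictions, gold)) / len(gold)
```

For example, a system that ranks the correct type first on one question (reciprocal rank 1.0) and second on another (0.5) obtains an MRR of 0.75.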
{| class="wikitable"
|+ Table 4: Leaderboard for the Wikidata dataset.
! System !! Accuracy !! MRR
|-
| Setty et al. || 0.97 || 0.68
|-
| Kertkeidkachorn et al. || 0.96 || 0.59
|-
| Vallurupalli et al. || 0.85 || 0.40
|}
==Organisation==
In this section, we list the people who organised and contributed to the success
of this event.
===Challenge Chairs===
• Nandana Mihindukulasooriya (IBM Research AI)
• Mohnish Dubey (University of Bonn and Fraunhofer IAIS)
• Alfio Gliozzo (IBM Research AI)
• Jens Lehmann (University of Bonn and Fraunhofer IAIS)
• Axel-Cyrille Ngonga Ngomo (Universität Paderborn)
• Ricardo Usbeck (Fraunhofer IAIS Dresden)
===Challenge Programme Committee Members===
The challenge programme committee helped to peer-review the eight system
papers and the organisers would like to thank them for their valuable time.
• Ibrahim Abdelaziz (IBM Research AI)
• Sarthak Dash (IBM Research AI)
• Srinivas Ravishankar (IBM Research AI)
• Pavan Kapanipathi (IBM Research AI)
• Md Rashad Al Hasan Rony (Fraunhofer IAIS)
• Liubov Kovriguina (Fraunhofer IAIS)
• Mohnish Dubey (University of Bonn and Fraunhofer IAIS)
• Nandana Mihindukulasooriya (IBM Research AI)
==Acknowledgements==
We would like to thank the ISWC Semantic Web Challenge chairs, Anna Lisa Gentile and Ruben Verborgh, and the whole ISWC organising committee for their invaluable support in making this event a success. We would also like to thank the challenge participants for their interest, the quality of their work, and their informative presentations during the event, which made it attractive to the ISWC audience.
==References==
[1] Nandana Mihindukulasooriya, Mohnish Dubey, Alfio Gliozzo, Jens Lehmann, Axel-Cyrille Ngonga Ngomo, and Ricardo Usbeck. SeMantic AnsweR Type prediction task (SMART) at ISWC 2020 Semantic Web Challenge. CoRR, abs/2012.00555, 2020.

[2] Han Zhao, Zhengdong Lu, and Pascal Poupart. Self-adaptive hierarchical sentence model. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.

[3] Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and Francis Lau. A C-LSTM neural network for text classification. arXiv preprint arXiv:1511.08630, 2015.

[4] Yoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), 2014.

[5] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 655–665, Baltimore, Maryland, June 2014. Association for Computational Linguistics.

[6] Xin Li and Dan Roth. Learning question classifiers: the role of semantic information. Natural Language Engineering, 12(3):229–249, 2006.

[7] Krisztian Balog and Robert Neumayer. Hierarchical target type identification for entity-oriented queries. In 21st ACM International Conference on Information and Knowledge Management, CIKM '12, Maui, HI, USA, October 29 - November 02, 2012, pages 2391–2394. ACM, 2012.

[8] Dragomir R. Radev, Hong Qi, Harris Wu, and Weiguo Fan. Evaluating web-based question answering systems. In Proceedings of the Third International Conference on Language Resources and Evaluation, LREC 2002, May 29-31, 2002, Las Palmas, Canary Islands, Spain. European Language Resources Association, 2002.