<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Xinhe Li</string-name>
          <email>lixinhe669@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shuxin Wang</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Wei Zhou</string-name>
          <email>zhouweiseu@seu.edu.cn</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gongrui Zhang</string-name>
          <email>grzhang@seu.edu.cn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chenghuan Jiang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tianyu Hong</string-name>
          <email>tianyuhong677@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Peng Wang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Chien-Shiung Wu College, Southeast University</institution>
          ,
          <country country="CN">China</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>College of Software Engineering, Southeast University</institution>
          ,
          <country country="CN">China</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>School of Computer Science and Engineering, Southeast University</institution>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <fpage>2</fpage>
      <lpage>9</lpage>
      <abstract>
        <p>This paper presents the results of KGCODE-Tab in the tabular data to knowledge graph matching contest SemTab 2022. As an efficient tabular data linking system, KGCODE-Tab participates in three tasks of the contest: Column Type Annotation (CTA), Cell Entity Annotation (CEA), and Columns Property Annotation (CPA). The specific techniques used by KGCODE-Tab are introduced briefly, and its strengths and weaknesses are discussed.</p>
      </abstract>
      <kwd-group>
        <kwd>Tabular Data</kwd>
        <kwd>Knowledge Graph</kwd>
        <kwd>Entity Linking</kwd>
        <kwd>KGCODE-Tab</kwd>
        <kwd>Semantic Annotation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>https://github.com/Xinhe-Li (X. Li); https://github.com/A-BigTree (S. Wang); https://github.com/MyWhiteLip (W. Zhou); https://github.com/TideDra (G. Zhang); https://github.com/QuadnucYard (C. Jiang); https://github.com/Tianyu-Hong (T. Hong)</p>
      <p>KGCODE-Tab combines several effective tabular data preprocessing techniques, which are fundamental for TDKGM. We analyze the structure of tabular data, which helps to extract the subject column and the non-subject columns, correct the spelling of texts in cells, and recall all candidate entities and their information needed in the later modules. In the entity disambiguation module, preliminary scores are assigned to all candidate entities of the cells in the subject column, based on the similarities between tabular cells and property values in KGs. In each task, a ranking algorithm is designed according to the preliminary scores, and finally we obtain the semantic annotation based on the ranks. KGCODE-Tab separates the look-up step from the entity linking step; the latter can directly use the intermediate results produced by the former, stored in JSON files.</p>
      <p>In SemTab 2022, KGCODE-Tab is an efficient tabular data linking system, and some of its algorithms and matching strategies have been designed for high efficiency.</p>
      <sec id="sec-1-2">
        <title>1.2. Specific techniques used</title>
        <p>KGCODE-Tab aims to provide high-quality semantic annotation of tabular data. The main specific techniques used by KGCODE-Tab are as follows.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>1.2.1. Table Structure Analysis</title>
      <p>Firstly, KGCODE-Tab classifies each column as an entity column or a non-entity column. It employs spaCy (https://github.com/explosion/spaCy), a Python package for Named Entity Recognition (NER), to give each cell a tag. A cell is an entity cell if it is tagged with PERSON, NORP, FAC, ORG, GPE, LOC, PRODUCT, EVENT, WORK_OF_ART, LAW or LANGUAGE. A cell is a non-entity cell if it is tagged with DATE, TIME, PERCENT, MONEY, QUANTITY, ORDINAL or CARDINAL. Cells that cannot be recognized by spaCy are classified as entity cells to prevent omissions. Then a column is an entity column if more than half of its cells (except the header) are entity cells. Otherwise, it is a non-entity column.</p>
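The classification rule above can be sketched as follows. Running spaCy itself requires a downloaded model (e.g. en_core_web_sm), so this sketch takes the per-cell NER tags as input; the function names are ours, not the system's:

```python
# Tag sets taken from the rule described in the paper.
ENTITY_TAGS = {"PERSON", "NORP", "FAC", "ORG", "GPE", "LOC", "PRODUCT",
               "EVENT", "WORK_OF_ART", "LAW", "LANGUAGE"}
NON_ENTITY_TAGS = {"DATE", "TIME", "PERCENT", "MONEY", "QUANTITY",
                   "ORDINAL", "CARDINAL"}

def is_entity_cell(tag):
    # Cells with an unrecognized tag count as entity cells to prevent omissions.
    return tag in ENTITY_TAGS or tag not in NON_ENTITY_TAGS

def is_entity_column(cell_tags):
    # A column is an entity column if more than half of its cells
    # (header excluded by the caller) are entity cells.
    entity_cells = sum(1 for t in cell_tags if is_entity_cell(t))
    return entity_cells > len(cell_tags) / 2
```

With spaCy available, `cell_tags` would come from something like `[ent.label_ for ent in nlp(cell).ents]` per cell.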
      <p>Secondly, KGCODE-Tab selects the subject column from the entity columns. It defines the Column Entropy, which describes the diversity of contents in a column. The subject column commonly has a higher Column Entropy value. If more than one subject column exists, KGCODE-Tab selects the one with the smallest index.</p>
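The paper does not give the Column Entropy formula; a plausible sketch, assuming Shannon entropy over the distribution of distinct cell values (our assumption, not the paper's definition), is:

```python
import math
from collections import Counter

def column_entropy(cells):
    # Shannon entropy of the distribution of distinct cell values: a diverse,
    # subject-like column scores higher than a repetitive one.
    counts = Counter(cells)
    n = len(cells)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def pick_subject_column(columns):
    # Highest-entropy column wins; ties go to the smallest index,
    # matching the tie-break described in the paper.
    return max(range(len(columns)),
               key=lambda i: (column_entropy(columns[i]), -i))
```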
    </sec>
    <sec id="sec-3">
      <title>1.2.2. Spell Correction</title>
      <p>
        Tables on the Internet usually contain misspelled words, and research [
        <xref ref-type="bibr" rid="ref2">2, 3</xref>
        ] shows that spelling mistakes can make a huge difference to entity recall. Some systems [4, 5] remove special characters from the text but do not handle misspelled words. Inspired by [6], KGCODE-Tab utilizes search engines to find the correct words.
      </p>
      <p>For a tabular cell c, KGCODE-Tab uses Bing (https://www.bing.com/search) to search its text and obtains the result page in HTML format. Secondly, it extracts the titles of websites in the HTML and splits them into words W = {w1, w2, ..., wn}, where n is the total number of words. Thirdly, it calculates the Levenshtein Distance between each wi, i = 1, 2, ..., n, and c. Finally, the word with the shortest Levenshtein Distance to c is selected as the correct mention of c, and words whose Levenshtein Distance to the correct word is no more than 2 are also appended to the list of candidate mentions of c, preventing omissions.</p>
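A minimal sketch of this selection step. Fetching and parsing the Bing result page is omitted; `title_words` stands in for the words split from the result titles, and the function names are ours:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def candidate_mentions(cell_text, title_words):
    # The word closest to the cell text becomes the correct mention; words
    # within distance 2 of it are kept as extra candidates to prevent omissions.
    best = min(title_words, key=lambda w: levenshtein(w, cell_text))
    return [best] + [w for w in title_words
                     if w != best and levenshtein(w, best) <= 2]
```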
    </sec>
    <sec id="sec-4">
      <title>1.2.3. Entity Recall</title>
      <p>Entity recall aims to select several candidate entities from a given KG. If the system cannot even recall the ground truth entities, then all the subsequent work is in vain. For the data source of the KG, some systems [7, 8, 9] build their database using the Wikidata local dump. However, this method requires high storage and IO performance due to the huge size of the local dump files. Therefore, we use the look-up services MediaWiki Action API and DBpedia Lookup to access the data of KGs online. We use 100 threads in entity query to improve query speed and obtain up to 50 candidate entities for each query text.</p>
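A sketch of the multithreaded recall against the MediaWiki Action API. The `wbsearchentities` action and its parameters are real; `fetch` is an injected stand-in for the actual HTTP call (e.g. via the requests library), so the recall logic stays testable offline:

```python
from concurrent.futures import ThreadPoolExecutor

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def search_params(text, limit=50):
    # Parameters for the wbsearchentities action, which returns up to
    # `limit` candidate entities for a query text.
    return {"action": "wbsearchentities", "search": text,
            "language": "en", "format": "json", "limit": limit}

def recall_entities(texts, fetch, workers=100):
    # Query all cell texts concurrently; `fetch(url, params)` performs one
    # HTTP request and returns its parsed result, order is preserved.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda t: fetch(WIKIDATA_API, search_params(t)),
                             texts))
```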
      <p>Furthermore, we find that the look-up services of KGs (Wikidata/DBpedia) are sensitive to
the noise in the query text, such as adverbs, adjectives, prepositions, and so on. They may lead
to wrong or empty results.</p>
      <p>To tackle this problem, we introduce the tokenization technique. For the text of a cell c with n words t = [t1, t2, ..., tn], KGCODE-Tab constructs a query set Q = { q(i:j) = [ti, ti+1, ..., tj] | i, j = 1, 2, ..., n and i ⩽ j }. Then it sends each q(i:j) in Q to the spell correction module and obtains the candidate mention set M of c. Finally, it sends M to the KGs API and gets the candidate entity set E. It also collects the information of each entity into a dictionary containing its label, description, statements, identifiers, and so on.</p>
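The construction of the query set Q above, i.e. all contiguous sub-sequences of the cell's tokens, can be sketched as:

```python
def query_spans(tokens):
    # All contiguous sub-sequences t[i..j] of the cell's tokens, so that
    # noisy words can be dropped before querying the look-up services.
    n = len(tokens)
    return [" ".join(tokens[i:j + 1])
            for i in range(n) for j in range(i, n)]
```

For n tokens this yields n(n+1)/2 queries, which is why it is only applied per cell rather than per table.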
    </sec>
    <sec id="sec-5">
      <title>1.2.4. Entity Disambiguation</title>
      <p>
        Entity disambiguation is to select the ground truth entity from the candidate entities. The architectures of existing systems can be classified into two categories: Graph-based [7, 8, 10] and Score-based [
        <xref ref-type="bibr" rid="ref2">2, 4, 5, 11</xref>
        ]. We design an algorithm to calculate the similarity score.
      </p>
      <p>Commonly, a table has at least one subject column, and the others are non-subject columns. The non-subject columns are generally properties of the subject column. Therefore, KGCODE-Tab can exclude some candidate entities of the subject column by comparing their properties with the content of the related non-subject columns. There are mainly six data types in Wikidata: wikibase-entityid, string, time, globecoordinate, quantity, and multilingualtext, so we need to design different formulas to calculate the similarity score for the different data types. Let an entity e have m properties, and let pi denote the i-th property.</p>
      <p>For the string and multilingualtext data types, it is enough to rely on the Levenshtein Distance between the cell text and the property value. For the wikibase-entityid data type, the values are first converted to their labels, and the same string similarity is then applied.</p>
      <p>For the quantity data type, we define the Number Relevance Degree (NRD):</p>
      <p>NRD(x, y) = 1 − |x − y| / max(|x|, |y|), if y ≠ 0 and 1 − |x − y| / max(|x|, |y|) ⩾ θ; NRD(x, y) = 1 − |x − y|, if y = 0 and 1 − |x − y| ⩾ θ; NRD(x, y) = 0, otherwise,</p>
      <p>where the optimal value of the parameter θ is 0.98, obtained by experiments.</p>
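A direct implementation of NRD as reconstructed above, with θ = 0.98, plus how it might be applied to a pair of globecoordinate values (the function names are ours):

```python
def nrd(x, y, theta=0.98):
    # Number Relevance Degree: how close a tabular number x is to a property
    # value y; scores below the threshold theta are cut to 0.
    score = 1 - abs(x - y) if y == 0 else 1 - abs(x - y) / max(abs(x), abs(y))
    return score if score >= theta else 0.0

def geo_similarity(c, v):
    # globecoordinate values (longitude, latitude): take the larger of the
    # two coordinate-wise NRDs, as in the formula for this data type.
    return max(nrd(c[0], v[0]), nrd(c[1], v[1]))
```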
      <p>For the globecoordinate data type, which contains longitude and latitude, we directly use NRD to calculate the similarity score: sim(c, v) = max(NRD(c.long, v.long), NRD(c.lat, v.lat)), where c is the coordinate extracted from the cell and v is the property value.</p>
      <p>For the time data type, we define a list T which contains year, month, day, hour, minute, and second to represent the time value. In tabular data, we use regular expressions to extract the time information as such a list T. The similarity score averages the component-wise similarity over the two lists: sim(T, T′) = (1/|T|) Σ NRD(Ti, T′i).</p>
      <p>After the similarity score calculation, each candidate entity e of the i-th cell in the subject column obtains a final score by aggregating, over the set of properties Pe of e, the similarity scores against the non-subject cells of the same row, where s denotes the column index of the subject column.</p>
    </sec>
    <sec id="sec-6">
      <title>1.2.5. Task Analysis</title>
      <p>In our system, we utilize a cooperative score mechanism. Let g(c, e) denote the matching score of a cell–entity pair (c, e) or a cell–property pair (c, p′) used later. We apply a normalization function f to the matching scores to widen the gap between high and low scores, with parameters a = 1.1 and b = 8.</p>
    </sec>
    <sec id="sec-7">
      <title>Column Type Annotation</title>
      <p>Let e(i,j) denote the j-th candidate entity of the i-th cell in the subject column. Then the set of candidate types is Tsub = { t | (e(i,j), InstanceOf, t) ∈ KG, i = 1, 2, ..., n, j = 1, 2, ..., m(ci) }, where m(ci) is the number of candidate entities of the i-th cell. We assign a score to each type t in Tsub.</p>
      <p>For non-subject columns, the score of each candidate type t in Tnon is assigned analogously: for each cell, the maximum normalized similarity score among the candidates of type t is taken, and these maxima are summed over the cells of the column.</p>
      <p>For an entity in the subject column, we enumerate all types t of its candidates to take advantage of the CTA scores: score(e) = max over t of { f(sim_sub(c, e)) + λ · s_sub(t) }, where the parameter λ is a cooperative factor set to 0.1. We skip the items that make sim_sub(·, ·) or f(·, ·) equal 0.</p>
      <p>For a non-subject column with index k, we give an entity e′ a score analogously: score(e′) = max over t of { f(sim(c, e′)) + λ · s_non(t) }.</p>
    </sec>
    <sec id="sec-8">
      <title>Columns Property Annotation</title>
      <p>The set of candidate properties with respect to the k-th column is Pk = { p | (ei, p, v) ∈ KG, i = 1, 2, ..., n }, where ei is a candidate entity of the i-th cell in the subject column. We assign a score to each property p in Pk based on the normalized similarity between the property values of the candidate entities and the cells of the k-th column.</p>
      <sec id="sec-8-1">
        <title>2. Results</title>
        <p>In the Accuracy Track of SemTab 2022, participants compete with each other for three rounds. In each round, different datasets are provided to evaluate their systems on the CTA, CEA, and CPA tasks: Round 1 uses HardTablesR1 (WD); Round 2 uses HardTablesR2 (WD), ToughTables (WD), and ToughTables (DBP); Round 3 uses BiodivTab (DBP), GitTables (DBP), GitTables (SCH, class), and GitTables (SCH, property).</p>
        <sec id="sec-8-1-1">
          <title>2.1. Round 1</title>
          <p>In Round 1, the tables of the HardTables dataset have small numbers of rows and columns, and the subject columns of most tables are the first columns. Thus KGCODE-Tab processes tables in batches and sets the first column as the subject column by default. Experiments show that processing in batches dramatically improves the efficiency of spell correction and entity recall by fully utilizing multithreading. Fixing the subject columns also reduces the error caused by the table structure analysis module.</p>
        </sec>
        <sec id="sec-8-1-2">
          <title>2.2. Round 2</title>
          <p>In Round 2, the subject columns of tables in the ToughTables datasets are not always the first columns, and non-subject columns are not necessarily properties of the subject columns but can be their descriptions. Hence, the table structure analysis module comes into play, and the descriptions of entities participate in the calculation of the similarity scores. Results show that these modifications largely increase the accuracy of the entity disambiguation module, improving the ranking of our system.</p>
          <p>In addition, the number of rows per table in the ToughTables datasets fluctuates greatly, and some tables have extremely large numbers of rows. Hence, adaptive batch processing is introduced according to the size of the tabular data, and for tables with large numbers of rows, only a subset of representative rows is randomly selected for the CTA task annotation, improving the efficiency of spell correction and entity recall.</p>
        </sec>
        <sec id="sec-8-1-3">
          <title>2.3. Round 3</title>
          <p>In Round 3, the tables in the BiodivTab datasets are about biodiversity, so KGCODE-Tab constructs a biodiversity corpus of abbreviations and aliases commonly used in the field. Furthermore, many cells contain noise such as adverbs and adjectives, and most headers carry semantic information. Therefore, tokenization is introduced to reduce the effect of the noise, and KGCODE-Tab converts the CTA task into a CEA task for the headers.</p>
          <p>For the GitTables datasets, by observing the annotation results of the training dataset, we find that the number of labels is small and the annotation types are relatively general, so we consider using a text classification algorithm to solve the problem. After preliminary analysis and research, we select the FastText [12] model. Firstly, the original words are divided into several tokens, and the CTA results are used as labels. Then spaCy is used for word recognition, and the results are used as keywords. These are put into the FastText model for training. After training, the model is used to annotate the test dataset.</p>
        </sec>
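The FastText supervised format expects one line per example, a `__label__` prefix followed by the tokens. A minimal sketch of preparing such training lines from CTA labels and extracted keywords (the function name is ours; the actual training call, e.g. `fasttext.train_supervised`, is omitted):

```python
def fasttext_lines(rows):
    # FastText supervised format: "__label__<type> token token ...".
    # `rows` pairs a CTA label with the keywords extracted for a column;
    # spaces in labels are replaced so each label stays a single token.
    return ["__label__{} {}".format(label.replace(" ", "_"), " ".join(tokens))
            for label, tokens in rows]
```

These lines are written to a text file that the FastText trainer consumes.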
      </sec>
      <sec id="sec-8-2">
        <title>3. General comments</title>
        <p>In SemTab 2022, our KGCODE-Tab team, participating in SemTab for the first time, achieved good results. Among all the participating teams, we achieved first-place results in multiple tasks.</p>
        <p>KGCODE-Tab includes several strategies that improve performance while reducing query time. The task analysis at the top layer can directly call the interfaces of the bottom layer, which increases the maintainability of the system. The tabular data preprocessing module makes full use of tools such as search engines, the KGs API, and the spaCy library to generate a structured JSON file for each table, increasing reusability. To achieve the semantic annotation of tabular data, the three tasks CEA, CTA, and CPA are handled in close combination. As a whole, KGCODE-Tab fully utilizes the context of the whole table and the information provided by KGs to achieve high accuracy.</p>
        <p>However, the entity disambiguation module can be optimized further, and machine learning algorithms can be used to tune its parameters.</p>
      </sec>
      <sec id="sec-8-3">
        <title>4. Conclusion</title>
        <p>In this paper, we propose a novel table annotation system, KGCODE-Tab, which can deal with three TDKGM tasks: CTA, CEA, and CPA. We propose several effective tabular data preprocessing techniques, consisting of table structure analysis, spell correction, and entity recall. KGCODE-Tab emphasizes entity disambiguation with table context, which removes much noise and retains candidate entities with high confidence. For each task, we design a scoring formula to select the right answer among the candidate entities, utilizing the results from the other tasks. The results of SemTab 2022 show that KGCODE-Tab has excellent disambiguation ability and achieves outstanding performance.</p>
        <p>Supplemental Material Statement: Source code and constructed datasets will be released on GitHub soon.</p>
      </sec>
      <sec id="sec-8-4">
        <title>References</title>
        <p>[3] S. Chen, A. Karaoglu, C. Negreanu, T. Ma, J.-G. Yao, J. Williams, A. Gordon, C.-Y. Lin, Linkingpark: An integrated approach for semantic table interpretation, in: Proceedings of the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching (SemTab 2020) co-located with the 19th International Semantic Web Conference (ISWC 2020), Virtual, Online, 2020.</p>
        <p>[4] Y. Chabot, T. Labbe, J. Liu, R. Troncy, Dagobah: An end-to-end context-free tabular data semantic annotation system, in: Proceedings of the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching (SemTab 2019) co-located with the 18th International Semantic Web Conference (ISWC 2019), Auckland, New Zealand, 2019.</p>
        <p>[5] V.-P. Huynh, J. Liu, Y. Chabot, T. Labbe, P. Monnin, R. Troncy, Dagobah: Enhanced scoring algorithms for scalable annotations of tabular data, in: Proceedings of the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching (SemTab 2020) co-located with the 19th International Semantic Web Conference (ISWC 2020), Virtual, Online, 2020.</p>
        <p>[6] S. Yumusak, Knowledge graph matching with inter-service information transfer, in: Proceedings of the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching (SemTab 2020) co-located with the 19th International Semantic Web Conference (ISWC 2020), Virtual, Online, 2020.</p>
        <p>[7] D. Oliveira, M. d'Aquin, Adog - annotating data with ontologies and graphs, in: Proceedings of the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching (SemTab 2019) co-located with the 18th International Semantic Web Conference (ISWC 2019), Auckland, New Zealand, 2019.</p>
        <p>[8] M. Cremaschi, R. Avogadro, D. Chieregato, Mantistable: An automatic approach for the semantic table interpretation, in: Proceedings of the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching (SemTab 2019) co-located with the 18th International Semantic Web Conference (ISWC 2019), Auckland, New Zealand, 2019.</p>
        <p>[9] H. Morikawa, Semantic table interpretation using lod4all, in: Proceedings of the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching (SemTab 2019) co-located with the 18th International Semantic Web Conference (ISWC 2019), Auckland, New Zealand, 2019.</p>
        <p>[10] B. Steenwinckel, G. Vandewiele, F. de Turck, F. Ongenae, Csv2kg: Transforming tabular data into semantic knowledge, in: Proceedings of the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching (SemTab 2019) co-located with the 18th International Semantic Web Conference (ISWC 2019), Auckland, New Zealand, 2019.</p>
        <p>[11] S. Tyagi, E. Jimenez-Ruiz, Lexma: Tabular data to knowledge graph matching using lexical techniques, in: Proceedings of the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching (SemTab 2020) co-located with the 19th International Semantic Web Conference (ISWC 2020), Virtual, Online, 2020.</p>
        <p>[12] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, Bag of tricks for efficient text classification, in: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), Valencia, Spain, 2017.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Jiménez-Ruiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Hassanzadeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Efthymiou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Srinivas</surname>
          </string-name>
          ,
          SemTab 2019:
          <article-title>Resources to benchmark tabular data to knowledge graph matching systems</article-title>
          ,
          <source>in: Proceedings of the 17th Extended Semantic Web Conference (ESWC 2020)</source>
          , Berlin, Heidelberg,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Azzi</surname>
          </string-name>
          , G. Diallo, Amalgam:
          <article-title>Making tabular dataset explicit with knowledge graph</article-title>
          , in: Proceedings of the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching (SemTab 2020) co-located with the 19th International Semantic Web Conference (ISWC 2020), Virtual, Online, 2020.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>