<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>A systematic approach towards higher quality linked open data at Nieuwe Instituut</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nora Abdelmageed</string-name>
          <email>n.abdelmageed@nieuweinstituut.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lois Hutubessy</string-name>
          <email>l.hutubessy@nieuweinstituut.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Nieuwe Instituut</institution>
          ,
          <addr-line>Museumpark 25, 3015 CB Rotterdam</addr-line>
          ,
          <country country="NL">Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>SEMANTiCS - 20th International Conference on Semantic Systems</institution>
          ,
          <addr-line>Sep 17-19, 2024, Amsterdam</addr-line>
          ,
          <country country="NL">Netherlands</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Nieuwe Instituut (NI) houses the Dutch National Collection of Architecture and Urban Planning. This collection consists of about 4 million objects, including design drawings, 3D models, and photographs. As part of the program Disclosing Architecture, which is now in its sixth and final year, the Linked Open Data (LOD) project aims to share the richness of data within the collection with the public through semantic web technologies. This project will ultimately facilitate the exchange of cultural heritage data with related national and international institutions. Currently, NI's collection management system contains inconsistent records due to changes in registration guidelines and the migration from older collection management tools. Without documentation of these guidelines, it is impossible to establish consistent rules for the entire dataset. Yet, clean data is crucial for effectively showcasing NI's collection to the public. In response, this paper introduces a framework for higher-quality LOD, the Data Cleaning Initiative (DCI). The first implementation of the DCI is through a series of steps planned for the year 2024, with the goal of cleaning and enriching the collection data at NI.</p>
      </abstract>
      <kwd-group>
        <kwd>Linked Open Data</kwd>
        <kwd>Cultural Heritage</kwd>
        <kwd>Data Quality</kwd>
        <kwd>Entity Linking</kwd>
        <kwd>Entity Resolution</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        The Dutch Cultural Heritage sector has undergone a significant transformation by adopting semantic web technologies and converting its data into Linked Open Data (LOD). The dataset register developed by Netwerk Digitaal Erfgoed lists 71 publishers that expose their datasets in one or more linked data formats (accessed June 2024 using the search feature with the queries n-triples*, rdf+xml*, turtle, and trig). Rijksmuseum is a pioneer in this area, having been among the first to publish its collection data as LOD [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], thereby diversifying search results in other applications [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Another notable effort is by Van Wissen [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], who targeted named entity extraction from archival records with the help of LOD. However, ensuring high-standard and clean data remains an open challenge.
      </p>
      <p>
        Nieuwe Instituut (NI) houses the Dutch National Collection of Architecture and Urban Planning, which consists of about 4 million objects, including design drawings, photographs, and 3D models. As part of the program Disclosing Architecture (AD, https://nieuweinstituut.nl/en/projects/architectuur-dichterbij), which is now in its sixth and final year, the LOD project aims to share the richness of data within the collection with the public through semantic web technologies. In parallel, NI is developing a new online Collection platform that will further increase accessibility to a wider audience, facilitating discovery by design. While this platform is set to launch in November 2024, the underlying data is already exposed through a separate SPARQL endpoint (https://collectiedata.hetnieuweinstituut.nl/the-other-interface/knowledge-graph/sparql), with more than 18 million triples available in the customized Triply (triply.cc) environment.
      </p>
      <p>In this paper, we propose our systematic approach, the Data Cleaning Initiative (DCI), at NI to enhance the data quality of our LOD. We explain its scope, its tasks, and how we implement such an approach in practice. DCI is a vision that could be applied in any Cultural Heritage institution for cleaning and enriching its LOD.</p>
      <sec id="sec-2-1">
        <title>1.1. Motivation &amp; Problem Definition</title>
        <p>We aim to increase the exposure of Architecture and Urban Planning data. For instance, we built a central point, an encyclopedia, on top of the Dutch heritage data collection at NI. However, due to human errors and changes in guidelines and collection registration tools, the data has become heterogeneous and inconsistent on a large scale. As such, publishing the current data version may not be the best approach. The data currently contained in the collection management system (Axiell Collections, https://www.axiell.com/solutions/product/axiell-collections/), while suitable for internal purposes, would benefit from further cleaning and enrichment to ensure a higher quality of data for public use. This would facilitate collaborations with third parties and attract a wider audience.</p>
        <p>Records in NI’s collection management system are inconsistent due to changes in registration guidelines and migrations from older collection management tools. In addition, there is no documentation of these guidelines, making it difficult to establish rules for the entire dataset. Cleaning these records in their entirety is challenging for two reasons. On the one hand, understanding the meaning of sparse, heterogeneous records within the same catalog is difficult; e.g., the “Library catalog” registers 16 types, including books and audio materials. On the other hand, the sheer volume of heritage records adds to the complexity.</p>
      </sec>
      <sec id="sec-2-2">
        <title>1.2. Objectives &amp; Tasks</title>
        <p>The Data Cleaning Initiative (DCI) has two main objectives: (i) splitting catalogs containing broadly related data into smaller, closely related datasets, which facilitates the semantic grouping of the original catalogs within the collection management system; and (ii) applying cleaning and enrichment strategies to the resulting semantic categories to improve the data quality.</p>
        <p>We propose four categories of data cleaning and enrichment tasks in the context of DCI. We
define these main categories as:</p>
        <sec id="sec-2-2-1">
          <p>1. Data Cleaning: This category involves all tasks concerning primitive data cleaning, e.g., handling inconsistencies such as different formats, or tackling missing values. The latter influences the guidelines for filling in this metadata, e.g., by discovering a potentially required field.
2. Entity Resolution: This category aims at discovering and grouping similar entities. Since all data is manually entered by domain experts, they might use different representations to describe the same entity; e.g., Doesburg, Theo van is a different representation of Doesburg, Th. van.
3. Entity Linking: This category maps internal records of our heritage data collections to external resources, or Knowledge Graphs (KGs), e.g., Wikidata (https://www.wikidata.org/wiki/Wikidata:Introduction). For example, Doesburg, Theo van would be mapped to wd:Q160422 (http://www.wikidata.org/entity/Q160422).
4. Entity Enrichment: This category aims to fetch external properties and pieces of information that exist in external resources but not in the local collection management system; e.g., we save the image of Doesburg from Wikidata in our Axiell Collections.</p>
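          <p>As a concrete sketch of the Entity Resolution task, the following snippet groups name variants such as Doesburg, Theo van and Doesburg, Th. van under a normalized key. This is an illustrative sketch, not our production tooling; the normalization heuristic (surname plus initials of the given names) is an assumption for demonstration only.</p>

```python
import re
from collections import defaultdict

def normalize_name(name: str) -> str:
    """Reduce a 'Surname, Given' string to a comparable key:
    lowercased surname plus the initials of the given-name parts,
    so that 'Theo van' and 'Th. van' collide on the same key."""
    surname, _, given = name.partition(",")
    initials = [part[0].lower() for part in re.findall(r"[A-Za-z]+", given)]
    return surname.strip().lower() + "|" + "".join(initials)

def group_candidates(names):
    """Group records whose normalized keys collide; each group is a
    candidate duplicate set for a domain expert to review."""
    groups = defaultdict(list)
    for name in names:
        groups[normalize_name(name)].append(name)
    return [g for g in groups.values() if len(g) > 1]

print(group_candidates(["Doesburg, Theo van", "Doesburg, Th. van", "Jongma, L."]))
# → [['Doesburg, Theo van', 'Doesburg, Th. van']]
```

          <p>Such candidate groups are never merged automatically; as in the pipeline below, a domain expert validates each proposed grouping before it is applied.</p>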
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>2. Approach</title>
      <p>In this section, we describe the initiative’s approach. Initially, we explain the concept of semantic
grouping; then, we give details of the DCI pipeline.</p>
      <sec id="sec-3-1">
        <title>2.1. Semantic Grouping</title>
        <p>Currently, domain experts enter metadata for multiple semantic categories into the same Axiell catalog. For instance, they use “People &amp; Institutions” for registering persons, architects, publishers, universities, etc. The same applies to the “Library catalog”, where domain experts register sixteen semantic types, including Books, Serials, Audio/Visual Material, and Articles. This yields a large catalog that is sparse in terms of metadata. Thus, we decided to split each catalog into smaller chunks that share the same semantic type, e.g., extracting only Books from the Library catalog. The main idea is that the DCI relies on the semantic division of the Axiell catalogs. By this means, it facilitates human intervention and yields a better statistical view of the data. The target of this phase of the DCI is to obtain named groups to be cleaned.</p>
        <p>[Figure 1. The DCI pipeline: 1) Export Group (get a snippet of the target semantic group to fix); 2) Analyze (find issues, log them in the backlog, investigate and document how to solve them); 3) Propose; 4) Discuss &amp; Validate (validate the proposed solution and discuss it with a domain expert); 5) Solve (apply the final solution to the target); 6) Approve &amp; Import (get the green light to import the fixes back into Axiell). The steps span four scopes: Data Cleaning, Entity Resolution, Entity Linking, and Entity Enrichment, and involve the domain expert, product manager, and executive roles.]</p>
        <p>Creating a semantic group requires determining representative fields for each group. To find these groups, we investigated Axiell Collections’ fields for each category. In addition, we held several meetings with domain experts to determine those fields and to ensure their correctness and scope. For instance, the “People &amp; Institutions” catalog contains, among others: name, birthDate, birthPlace, deathDate, deathPlace, biography, and ISBN_publisher_prefix. This group of fields represents two semantic groups: 1) Persons, which contains name, birthDate, birthPlace, deathDate, deathPlace, and biography; and 2) Institutes, which contains name and ISBN_publisher_prefix.</p>
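        <p>The field-based split described above can be sketched as a projection of each record onto the semantic group whose representative fields it fills best. This is a minimal illustration; the field and group names follow the example in the text, while the sample records and the counting heuristic are hypothetical, not our actual assignment procedure.</p>

```python
# Representative fields per semantic group, as determined with domain experts
# (names follow the "People & Institutions" example in the text).
GROUP_FIELDS = {
    "Persons": ["name", "birthDate", "birthPlace", "deathDate", "deathPlace", "biography"],
    "Institutes": ["name", "ISBN_publisher_prefix"],
}

def assign_group(record: dict) -> str:
    """Assign a record to the group with the most non-empty representative
    fields (a simple heuristic; real assignment involves expert review)."""
    def score(group: str) -> int:
        return sum(1 for field in GROUP_FIELDS[group] if record.get(field))
    return max(GROUP_FIELDS, key=score)

# Hypothetical sample records for illustration only.
person = {"name": "Doesburg, Theo van", "birthDate": "1883-08-30", "biography": "..."}
publisher = {"name": "Example Press", "ISBN_publisher_prefix": "978-90-1234"}
print(assign_group(person), assign_group(publisher))  # → Persons Institutes
```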
      </sec>
      <sec id="sec-3-2">
        <title>2.2. Pipeline &amp; Workflow</title>
        <p>Figure 1 depicts the proposed pipeline for DCI. The figure represents the separate steps that we follow until we reach the final goal, or the end of the year, for each semantic group. Our pipeline consists of six iterative steps, taking into consideration four scopes of work: data cleaning, entity resolution, entity linking, and entity enrichment. Our pipeline starts with: 1) Export the target semantic group, given the representative group fields, into a CSV file. The CSV file allows batch processing and facilitates a general overview of the exported records. 2) Analyze the data and log the encountered issues in our backlog system. 3) Propose a solution for individual issues and determine whether it is possible to solve them automatically or whether manual intervention is needed. 4) Discuss the proposed solution with domain experts and stakeholders to validate it. If it is not a valid solution, we go back to the proposing stage; otherwise, we move to 5) Solve the target issue by applying the proposed solution to the CSV file directly. Finally, 6) Approve &amp; Import: we seek approval from our manager and, if agreed, we import the CSV with fixes back into the Axiell Acceptance environment.</p>
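        <p>The Analyze step (2) can be sketched as a small script over the exported CSV that logs issues for the backlog. This is an illustrative sketch; the column names, the sample rows, and the assumption that dates mix ISO and day-month-year notations are hypothetical, not the actual Axiell export schema.</p>

```python
import csv
import io
import re

# Two date notations we might find mixed in one column (an assumption
# for illustration; the formats in real exports may differ).
ISO_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")
DMY_DATE = re.compile(r"^\d{2}-\d{2}-\d{4}$")

def analyze(csv_text: str, date_fields=("birthDate",)):
    """Scan an exported semantic group and log issues
    (missing values, mixed date formats) as backlog entries."""
    issues = []
    # Row numbers start at 2: line 1 of the CSV is the header.
    for row_no, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        for field, value in row.items():
            if not value.strip():
                issues.append((row_no, field, "missing value"))
            elif field in date_fields and not ISO_DATE.match(value):
                note = "day-month-year notation" if DMY_DATE.match(value) else "unrecognized date format"
                issues.append((row_no, field, note))
    return issues

export = 'name,birthDate\n"Doesburg, Theo van",30-08-1883\nJongma L.,\n'
print(analyze(export))
# → [(2, 'birthDate', 'day-month-year notation'), (3, 'birthDate', 'missing value')]
```

        <p>Each logged tuple corresponds to one backlog entry; the Propose and Discuss &amp; Validate steps then decide per issue whether an automatic fix is safe or manual intervention is required.</p>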
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>We would like to thank our domain experts, Inge van Stokkom, Christel Leenen, Ernst des Bouvrie,
Evelien Dekker, Kelly James, and program manager Gijs Broos, Nieuwe Instituut (NI). Moreover, we
would like to thank the Dutch Ministry of Culture, Education, and Science for funding the Disclosing
Architecture program.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] C. Dijkshoorn, L. Jongma, L. Aroyo, J. van Ossenbruggen, G. Schreiber, W. ter Weele, J. Wielemaker, <article-title>The Rijksmuseum collection as linked data</article-title>, <source>Semantic Web</source> <volume>9</volume> (<year>2018</year>) <fpage>221</fpage>-<lpage>230</lpage>. URL: https://doi.org/10.3233/SW-170257. doi:10.3233/SW-170257.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] C. Dijkshoorn, L. Aroyo, G. Schreiber, J. Wielemaker, L. Jongma, <article-title>Using linked data to diversify search results: a case study in cultural heritage</article-title>, in: <source>Knowledge Engineering and Knowledge Management - 19th International Conference, EKAW 2014, Linköping, Sweden, November 24-28, 2014, Proceedings</source>, volume <volume>8876</volume> of Lecture Notes in Computer Science, Springer, <year>2014</year>, pp. <fpage>109</fpage>-<lpage>120</lpage>. URL: https://doi.org/10.1007/978-3-319-13704-9_9. doi:10.1007/978-3-319-13704-9_9.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] L. van Wissen, C. Latronico, V. Zamborlini, J. Reinders, C. van den Heuvel, <article-title>Unlocking the archives: a pipeline for scanning, transcribing and modelling entities of archival documents into linked open data (short paper)</article-title>, in: <source>DH Benelux 2020 Online: #GoesOnline</source>, <year>2020</year>. URL: http://2020.dhbenelux.org/. doi:10.5281/zenodo.3862817.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>