<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Process of generating RDF mapping model for OGD-LOD transformation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Khadidja Bouchelouche</string-name>
          <email>k_bouchelouche@esi.dz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abdessamed Réda Ghomari</string-name>
          <email>a_ghomari@esi.dz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Leila Zemmouchi-Ghomari</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>LMCS, Ecole nationale Supérieure d'Informatique (ESI)</institution>
          ,
          <addr-line>Algiers</addr-line>
          ,
          <country country="DZ">Algeria</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>LTI, Ecole Nationale Supérieure des Technologies Avancées (ENSTA)</institution>
          ,
          <addr-line>Algiers</addr-line>
          ,
          <country country="DZ">Algeria</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The transformation of Open Government Data (OGD) into Linked Open Data (LOD) can revolutionize how we access and use OGD, since LOD technology guides the publication of data and its interconnection in a machine-readable medium, allowing automatic interpretation and exploitation. Considering the nature of OGD, which is often unstructured, heterogeneous, and significant in volume, an effort of integration is required to transform OGD into LOD. Among the essential issues that must be addressed is the generation of the Resource Description Framework (RDF) mapping model, which requires extracting RDF triples from OGD. This is a tricky process in OGD-LOD transformation due to the variety of OGD formats. Thus, many works addressed the case of Relational database RDF mapping, considering its helpful structure for extracting RDF triples. It is therefore necessary to provide a model that can generate RDF mapping for different input data formats, to extract the RDF triples and link the input datasets to other datasets on the web. This paper proposes a new approach for generating an RDF mapping model for OGD-LOD transformation. It can be used to transform any OGD data into LOD, regardless of its format (CSV, Excel, HTML, PDF, and TXT), using interlinking techniques such as Named Entity Recognition (NER) and DBpedia alignment, and including the four principles of linked data as a validation layer. We believe that it has the potential to significantly accelerate the adoption of LOD and make it more accessible to a broader range of users.</p>
      </abstract>
      <kwd-group>
        <kwd>Open Government Data (OGD)</kwd>
        <kwd>Linked Open Data (LOD)</kwd>
        <kwd>OGD-LOD transformation</kwd>
        <kwd>RDF mapping model</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Open Government Data (OGD) is an international collaboration between the United States, United
Kingdom, France, and Singapore governments to share machine-readable datasets covering
government activities [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The datasets are produced by governments or under the control of government
entities [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Many datasets could belong to government data, including data held indirectly by public
administrations (e.g., through agencies or subsidiaries), such as pollution/climate, education/childcare, and
traffic/congestion [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>A large number of applications have been developed that exploit OGD
(https://www.data.gov/applications) in different countries and offer many services to people
seeking practical information: for example, the distribution of job applications
by sector of activity and by region of a country, electricity consumption according to the type of
household appliance used by time slot of the day in another country, or, more generally, the foods
to avoid or to favor in the case of a given disease.</p>
      <p>
        OGD is often unstructured, heterogeneous, and significant in volume. This requires an effort of
integration to transform OGD into Linked Open Data (LOD). LOD is derived from combining
open and linked data [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. LOD aims at the large-scale implementation of a lightweight Semantic
Web [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        The Linked Data (LD) initiative provides a framework in which data are represented, connected, and
automatically accessed and processed by applications or web services. Thus, the linked data principles
allow data publication in a self-descriptive manner and facilitate the integration of data from different
sources [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In addition, linked data facilitates data discovery and consumption and reduces
redundancy [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        To transform OGD into LOD, the process of generating an RDF (Resource Description Framework)
mapping model is required to extract the RDF triples from the input data files [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        Many works were conducted to provide an RDF mapping language such as [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], [10], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [11], [12]
and [13], where the mapping is done manually and for specific input data formats [14], [15], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [12],
[10].
      </p>
      <p>
        This process is considered a difficult stage in the OGD-LOD transformation process since the
input data formats are various and of several data types [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], [16], [17].
      </p>
      <p>
        It is challenging to provide a model that can generate RDF mapping for mixed data formats, to
extract RDF triples and link them to external datasets on the web, without requiring intensive human
workload [12], [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], [16], [17], [18], [19].
      </p>
      <p>In this paper, we aim to cover this gap by providing an RDF mapping model to extract RDF triples
and link them to external datasets on the web, for structured and unstructured data formats and types,
as well as reducing human intervention in the mapping task.</p>
      <p>The organization of this paper is as follows: Section II presents the related works. Next, Section
III presents the proposed RDF mapping process. Finally, the conclusion summarizes the work and the
findings in Section IV.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>This section presents the current approaches to RDF mapping. The works in this field tended to provide
RDF mapping languages for Relational databases.</p>
      <p>
        Thus, many works were conducted to provide RDF mapping languages such as [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], [12] and
[10], where the RDF mapping was done manually and for specific input data formats such as Relational
databases [15], [13], [11], [14], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [12], [10].
      </p>
      <p>In [15], the authors provided a survey comparing the approaches allowing RDF mapping from
Relational databases, based on a defined reference framework comprising: mapping generation, query
execution, and data integration achieved by mapping Relational databases to RDF.</p>
      <p>In [13], the authors provided a comparison framework based on use cases and requirements for
mapping Relational Databases to RDF languages, where nine RDF mapping languages were treated:
Direct Mapping, eD2R, R2O, Relational.OWL, Virtuoso RDF Views, D2RQ, Triplify, R2RML, R3M. As
a result, the authors provided a classification of RDF mapping languages from Relational databases
into four categories: direct mapping, read-only general-purpose mapping, read-write general-purpose
mapping, and special-purpose mapping.</p>
      <p>In [11], the authors provided a survey comparing the approaches allowing RDF mapping from
Relational databases based on a defined reference framework comprising mapping generation, query
execution, and data integration achieved by mapping Relational databases to RDF.</p>
      <p>In [14], the authors presented a series of reusable RDF mapping patterns from Relational databases,
based on the author’s experience, where the mappings were represented in the R2RML language.</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref7">7</xref>
          ], the authors described xR2RML, a language for expressing RDF mapping from various
types of databases (XML, Object-Oriented, NoSQL).
xR2RML is an extension of the R2RML mapping language, which relies on the properties of the RML
mapping language. R2RML treats the RDF mapping based on relational databases. In contrast, RML
extends R2RML to treat the RDF mapping of heterogeneous data formats (CSV, XML, JSON) but does not
address the case of dealing with different types of heterogeneous databases. Thus, xR2RML extends
this scope to a wide range of non-Relational databases.
      </p>
      <p>In [12], the authors presented YAMA Mapping Language (YAMAML) as a lightweight RDF mapping
language, which is based on Yet Another Metadata Application Profiles (YAMA). YAMA is an extensible
intermediary application that generates application profile expressions (a combination of vocabularies,
mixed and matched based on different namespaces and optimized for a particular local application).
YAMAML allows mapping data structures to RDF based on the application profile. It is proposed
as an intermediary format for generating RDF but does not consider RDF representation syntax in the
output.</p>
      <p>In [10], the authors proposed using the RDF Mapping Language (RML) to transform Dublin Core
descriptions of (X)HTML web pages into an RDF model.</p>
      <p>
        From the presented RDF mapping approaches, we can notice either the need for intensive human
intervention in the mapping tasks or the restriction to particular data formats (Relational databases,
different types of databases, or Dublin Core descriptions of (X)HTML). An RDF mapping model is
therefore needed that addresses different input data formats (structured and
unstructured), extracts the RDF triples, and links the input datasets to other datasets on the web while
reducing human intervention through an automated process, since the RDF
mapping task otherwise requires an intensive human workload to cover the possible input formats of the data as
well as the type of the data itself [12], [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], [16], [17].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. The proposed RDF mapping process</title>
      <sec id="sec-3-1">
        <title>3.1. Overview and Objectives</title>
        <p>This section proposes an RDF mapping process to improve the OGD-LOD transformation without
requiring an intensive human workload.</p>
        <p>The necessary terminology and vocabulary for this process are defined as follows:
• Named Entity Recognition (NER): helps identify predefined entities in a text and is a fundamental
step in many tasks, like building knowledge graphs or answering questions [20].
• Subject-Predicate-Object alignment: allows the generation of Subject-Predicate-Object
triples from a text based on the extracted named entities and associated information in the text.
• RDF Mapping: represents the template that enables the generation of RDF triples and
associated links from well-structured formats like CSV.</p>
        <p>• OGD-LOD Transformation: the process of converting OGD into LOD (RDF format).
The proposed process considers three main objectives:
• The treatment of heterogeneous OGD formats (CSV, Excel, HTML, PDF and TXT).
• The generation of a single RDF file, linked to other datasets on the web based on the DBpedia
database.</p>
        <p>• The validation of the generated RDF file according to the linked data principles.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. RDF Mapping Process</title>
        <p>To provide a durable model that allows the mapping of OGD into LOD based on RDF graphs, we define
the following rules:
• The input data files must be in English.
• The RDF mapping process must consider all the input formats for a generic process to reduce
human intervention.
• The prefixes of the used ontologies must be defined in advance and be general enough to cover the
different types of data.</p>
        <p>• The domain name must be defined.
1. For the structured data (CSV and Excel), convert the Excel files to CSV for a
unified CSV format. Many libraries support this conversion, such as Pandas (https://pandas.pydata.org/),
the csv module (https://docs.python.org/3/library/csv.html) and xlrd (https://xlrd.readthedocs.io/en/latest/)
in Python, and Aspose (https://products.aspose.com/cells/java/) in Java.
2. For the unstructured data (HTML, PDF and TXT), extract the text from the HTML and PDF
files to treat them as a unified TXT format, using the IronPDF library
(https://ironpdf.com/java/examples/extract-text-from-pdf/) in Java, or the PyPDF2 (https://pypi.org/project/PyPDF2/),
pdfminer (https://pypi.org/project/pdfminer/) and pdfplumber (https://pypi.org/project/pdfplumber/) libraries in Python.
3. Define the prefixes and the domain name to use for the mapping and linking of the
different data types, such as: dbo &lt;http://dbpedia.org/ontology/&gt;, rdf &lt;https://www.w3.org/1999/02/22-rdf-syntax-ns&gt;,
owl &lt;https://www.w3.org/2002/07/owl&gt;, sdx &lt;https://www.epimorphics.com/vocab/sdx&gt;,
scv &lt;https://purl.org/NET/scovo&gt;, xsd &lt;https://www.w3.org/2001/XMLSchema&gt;,
rdfs &lt;https://www.w3.org/2000/01/rdf-schema&gt;, ckan &lt;https://ckan.net/ns&gt;,
foaf &lt;https://xmlns.com/foaf/0.1/&gt;, dc &lt;https://purl.org/dc/elements/1.1/&gt;.</p>
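        <p>The conversion step for structured data can be sketched in Python. This is a minimal illustration using only the standard csv module, assuming the Excel sheet has already been read into a list of rows (e.g., via xlrd or Pandas); the sheet content is hypothetical.</p>
        <p>
```python
import csv
import io

def rows_to_csv(rows):
    """Serialize rows (e.g., read from an Excel sheet with xlrd or
    Pandas) into a unified CSV string, the common input format for
    the subsequent mapping steps."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(rows)
    return buf.getvalue()

# Hypothetical sheet content: a header row followed by data rows.
sheet = [["City", "Population"], ["Algiers", "3415811"]]
print(rows_to_csv(sheet))
```
        </p>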
        <p>After that, the set of CSV files will be treated according to their formats (orange color in Figure 1),
as follows:
1. Use NER to identify named entities in CSV files, such as names of persons, countries,
organizations, and institutions, with libraries like SpaCy (https://spacy.io/), NLTK (https://www.nltk.org/)
and Flair (https://flairnlp.github.io/) in Python, or OpenNLP (https://opennlp.sourceforge.net/projects.html)
and Stanford NER (https://nlp.stanford.edu/software/CRF-NER.shtml) in Java.
2. Provide URIs for each line in the CSV file as main subjects, except the first line.
3. Set the elements of the first line in the CSV file as basic ordered predicates.
4. Set the elements of each line in the CSV file (except the first line) as basic ordered objects.
5. Match the subjects to predicates and objects in the same order using the proposed mapping
model to generate the RDF triples (Figure 2).
6. Link the extracted named entities to the DBpedia database and generate the RDF file.</p>
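        <p>Steps 2-5 can be sketched as follows. The domain and vocabulary namespaces (example.org) are placeholders for the prefixes defined earlier, and the row-numbered URI scheme is one illustrative choice, not the only possible one.</p>
        <p>
```python
import csv
import io
import urllib.parse

# Placeholder namespaces; real mappings would use the predefined
# prefixes (dbo, foaf, dc, ...) and the chosen domain name.
DOMAIN = "http://example.org/resource/"
VOCAB = "http://example.org/vocab/"

def csv_to_ntriples(csv_text):
    """Apply steps 2-5: mint a subject URI per data row, use the header
    cells as ordered predicates, and the row cells as ordered objects."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    triples = []
    for i, row in enumerate(data, start=1):
        subject = f"<{DOMAIN}row{i}>"                 # step 2: subject URI
        for pred_label, obj in zip(header, row):      # steps 3-5: match in order
            pred = f"<{VOCAB}{urllib.parse.quote(pred_label)}>"
            triples.append(f'{subject} {pred} "{obj}" .')
    return triples

for t in csv_to_ntriples("City,Population\nAlgiers,3415811\n"):
    print(t)
```
        </p>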
        <p>For the TXT files, they will be treated according to their formats (grey color in Figure 1), as follows:
1. Use NER to identify named entities in TXT files, such as names of persons, countries,
organizations, and institutions, with the same libraries cited above for the CSV case.
2. For each extracted named entity, extract the full sentences (Subject, Verb, Object) using the
OpenIE API (https://stanfordnlp.github.io/CoreNLP/openie.html) for Java.
3. Align the extracted sentences (Subject, Verb, Object) to RDF triples (Subject, Predicate, Object)
using the proposed mapping model (Figure 2).
4. Link the extracted named entities to the DBpedia database.</p>
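        <p>The alignment of an OpenIE-style (Subject, Verb, Object) extraction to an RDF triple, with NER hits linked to DBpedia, might look like the following sketch. The label-based DBpedia URI minting and the example.org predicate namespace are simplifying assumptions; a real aligner would disambiguate entities against the DBpedia endpoint.</p>
        <p>
```python
import urllib.parse

DBPEDIA = "http://dbpedia.org/resource/"
VOCAB = "http://example.org/vocab/"  # placeholder predicate namespace

def svo_to_triple(subject, verb, obj, entities):
    """Align one (Subject, Verb, Object) extraction to an RDF triple;
    terms recognized as named entities are linked to DBpedia resource
    URIs by label (naive linking, for illustration only)."""
    def term(t):
        if t in entities:  # NER hit -> DBpedia URI
            return f"<{DBPEDIA}{urllib.parse.quote(t.replace(' ', '_'))}>"
        return f'"{t}"'    # otherwise keep as a plain literal
    predicate = f"<{VOCAB}{urllib.parse.quote(verb.replace(' ', '_'))}>"
    return f"{term(subject)} {predicate} {term(obj)} ."

# Hypothetical OpenIE output for: "Algiers is the capital of Algeria."
ner_hits = {"Algiers", "Algeria"}
print(svo_to_triple("Algiers", "is the capital of", "Algeria", ner_hits))
```
        </p>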
        <p>The output for both file formats (CSV and TXT) will be an RDF file (Green color in Figure 1).</p>
        <p>The proposed mapping model presented in Figure 2 considers the case of transforming several
formats (CSV, Excel, TXT, PDF and HTML) to RDF using the XLWrap tool (https://xlwrap.sourceforge.io/),
since it is efficient for the transformation of spreadsheets to arbitrary RDF graphs based on a mapping
specification: it supports Microsoft Excel and OpenDocument spreadsheets as well as CSV files, and it
can load local files or download remote files via HTTP.</p>
        <p>Figure 3 presents an example of transforming a text automatically to RDF based on the proposed
mapping model.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Validation Process</title>
        <p>We can evaluate the efficiency of the proposed process based on the validity of its generated RDF file
(linked data format) of Figure 3.</p>
        <p>
          Thus, the generated RDF file is evaluated and validated according to the four principles using the
Ontology-Evaluation/LD-Principles tool (https://sourceforge.net/projects/evaluate-ontology-ldprinciples/), which was proposed in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] as follows (see Figures 4, 5, 6 and 7):
• Principle 1: extracts all triples from the RDF file, then checks that they are represented with valid
URIs. The validation is applied by replacing all detected non-URIs with URIs that identify them.
• Principle 2: extracts RDF triples that comply with the first principle and are dereferenceable
HTTP URIs (HTTP code testing with an agent to get a response).
• Principle 3: checks that the queried resource provides valuable information and validates the RDF
syntax of the resource.
• Principle 4: verifies that the datasets include links to external datasets. DBpedia is queried via
its endpoint for equivalent URIs.</p>
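        <p>A minimal check in the spirit of Principle 1 can be sketched as follows. The URI test (angle brackets plus scheme and authority) and the restriction to subject/predicate terms are simplifications of what the Ontology-Evaluation/LD-Principles tool performs; the triples are hypothetical.</p>
        <p>
```python
from urllib.parse import urlparse

def is_valid_uri(term):
    """Principle-1 style check: a term counts as a valid URI reference
    when it is written as <scheme://authority/...>."""
    if not (term.startswith("<") and term.endswith(">")):
        return False
    parsed = urlparse(term[1:-1])
    return bool(parsed.scheme) and bool(parsed.netloc)

def principle1_score(triples):
    """Share of subject/predicate terms that are valid URIs, in the
    spirit of the 92.7% score reported for the generated RDF file."""
    terms = [t for s, p, o in triples for t in (s, p)]
    valid = sum(is_valid_uri(t) for t in terms)
    return 100.0 * valid / len(terms)

triples = [
    ("<http://example.org/r1>", "<http://xmlns.com/foaf/0.1/name>", '"Algiers"'),
    ("row2", "<http://xmlns.com/foaf/0.1/name>", '"Oran"'),  # subject not a URI
]
print(f"{principle1_score(triples):.1f}%")
```
        </p>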
        <p>According to the first principle, the evaluation of the generated RDF file has been validated with a
92.7% score, as shown in Figure 4.</p>
        <p>The evaluation according to the second principle has been validated with a 100%
score, as shown in Figure 5.</p>
        <p>For the third principle, the evaluation has been validated by detecting the existence of 45 examples
of helpful information (see Figure 6).</p>
        <p>Hc6tNnVuEmhGcexample
27https://sourceforge.net/projects/evaluate-ontology-ldprinciples/</p>
        <p>For validating the generated RDF file according to the fourth principle, the evaluation has been
validated with a 7.37% score (see Figure 7).</p>
        <p>The average rate of the generated RDF file across the four principles has been validated
with a 61.27% score, as shown in Figure 7.</p>
        <p>Table 1 (column values, in order): Human intervention level: High, High, High, Medium, Medium;
RDF mapping reusability: None, Yes, None, None, None, Yes.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Comparison with Previous Approaches</title>
        <p>To show the novelty of our approach compared to previous approaches, Table 1 presents a
comparison of the proposed approach with previous work according to these criteria:
• The variety of input formats: to check the input data formats addressed for the RDF mapping
generation.
• The level of human intervention for the generation of RDF mapping: we consider three main
levels, "High" which requires adjusting the RDF model before running the process, "Medium"
which requires adjusting some parameters via the interface before execution, and "Low" which
must be generated once and then reused without needing adjustment before execution.
• The possibility of reusing the RDF mapping model: to evaluate whether the generated RDF
mapping model is not specific to one dataset, but can be reused directly on other datasets in order to
automate this process.</p>
        <p>From Table 1, we can notice the novelty of the proposed RDF mapping approach according to the
three main criteria.</p>
        <p>The proposed RDF mapping approach considers heterogeneous data formats for structured and
unstructured data. It reduces human intervention in the mapping generation since it provides a
general process that addresses the issue of RDF mapping as a single model. The proposed
approach also enables the reusability of the RDF mapping model since it is defined as a
general approach that does not depend on the data types or structures.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>This paper aims to improve the mapping of OGD in heterogeneous formats into RDF. Thus,
we provided a generic process for RDF mapping that reduces human intervention and facilitates the
extraction of RDF triples.</p>
      <p>We believe this proposition will be helpful for developers. Indeed, we presented libraries for
implementing the Natural Language Processing (NLP) and NER tasks in the process for Java and Python
programming languages.</p>
      <p>Nevertheless, other tasks must be implemented from scratch, such as matching the extracted entities
to RDF triples and linking the Entities to the DBpedia database.</p>
      <p>In future work, we aim to consider converting visual elements (tables, charts) from a PDF to text. To
improve RDF data quality, we plan to apply advanced preprocessing techniques to analyze document
structure, extracting metadata and title hierarchies before RDF mapping. We will also test OpenIE
performance for extracting relationships in unstructured text. AI techniques, especially Transformers
and pre-trained language models like BERT, may achieve improved results.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>This work has been carried out within the framework of the PRFU Project OGDIVAA28 (Open
Government Data Initiatives and Value delivery for Algerian public Agencies).</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-7">
      <title>References (continued)</title>
      <p>[10] T. Georgieva-Trifonova, Transforming Dublin Core (X)HTML descriptions to RDF model using RDF
mapping language, in: 2023 22nd International Symposium INFOTEH-JAHORINA (INFOTEH),
2023, pp. 1-5.
[11] M. Arenas, A. Bertails, E. Prud'hommeaux, J. Sequeda, et al., A direct mapping of relational data
to RDF, W3C Recommendation, 2012.
[12] N. Thalhath, M. Nagamori, T. Sakaguchi, YAMAML: An application profile based lightweight RDF
mapping language, in: International Conference On Asian Digital Libraries, 2022, pp. 412-420.
[13] M. Hert, G. Reif, H. Gall, A comparison of RDB-to-RDF mapping languages, in: Proceedings Of The
7th International Conference On Semantic Systems, 2011, pp. 25-32.
[14] J. Sequeda, F. Priyatna, B. Villazón-Terrazas, Relational database to RDF mapping patterns, in:
WOP, 2012.
[15] S. Sahoo, W. Halb, S. Hellmann, K. Idehen, T. T. Jr, S. Auer, J. Sequeda, A. Ezzat, A survey of current
approaches for mapping of relational databases to RDF, W3C RDB2RDF Incubator Group Report,
2009.
[16] M. Vafopoulos, S. Rallis, I. Anagnostopoulos, V. Peristeras, D. Negkas, I. Skaros, A. Tzani, Mining
and linking open economic data from governmental communities, in: Open Source Systems:
Enterprise Software And Solutions: 14th IFIP WG 2.13 International Conference, OSS 2018, 2018,
pp. 144-148.
[17] P. Budsapawanich, C. Anutariya, C. Haruechaiyasak, A conceptual framework for linking open
government data based-on geolocation: A case of Thailand, in: Semantic Technology: 8th Joint
International Conference, JIST 2018, 2018, pp. 352-366.
[18] K. Bouchelouche, A. R. Ghomari, L. Zemmouchi-Ghomari, Open government data (OGD)
publication as linked open data (LOD): A survey, International Journal Of Computer And Information
Technology 10 (2021).
[19] I. Mutambik, A. Almuqrin, J. Lee, J. Gauthier, A. Homadi, Open government data in Gulf
Cooperation Council countries: An analysis of progress, Sustainability 14 (2022) 7200.
[20] V. Moscato, M. Postiglione, G. Sperlì, Few-shot named entity recognition: Definition, taxonomy
and research directions, ACM Transactions on Intelligent Systems and Technology 14 (2023).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C. V.</given-names>
            <surname>Buttow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Weerts</surname>
          </string-name>
          ,
          <article-title>Open government data: The oecd's swiss army knife in the transformation of government</article-title>
          ,
          <source>Policy Internet</source>
          <volume>14</volume>
          (
          <year>2022</year>
          )
          <fpage>219</fpage>
          -
          <lpage>234</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Attard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Orlandi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Auer</surname>
          </string-name>
          ,
          <article-title>Data driven governments: Creating value through open government data</article-title>
          ,
          <source>in: Transactions On Large-Scale Data- and Knowledge-Centered Systems XXVII: Special Issue On Big Data For Complex Urban Systems</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>84</fpage>
          -
          <lpage>110</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Bouchelouche</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Ghomari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zemmouchi-Ghomari</surname>
          </string-name>
          ,
          <article-title>Enhanced analysis of open government data: Proposed metrics for improving data quality assessment</article-title>
          ,
          <source>in: 2022 5th International Symposium On Informatics And Its Applications (ISIA)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Attard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Orlandi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Scerri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Auer</surname>
          </string-name>
          ,
          <article-title>A systematic review of open government data initiatives</article-title>
          ,
          <source>Government Information Quarterly</source>
          <volume>32</volume>
          (
          <year>2015</year>
          )
          <fpage>399</fpage>
          -
          <lpage>418</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Nawi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Noah</surname>
          </string-name>
          , L. Zakaria,
          <article-title>Integration of linked open data in collaborative group recommender systems</article-title>
          ,
          <source>IEEE Access 9</source>
          (
          <year>2021</year>
          )
          <fpage>150753</fpage>
          -
          <lpage>150767</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Zemmouchi-Ghomari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Mezaache</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Oumessad</surname>
          </string-name>
          ,
          <article-title>Ontology assessment based on linked data principles</article-title>
          ,
          <source>International Journal Of Web Information Systems</source>
          <volume>14</volume>
          (
          <year>2018</year>
          )
          <fpage>453</fpage>
          -
          <lpage>479</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>F.</given-names>
            <surname>Michel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Djimenou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zucker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Montagnat</surname>
          </string-name>
          ,
          <article-title>xR2RML: Relational and non-relational databases to RDF mapping language</article-title>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P.</given-names>
            <surname>Heyvaert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chaves-Fraga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Priyatna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Corcho</surname>
          </string-name>
          , E. Mannens,
          <string-name>
            <given-names>R.</given-names>
            <surname>Verborgh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dimou</surname>
          </string-name>
          ,
          <article-title>Conformance test cases for the rdf mapping language (rml)</article-title>
          ,
          <source>in: Iberoamerican Knowledge Graphs And Semantic Web Conference</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>162</fpage>
          -
          <lpage>173</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Dimou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. V.</given-names>
            <surname>Sande</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Colpaert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Verborgh</surname>
          </string-name>
          , E. Mannens,
          <string-name>
            <given-names>R.</given-names>
            <surname>Walle</surname>
          </string-name>
          ,
          <article-title>RDF mapping language (RML)</article-title>
          ,
          <source>W3C Unofficial Draft</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>