<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Evaluating InterDev: A FAIR Platform for International Development Data</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Matt Murtagh-White</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>P. J. Wall</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Declan O'Sullivan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ADAPT, School of Computer Science and Statistics, Trinity College Dublin</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>ADAPT, Technological University Dublin</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>CRT-AI, School of Computer Science and Statistics, Trinity College Dublin</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>Over the past twenty years, the application of Randomised Controlled Trials in economics and global development has expanded, offering policymakers and researchers fresh perspectives on effective initiatives. InterDev, an online knowledge discovery platform, enables users to find, discover, and reuse data from evaluations structured according to the ERCT ontology. This study presents the first of three planned iterations evaluating the usability of InterDev through a user study in which participants completed 10 tasks while their task completion times, the interventions they required, and their think-aloud verbalisations were recorded. Participants also completed the Post-Study System Usability Questionnaire (PSSUQ). Thematic analysis of open-ended responses and recordings, along with quantitative analysis of the PSSUQ, revealed that while users generally find the platform functional, there are significant areas for improvement. Key findings indicate issues with error message clarity and overall user satisfaction, particularly in tasks involving filtering and managing collections. Users highlighted the need for enhanced search capabilities, better guidance and navigation, and more intuitive interface design.</p>
      </abstract>
      <kwd-group>
<kwd>Linked Data</kwd>
        <kwd>International Development</kwd>
        <kwd>Randomised Controlled Trials</kwd>
        <kwd>Knowledge Graph Representation</kwd>
<kwd>Data Exploration</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In the past two decades, the trend towards evidence-based public policy has catalysed a
significant shift in the social sciences, emphasising impact evaluation.
Drawing
on
methodologies from Randomised Controlled Trials (RCTs) in medical research, social scientists
and policymakers have embedded evaluation mechanisms into interventionist policies to assess
their effectiveness. This research approach has yielded important insights, particularly for
public policy in lower-income countries. For instance, studies have shown that childhood
exposure to cash transfer programs with conditions tied to health and education can lead to
improved educational, mobility, and labour market outcomes in adulthood [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Additionally, the
duration of exposure to these programs has been linked to increased long-term consumption
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Recently, there has been a growing emphasis on meta-analysis, with researchers seeking to
extract broader policy lessons from a deepening pool of evidence [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Efforts have been
made to create systematic review frameworks to support specific policy areas and address
external validity concerns that may arise from conclusions based on single evaluations.
Traditional meta-analyses have included both qualitative desk studies that synthesise findings
from multiple studies [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and quantitative approaches that aggregate treatment effects to
evaluate the effectiveness of interventions in a particular domain [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        InterDev, an online knowledge discovery platform, was developed to support this growing
need for systematic review frameworks by enabling users to find, discover, and reuse data from
evaluations, making such data Findable, Accessible, Interoperable and Reusable
(FAIR) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. It builds on previous work on ontology development by providing an interface that
allows for the curation of data according to the ERCT ontology framework [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], without the need
for knowledge graph expertise that may not be within the remit of non-technical researchers
[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. This follows research which has similarly adapted knowledge graph data for non-technical
researchers in health research [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. In this paper, we focus on the user evaluation of InterDev to
understand its effectiveness and usability, presenting the first set of results from a planned three
round evaluation. Participants were assigned 10 tasks and their task completion times, the
number of interventions required, and verbal processes via the think-aloud protocol were
recorded by the author and later transcribed. They also completed the Post-Study System
Usability Questionnaire (PSSUQ). Through thematic analysis of open-ended responses and
recordings, and quantitative analysis of the PSSUQ, we found that users generally navigate the
platform well but highlighted the need for additional functionalities, such as enhanced features
and improved search capabilities, to maximise its utility.
      </p>
      <p>This paper is structured as follows: Section 2 describes the methodology of InterDev,
detailing data collection, semantic uplift, data presentation, and usability
evaluation. Section 3 describes the technical architecture and data integration processes. Section
4 presents the evaluation, including both quantitative results, such as task completion times and
PSSUQ scores, and qualitative results from thematic analysis of user feedback. Finally, Section
5 concludes with a summary of findings, discussing strengths and areas for improvement, and
outlining future development directions for InterDev.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>The methodology of InterDev can be defined in five key stages, illustrated in Figure 1.</p>
      <p>Step 1: Data Collection. In the first phase, evaluation data from various development data
sources are gathered along with contextual data from multiple repositories. This comprehensive
data collection provides a rich dataset that enables the platform's functionality. Any source of
development data where data is structured as evaluations can be integrated.</p>
      <p>
        Step 2: Semantic Uplift. The second phase involves the semantic uplift of collected data,
where the data is structured according to the ERCT Ontology [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] using tools such as RDFLib
[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], allowing for the expression and combination of the underlying data as RDF [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. This
process converts CSV data into RDF (Resource Description Framework) format, facilitating the
creation of the InterDev Knowledge Graph (KG). The semantic uplift ensures that data is not
only standardized but also enriched with semantic meaning, enhancing the platform's ability to
support sophisticated queries and data integration, improving the discoverability and usability
of the information.
      </p>
      <p>
        Step 3: Data Presentation and Curation. The third phase focuses on the presentation
and curation of data within the InterDev user interface (UI). The platform offers various views,
such as Evidence View, Collection View, Submission View, and Evaluation Filters, to help users
navigate and interact with the data effectively. This phase is important for transforming raw
data into a user-friendly format, enabling users to access, explore, and curate the information
they need efficiently, particularly for non-technical researchers unfamiliar with semantic web
technology [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>Step 4: Data Export. In the fourth phase, users can export curated data collections in .ttl
(Turtle) format. This capability allows users to download and utilize the data outside the
platform, facilitating broader dissemination and application of the knowledge discovered
through InterDev. Data export is a vital feature for researchers who need to incorporate the
data into their analyses or share it with collaborators.</p>
      <p>Step 5: Usability Evaluation. The final phase involves a thorough usability evaluation,
comprising user experiments, refinements based on feedback, re-evaluations, and eventual
delivery of the improved platform. These evaluations draw on multiple metrics and formats,
such as the PSSUQ, user interviews, and thematic analysis. This phase ensures that the platform
meets user needs and expectations, leading to iterative refinements and enhancements based
on real user experiences.</p>
      <p>Overall, this approach allows the platform to grow and evolve in response to user
feedback, developing a KG-powered platform that is shaped by user needs. The KG backend
allows for diverse types of data to be integrated into the system, while the incorporation of the
ERCT ontology allows the mapping of this data to move towards standardisation. Meanwhile,
the development of the InterDev dashboard and frontend allows users who are familiar with
international development but do not have technical skills in semantic web technology to take
advantage of linked data. As this research develops, the platform is likely to change and adapt
in response to each evaluation round.</p>
      <sec id="sec-2-1">
        <title>2.1. State of the Art</title>
        <p>
          Existing portals, such as the 3IE Evidence Portal [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] and the American Economic
Association’s repository of randomized controlled trials [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], primarily provide high-level
overviews and repository functions. InterDev, in contrast, focuses on international
development and employs a decentralized, knowledge graph-based approach. This method
ensures data consistency and interoperability across diverse datasets. By adopting a single
standard for organizing and linking data, InterDev aims to enhance the accessibility and
effectiveness of data for policymakers and researchers in the international development sector.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. InterDev Implementation</title>
      <p>InterDev is designed to provide a knowledge discovery platform aimed at facilitating the
integration, curation, and analysis of impact evaluation data within the realm of international
development. The architecture of InterDev, shown in Figure 1, is centered around a knowledge
graph and an interface developed using React 18.2, with a backend infrastructure supported by
Flask 3.0. This setup ensures efficient data discovery and interaction.</p>
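      <p>A minimal sketch of this backend pattern is given below: a Flask JSON endpoint that the
React dashboard could query for evaluations, with optional sector and country filters. The
route, field names, and sample records are assumptions, not InterDev's actual API.</p>
      <preformat>
```python
# Hypothetical Flask endpoint sketch; route and fields are
# illustrative, not InterDev's actual API.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for results retrieved from the knowledge graph.
EVALUATIONS = [
    {"title": "School grants RCT", "sector": "Education", "country": "Kenya"},
    {"title": "Cash transfer pilot", "sector": "Health", "country": "Ghana"},
]

@app.route("/api/evaluations")
def evaluations():
    # Apply optional query-string filters, e.g. ?sector=Education
    sector = request.args.get("sector")
    country = request.args.get("country")
    results = [
        e for e in EVALUATIONS
        if (sector is None or e["sector"] == sector)
        and (country is None or e["country"] == country)
    ]
    return jsonify(results)

client = app.test_client()
print(client.get("/api/evaluations?sector=Education").get_json())
```
      </preformat>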
      <sec id="sec-3-1">
        <title>3.1. Data Collection and Uplift</title>
        <p>The data for this study was collected from multiple sources. Data from the International
Initiative for Impact Evaluation (3ie) was scraped from their evidence portal, providing
extensive information on the effectiveness of various development interventions. The American
Economic Association (AEA) Registry data was obtained through downloadable CSV files,
offering detailed records of randomized controlled trials. Additionally, contextual data from the
World Bank was sourced from their databank, encompassing a wide range of global
development indicators. This multi-source data collection approach underpins the robust
knowledge base of InterDev, facilitating thorough analysis and evaluation of development
initiatives.</p>
        <p>The collected data was uplifted using RDFLib to convert it into RDF (Resource Description
Framework) format. This process involved structuring the data according to the ERCT ontology,
ensuring consistency and interoperability across different datasets. RDFLib facilitated the
transformation of raw data into a standardized format, enabling integration within the
knowledge graph.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Main Dashboard</title>
        <p>The main dashboard is the central area for accessing the features of InterDev. It is divided
into three sections:</p>
        <p>Navigation Menu: Located on the left side, this menu provides quick access to key
functionalities such as filtering by sector or country.</p>
        <p>Primary Views: Users can switch between different views (Evidence View, Collection View,
Submission View) using buttons at the top.</p>
        <p>Content Area: The central part displays results of user interactions, such as evidence
summaries, collections, or submission forms.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Evidence View</title>
        <p>The Evidence View is designed for exploring and searching impact evaluations. Users can
refine their searches by filtering results based on criteria such as sector or country. Results are
displayed in a grid format, with each tile representing an evaluation. Tiles provide snapshots
including the title, authors, and a brief description. Clicking on a tile gives access to detailed
information about the evaluation, including methodology, findings, and related data.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Collection View</title>
        <p>The Collection View allows users to create, manage, and share collections of evaluations for
projects or policy decisions. Users can add evaluations from the Evidence View into their
collections, view contents, and share or download these collections directly from the platform.
This feature facilitates collaboration and the effective utilization of relevant studies.</p>
      </sec>
      <sec id="sec-3-5">
        <title>3.5. Submission View</title>
        <p>The Submission View provides an interface for submitting new evaluation data. It guides
users through the process to ensure comprehensive and standardized data collection, capturing
essential information such as the abstract, authors, title, project details, and evaluation design.
This approach adheres to ERCT ontology standards, ensuring submitted data is integrated into
the knowledge graph and accessible for future searches and analysis.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Evaluation</title>
      <sec id="sec-4-1">
        <p>The first iteration of the InterDev evaluation is described below.</p>
        <sec id="sec-4-1-1">
          <title>4.1. Experimental Design</title>
          <p>
            The evaluation methodology integrates both qualitative and quantitative approaches.
Participants completed ten specific tasks using the InterDev platform, such as searching for
evaluations, creating collections, and submitting new data. The study was conducted with five
participants, including a mix of PhD researchers and social science researchers, none of whom
had prior experience with semantic web technology. The evaluation aimed to assess how
effectively users could navigate, interact with, and utilize the platform for their research needs
without the requirement of experience in the semantic web. Task completion times were
recorded by the observer to measure efficiency. During these tasks, the think-aloud protocol
was employed, where participants verbalized their thoughts and actions, providing real-time
feedback on their experiences and any difficulties encountered [
            <xref ref-type="bibr" rid="ref17">17</xref>
            ].
          </p>
          <p>The tasks involved in the evaluation were as follows: selecting “Evidence View” from the
navigation bar and waiting for the information to appear (T1), selecting any trial from the
evidence view and viewing its associated information (T2), noting the sector of the selected trial
(T3), filtering the trials in the evidence view by the noted sector until only trials from that sector
appear (T4), adding four trials from this selection to the collection and confirming their presence
in the “Collection View” (T5), returning to the evidence view and filtering for both a country
and a sector, adding at most four more trials to the collection, and confirming their presence in
the “Collection View” (T6), going to the “Collection View,” filtering the collection by any
property, and downloading the collection (T7), submitting a new trial with any data in the “Trial
Submission” view (T8), finding the submitted evaluation data in the “Evidence View” (T9), and
finally, downloading the evaluation data in the .ttl format (T10).</p>
          <p>
            Additionally, instances where participants encountered an issue and required assistance
were recorded by the observer for each task to identify potential areas for improvement
within the platform. After completing
the tasks, participants filled out the Post-Study System Usability Questionnaire (PSSUQ), which
provided quantitative data on their overall satisfaction and the usability of the platform. The
PSSUQ is a standardised 19-question survey used to assess the usability of a system as it
evolves during development [
            <xref ref-type="bibr" rid="ref18">18</xref>
            ].
          </p>
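          <p>As a sketch of how such responses can be scored, the snippet below uses the subscale
groupings of Lewis's 19-item PSSUQ (System Usefulness, items 1-8; Information Quality, items
9-15; Interface Quality, items 16-18), where each item is rated on a 7-point scale and lower
scores indicate better usability; the sample responses are fabricated for illustration.</p>
          <preformat>
```python
# PSSUQ scoring sketch. Subscale groupings follow Lewis's 19-item
# instrument; the sample ratings below are fabricated.
from statistics import mean

def score_pssuq(responses):
    """responses: 19 ratings on a 1-7 scale (lower is better)."""
    assert len(responses) == 19
    return {
        "overall": mean(responses),
        "sysuse": mean(responses[0:8]),     # items 1-8
        "infoqual": mean(responses[8:15]),  # items 9-15
        "intqual": mean(responses[15:18]),  # items 16-18
    }

sample = [2, 3, 2, 2, 3, 2, 2, 3, 5, 4, 4, 3, 3, 4, 4, 2, 3, 3, 4]
print(score_pssuq(sample))
```
          </preformat>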
          <p>To analyze the data, thematic analysis was conducted on the open-ended responses within
the PSSUQ and recordings from the think-aloud protocol, identifying common themes and user
feedback. The thematic analysis followed a standardised 6 step process: familiarisation with the
data, generation of initial codes, a search for themes, a review of themes, definition and naming
of themes, and then reporting on findings [19]. Instances of themes were tagged in-text, and a
Python script was written to count and summarise these instances across the
evaluation data. The PSSUQ results were quantitatively analyzed to assess various aspects of
usability, such as ease of use, efficiency, and error handling. This methodology ensures a
thorough evaluation of the InterDev platform, combining both user experiences and measurable
data to inform future improvements and enhance the platform's usability and effectiveness for
researchers and policymakers in international development.</p>
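          <p>The theme-counting step can be sketched as follows, assuming themes were tagged in-text
with bracketed codes; the tag format and sample transcript lines are illustrative.</p>
          <preformat>
```python
# Sketch of the theme-counting script: tally bracketed in-text
# theme tags across transcripts. Tag format is an assumption.
import re
from collections import Counter

THEMES = ["UID", "GN", "FF", "EP"]

def count_themes(documents):
    """Count occurrences of [UID]/[GN]/[FF]/[EP] tags per theme."""
    pattern = re.compile(r"\[(" + "|".join(THEMES) + r")\]")
    counts = Counter()
    for doc in documents:
        counts.update(pattern.findall(doc))
    return counts

transcripts = [
    "The filter button was hard to find [UID] and the prompt unclear [GN].",
    "Search worked well [FF] but the page loaded slowly [EP] [UID].",
]
print(count_themes(transcripts))  # UID counted twice, others once
```
          </preformat>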
        </sec>
        <sec id="sec-4-1-2">
          <title>4.2. Quantitative Results</title>
          <p>Figure 3 illustrates the box plot of time spent to complete each task. Tasks such as selecting
the “Evidence View” from the navigation bar (Task 1), selecting any trial from the evidence
view (Task 2), and noting the sector of the trial (Task 3) have low median completion times and
minimal variability, indicating that users found these tasks straightforward and easy to
complete. However, tasks involving filtering and managing collections presented more
challenges. For instance, Task 4, which requires filtering trials for the noted sector, shows
moderate median completion time with some variability, suggesting users found the filtering
function somewhat challenging. Task 5, which involves adding four trials to the collection, and
Task 6, which includes filtering for both a country and a sector, both exhibit higher median
completion times and significant variability, indicating these tasks were particularly difficult
for users. Other tasks, such as submitting a new trial (Task 8) and finding the submitted
evaluation in the evidence view (Task 9), also show higher median completion times and some
outliers, reflecting challenges in the submission process and locating submitted evaluations.</p>
          <p>Figure 4 shows the intervention count for each task, providing further insights into task
difficulty. Task 7, which involves filtering the collection by any property and downloading it,
had the highest number of interventions, suggesting it was particularly challenging for users.
Tasks 2, 3, and 5 had moderate intervention counts, indicating these tasks presented some
challenges but were generally manageable. Tasks 1, 4, and 8 had lower intervention counts,
suggesting these tasks were relatively straightforward for users. Tasks 6, 9, and 10 had no
recorded interventions, indicating that these tasks were the easiest for users to complete
independently.</p>
          <p>The analysis of the PSSUQ data, shown in Figure 5, indicates that users generally find the system
functional, with lower scores reflecting better usability and satisfaction. However, significant
variability in satisfaction levels was observed. Notably, questions related to error messages (Q9)
and overall satisfaction (Q19) exhibit higher scores and outliers, suggesting inconsistent user
experiences in these areas. This inconsistency underscores the need for targeted improvements
in error message clarity and overall system responsiveness. Additionally, the higher median
scores for some questions indicate areas where users are less satisfied, highlighting the
necessity for comprehensive enhancements in interface design and functionality.</p>
          <p>The implications of these findings suggest that while the InterDev platform serves its
primary purpose, there is substantial room for improvement. Enhancing error message clarity
can significantly reduce user frustration and improve task efficiency, allowing more intuitive
interaction with the system.</p>
        </sec>
        <sec id="sec-4-1-3">
          <title>4.3. Qualitative Results</title>
          <p>Table 1 summarizes the thematic analysis for the first iteration of InterDev user testing,
providing further insights into user feedback. Usability and Interface Design (UID), which
encompasses overall design, intuitiveness, and ease of use, had the highest frequency with 19
mentions, indicating that users frequently commented on the visual layout, ease of finding
information, and general user experience. Guidance and Navigation (GN) had 13 mentions,
highlighting user comments on the clarity of instructions, ease of navigation, and suggestions
for improving user guidance, such as better task prompts and visual cues. Functionality and
Features (FF) was mentioned 12 times, reflecting feedback related to the platform’s
functionalities, including search capabilities, filtering options, and specific features like
collection management and submission forms. Efficiency and Performance (EP), with 9
mentions, included observations related to the speed and efficiency of completing tasks, as well
as any technical issues or bugs encountered during use.</p>
        </sec>
      </sec>
      <table-wrap id="table1">
        <label>Table 1</label>
        <caption>
          <p>Themes identified in the thematic analysis of the first iteration of InterDev user testing.</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Theme</th>
              <th>Description</th>
              <th>Frequency</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Usability and Interface Design (UID)</td>
              <td>Overall design, intuitiveness, and ease of use of the platform's interface. This includes feedback on visual layout, ease of finding information, and general user experience.</td>
              <td>19</td>
            </tr>
            <tr>
              <td>Guidance and Navigation (GN)</td>
              <td>Comments on the clarity of instructions, ease of navigation, and suggestions for improving user guidance, such as better task prompts and visual cues.</td>
              <td>13</td>
            </tr>
            <tr>
              <td>Functionality and Features (FF)</td>
              <td>Feedback related to the platform's functionalities, such as search capabilities, filtering options, and specific features like collection management and submission forms.</td>
              <td>12</td>
            </tr>
            <tr>
              <td>Efficiency and Performance (EP)</td>
              <td>Observations related to the speed and efficiency of completing tasks, as well as any technical issues or bugs encountered during use.</td>
              <td>9</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>The initial evaluation of InterDev demonstrates its potential to enhance data discovery and
usability for researchers and policymakers in international development. While users found the
platform generally functional, significant improvements are needed, particularly in filtering,
error message clarity, search capabilities, and overall interface design. Quantitative and
qualitative feedback from our user study highlighted key areas for enhancement, such as better
guidance, improved navigation, and more intuitive features. These insights will guide the
iterative refinement of InterDev to better meet user needs. While InterDev shows promise,
continued user-centered development is required. Future iterations will address the
identified challenges, further refine the platform against user needs, and aim to improve the user experience and
maximize the platform’s utility in making international development data more accessible and
actionable.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>This research was conducted with the financial support of Science Foundation Ireland under
Grant Agreement No. 13/RC/2106_P2 at the ADAPT SFI Research Centre at Trinity College
Dublin. ADAPT, the SFI Research Centre for AI-Driven Digital Content Technology, is funded
by Science Foundation Ireland through the SFI Research Centres Programme.</p>
      <p>[19] L. S. Nowell, J. M. Norris, D. E. White, and N. J. Moules, “Thematic analysis: Striving to
meet the trustworthiness criteria,” Int. J. Qual. Methods, vol. 16, no. 1, p. 1609406917733847,
2017.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] D. de Walque, L. Fernald, P. Gertler, and M. Hidrobo, “Cash transfers and child and adolescent development,” 2017.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] S. W. Parker and T. Vogl, “Do conditional cash transfers improve economic outcomes in the next generation? Evidence from Mexico,” National Bureau of Economic Research, 2018.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] S. Baird, F. H. Ferreira, B. Özler, and M. Woolcock, “Relative effectiveness of conditional and unconditional cash transfers for schooling outcomes in developing countries: a systematic review,” Campbell Syst. Rev., vol. 9, no. 1, pp. 1-124, 2013.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] F. Bastagli et al., “Cash transfers: what does the evidence say,” Rigorous Rev. Programme Impact Role Des. Implement. Featur., Lond.: ODI, vol. 1, no. 7, 2016.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] H. Waddington et al., “How to do a good systematic review of effects in international development: a tool kit,” J. Dev. Eff., vol. 4, no. 3, pp. 359-387, 2012.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] R. T. Edwards, J. M. Charles, and H. Lloyd-Williams, “Public health economics: a systematic review of guidance for the economic evaluation of public health interventions and discussion of key methodological issues,” BMC Public Health, vol. 13, no. 1, pp. 1-13, 2013.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Peters</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Langbein</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Roberts</surname>
          </string-name>
          , “
          <article-title>Policy evaluation, randomized controlled trials, and external validity - A systematic review</article-title>
          ,”
          <source>Econ. Lett.</source>
          , vol.
          <volume>147</volume>
          , pp.
          <fpage>51</fpage>
          -
          <lpage>54</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Wilkinson</surname>
          </string-name>
          et al., “
          <article-title>The FAIR Guiding Principles for scientific data management and stewardship</article-title>
          ,”
          <source>Sci. Data</source>
          , vol.
          <volume>3</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Murtagh-White</surname>
          </string-name>
          , “
          <article-title>ERCT: An Ontology for Describing Randomised Controlled Trials in the Social Sciences</article-title>
          ,”
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K.</given-names>
            <surname>Smith-Yoshimura</surname>
          </string-name>
          , “
          <article-title>Analysis of 2018 international linked data survey for implementers</article-title>
          ,”
          <source>Code4Lib J.</source>
          , no.
          <issue>42</issue>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Navarro-Gallinad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Orlandi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>O'Sullivan</surname>
          </string-name>
          , “
          <article-title>Enhancing rare disease research with semantic integration of environmental and health data</article-title>
          ,” presented at the
          <source>Proceedings of the 10th International Joint Conference on Knowledge Graphs</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>27</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] RDFLib, “RDFlib.” [Online]. Available: https://pypi.org/project/rdflib/</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D.</given-names>
            <surname>Brickley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. V.</given-names>
            <surname>Guha</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>McBride</surname>
          </string-name>
          , “
          <source>RDF Schema 1.1</source>
          ,” W3C Recomm., vol.
          <volume>25</volume>
          , pp.
          <fpage>2004</fpage>
          -
          <lpage>2014</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>L.</given-names>
            <surname>Rietveld</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Hoekstra</surname>
          </string-name>
          , “
          <article-title>The YASGUI family of SPARQL clients</article-title>
          ,”
          <source>Semantic Web</source>
          , vol.
          <volume>8</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>373</fpage>
          -
          <lpage>383</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15] International Initiative for Impact Evaluation, “
          <article-title>3ie Development Evidence Portal</article-title>
          .” Accessed: Apr. 04,
          <year>2022</year>
          . [Online]. Available: https://www.3ieimpact.org/evidence-hub
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] American Economic Association, “
          <article-title>Trial Data Access</article-title>
          .” Accessed: Jul. 13,
          ,
          <year>2021</year>
          . [Online]. Available: https://www.socialscienceregistry.org/site/data
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>T.</given-names>
            <surname>Boren</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Ramey</surname>
          </string-name>
          , “
          <article-title>Thinking aloud: Reconciling theory and practice</article-title>
          ,”
          <source>IEEE Trans. Prof. Commun.</source>
          , vol.
          <volume>43</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>261</fpage>
          -
          <lpage>278</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Lewis</surname>
          </string-name>
          , “
          <article-title>Psychometric evaluation of the PSSUQ using data from five years of usability studies</article-title>
          ,”
          <source>Int. J. Hum.-Comput. Interact.</source>
          , vol.
          <volume>14</volume>
          , no.
          <issue>3-4</issue>
          , pp.
          <fpage>463</fpage>
          -
          <lpage>488</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>