<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Evaluating InterDev: A FAIR Platform for International Development Data</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Matt</forename><surname>Murtagh-White</surname></persName>
						</author>
						<author>
							<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Wall</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">ADAPT</orgName>
								<orgName type="institution">Technological University Dublin</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Declan</forename><surname>O'Sullivan</surname></persName>
							<affiliation key="aff2">
								<orgName type="department" key="dep1">ADAPT</orgName>
								<orgName type="department" key="dep2">School of Computer Science and Statistics</orgName>
								<orgName type="institution">Trinity College Dublin</orgName>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">CRT-AI</orgName>
								<orgName type="department" key="dep2">School of Computer Science and Statistics</orgName>
								<orgName type="institution">Trinity College Dublin</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Evaluating InterDev: A FAIR Platform for International Development Data</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">2BCADA4C7D8D48FFD8F1B4555376C3E8</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:34+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Linked Data</term>
					<term>International Development</term>
					<term>Randomised Controlled Trials</term>
					<term>Knowledge Graph Representation</term>
					<term>Data Exploration</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Over the past twenty years, the application of Randomised Controlled Trials in economics and global development has expanded, offering policymakers and researchers fresh perspectives on effective initiatives. InterDev, an online knowledge discovery platform, enables users to find, discover, and reuse data from evaluations structured according to the ERCT ontology. This study is the first of three planned iterations evaluating the usability of InterDev through a user study in which participants completed 10 tasks while their task completion times and the interventions they required were recorded and the think-aloud protocol was employed. Participants also completed the Post-Study System Usability Questionnaire (PSSUQ). Thematic analysis of open-ended responses and recordings, along with quantitative analysis of the PSSUQ, revealed that while users generally find the platform functional, there are significant areas for improvement. Key findings indicate issues with error message clarity and overall user satisfaction, particularly in tasks involving filtering and managing collections. Users highlighted the need for enhanced search capabilities, better guidance and navigation, and more intuitive interface design.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In the past two decades, the trend towards evidence-based public policy has catalysed a significant shift in the social sciences, emphasising impact evaluation. Drawing on methodologies from Randomised Controlled Trials (RCTs) in medical research, social scientists and policymakers have embedded evaluation mechanisms into interventionist policies to assess their effectiveness. This research approach has yielded important insights, particularly for public policy in lower-income countries. For instance, studies have shown that childhood exposure to cash transfer programs with conditions tied to health and education can lead to improved educational, mobility, and labour market outcomes in adulthood <ref type="bibr" target="#b0">[1]</ref>. Additionally, the duration of exposure to these programs has been linked to increased long-term consumption <ref type="bibr" target="#b1">[2]</ref>.</p><p>Recently, there has been a growing emphasis on meta-analysis, with researchers seeking to extract broader policy lessons from a deepening pool of evidence <ref type="bibr" target="#b2">[3]</ref>, <ref type="bibr" target="#b3">[4]</ref>. Efforts have been made to create systematic review frameworks to support specific policy areas and address external validity concerns that may arise from conclusions based on single evaluations. 
Traditional meta-analyses have included both qualitative desk studies that synthesise findings from multiple studies <ref type="bibr" target="#b4">[5]</ref>, <ref type="bibr" target="#b5">[6]</ref> and quantitative approaches that aggregate treatment effects to evaluate the effectiveness of interventions in a particular domain <ref type="bibr" target="#b2">[3]</ref>, <ref type="bibr" target="#b6">[7]</ref>.</p><p>InterDev, an online knowledge discovery platform, was developed to support this growing need for systematic review frameworks by enabling users to find, discover, and reuse data from evaluations, making such data Findable, Accessible, Interoperable and Reusable (FAIR) <ref type="bibr" target="#b7">[8]</ref>. It builds on previous work on ontology development by providing an interface that allows for the curation of data according to the ERCT ontology framework <ref type="bibr" target="#b8">[9]</ref>, without the need for knowledge graph expertise that may not be within the remit of non-technical researchers <ref type="bibr" target="#b9">[10]</ref>. This follows research which has similarly adapted knowledge graph data for non-technical researchers in health research <ref type="bibr" target="#b10">[11]</ref>.</p><p>In this paper, we focus on the user evaluation of InterDev to understand its effectiveness and usability, presenting the first set of results from a planned three-round evaluation. Participants were assigned 10 tasks, and their task completion times, the number of interventions required, and verbal processes via the think-aloud protocol were recorded by the author and later transcribed. They also completed the Post-Study System Usability Questionnaire (PSSUQ). 
Through thematic analysis of open-ended responses and recordings, and quantitative analysis of the PSSUQ, we found that users generally navigate the platform well but highlighted the need for additional functionalities, such as improved search capabilities and more intuitive navigation, to maximise its utility.</p><p>This paper is structured as follows: Section 2 describes the implementation and methodology of InterDev, detailing data collection, semantic uplift, data presentation, and usability evaluation. Section 3 describes the technical architecture and data integration processes. Section 4 presents the evaluation, including both quantitative results, such as task completion times and PSSUQ scores, and qualitative results from thematic analysis of user feedback. Finally, Section 5 concludes with a summary of findings, discussing strengths and areas for improvement, and outlining future development directions for InterDev.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Methodology</head><p>The methodology of InterDev can be defined in five key stages, illustrated in Figure <ref type="figure" target="#fig_0">1</ref>. Step 1: Data Collection. In the first phase, evaluation data from various development data sources are gathered along with contextual data from multiple repositories. This comprehensive data collection provides a rich dataset that enables the platform's functionality. Any source of development data where data is structured as evaluations can be integrated.</p><p>Step 2: Semantic Uplift. The second phase involves the semantic uplift of collected data, where the data is structured according to the ERCT Ontology <ref type="bibr" target="#b8">[9]</ref> using tools such as RDFLib <ref type="bibr" target="#b11">[12]</ref>, allowing for the expression and combination of the underlying data as RDF <ref type="bibr" target="#b12">[13]</ref>. This process converts CSV data into RDF (Resource Description Framework) format, facilitating the creation of the InterDev Knowledge Graph (KG). The semantic uplift ensures that data is not only standardized but also enriched with semantic meaning, enhancing the platform's ability to support sophisticated queries and data integration, improving the discoverability and usability of the information.</p><p>Step 3: Data Presentation and Curation. The third phase focuses on the presentation and curation of data within the InterDev user interface (UI). The platform offers various views, such as Evidence View, Collection View, Submission View, and Evaluation Filters, to help users navigate and interact with the data effectively. This phase is important for transforming raw data into a user-friendly format, enabling users to access, explore, and curate the information they need efficiently, particularly for non-technical researchers unfamiliar with semantic web technology <ref type="bibr" target="#b13">[14]</ref>.</p><p>Step 4: Data Export. 
In the fourth phase, users can export curated data collections in .ttl (Turtle) format. This capability allows users to download and utilize the data outside the platform, facilitating broader dissemination and application of the knowledge discovered through InterDev. Data export is a vital feature for researchers who need to incorporate the data into their analyses or share it with collaborators.</p><p>Step 5: Usability Evaluation. The final phase involves a thorough usability evaluation, consisting of user experiments, refinements based on feedback, re-evaluations, and eventual delivery of the improved platform. These evaluations draw on multiple metrics and formats, such as the PSSUQ, user interviews, and thematic analysis. This phase ensures that the platform meets user needs and expectations, leading to iterative refinements and enhancements based on real user experiences.</p><p>Overall, this approach allows the platform to grow and evolve in response to user feedback, developing a KG-powered platform that is shaped by user needs. The KG backend allows diverse types of data to be integrated into the system, while the incorporation of the ERCT ontology allows the mapping of this data to move towards standardisation. Meanwhile, the development of the InterDev dashboard and frontend allows users who are familiar with international development but lack technical skills in semantic web technology to take advantage of linked data. As this research develops, the platform is likely to change and adapt in response to each evaluation round.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">State of the Art</head><p>Existing portals, such as the 3IE Evidence Portal <ref type="bibr" target="#b14">[15]</ref> and the American Economic Association's repository of randomized controlled trials <ref type="bibr" target="#b15">[16]</ref>, primarily provide high-level overviews and repository functions. InterDev, in contrast, focuses on international development and employs a decentralized, knowledge graph-based approach. This method ensures data consistency and interoperability across diverse datasets. By adopting a single standard for organizing and linking data, InterDev aims to enhance the accessibility and effectiveness of data for policymakers and researchers in the international development sector. InterDev is designed to provide a knowledge discovery platform aimed at facilitating the integration, curation, and analysis of impact evaluation data within the realm of international development. The architecture of InterDev, shown in Figure <ref type="figure" target="#fig_1">2</ref>, is centered around a knowledge graph and an interface developed using React 18.2, with a backend infrastructure supported by Flask 3.0. This setup ensures efficient data discovery and interaction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">InterDev Implementation</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Data Collection and Uplift</head><p>The data for this study was collected from multiple sources. Data from the International Initiative for Impact Evaluation (3ie) was scraped from their evidence portal, providing extensive information on the effectiveness of various development interventions. The American Economic Association (AEA) Registry data was obtained through downloadable CSV files, offering detailed records of randomized controlled trials. Additionally, contextual data from the World Bank was sourced from their databank, encompassing a wide range of global development indicators. This multi-source data collection approach underpins the robust knowledge base of InterDev, facilitating thorough analysis and evaluation of development initiatives.</p><p>The collected data was uplifted using RDFLib to convert it into RDF (Resource Description Framework) format. This process involved structuring the data according to the ERCT ontology, ensuring consistency and interoperability across different datasets. RDFLib facilitated the transformation of raw data into a standardized format, enabling integration within the knowledge graph. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Main Dashboard</head><p>The main dashboard is the central area for accessing the features of InterDev. It is divided into three sections:</p><p>Navigation Menu: Located on the left side, this menu provides quick access to key functionalities such as filtering by sector or country.</p><p>Primary Views: Users can switch between different views (Evidence View, Collection View, Submission View) using buttons at the top.</p><p>Content Area: The central part displays results of user interactions, such as evidence summaries, collections, or submission forms.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Evidence View</head><p>The Evidence View is designed for exploring and searching impact evaluations. Users can refine their searches by filtering results based on criteria such as sector or country. Results are displayed in a grid format, with each tile representing an evaluation. Tiles provide snapshots including the title, authors, and a brief description. Clicking on a tile gives access to detailed information about the evaluation, including methodology, findings, and related data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Collection View</head><p>The Collection View allows users to create, manage, and share collections of evaluations for projects or policy decisions. Users can add evaluations from the Evidence View into their collections, view contents, and share or download these collections directly from the platform. This feature facilitates collaboration and the effective utilization of relevant studies.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Submission View</head><p>The Submission View provides an interface for submitting new evaluation data. It guides users through the process to ensure comprehensive and standardized data collection, capturing essential information such as the abstract, authors, title, project details, and evaluation design. This approach adheres to ERCT ontology standards, ensuring submitted data is integrated into the knowledge graph and accessible for future searches and analysis.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Evaluation</head><p>The first iteration of the InterDev evaluation is described below.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Experimental Design</head><p>The evaluation methodology integrates both qualitative and quantitative approaches. Participants completed ten specific tasks using the InterDev platform, such as searching for evaluations, creating collections, and submitting new data. The study was conducted with five participants, including a mix of PhD researchers and social science researchers, none of whom had prior experience with semantic web technology. The evaluation aimed to assess how effectively users could navigate, interact with, and utilize the platform for their research needs without the requirement of experience in the semantic web. Task completion times were recorded by the observer to measure efficiency. During these tasks, the think-aloud protocol was employed, where participants verbalized their thoughts and actions, providing real-time feedback on their experiences and any difficulties encountered <ref type="bibr" target="#b16">[17]</ref>.</p><p>The tasks involved in the evaluation were as follows: selecting "Evidence View" from the navigation bar and waiting for the information to appear (T1), selecting any trial from the evidence view and viewing its associated information (T2), noting the sector of the selected trial (T3), filtering the trials in the evidence view by the noted sector until only trials from that sector appear (T4), adding four trials from this selection to the collection and confirming their presence in the "Collection View" (T5), returning to the evidence view and filtering for both a country and a sector, adding at most four more trials to the collection, and confirming their presence in the "Collection View" (T6), going to the "Collection View," filtering the collection by any property, and downloading the collection (T7), submitting a new trial with any data in the "Trial Submission" view (T8), finding the submitted evaluation data in the "Evidence View" (T9), and finally, downloading 
the evaluation data in the .ttl format (T10).</p><p>Additionally, instances where participants encountered an issue and required assistance were recorded by the observer for each task, identifying potential areas for improvement within the platform. After completing the tasks, participants filled out the Post-Study System Usability Questionnaire (PSSUQ), which provided quantitative data on their overall satisfaction and the usability of the platform. This is a standardised 19-question survey that assesses how a system's usability evolves during its development <ref type="bibr" target="#b17">[18]</ref>.</p><p>To analyse the data, thematic analysis was conducted on the open-ended responses within the PSSUQ and recordings from the think-aloud protocol, identifying common themes and user feedback. The thematic analysis followed a standardised six-step process: familiarisation with the data, generation of initial codes, searching for themes, reviewing themes, defining and naming themes, and reporting on findings <ref type="bibr" target="#b18">[19]</ref>. Instances of themes were tagged in-text, and a Python script was written to count and summarise them across the evaluation data. The PSSUQ results were quantitatively analysed to assess various aspects of usability, such as ease of use, efficiency, and error handling. This methodology ensures a thorough evaluation of the InterDev platform, combining both user experiences and measurable data to inform future improvements and enhance the platform's usability and effectiveness for researchers and policymakers in international development.</p></div>
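The theme-counting step described above can be sketched as a short script. The bracketed tagging convention (`[UID]`, `[GN]`, `[FF]`, `[EP]`) is a hypothetical illustration, since the exact in-text tagging format used in the study is not specified.

```python
import re
from collections import Counter

# Hypothetical convention: theme instances tagged in-text as [UID], [GN], [FF], [EP].
THEMES = {"UID", "GN", "FF", "EP"}

def count_themes(transcripts):
    """Count tagged theme instances across a list of transcript strings."""
    counts = Counter()
    for text in transcripts:
        for tag in re.findall(r"\[([A-Z]+)\]", text):
            if tag in THEMES:
                counts[tag] += 1
    return counts

transcripts = [
    "The layout was clear [UID] but I wasn't sure where to click next [GN].",
    "Filtering felt slow [EP] and the search box was hard to find [UID].",
]
print(count_themes(transcripts))  # Counter({'UID': 2, 'GN': 1, 'EP': 1})
```

Summing such counts over all five participants' transcripts yields frequency totals like those reported in Table 1.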
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Quantitative Results</head><p>Figure <ref type="figure" target="#fig_3">4</ref> illustrates the box plot of time spent to complete each task. Tasks such as selecting the "Evidence View" from the navigation bar (Task 1), selecting any trial from the evidence view (Task 2), and noting the sector of the trial (Task 3) have low median completion times and minimal variability, indicating that users found these tasks straightforward and easy to complete. However, tasks involving filtering and managing collections presented more challenges. For instance, Task 4, which requires filtering trials for the noted sector, shows a moderate median completion time with some variability, suggesting users found the filtering function somewhat challenging. Task 5, which involves adding four trials to the collection, and Task 6, which includes filtering for both a country and a sector, both exhibit higher median completion times and significant variability, indicating these tasks were particularly difficult for users. Other tasks, such as submitting a new trial (Task 8) and finding the submitted evaluation in the evidence view (Task 9), also show higher median completion times and some outliers, reflecting challenges in the submission process and locating submitted evaluations. Figure <ref type="figure" target="#fig_4">5</ref> shows the intervention count for each task, providing further insights into task difficulty. Task 7, which involves filtering the collection by any property and downloading it, had the highest number of interventions, suggesting it was particularly challenging for users. Tasks 2, 3, and 5 had moderate intervention counts, indicating these tasks presented some challenges but were generally manageable. Tasks 1, 4, and 8 had lower intervention counts, suggesting these tasks were relatively straightforward for users. 
Tasks 6, 9, and 10 had no recorded interventions, indicating that these tasks were the easiest for users to complete independently. The analysis of the PSSUQ data seen in Figure <ref type="figure" target="#fig_5">6</ref> indicates that users generally find the system functional, with lower scores reflecting better usability and satisfaction. However, significant variability in satisfaction levels was observed. Notably, questions related to error messages (Q9) and overall satisfaction (Q19) exhibit higher scores and outliers, suggesting inconsistent user experiences in these areas. This inconsistency underscores the need for targeted improvements in error message clarity and overall system responsiveness. Additionally, the higher median scores for some questions indicate areas where users are less satisfied, highlighting the necessity for comprehensive enhancements in interface design and functionality.</p><p>The implications of these findings suggest that while the InterDev platform serves its primary purpose, there is substantial room for improvement. Enhancing error message clarity can significantly reduce user frustration and improve task efficiency, allowing more intuitive interaction with the system.</p></div>
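The PSSUQ subscale aggregation used in this analysis (SysUse from Q1-8, InfoQual from Q9-15, IntQual from Q16-18, and Overall from Q1-19, on a 7-point scale where lower is better) can be computed as below. The response values are illustrative, not data from the study.

```python
from statistics import mean

# PSSUQ subscales: question ranges per the standard instrument.
SUBSCALES = {
    "SysUse": range(1, 9),     # Q1-Q8
    "InfoQual": range(9, 16),  # Q9-Q15
    "IntQual": range(16, 19),  # Q16-Q18
    "Overall": range(1, 20),   # Q1-Q19
}

def pssuq_scores(responses):
    """responses: dict mapping question number (1-19) to a 1-7 Likert rating."""
    return {name: round(mean(responses[q] for q in qs), 2)
            for name, qs in SUBSCALES.items()}

# Illustrative responses for one participant (not real study data):
responses = {q: 2 for q in range(1, 20)}
responses[9] = 6  # e.g. a poor error-message rating on Q9
print(pssuq_scores(responses))
```

A single bad error-message rating on Q9 pulls up InfoQual and Overall while leaving SysUse and IntQual untouched, mirroring how the Q9 outliers surface in the aggregated boxplots.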
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Qualitative Results</head><p>Table <ref type="table">1</ref> summarizes the thematic analysis for the first iteration of InterDev user testing, providing further insights into user feedback. Usability and Interface Design (UID), which encompasses overall design, intuitiveness, and ease of use, had the highest frequency with 19 mentions, indicating that users frequently commented on the visual layout, ease of finding information, and general user experience. Guidance and Navigation (GN) had 13 mentions, highlighting user comments on the clarity of instructions, ease of navigation, and suggestions for improving user guidance, such as better task prompts and visual cues. Functionality and Features (FF) was mentioned 12 times, reflecting feedback related to the platform's functionalities, including search capabilities, filtering options, and specific features like collection management and submission forms. Efficiency and Performance (EP), with 9 mentions, included observations related to the speed and efficiency of completing tasks, as well as any technical issues or bugs encountered during use.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>Thematic analysis summary for the first iteration of InterDev user testing.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Theme</head><p>Usability and Interface Design (UID): overall design, intuitiveness, and ease of use of the platform's interface, including feedback on visual layout, ease of finding information, and general user experience. Frequency: 19.</p><p>Guidance and Navigation (GN): comments on the clarity of instructions, ease of navigation, and suggestions for improving user guidance, such as better task prompts and visual cues. Frequency: 13.</p><p>Functionality and Features (FF): feedback related to the platform's functionalities, such as search capabilities, filtering options, and specific features like collection management and submission forms. Frequency: 12.</p><p>Efficiency and Performance (EP): observations related to the speed and efficiency of completing tasks, as well as any technical issues or bugs encountered during use. Frequency: 9.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>The initial evaluation of InterDev demonstrates its potential to enhance data discovery and usability for researchers and policymakers in international development. While users found the platform generally functional, significant improvements are needed, particularly in filtering, error message clarity, search capabilities, and overall interface design. Quantitative and qualitative feedback from our user study highlighted key areas for enhancement, such as better guidance, improved navigation, and more intuitive features. These insights will guide the iterative refinement of InterDev to better meet user needs. While InterDev shows promise, further continuous user-centered development is required. Future iterations will address identified challenges, further refine user needs, and aim to improve the user experience and maximize the platform's utility in making international development data more accessible and actionable.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Overview of Implementation and Methodology.</figDesc><graphic coords="3,99.25,85.05,424.90,159.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Overview of Framework.</figDesc><graphic coords="4,99.25,370.56,424.90,157.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Dashboard View.</figDesc><graphic coords="5,102.90,264.37,424.90,207.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Boxplot of time spent to complete each task in the usability evaluation.</figDesc><graphic coords="8,154.50,324.10,285.99,171.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Bar chart of the number of interventions required for each task.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Boxplot of PSSUQ scores, based on a 7-point Likert scale where lower values indicate higher satisfaction. System Usefulness (SysUse), Information Quality (InfoQual), Interface Quality (IntQual), and Overall are aggregated metrics based on questions 1-8, 9-15, 16-18, and 1-19 respectively.</figDesc><graphic coords="9,100.50,85.05,408.20,204.10" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>This research was conducted with the financial support of Science Foundation Ireland under Grant Agreement No. 13/RC/2106_P2 at the ADAPT SFI Research Centre at Trinity College Dublin. ADAPT, the SFI Research Centre for AI-Driven Digital Content Technology, is funded by Science Foundation Ireland through the SFI Research Centres Programme.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>De Walque</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Fernald</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Gertler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hidrobo</surname></persName>
		</author>
		<title level="m">Cash transfers and child and adolescent development</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Do conditional cash transfers improve economic outcomes in the next generation? Evidence from Mexico</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">W</forename><surname>Parker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Vogl</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>National Bureau of Economic Research</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Relative effectiveness of conditional and unconditional cash transfers for schooling outcomes in developing countries: a systematic review</title>
		<author>
			<persName><forename type="first">S</forename><surname>Baird</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">H</forename><surname>Ferreira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Özler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Woolcock</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Campbell Syst. Rev</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="124" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Cash transfers: what does the evidence say</title>
		<author>
			<persName><forename type="first">F</forename><surname>Bastagli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Rigorous Rev. Programme Impact Role Des. Implement. Featur. Lond. ODI</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">7</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">How to do a good systematic review of effects in international development: a tool kit</title>
		<author>
			<persName><forename type="first">H</forename><surname>Waddington</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Dev. Eff</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="359" to="387" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Public health economics: a systematic review of guidance for the economic evaluation of public health interventions and discussion of key methodological issues</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">T</forename><surname>Edwards</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Charles</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lloyd-Williams</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">BMC Public Health</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="13" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Policy evaluation, randomized controlled trials, and external validity-A systematic review</title>
		<author>
			<persName><forename type="first">J</forename><surname>Peters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Langbein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Roberts</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Econ. Lett</title>
		<imprint>
			<biblScope unit="volume">147</biblScope>
			<biblScope unit="page" from="51" to="54" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The FAIR Guiding Principles for scientific data management and stewardship</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Wilkinson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sci. Data</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="9" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">ERCT: An Ontology for Describing Randomised Controlled Trials in the Social Sciences</title>
		<author>
			<persName><forename type="first">M</forename><surname>Murtagh-White</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Analysis of 2018 international linked data survey for implementers</title>
		<author>
			<persName><forename type="first">K</forename><surname>Smith-Yoshimura</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Code4Lib J</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Enhancing rare disease research with semantic integration of environmental and health data</title>
		<author>
			<persName><forename type="first">A</forename><surname>Navarro-Gallinad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Orlandi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>O'Sullivan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th International Joint Conference on Knowledge Graphs</title>
		<meeting>the 10th International Joint Conference on Knowledge Graphs</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="19" to="27" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">RDFLib</title>
		<ptr target="https://pypi.org/project/rdflib/" />
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">RDF Schema 1.1</title>
		<author>
			<persName><forename type="first">D</forename><surname>Brickley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">V</forename><surname>Guha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>McBride</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">W3C Recommendation</title>
		<imprint>
			<date type="published" when="2014-02-25">25 February 2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">The YASGUI family of SPARQL clients</title>
		<author>
			<persName><forename type="first">L</forename><surname>Rietveld</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hoekstra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Semantic Web</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="373" to="383" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">3ie Development Evidence Portal</title>
		<ptr target="https://www.3ieimpact.org/evidence-hub" />
	</analytic>
	<monogr>
		<title level="m">International Initiative for Impact Evaluation</title>
		<imprint>
			<date type="published" when="2022-04-04">Apr. 04, 2022</date>
		</imprint>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Trial Data Access</title>
		<ptr target="https://www.socialscienceregistry.org/site/data" />
	</analytic>
	<monogr>
		<title level="m">AEA RCT Registry</title>
				<imprint>
			<publisher>American Economic Association</publisher>
			<date type="published" when="2021-07-13">Jul. 13, 2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Thinking aloud: Reconciling theory and practice</title>
		<author>
			<persName><forename type="first">T</forename><surname>Boren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ramey</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Prof. Commun</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="261" to="278" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Psychometric evaluation of the PSSUQ using data from five years of usability studies</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Lewis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Int. J. Hum.-Comput. Interact</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">3-4</biblScope>
			<biblScope unit="page" from="463" to="488" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Thematic analysis: Striving to meet the trustworthiness criteria</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">S</forename><surname>Nowell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Norris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">E</forename><surname>White</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">J</forename><surname>Moules</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Int. J. Qual. Methods</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">1609406917733847</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
