<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Combining Large Language Model Classifications and Active Learning for Improved Technology-Assisted Review</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Michiel P. Bron</string-name>
          <email>m.p.bron@uu.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Berend Greijn</string-name>
          <email>b.greijn@uu.nl</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bruno Messina Coimbra</string-name>
          <email>b.messinacoimbra@uu.nl</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rens van de Schoot</string-name>
          <email>a.g.j.vandeschoot@uu.nl</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ayoub Bagheri</string-name>
          <email>a.bagheri@uu.nl</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>The Netherlands' National Police</institution>
          ,
          <addr-line>The Hague</addr-line>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Utrecht University, Department of Information and Computing Sciences, Faculty of Science</institution>
          ,
          <addr-line>Utrecht</addr-line>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Utrecht University, Department of Methods and Statistics, Faculty of Social Sciences</institution>
          ,
          <addr-line>Utrecht</addr-line>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
      </contrib-group>
      <fpage>77</fpage>
      <lpage>95</lpage>
      <abstract>
<p>Technology-assisted review (TAR) is software that aids in high-recall information retrieval tasks, such as abstract screening for systematic literature reviews. Often, TAR systems use a form of Active Learning (AL); during this process, human reviewers label documents as relevant or irrelevant according to a screening protocol, while the system incrementally updates a classifier based on the reviewers' previous decisions. After each model update, the system uses the classifier to rerank the remaining workload by prioritizing predicted relevant documents over irrelevant ones, enabling a reduced workload. Recently, studies have examined the ability of Large Language Models (LLMs) to perform this task on their own by supplying the LLM with prompts that contain the task, the screening protocol, and a document from the corpus. The LLM then provides a classification of the document in question. While the results of these studies are promising, the LLM's predictions are not error-free, resulting in a recall or precision that is lower than desired. In this work, we propose a new Active Learning method for TAR that integrates the results of the LLM into the review process, which may correct some of the shortcomings of the LLM results, enabling a reduced workload with respect to current TAR systems.</p>
      </abstract>
      <kwd-group>
        <kwd>technology-assisted review</kwd>
        <kwd>active learning</kwd>
        <kwd>large language model</kwd>
        <kwd>information retrieval</kwd>
        <kwd>weak supervision</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Technology-assisted review (TAR) is software that aids in high-recall information retrieval (HRR)
tasks. An example of such a task is performing a Systematic Literature Review (for example, in medicine
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]), but there are also applications in the legal domain (e.g., e-Discovery [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], but also the processing
of Freedom of Information Act Requests, criminal investigation, etc.). For all these search tasks, it is
important that nearly all relevant information is found, so these tasks have a recall target of 75 – 100 % [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>In these extensive studies, the researchers, attorneys, or investigators gather evidence or information
by screening documents stored in large databases or corpora. The task is to find nearly all information
relevant to the subject of the investigation. In the case of Systematic Literature Reviews, the researcher
starts by using specialized search queries to select documents from databases. Formulating these queries
is not a trivial task, as it is the objective to capture (nearly) all relevant documents. These queries should
not be too restrictive to minimize the chance that a relevant document is missed; researchers often
use disjunctions rather than conjunctions. Consequently, the resulting set of candidate documents the
researchers process is often enormous, while the prevalence of relevant documents within these sets
can be very low.</p>
      <p>
        More formally, we can specify this task as follows: we have a dataset 𝒟 containing all the candidate
documents found after the initial keyword search. During the review process, these documents are read
by the domain experts and labeled as either relevant or irrelevant. Read documents are referred to as
labeled. During the process, we maintain two sets ℒ+ and ℒ− for the labeled relevant and irrelevant
documents. The remaining unlabeled documents belong to the set 𝒰. Traditionally, researchers screened
all documents in 𝒟. Technology-Assisted Review systems are then systems or algorithms that aid the reviewers
in reducing the reviewing workload [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], while still aiming to find all relevant documents 𝒟+.
      </p>
      <p>
        Early TAR methods consisted of first creating a randomly sampled subset of 𝒟 and training a classifier
on the labeled dataset ℒ. Then, that classifier is used to classify the remaining documents in 𝒰 [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
Many recent TAR systems use a form of Active Learning to iteratively update the classifier after each
review decision or after each batch of decisions [
        <xref ref-type="bibr" rid="ref10 ref11 ref6 ref7 ref8 ref9">6, 7, 8, 9, 10, 11</xref>
        ]. AL is a Machine Learning technique that is used to train
a classifier with fewer labeled data points while retaining good performance. In this setting, the model
can interactively query an oracle (i.e., the domain expert) to label data points with the desired output
of the Machine Learning model (i.e., in the case of a classification task, the class of the data point). In
our case, the model should predict each document’s relevancy or inclusion status. In canonical Active
Learning, the selection strategy aims to select the “most informative” examples from the perspective of
the classifier. An example of such a strategy is Uncertainty Sampling [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. The goal of canonical AL
is to create a good inductive classifier that can be used to classify previously unseen documents not
found in the pool of potential training examples.
      </p>
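      <p>For illustration, a minimal sketch of Uncertainty Sampling with a probabilistic classifier in the
scikit-learn style (the classifier and the unlabeled feature matrix are assumed to be given; this is an
illustration of canonical AL, not part of our method, which uses relevance sampling instead):</p>
      <preformat>
import numpy as np

def uncertainty_sampling(clf, X_unlabeled, k=1):
    """Select the k documents whose predicted probability of relevance is
    closest to 0.5, i.e., the documents the classifier is least sure about."""
    proba = clf.predict_proba(X_unlabeled)[:, 1]
    return np.argsort(np.abs(proba - 0.5))[:k]
      </preformat>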
      <p>
        Within TAR, the model is used in a transductive setting only, i.e., the model is only used to retrieve
the relevant data within the pool. The model is not used after the retrieval task has been completed
[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Many TAR systems (e.g., [
        <xref ref-type="bibr" rid="ref14 ref7 ref8 ref9">14, 7, 9, 8</xref>
        ]) use relevance sampling [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], a greedy batch sampling method
that selects a batch ℬ with the top-k documents with the highest probability of belonging to the class
of relevant documents according to the trained model. After the annotation of each document in ℬ, the
model is retrained, and a new ranking for the documents in 𝒰 is produced. The objective is then to find
all the remaining unlabeled relevant documents belonging to the set 𝒰+, while minimizing reading of
documents that belong to the set 𝒰−.
      </p>
      <p>For abstract screening, 𝒟 consists of title-abstract pairs, which the reviewers screen for eligibility for the
researcher’s systematic review or meta-analysis. The researchers follow a protocol that consists of
inclusion and exclusion criteria to determine the eligibility of a record (in Section 4 - Figure 2, an
example of such a protocol is displayed). This protocol should be followed strictly to ensure fairness
and mitigate bias. Typical statistics of this process are given in Table 1.</p>
      <p>Eligibility cannot always be determined from the title-abstract pair only due to the limited amount
of information stored there, so reading the full-text of the paper is necessary to decide on definitive
eligibility. Reading the full-text is associated with a high cost. Title-abstract screening greatly reduces
the number of papers that have to be screened fully. TAR systems then aid in reducing the number of
irrelevant title-abstract pairs so that not all records have to be screened.</p>
      <p>
        Recently, methods have been proposed that use generative Large Language Model (LLM) systems
to perform title-abstract screening (inter alia [
        <xref ref-type="bibr" rid="ref11 ref16 ref17 ref18">16, 17, 11, 18</xref>
        ]). The main approach is to prepare a prompt
that delineates the task and specifies the criteria, followed by the title and abstract. After supplying
the prompt to the LLM, it will provide an answer and a decision on the inclusion status of that record.
Obtaining results can be automated by making a program or script that automatically processes a
dataset through the models’ API. In [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], the authors report a mean accuracy of approximately 90 % with a recall
of 76 %. However, the performance varied per dataset, with recall scores ranging from 59 % to 100 %.
In another study, the reported precision is low for some datasets [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], which may result in a higher
screening workload than current AL-based systems offer.
      </p>
      <p>
        LLMs are prone to hallucination, where the LLMs generate responses that seem plausible but are
factually incorrect [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Moreover, LLMs are very eager to provide an answer even when neither the LLM's training data nor the
prompt contains the information needed to give a good answer [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. With
the current limitations, using the LLMs to determine the inclusion status of the title and abstract pairs
may not be reliable enough.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], the authors propose a system that combines (canonical) AL with Weak Supervision (e.g.,
noisy labels provided by a black-box model). To our knowledge, a TAR method that combines AL and
noisy labels (e.g., from an LLM or another model) has not been presented yet. In this work, we propose
a system that combines LLM classifications and Active Learning to improve the efficacy of the TAR
procedure. Our main contributions can be summarized as follows:
1. A system that provides more detailed LLM classifications for all the criteria in the screening
protocol instead of a single binary label for inclusion.
2. A system that makes LLM classifications more transparent by making the LLM provide a detailed
explanation for each classification.
3. An Active Learning method that incorporates the LLM results to reduce the workload of the
review.
4. A preliminary experimental evaluation of our method and several suggestions for future work.
      </p>
      <p>In the following section, we will briefly overview previous work on TAR, LLM classification and
techniques for combining weak supervision and AL. After that, we will explain our method, which
consists of an LLM classifier and an Active Learning method that incorporates its predictions. As the
LLM classifier assigns labels to each specific criterion, we introduce a case study in which we study
a novel dataset that contains labels for each record at the criterion level, enabling us to assess the
performance of our method. Finally, we will present our initial experiments and results, followed by a
discussion and suggestions for future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Most TAR approaches are based on the Continuous Active Learning (CAL) algorithm (see Algorithm 1)
[
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. In this process, a model is trained on the documents that have already been reviewed. The model
is then used to rerank the remaining documents in  . Several CAL procedures [
        <xref ref-type="bibr" rid="ref23 ref7 ref8 ref9">8, 23, 9, 7</xref>
        ] require a set
of seed documents provided by the reviewer. This set needs to contain at least one relevant document,
but it does not need to be a document from 𝒟; it may also be a description of the research topic used as
a pseudo-document. Additionally, one example of an irrelevant document is needed.
      </p>
      <p>
        AutoTAR [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] extends the CAL procedure; it is still considered state-of-the-art and has been
included in many studies as a baseline, for example, when studying ideal performance vs. the
performance of a stopping criterion [
        <xref ref-type="bibr" rid="ref10 ref24 ref25">10, 24, 25</xref>
        ]. Instead of just training on the labeled documents ℒ+, ℒ−, it
samples a set of documents from the unlabeled set 𝒰, which are temporarily assumed to be irrelevant; a
fair assumption, given the low prevalence of relevant documents in most datasets. ASReview [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ],
open-source TAR software specialized for abstract screening, resamples the data to improve the performance
in the presence of imbalanced training data. FASTREAD2 [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] modifies the CAL procedure with the goal
of detecting human errors during the review procedure, as noisy human labels may occur [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ].
      </p>
      <p>
        CAL, as described in Algorithm 1, leaves the question of a Stopping Criterion open (i.e., the
StoppingCriterion procedure, line 15 in Algorithm 1, is not given). Formulating a good stopping criterion
is an area of active research. Some practitioners use pragmatic criteria based on time constraints or stop
when the returns diminish (e.g., when TAR proposes n irrelevant documents in a row; however,
specifying n is target and topic dependent) [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]. Several heuristics [
        <xref ref-type="bibr" rid="ref14 ref27 ref28 ref7">14, 7, 28, 27</xref>
        ] (for example, characteristics
of the recall curve) have been proposed, as well as methods that change the CAL procedure to allow the
use of statistical methods that predict when a recall target has been achieved (inter alia [
        <xref ref-type="bibr" rid="ref10 ref23 ref24">10, 23, 24</xref>
        ]).
Algorithm 1: The Continuous Active Learning algorithm. The algorithm requires as parameters a
dataset 𝒟, an unlabeled set of documents 𝒰, labeled documents ℒ+, ℒ−, a classifier C, and a batch size k.
The Active Learning procedure selects new documents according to the relevance predictions of the
classifier C, which are updated after each batch of labeling decisions.
      </p>
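      <p>For concreteness, a minimal Python sketch of the CAL loop of Algorithm 1 is given below. The
classifier interface follows scikit-learn conventions; the oracle, seed sets, and stopping rule are
placeholders for the components discussed above, so this is a sketch under those assumptions rather
than a reference implementation:</p>
      <preformat>
import numpy as np

def cal(X, pos_idx, neg_idx, clf, oracle, k=10, should_stop=lambda labels: False):
    """Sketch of Algorithm 1. X holds feature vectors for all documents in D;
    pos_idx / neg_idx index the seed documents in L+ and L-; clf is any
    classifier with fit / predict_proba; oracle(i) returns the reviewer's
    label (1 relevant, 0 irrelevant) for document i."""
    labels = {i: 1 for i in pos_idx}
    labels.update({i: 0 for i in neg_idx})
    unlabeled = set(range(X.shape[0])) - set(labels)
    while unlabeled and not should_stop(labels):
        idx = sorted(labels)
        clf.fit(X[idx], [labels[i] for i in idx])           # retrain on L+ and L-
        pool = sorted(unlabeled)
        scores = clf.predict_proba(X[pool])[:, 1]           # relevance scores for U
        batch = [pool[j] for j in np.argsort(-scores)[:k]]  # top-k documents
        for i in batch:                                     # the oracle labels batch B
            labels[i] = oracle(i)
            unlabeled.discard(i)
    return labels
      </preformat>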
      <p>
        The classifiers that are used in these systems are often based on classical Machine Learning algorithms
like Multinomial Naïve Bayes, Logistic Regression (AutoTAR), and Support Vector Machines combined
with TF-IDF features. However, some recent studies explore using neural networks and deep learning
(e.g., [
        <xref ref-type="bibr" rid="ref29 ref3">3, 29</xref>
        ]).
      </p>
      <p>
        This work focuses on applying TAR to aid abstract screening for systematic reviews. In this field,
state-of-the-art systems can find (nearly) all relevant documents after screening 5 – 40 % of the corpus by using this general
methodology [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ], but performance is dataset and query dependent. A frequently used metric to assess
the efficacy of TAR systems is Work Saved over Sampling (WSS), which indicates the work savings
over the use of random sampling (i.e., traditional screening) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This metric can be calculated after
the procedure is terminated, either when a stopping criterion was triggered or when a recall target has been
achieved according to the ground truth; WSS@95, which indicates the work savings over random
sampling at the moment when 95 % recall is achieved, is a frequently used metric for TAR systems
targeting Systematic Literature Reviews (inter alia [
        <xref ref-type="bibr" rid="ref23 ref8 ref9">23, 8, 9</xref>
        ]).
      </p>
      <p>
        In contrast to the AL-based methods, after the popularization of generative Large Language Models
like ChatGPT-3.5 and GPT-4 [30], systems have been proposed that use these models to perform
screening tasks. The main approach is to prepare a prompt that delineates the task and specifies the
criteria, followed by the title and abstract [
        <xref ref-type="bibr" rid="ref16 ref17 ref18 ref30">16, 17, 31, 18</xref>
        ]. Many approaches use ChatGPT-3.5 or GPT-4
[
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], several [
        <xref ref-type="bibr" rid="ref11 ref18">11, 18</xref>
        ] use open-source LLMs such as Llama 2 [
        <xref ref-type="bibr" rid="ref31">32</xref>
        ]. In [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], a large simulation study is
performed to assess the performance of several LLMs on popular TAR datasets (CLEF2017, CLEF2018,
CLEF2019) [
        <xref ref-type="bibr" rid="ref32 ref33 ref34">33, 34, 35</xref>
        ]; however, in this study, the LLM predicts the inclusion status only on the title of
the systematic review, not its screening protocol (the CLEF datasets do not offer a lot of information on
the screening protocol, although the keyword searches are available and a topic description is available).
Contrary to the other methods, [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] compares the next token probabilities of yes and no (which are
used to indicate the inclusion decision), which can be used as a measure of confidence.
      </p>
      <p>
        There have been several works that combine or compare LLMs and Active Learning. For example, in
[
        <xref ref-type="bibr" rid="ref35">36</xref>
        ], the authors compare the performance of LLMs and models that have been trained with Active
Learning. One of the findings is that with a limited number of labeled documents, the AL-trained
models outperform the LLMs that perform zero-shot classification despite being significantly smaller in
terms of training parameters. In [
        <xref ref-type="bibr" rid="ref36">37</xref>
        ], a method is proposed that integrates an LLM as an annotator for
the creation of Named Entity Recognition (NER) models in underrepresented languages (e.g., African
languages). Another work presents a method that generates synthetic data with LLMs, which are used
to select the most interesting examples from the pool of unlabeled documents [
        <xref ref-type="bibr" rid="ref37">38</xref>
        ].
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], the authors present a method that combines AL with Weak Supervision and Transfer Learning.
They present their results on training a classifier for classifying financial transactions (text data) in
the presence of a black-box model (BBM) (a rule-based system). In this study, an annotator model is
trained on agreement labels between the black-box model and the oracle’s labels in each iteration,
alongside the typical classifier. The annotator model is used to determine per selected instance if the BBM’s
label can be trusted and accepted or if the human oracle should label it instead. With this method, the
authors show that they could significantly lower annotation costs while retaining an accuracy close to
the traditional AL setting.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>In this section, we describe the general architecture of our method. Our TAR procedure consists of two
main components: a method to obtain classifications from the LLM and an Active Learning procedure
that is used to rank the records during the review phase. Our AL procedure, LLM+CAL, uses the results
of the LLM to reduce the review workload further.</p>
      <sec id="sec-3-1">
        <title>3.1. Obtaining LLM classifications</title>
        <p>
          In [
          <xref ref-type="bibr" rid="ref16 ref30">31, 16</xref>
          ], a prompt contained the task and the full screening protocol. The task for the LLM was then
to answer only with a final inclusion decision (e.g., choose between INCLUDE or EXCLUDE). This setup
can be regarded as a black-box system, as it is impossible to determine any of its reasoning for making
the decision. Also, the LLM does not provide any information about the confidence in its prediction
besides a probability of predicting the token that represents the word INCLUDE or EXCLUDE over the
space of all possible output tokens.
        </p>
        <p>
          Chain-of-thought prompting is a method to improve the accuracy of LLMs when performing complex
reasoning. With this method, it is specifically requested in the prompt to think step-by-step in addition
to a few examples of appropriate answers. The aim is to let the LLM reason about its “thought process”
verbosely, which results in a higher probability that the final answer is correct [
          <xref ref-type="bibr" rid="ref38">39</xref>
          ]. By adjusting the
prompt to let the LLM respond with chain-of-thought steps in a structured way, we aim to make the
process more transparent for the reviewer. In addition, we ask the LLM to provide rationales (i.e., select
fragments cited directly from the record in question), which enables tracing the decision to the source
document. In Figure 1, we display the prompt template that we use in our experiments, which contains,
besides the instruction, a few examples of appropriate answers. We wrote a parser that parses the LLM
answer into a structured datatype. In a real-world application, the rationales can be used to highlight
fragments in the abstracts used in the LLM’s decision-making, enabling easy verification and correction
for the end-user in an annotation interface. Another significant difference between the studies in
earlier work and ours is that we consider each criterion in the protocol separately. We noticed many
classification errors in initial experiments when the whole screening protocol was considered. We list
some major error categories below:
Hallucination. The model makes up factually incorrect but seemingly plausible answers.
Missing knowledge or context. The model does not know enough information about a topic that a
human reviewer might know (e.g., technical jargon).
Incorrect reasoning. The information extraction works correctly, but the inclusion rules are not
followed, causing a misclassification.
        </p>
        <p>Ignoring instructions. Only a part of the screening protocol was used according to the LLM’s
chain-of-thought response. Some LLMs have problems following all instructions in the prompt, especially
when the instructions are long and complex. Larger models like GPT-4 are less prone to this but
have a higher computational and financial cost.</p>
        <p>Often, the LLM followed the protocol only partially: consider a dataset with four criteria, where the LLM
considered three criteria correctly but mistakenly ignored one of them, causing a misclassification of
the whole instance due to a single mistake. This setup makes it challenging to detect failures due to a specific
criterion. Mistakes become apparent only by combing through the (semi-structured) LLM answers
containing information on all criteria.</p>
        <p>We aim to mitigate this by considering each criterion separately, making the set of instructions
shorter and less complex, which results in a higher accuracy. The system can then infer the inclusion
status of a record by applying a simple logical formula to the model’s decision on the criteria (for
example, Figure 2).</p>
        <p>Despite the reduced complexity, it is still possible that the LLMs make classification errors, for
example, due to hallucination, possibly because of missing knowledge. We hypothesize that these
errors will not always happen at random, especially for the latter cause. Suppose the LLM makes an
incorrect classification for a specific criterion due to missing knowledge. In that case, the LLM will
likely make a similar mistake for instances similar to the one in question. Collecting the rationales and
chain-of-thought fragments of misclassifications and training models on them might aid in predicting
when the LLM makes a mistake or a correct decision.</p>
        <p>
          We used LangChain [
          <xref ref-type="bibr" rid="ref39">40</xref>
          ] to build our LLM classification pipeline. This package enables us to target
multiple Large Language Models. In our experiments, we only worked with ChatGPT-3.5 (specifically,
version 0301); however, the method can be applied to GPT-4 or models of other vendors, such as
open-source models published on repositories like HuggingFace [
          <xref ref-type="bibr" rid="ref40">41</xref>
          ].
        </p>
        <sec id="sec-3-1-1">
          <title>Figure 1: The prompt template used in our experiments, containing, besides the instruction, a few examples of appropriate answers</title>
          <preformat>
ASSIGNMENT: You are a helpful assistant who helps screen abstracts and titles of scientific papers. You answer
questions by citing evidence in the given text followed by a YES or NO or UNKNOWN decision. When there is no
evidence in the title and abstract, decide with UNKNOWN. Only answer with NO if there is absolute evidence given
that the answer is NO. In the absence of evidence or when nothing is mentioned, always answer UNKNOWN. Use the
following format:

REASONING: (Think step by step to answer the question; use the information in the title and abstract and
work your way to an answer. Your full reasoning and answer should be given in this field)

EVIDENCE: (List sentences or phrases from the title and abstract used to answer the question in the previous field.
Answer in bullets (e.g., - "quoted sentence"). Each quoted sentence should have its own line. If there is no evidence,
write down []). In this field, only directly cite from the TITLE and ABSTRACT fields. DO NOT USE YOUR OWN
WORDS, AND ADHERE TO THE LIST FORMAT!

ANSWER: (Summarize your answer from the REASONING field with YES or NO or UNKNOWN. DO NOT WRITE
ANYTHING AFTERWARDS IN THIS FIELD.)

Write nothing else afterward.

EXAMPLE RESPONSE 1:
REASONING: To answer the question, we need to find information about [...]. The title and the abstract mention that
[...]. Furthermore, the study aims to [...], suggesting that this is indeed the case. So, the answer to this question is YES.
EVIDENCE:
- "Sentence evidence 1"
- "Sentence evidence 2"
ANSWER: YES

EXAMPLE RESPONSE 2:
REASONING: To answer the question, we need to find information about [...]. The title and abstract say something
about [...] but do not mention anything about [...]. As there is no definitive evidence, the answer should be UNKNOWN.
EVIDENCE: []
ANSWER: UNKNOWN

EXAMPLE RESPONSE 3:
REASONING: To answer the question, we need to find information about [...]. The title and abstract say something
about [...]. This statement rules out that [...]. As there is evidence to the contrary, the answer should be NO.
EVIDENCE:
- "Sentence evidence 1"
ANSWER: NO

TITLE: {title}
ABSTRACT: {abstract}
QUESTION: {question}
          </preformat>
        </sec>
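        <p>As an illustration of the parser mentioned above, the structured format of Figure 1 can be parsed
with a few regular expressions. The sketch below is simplified (field names follow the template;
handling of malformed responses is omitted):</p>
        <preformat>
import re
from dataclasses import dataclass

@dataclass
class CriterionAnswer:
    reasoning: str   # the chain-of-thought fragment
    evidence: list   # rationales quoted directly from the record
    answer: str      # YES, NO, or UNKNOWN

def parse_llm_answer(text):
    """Parse a response that follows the REASONING/EVIDENCE/ANSWER template."""
    reasoning = re.search(r"REASONING:(.*?)EVIDENCE:", text, re.S).group(1).strip()
    evidence_block = re.search(r"EVIDENCE:(.*?)ANSWER:", text, re.S).group(1)
    evidence = re.findall(r'-\s*"(.*?)"', evidence_block)
    answer = re.search(r"ANSWER:\s*(YES|NO|UNKNOWN)", text).group(1)
    return CriterionAnswer(reasoning, evidence, answer)
        </preformat>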
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Active Learning method</title>
        <p>As in canonical TAR, we represent each document as a high-dimensional vector. A typical feature
extraction method is a bag-of-words method like TF-IDF that TAR systems frequently use. Combining
sparse feature matrices and classical machine learning methods offers fast retraining and reranking of
the documents in 𝒰. The AutoTAR baseline uses TF-IDF combined with a Logistic Regression classifier.
In our approach, we will also use TF-IDF and Logistic Regression to ensure that changes in performance
are not due to changes in the document representation.</p>
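        <p>A minimal sketch of this representation and classifier with scikit-learn (the variables holding
the records and labels are assumed to exist; implementations of AutoTAR differ in such details):</p>
        <preformat>
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Fit the vectorizer once on all title-abstract records; only the (cheap)
# classifier is refitted after every round of labeling decisions.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(title_abstract_texts)   # sparse representation of D

clf = LogisticRegression(max_iter=1000)
clf.fit(X[labeled_idx], labeled_relevance)           # labels in {0, 1}
scores = clf.predict_proba(X[unlabeled_idx])[:, 1]   # used to rerank U
        </preformat>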
        <p>During the process, the labeling task is specified as follows: we have a feature space 𝒳tiab, which
contains the feature vectors of the title-abstract (tiab) records. Each document presented to the oracle
gets, for each of the criteria (see Figure 2), a label in the space 𝒴crit = {+, ?, ¬}, corresponding to True
(Yes), Unknown, and False (No). The option Unknown is vital in this phase, as it is not always the case that
the information needed to determine eligibility for a criterion is present in the title and abstract.</p>
        <p>Our method, LLM+CAL, consists of two phases: the first phase is called LLMPreferred, which is, in
essence, a version of the method AutoTAR, but in this version constrained to select from the unlabeled
documents that are included by the LLM (𝒰 ∩ ℒ{+,?}LLM). As initial training data, the whole screening
protocol is given in addition to a random sample of 100 LLM-excluded documents (ℒ−LLM). This phase is
applied until 25 consecutive irrelevant documents are proposed, which might indicate that the set of
relevant documents may be exhausted.</p>
        <p>Because the possibility exists that there are relevant documents that the LLM does not find, we will
switch to the CriteriaWSA method, which can query all documents within 𝒰. First, all labeled data ℒ
from the first phase is transferred to this method. Then, several machine learning models are trained:
Inclusion Judgment Classifier. A Binary Classifier trained on the data in ℒ after transforming
the labels to 𝒴binary = {+, ¬}, in a similar fashion as AutoTAR. The
criterion judgments are transformed using the formula specified in Figure 2, which will result in
a label in the space 𝒴ternary = {+, ?, ¬}. We can then transform 𝒴ternary to 𝒴binary by changing
each ? into a +.</p>
        <p>Acceptance Classifier. A Binary Classifier that determines Acceptance for each inclusion criterion.</p>
        <p>
          This is similar to a method presented in [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. Here, for each criterion c, we obtain binary agreement
labels y ∈ 𝒴acc, where 𝒴acc = {0, 1}. This is determined by comparing the LLM predictions and the
labeled data in ℒ: each instance receives the label Accept (1) if the LLM prediction agrees with
the human-annotated label. Otherwise, the label Reject (0) is given. However, contrary to the
other models in our system and the method in [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], the model is not trained on the Title-Abstract
records (𝒳tiab), but on the LLM’s reasoning fragments 𝒳ans^c (see Figure 4 for example data) of
criterion c.
        </p>
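        <p>A sketch of how the agreement labels for one criterion can be derived and used to train such an
Acceptance Classifier (the feature extraction mirrors the sketch above; the variable contents are assumptions):</p>
        <preformat>
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_acceptance_classifier(llm_preds, human_labels, reasoning_texts):
    """llm_preds and human_labels are per-document labels in {+, ?, ¬} for one
    criterion; reasoning_texts are the LLM's chain-of-thought fragments for
    that criterion. Returns a model predicting Accept (1) / Reject (0)."""
    agree = [int(p == h) for p, h in zip(llm_preds, human_labels)]
    vectorizer = TfidfVectorizer(stop_words="english")
    X_ans = vectorizer.fit_transform(reasoning_texts)   # trained on X_ans, not X_tiab
    model = LogisticRegression(max_iter=1000).fit(X_ans, agree)
    return vectorizer, model
        </preformat>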
        <p>Given a TAR task that has four inclusion criteria ({a, b, c, d}), we obtain the following pairs for each
labeled record:
• 𝒳tiab × 𝒴crit^a × 𝒴crit^b × 𝒴crit^c × 𝒴crit^d
• 𝒳tiab × 𝒴binary
• 𝒳tiab × 𝒴ternary
• 𝒳ans^a × 𝒴acc^a
• 𝒳ans^b × 𝒴acc^b
• 𝒳ans^c × 𝒴acc^c
• 𝒳ans^d × 𝒴acc^d</p>
        <p>During each annotation round, a batch of ten documents is given to the oracle using relevance
sampling based on the ranking produced by the inclusion judgment classifier. The batch size of ten is
an initial default value for this parameter. Smaller, larger, and dynamic batch sizes can be explored in
future work. Another ten documents are sampled based on a ranking derived from the predictions of
the LLM and the Acceptance Classifier using the following equation:
score(ŷLLM, pacc) =
  0.75 + 0.25 · pacc   if ŷLLM = +
  0.50 + 0.25 · pacc   if ŷLLM = ?
  0.50 · (1 − pacc)    if ŷLLM = ¬      (1)</p>
        <p>Equation 1 is calculated for each study criterion c, where ŷLLM is the LLM’s prediction for criterion c
and pacc is the corresponding acceptance probability. The mean of those scores is calculated for each
of the unlabeled documents. Then, this score is used to rank the remaining documents in 𝒰. The
rationale behind Equation 1 is that instances with a higher probability of being relevant (instances with
criteria that have more True labels) are put before documents that have Unknown labels, followed by
documents that have False labels. Documents that have False labels and a low acceptance probability will
have a higher probability of being selected than documents with False labels that are certain. For the
True and Unknown labels, the inverse holds: if there is a higher acceptance probability, they are preferred
over instances with lower acceptance probability. This is still an initial formulation that may not always
work optimally; other options can be explored in future research.</p>
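        <p>A direct transcription of Equation 1 and the per-document mean score (labels encoded as the
strings "+", "?", and "¬"):</p>
        <preformat>
def criterion_score(y_llm, p_acc):
    """Equation 1: rank score for one criterion, given the LLM label y_llm
    in {+, ?, ¬} and the acceptance probability p_acc in [0, 1]."""
    if y_llm == "+":
        return 0.75 + 0.25 * p_acc   # trusted inclusions rank highest
    if y_llm == "?":
        return 0.50 + 0.25 * p_acc   # Unknown labels rank in the middle
    return 0.50 * (1.0 - p_acc)      # trusted exclusions rank lowest

def document_score(llm_labels, acc_probs):
    """Mean criterion score, used to rank the remaining documents in U."""
    scores = [criterion_score(y, p) for y, p in zip(llm_labels, acc_probs)]
    return sum(scores) / len(scores)
        </preformat>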
        <p>After this batch of twenty documents has been prepared, they are given to the oracle for labeling
unless the LLM has found exclusionary evidence for a specific criterion and its acceptance probability
is above 80 % (unless that criterion is a reason for exclusion for all remaining documents in 𝒰); these
examples are skipped but may be proposed again in another round if the acceptance probability drops
below 80 %.</p>
        <p>This process is repeated until a stopping criterion is triggered, the oracle decides to stop the review,
or 𝒰 is exhausted. In our experiments, we will stop querying after reviewing |ℒ{+,?}LLM| documents.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Case Study</title>
      <p>
        In this work, we compare the performance of various TAR methods on a dataset that is collected for a
systematic review (at the time of writing in preparation) that aims to identify common latent groups or
classes of PTSS/PTSD (Post-traumatic Stress Symptoms / Post-traumatic Stress Disorder) trajectories,
as well as their prevalence and predictors, which may give a better understanding of how and under what
circumstances PTSS/PTSD presentations may develop [
        <xref ref-type="bibr" rid="ref41">42</xref>
        ]. For this purpose, researchers reviewed a
large corpus of records after querying several databases. During the review, the records were labeled
on various levels, which we list below.
Inclusion Criteria:
a: Is the study a longitudinal/prospective study with at least three time point assessments?
b: Does the study assess PTSD symptoms as a continuous variable? [Followed by a list of eligible
scales]
c: Does this study mention that individuals are exposed to traumatic events?
d: Did the study conduct a PTSD trajectory analysis? [Followed by a list of eligible methods]
A study s can be included in the review when all criteria are satisfied (so, ∀s ∈ 𝒟+, a(s) ∧ b(s) ∧ c(s) ∧ d(s)).
      <p>Title. Some documents can be excluded by considering the title only. For example, animal studies are
never eligible, and the fact that a study is an animal study can become clear from reading the
title. We only study the records that have not been excluded by title screening.</p>
      <p>Criterion. The eligibility of a study for inclusion depends on four inclusion criteria (see Figure 2). For
each criterion in {a, b, c, d}, a label from 𝒴crit = {+, ?, ¬}, corresponding to True, Unknown, or False,
can be given. In Figure 3, some statistics per criterion are displayed.</p>
      <p>Title Abstract. Using the logical formula in Figure 2, an inclusion judgment can be made from the
criterion labels, so this level can be derived from the criterion level without additional human effort.
This will result in a label in the space 𝒴ternary = {+, ?, ¬}. Because an instance can have an
Unknown label for one or more criteria without exclusionary evidence in the record, the final eligibility
of such a study must be determined by reading the entire paper.</p>
      <p>Full-text. Final eligibility depends on reading the full-text of the study. This level is not
considered in this work because this label needs more information than is available in this dataset (i.e.,
the full-text of every record).</p>
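      <p>Figure 2 itself is not reproduced here; assuming its formula behaves as a ternary conjunction over
the criterion labels (any False excludes, all True includes, and anything else remains Unknown), a
minimal sketch of deriving the title-abstract label reads:</p>
      <preformat>
def combine_criteria(labels):
    """Ternary conjunction over criterion labels in {+, ?, ¬}: any False
    excludes the record, all True includes it, and otherwise the record
    stays Unknown (to be resolved by full-text screening)."""
    if "¬" in labels:
        return "¬"
    if all(label == "+" for label in labels):
        return "+"
    return "?"

def to_binary(ternary_label):
    """Transform Y_ternary to Y_binary by mapping ? to + (Section 3.2)."""
    return "+" if ternary_label in ("+", "?") else "¬"
      </preformat>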
      <p>
        This dataset is unique compared to other datasets frequently used for benchmarking TAR
systems (e.g., [
        <xref ref-type="bibr" rid="ref32 ref33 ref34">33, 34, 35</xref>
        ]), which have only binary inclusion information, sometimes only on the full-text level.
Moreover, while these datasets are based on real-world search tasks, there is little to no information
about the inclusion/exclusion criteria available. The SYNERGY [
        <xref ref-type="bibr" rid="ref42">43</xref>
        ] corpus consists of several systematic
reviews (including an earlier version of the PTSS dataset [
        <xref ref-type="bibr" rid="ref43">44</xref>
        ]) with links to the publications from which
the screening protocols can be obtained. Unfortunately, only inclusion labels on the full-text level are
included, so we cannot study retrieval efficacy fairly (we can only consider recall of the set of papers
that are included based on the full-text, which is a subset of the Title-Abstract included papers; therefore,
we cannot distinguish title-abstract inclusions from the false positives). To our knowledge, the dataset
used in this case study (for the systematic review in [
        <xref ref-type="bibr" rid="ref41">42</xref>
        ]) is the only systematic review with labels on
the criterion level.
      </p>
      <p>We will consider only the records of one reviewer after title screening here, which results in a
set of 4836 records after some data cleaning. Our dataset then contains |𝒟{+,?}| = 183 records that
are included on the title-abstract level, resulting in a prevalence of 3.78 %. One observation that can
be drawn from Figure 3 is that one criterion determines the title-abstract inclusion label (displayed as
judgment) more than the others.</p>
      <p>[Figure 3: Label statistics and relations between the criterion labels and the inclusion judgment
(number of documents per label). The figure shows, for example, that only for a tiny subset of the
documents for which one criterion is False, another criterion is True. It also becomes clear which
criterion excludes the most documents of all the criteria.]</p>
    </sec>
    <sec id="sec-5">
      <title>5. Experimental evaluation</title>
      <p>We compare several methods in a small simulation study on the dataset described in the previous
section:
• AutoTAR, a state-of-the-art TAR method,
• the LLM Classifier, as described in Section 3.1,
• LLM+CAL, our AL method that integrates the predictions of the LLM Classifier, as described in
Section 3.2.</p>
      <p>In this study, we only compare retrieval efficacy as we leave the question of a good stopping criterion
open. Therefore, we constrain the run to the number of documents that are predicted by the LLM to be
still eligible for inclusion (i.e., the number of documents for which the inclusion judgment
prediction is True or Unknown, |ℒ{+,?}LLM|). We let each algorithm run until this number is reached. Then,
we can compare the performance of the LLM classifier and the AL-based methods with the same review
effort. During the experiment, we will record when various recall levels are triggered. We will record
the following metrics (calculated in the space 𝒴binary).</p>
      <p>Recall. The percentage of relevant documents found, based on the a priori knowledge from the ground
truth dataset:
Recall = |ℒ+| / |𝒟+|      (2)</p>
      <p>
        Work Saved over Sampling. This metric expresses the work reduction over random sampling [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>We calculate this as follows (we will record this value for several recall targets):
WSS = |𝒰| / |𝒟| − (1 − |ℒ+| / |𝒟+|)      (3)
Equation 3 is used in the AL setting. In the context of a classifier, we equate 𝒰 to the set of
documents predicted to be irrelevant (the reviewers do not read those documents). For the LLM</p>
      <p>Classifier, we can adapt the equation as follows.</p>
      <p>WSS = |ℒ−LLM| / |𝒟| − (1 − |ℒ{+,?}LLM ∩ 𝒟+| / |𝒟+|)      (4)</p>
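      <p>As a concrete check, the counts reported later in this section (4836 records, 183 title-abstract
inclusions, 1295 records predicted eligible by the LLM, and 16 missed inclusions, i.e., 167 true
positives) can be plugged into Equations 2 and 4:</p>
      <preformat>
D, D_pos = 4836, 183       # corpus size and number of inclusions
llm_incl, tp = 1295, 167   # LLM-included records and true positives
llm_excl = D - llm_incl    # 3541 records excluded by the LLM

recall = tp / D_pos                  # 0.9126   (Equation 2)
wss = llm_excl / D - (1 - recall)    # 0.6448   (Equation 4)
precision = tp / llm_incl            # 0.129
print(f"recall={recall:.2%}, WSS={wss:.2%}, precision={precision:.2%}")
      </preformat>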
      <p>The rest of the section is structured as follows: first, we describe the results of the LLM classification,
followed by the results of a simulation study in which we compare the aforementioned AL-based TAR
methods.</p>
      <sec id="sec-5-1">
        <title>5.1. LLM Classification results</title>
        <p>In Figure 4, we display an example of an annotated record. After parsing the response, we can highlight
the fragments the LLM used in its decision-making. This overview is available for every instance in
the dataset. When used in an annotation interface, the LLM explanations might aid users in their
decision-making process, possibly reducing the screening time per document.</p>
        <p>In Table 2, confusion matrices per criterion are displayed. A clear observation from Table 2 is that the
LLM is more cautious in excluding papers than the human reviewer: the confusion matrices show high
numbers of studies with ground truth False and predictions Unknown for all criteria. One of the causes
is that when there is no written evidence to make a decision about a criterion, for example, whether or
not a PTSD trajectory analysis (criterion d) was performed, the LLM would predict Unknown. This might
seem like the correct decision in this situation. However, experienced human reviewers might exclude
a paper based on their knowledge of the field by inferring that from other characteristics (for example,
when the abstract describes a methodology that makes it impossible to use one of the eligible methods).</p>
        <p>The LLM’s definition of specific terms or the meaning of concepts might diverge from the reviewers’.
For example, for criterion c, in some cases, the LLM eagerly infers from the descriptions of the studied
populations that these might be exposed to trauma, which might not explicitly be mentioned in the
record. Fortunately, the number of falsely excluded documents per criterion is low.</p>
        <p>When combining the LLM’s predictions, we can infer the title-abstract level predictions using the
logical formula specified in Figure 2. In Table 2, the confusion matrix for this level is displayed, both on
the ternary and binary levels. On this level, we obtain an accuracy of 78.52 % (ternary level), with a
recall of 91.26 % on the binary level. In absolute numbers, this means that only 16 out of the 183
included studies were missed. The precision on the binary level is 12.9 %, resulting in a Work Saved over
Sampling of 64.48 % (with Equation 4).</p>
        <sec id="sec-5-1-1">
          <title>Title: Gender-based violence and its association with mental health among Somali women in a Kenyan refugee camp:</title>
          <p>a latent class analysis</p>
          <p>c
Abstract: BACKGROUND: In conflict-afected settings, women and girls are</p>
          <p>c c
vulnerable to gender-based violence (GBV). GBV is associated with poor long-term mental health such
c
as anxiety, depression and post-traumatic stress disorder (PTSD). Understanding the interaction between current
violence and past conflict-related violence with ongoing mental health is essential for improving mental health service
d
provision in refugee camps. METHODS: Using data collected from 209 women attending GBV
d d
case management centres in the Dadaab refugee camps, Kenya, we grouped women by recent experience of GBV
d d
using latent class analysis and modelled the relationship between the groups and symptomatic scores for anxiety,
d
depression and PTSD using linear regression. RESULTS: Women with past-year experience of intimate partner
violence alone may have a higher risk of depression than women with past-year experience of non-partner violence
c,d
alone (Coef. 1.68, 95% CI 0.25 to 3.11). Conflict-related violence was an important risk factor</p>
          <p>c,d c,d
for poor mental health among women who accessed GBV services, despite time since occurrence (average time
c,d c,d
in camp was 11.5 years) and even for those with a past-year experience of GBV (Anxiety: 3.48, 1.85-5.10;
c,d</p>
        </sec>
        <sec id="sec-5-1-2">
          <title>Depression: 2.26, 0.51-4.02; PTSD: 6.83, 4.21-9.44). CONCLUSION: Refugee women who experienced past-year</title>
          <p>intimate partner violence or conflict-related violence may be at increased risk of depression, anxiety or PTSD. Service
providers should be aware that compared to the general refugee population, women who have experienced violence
may require additional psychological support and recognise the enduring impact of violence that occurred before,
during and after periods of conflict and tailor outreach and treatment services accordingly.</p>
          <p>Reasoning:
¬ The title and abstract do not mention anything about a longitudinal/prospective study with multiple
time-point assessments. As there is no evidence to suggest that the study is longitudinal/prospective with at
least three-time point assessments, the answer is NO.
? The question asks if the study assesses PTSD symptoms as a continuous variable and provides a list of eligible
and ineligible PTSD scales. The abstract mentions that the study modelled the relationship between groups of
women and symptomatic scores for anxiety, depression, and PTSD using linear regression. However, the
abstract does not mention any specific PTSD scales used in the analysis. Therefore, it is not clear whether the
study used any of the eligible or ineligible PTSD scales provided in the question, and the answer should be
UNKNOWN.
 The study examines the association between gender-based violence (GBV) and mental health, including anxiety,
depression, and post-traumatic stress disorder (PTSD), among women in a Kenyan refugee camp. The abstract
mentions that conflict-afected settings put women at risk for GBV, which is associated with poor long-term
mental health such as anxiety, depression, and PTSD. The study also found that conflict-related violence was
an important risk factor for poor mental health among women who accessed GBV services, despite time since
occurrence. Therefore, it is likely that the individuals in the study were exposed to traumatic events.
¬ The study aimed to investigate the relationship between GBV and mental health among Somali women in a</p>
        </sec>
        <sec id="sec-5-1-3">
          <title>Kenyan refugee camp. However, the methods section does not mention conducting a PTSD trajectory analysis.</title>
        </sec>
        <sec id="sec-5-1-4">
          <title>Therefore, the answer is NO.</title>
        </sec>
        <sec id="sec-5-1-5">
          <title>Criterion a</title>
        </sec>
        <sec id="sec-5-1-6">
          <title>True</title>
        </sec>
        <sec id="sec-5-1-7">
          <title>Unknown</title>
        </sec>
        <sec id="sec-5-1-8">
          <title>False</title>
        </sec>
        <sec id="sec-5-1-9">
          <title>Criterion c</title>
        </sec>
        <sec id="sec-5-1-10">
          <title>True</title>
        </sec>
        <sec id="sec-5-1-11">
          <title>Unknown</title>
        </sec>
        <sec id="sec-5-1-12">
          <title>False</title>
        </sec>
        <sec id="sec-5-1-13">
          <title>Inclusion</title>
        </sec>
        <sec id="sec-5-1-14">
          <title>True</title>
        </sec>
        <sec id="sec-5-1-15">
          <title>Unknown</title>
        </sec>
        <sec id="sec-5-1-16">
          <title>False</title>
        </sec>
        <sec id="sec-5-1-17">
          <title>True 744 116 254</title>
        </sec>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Active Learning methods</title>
        <p>After obtaining the LLM’s results, we conducted several simulation runs of the AutoTAR baseline and
our LLM+CAL method. Because both methods contain components in which random sampling takes
place, we performed 30 runs per method to account for this. We stopped each simulation run after
supplying the oracle 1295 papers, which is the number of documents the LLM predicted to be included
(|ℒ{+,?}LLM|). Stopping at this moment allows a comparison of the LLM’s recall to that of these methods
given the same human reviewing effort. The recall curves of the methods are displayed in Figure 5.
The mean recall (after stopping the simulation) of the AutoTAR method is 96.52 %, which is above
the recall obtained with the LLM given the same human review effort. With the combined method, a
similar recall is obtained (96.68 %), finding 177 out of 183 documents and reducing the number of missed
studies from 16 to 6.</p>
        <p>The mean recall after stopping the simulation is roughly the same for both AL methods. However,
when considering other recall targets, it is evident that our combined method outperforms the baseline.
For example, at 95 % recall, our method has a mean WSS@95 of 80.53 % versus 71.41 % for AutoTAR.
This indicates that using the LLM predictions gives an additional advantage in retrieving relevant
documents faster. In Figure 6, we give an overview of the performance for several other targets, all of
which indicate that the LLM+CAL method outperforms the AutoTAR baseline.</p>
        <p>[Figure 5: The number of retrieved relevant documents vs. the number of read documents for
(a) AutoTAR (baseline) and (b) LLM+CAL (ours), run on a dataset with 183 inclusions out of 4836.
The panels mark 100 % recall (183 documents), 95 % recall (174 documents), and the number expected
to be found at random, with Work Saved over Sampling annotations at several recall targets
(WSS@60 54.1 %, WSS@65 58.6 %, WSS@70 63.6 %, WSS@75 67.8 %, WSS@80 70.9 %, WSS@85 73.5 %,
WSS@90 72.7 %, WSS@95 71.6 %).]</p>
        <p>[Figure 6: Work Saved over Sampling (%) at recall targets from 60 % to 95 %, for a run on a
dataset with 183 inclusions out of 4836.]</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>
        We have shown some preliminary results on our method, which indicate that adding LLM predictions
is beneficial for obtaining relevant documents at a lower cost than with the state-of-the-art method
AutoTAR, as our LLM+CAL method yields higher work savings at several recall targets. Moreover, a
reviewer could achieve a better recall and WSS than obtained using only the LLM classifier. We have
presented a system that builds upon earlier LLM methods for Systematic Literature Reviews by making
the predictions more fine-grained by addressing each inclusion criterion separately. Moreover, our
approach aims to make the predictions more accurate and explainable by leveraging chain-of-thought
reasoning and asking the LLM to cite from the title-abstract record directly. Our method takes some
ideas from [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] in combining AL and the noisy labels from, in our case, an LLM annotator.
      </p>
      <p>We evaluated our method on a single dataset, which may impact the generalizability of our results.
Unfortunately, testing on more datasets is not feasible at the time of writing, as our method requires
that the dataset has criterion-level labels. It may be interesting to adapt the method to work with
feedback on the binary inclusion level, which might enable us to consider more datasets that do not
have such fine-grained labels. Another interesting avenue is comparing the performance of our method
on different LLM results than presented here. The LLM predictions may slightly differ when another
model is used or when alternative formulations of inclusion criteria and general instructions are used.
Further investigation is needed to determine what impact non-optimal instructions have on the LLM’s
accuracy and the ability of our method to correct lower-quality weak labels.</p>
      <p>
        The method we presented here is still relatively simple; several extensions can be made that might
further improve its efficacy, for example, by incorporating Transfer Learning (as in [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]). Another area
that can be explored further is the sampling strategy. Currently, our sampling strategy is based on a
binary Logistic Regression classifier and TF-IDF features (as in AutoTAR). Considering other classifiers
like Neural Networks and text embeddings like SentenceBERT [
        <xref ref-type="bibr" rid="ref45">46</xref>
        ] might yield additional performance
gains over traditional methods.
      </p>
      <p>We currently do not use the criterion-level labels during model training and subsequently rank
documents in 𝒰 with those models. Designing a good method that combines the results of the four
classifiers in a ranking is not trivial. Equation 1 is a starting point (now applied to the LLM only) but not
optimal. Relations between criteria have also not been taken into account yet. For example, assume a
scenario where, within nearly all labeled records in ℒ, the proposition a ∧ b ∧ d → c holds. When, for
a new instance, the LLM predicts the labels {a, b, ¬c, d}, this record may be an interesting
example to review for the oracle because it is an exception to what has been seen so far.</p>
      <p>
        So far, the LLM rationales have not been used to train the classifier. In [
        <xref ref-type="bibr" rid="ref46">47</xref>
        ], (human annotated)
rationales were used as additional training data besides 𝒳tiab for TAR for Systematic Literature Reviews,
suggesting it might be beneficial to consider the LLM rationales during training as well.
      </p>
      <p>As mentioned before, we have left the question of a stopping criterion open. One avenue could be to
combine the method with an existing stopping criterion or to use the LLM predictions to determine an
optimal stopping point.</p>
      <p>
        During a review, regardless of whether it is performed in the traditional setting or with TAR, labeling
mistakes occur due to human error [
        <xref ref-type="bibr" rid="ref26 ref7">7, 26</xref>
        ]. As in [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], our method assumes that the oracle always
makes the correct decision; however, this may not always be the case. Presenting the LLM rationales
and chain-of-thought fragments (as in Figure 4) may help the oracle make better decisions and
prevent some mistakes, but the extent of this effect requires further investigation. Also, the Active Learning
part of our method could be adapted to account for the possibility of human errors.
      </p>
      <p>We believe several ideas presented here might also benefit research areas other than TAR. For example,
the LLM framework presented here can be applied to text classification tasks in general, although in that
case adapting our method to a canonical AL setting would be more appropriate. The framework
enables obtaining weak labels at a low cost, with little engineering effort besides
writing a good labeling protocol, and chain-of-thought prompting may aid in spotting errors within
those labels, enabling more efficient creation of text classification models.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>We thank the anonymous reviewers for their insightful comments, which helped improve this article's
quality. This work was sponsored by a grant from the Dutch Research Council (Domain Social Sciences
and Humanities [SSH]), file no. 406.22.GO.048, and by a grant
from the Human-Centered Artificial Intelligence focus area at Utrecht University.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B. C.</given-names>
            <surname>Wallace</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Trikalinos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Brodley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. H.</given-names>
            <surname>Schmid</surname>
          </string-name>
          ,
          <article-title>Semi-automated screening of biomedical citations for systematic reviews</article-title>
          ,
          <source>BMC Bioinformatics</source>
          <year>2010</year>
          11:
          <fpage>1</fpage>
          <lpage>11</lpage>
          (
          <year>2010</year>
          )
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          . doi:
          <volume>10</volume>
          .1186/
          <fpage>1471</fpage>
          -2105-11-55.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Baron</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Losey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Berman</surname>
          </string-name>
          , American Bar Association (Eds.), Perspectives on Predictive Coding:
          <article-title>And Other Advanced Search Methods for the Legal Practitioner</article-title>
          , American Bar Association, Chicago, Illinois,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E.</given-names>
            <surname>Yang</surname>
          </string-name>
          , S. MacAvaney, D. D.
          <string-name>
            <surname>Lewis</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          <string-name>
            <surname>Frieder</surname>
          </string-name>
          , Goldilocks:
          <article-title>Just-Right Tuning of BERT for Technology-Assisted Review</article-title>
          , in: M.
          <string-name>
            <surname>Hagen</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Verberne</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Macdonald</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Seifert</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Balog</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Nørvåg</surname>
          </string-name>
          , V. Setty (Eds.),
          <source>Advances in Information Retrieval</source>
          , volume
          <volume>13185</volume>
          , Springer International Publishing, Cham,
          <year>2022</year>
          , pp.
          <fpage>502</fpage>
          -
          <lpage>517</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>030</fpage>
          -99736-6_
          <fpage>34</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D. W.</given-names>
            <surname>Oard</surname>
          </string-name>
          , W. Webber, Information Retrieval for E-Discovery,
          <source>Foundations and Trends® in Information Retrieval</source>
          <volume>7</volume>
          (
          <year>2013</year>
          )
          <fpage>99</fpage>
          -
          <lpage>237</lpage>
          . doi:
          <volume>10</volume>
          .1561/1500000025.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. R.</given-names>
            <surname>Hersh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Peterson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. Y.</given-names>
            <surname>Yen</surname>
          </string-name>
          ,
          <article-title>Reducing workload in systematic review preparation using automated citation classification</article-title>
          ,
          <source>Journal of the American Medical Informatics Association</source>
          <volume>13</volume>
          (
          <year>2006</year>
          )
          <fpage>206</fpage>
          -
          <lpage>219</lpage>
          . doi:
          <volume>10</volume>
          .1197/jamia.M1929.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G. V.</given-names>
            <surname>Cormack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Grossman</surname>
          </string-name>
          , Engineering Quality and
          <article-title>Reliability in Technology-Assisted Review</article-title>
          ,
          <source>in: Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR '16</source>
          , ACM Press, New York, New York, USA,
          <year>2016</year>
          , pp.
          <fpage>75</fpage>
          -
          <lpage>84</lpage>
          . doi:
          <volume>10</volume>
          .1145/2911451.2911510.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yu</surname>
          </string-name>
          , T. Menzies,
          <article-title>FAST2: An intelligent assistant for finding relevant papers</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>120</volume>
          (
          <year>2019</year>
          )
          <fpage>57</fpage>
          -
          <lpage>71</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.eswa.
          <year>2018</year>
          .
          <volume>11</volume>
          .021.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>K. E. K.</given-names>
            <surname>Chai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. L. J.</given-names>
            <surname>Lines</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. F.</given-names>
            <surname>Gucciardi</surname>
          </string-name>
          , L. Ng, Research Screener:
          <article-title>A machine learning tool to semi-automate abstract screening for systematic reviews</article-title>
          ,
          <source>Systematic Reviews</source>
          <volume>10</volume>
          (
          <year>2021</year>
          )
          <article-title>93</article-title>
          . doi:
          <volume>10</volume>
          .1186/s13643-021-01635-3.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] R. van de Schoot, J. de Bruin,
          <string-name>
            <given-names>R.</given-names>
            <surname>Schram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zahedi</surname>
          </string-name>
          , J. de Boer,
          <string-name>
            <given-names>F.</given-names>
            <surname>Weijdema</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kramer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Huijts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hoogerwerf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Ferdinands</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Harkema</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Willemsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hindriks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tummers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. L.</given-names>
            <surname>Oberski</surname>
          </string-name>
          ,
          <article-title>An open source machine learning framework for eficient and transparent systematic reviews</article-title>
          ,
          <source>Nature Machine Intelligence</source>
          <volume>3</volume>
          (
          <year>2021</year>
          )
          <fpage>125</fpage>
          -
          <lpage>133</lpage>
          . doi:
          <volume>10</volume>
          .1038/s42256-020-00287-7.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Kanoulas</surname>
          </string-name>
          ,
          <article-title>When to Stop Reviewing in Technology-Assisted Reviews: Sampling from an Adaptive Distribution to Estimate Residual Relevant Documents</article-title>
          ,
          <source>ACM Transactions on Information Systems</source>
          <volume>38</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>36</lpage>
          . doi:
          <volume>10</volume>
          .1145/3411755.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>F.</given-names>
            <surname>Dennstädt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Putora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hastings</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Cihoric</surname>
          </string-name>
          ,
          <article-title>Title and abstract screening for literature reviews using large language models: An exploratory study in the biomedical domain</article-title>
          ,
          <source>Systematic Reviews</source>
          <volume>13</volume>
          (
          <year>2024</year>
          )
          <article-title>158</article-title>
          . doi:
          <volume>10</volume>
          .1186/s13643-024-02575-4.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>D. D. Lewis</surname>
            ,
            <given-names>W. A.</given-names>
          </string-name>
          <string-name>
            <surname>Gale</surname>
          </string-name>
          ,
          <article-title>A Sequential Algorithm for Training Text Classifiers</article-title>
          , in: W. B.
          <string-name>
            <surname>Croft</surname>
          </string-name>
          , C. J. van Rijsbergen (Eds.),
          <source>Proceedings of the 17th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval</source>
          . Dublin, Ireland, 3
          <article-title>-6 July 1994 (Special Issue of the SIGIR Forum)</article-title>
          , ACM/Springer,
          <year>1994</year>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>12</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-1-
          <fpage>4471</fpage>
          -2099-5_
          <fpage>1</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lombaers</surname>
          </string-name>
          , J. de Bruin, R. van de Schoot,
          <article-title>Reproducibility and Data Storage for Active LearningAided Systematic Reviews</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>14</volume>
          (
          <year>2024</year>
          )
          <article-title>3842</article-title>
          . doi:
          <volume>10</volume>
          .3390/app14093842.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>G. V.</given-names>
            <surname>Cormack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Grossman</surname>
          </string-name>
          ,
          <article-title>Autonomy and Reliability of Continuous Active Learning for Technology-Assisted Review</article-title>
          ,
          <year>2015</year>
          . arXiv:
          <volume>1504</volume>
          .
          <fpage>06868</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>G.</given-names>
            <surname>Salton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Buckley</surname>
          </string-name>
          ,
          <article-title>Improving retrieval performance by relevance feedback</article-title>
          ,
          <source>J. Am. Soc. Inf. Sci</source>
          .
          <volume>41</volume>
          (
          <year>1990</year>
          )
          <fpage>288</fpage>
          -
          <lpage>297</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>E.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-J.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Paget</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Naugler</surname>
          </string-name>
          ,
          <source>Automated Paper Screening for Clinical Reviews Using Large Language Models: Data Analysis Study, Journal of Medical Internet Research</source>
          <volume>26</volume>
          (
          <year>2024</year>
          )
          <article-title>e48996</article-title>
          . doi:
          <volume>10</volume>
          .2196/48996.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Wilkins</surname>
          </string-name>
          ,
          <article-title>Automated title and abstract screening for scoping reviews using the GPT-</article-title>
          4
          <string-name>
            <surname>Large Language</surname>
            <given-names>Model</given-names>
          </string-name>
          ,
          <year>2023</year>
          . doi:
          <volume>10</volume>
          .48550/arXiv.2311.07918. arXiv:
          <volume>2311</volume>
          .
          <fpage>07918</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Scells</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Koopman</surname>
          </string-name>
          , G. Zuccon, Zero-shot
          <source>Generative Large Language Models for Systematic Review Screening Automation</source>
          ,
          <year>2024</year>
          . doi:
          <volume>10</volume>
          .48550/arXiv.2
          <volume>401</volume>
          .06320. arXiv:
          <volume>2401</volume>
          .
          <fpage>06320</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>N. M.</given-names>
            <surname>Guerreiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Alves</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Waldendorf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Haddow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Birch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Colombo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. F. T.</given-names>
            <surname>Martins</surname>
          </string-name>
          , Hallucinations in Large Multilingual Translation Models,
          <source>Transactions of the Association for Computational Linguistics</source>
          <volume>11</volume>
          (
          <year>2023</year>
          )
          <fpage>1500</fpage>
          -
          <lpage>1517</lpage>
          . doi:
          <volume>10</volume>
          .1162/tacl_a_
          <fpage>00615</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>S.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Balachandran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tsvetkov</surname>
          </string-name>
          ,
          <string-name>
            <surname>Don't Hallucinate</surname>
          </string-name>
          ,
          <article-title>Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration</article-title>
          ,
          <year>2024</year>
          . doi:
          <volume>10</volume>
          .48550/arXiv.2
          <volume>402</volume>
          .00367. arXiv:
          <volume>2402</volume>
          .
          <fpage>00367</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>L.</given-names>
            <surname>Rauch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Huseljic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sick</surname>
          </string-name>
          ,
          <article-title>Enhancing Active Learning with Weak Supervision and Transfer Learning by Leveraging Information and Knowledge Sources</article-title>
          , in: D.
          <string-name>
            <surname>Kottke</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>Krempl</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Holzinger</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          Hammer (Eds.),
          <source>Proceedings of the Workshop on Interactive Adaptive Learning Co-Located with European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD</source>
          <year>2022</year>
          ), Grenoble, France,
          <year>September 23</year>
          ,
          <year>2022</year>
          , volume
          <volume>3259</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>27</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>G. V.</given-names>
            <surname>Cormack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Grossman</surname>
          </string-name>
          ,
          <article-title>Evaluation of machine-learning protocols for technology-assisted review in electronic discovery</article-title>
          ,
          <source>in: Proceedings of the 37th International ACM SIGIR Conference on Research &amp; Development in Information Retrieval</source>
          , SIGIR '14,
          <string-name>
            <surname>Association</surname>
          </string-name>
          for Computing Machinery, New York, NY, USA,
          <year>2014</year>
          , pp.
          <fpage>153</fpage>
          -
          <lpage>162</lpage>
          . doi:
          <volume>10</volume>
          .1145/2600428.2609601.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>M. W.</given-names>
            <surname>Callaghan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Müller-Hansen</surname>
          </string-name>
          ,
          <article-title>Statistical stopping criteria for automated screening in systematic reviews</article-title>
          ,
          <source>Systematic Reviews</source>
          <volume>9</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          . doi:
          <volume>10</volume>
          .1186/s13643-020-01521-4.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Bron</surname>
          </string-name>
          ,
          <string-name>
            <surname>P. G. M. van der Heijden</surname>
            ,
            <given-names>A. J.</given-names>
          </string-name>
          <string-name>
            <surname>Feelders</surname>
            ,
            <given-names>A. P. J. M.</given-names>
          </string-name>
          <string-name>
            <surname>Siebes</surname>
          </string-name>
          ,
          <article-title>Using Chao's Estimator as a Stopping Criterion for Technology-Assisted Review</article-title>
          ,
          <year>2024</year>
          . doi:
          <volume>10</volume>
          .48550/arXiv.2404.01176. arXiv:
          <volume>2404</volume>
          .
          <fpage>01176</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>M.</given-names>
            <surname>Stevenson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bin-Hezam</surname>
          </string-name>
          ,
          <source>Stopping Methods for Technology-assisted Reviews Based on Point Processes, ACM Transactions on Information Systems</source>
          <volume>42</volume>
          (
          <year>2023</year>
          )
          <volume>73</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>73</lpage>
          :
          <fpage>37</fpage>
          . doi:
          <volume>10</volume>
          .1145/3631 990.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>W.</given-names>
            <surname>Harmsen</surname>
          </string-name>
          , J. de Groot,
          <string-name>
            <given-names>A.</given-names>
            <surname>Harkema</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. van Dusseldorp</given-names>
            ,
            <surname>J. de Bruin</surname>
          </string-name>
          , S. van den Brand, R. van de Schoot,
          <article-title>Machine learning to optimize literature screening in medical guideline development</article-title>
          ,
          <source>Systematic Reviews</source>
          <volume>13</volume>
          (
          <year>2024</year>
          )
          <article-title>177</article-title>
          . doi:
          <volume>10</volume>
          .1186/s13643-024-02590-5.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>J.</given-names>
            <surname>Boetje</surname>
          </string-name>
          , R. van de Schoot,
          <article-title>The SAFE procedure: A practical stopping heuristic for active learningbased screening in systematic reviews and meta-analyses</article-title>
          ,
          <source>Systematic Reviews</source>
          <volume>13</volume>
          (
          <year>2024</year>
          )
          <article-title>81</article-title>
          . doi:
          <volume>10</volume>
          .1186/s13643-024-02502-7.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>E.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. D.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Frieder</surname>
          </string-name>
          ,
          <article-title>Heuristic stopping rules for technology-assisted review</article-title>
          ,
          <source>in: DocEng 2021 - Proceedings of the 2021 ACM Symposium on Document Engineering</source>
          , ACM, Limerick, Ireland,
          <year>2021</year>
          , pp.
          <volume>31</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>31</lpage>
          :
          <fpage>10</fpage>
          . doi:
          <volume>10</volume>
          .1145/3469096.3469873.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Teijema</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hofstee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Brouwer</surname>
          </string-name>
          , J. de Bruin, G. Ferdinands, J. de Boer,
          <string-name>
            <given-names>P.</given-names>
            <surname>Vizan</surname>
          </string-name>
          , S. van den Brand, C. Bockting, R. van de Schoot,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bagheri</surname>
          </string-name>
          ,
          <article-title>Active learning-based systematic reviewing using switching classification models: The case of the onset, maintenance, and relapse of depressive disorders</article-title>
          ,
          <source>Frontiers in Research Metrics and Analytics</source>
          <volume>8</volume>
          (
          <year>2023</year>
          ). doi:
          <volume>10</volume>
          .3389/frma.
          <year>2023</year>
          .
          <volume>1178</volume>
          181.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>E.</given-names>
            <surname>Syriani</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. David</surname>
          </string-name>
          ,
          <string-name>
            <surname>G.</surname>
          </string-name>
          <article-title>Kumar, Assessing the Ability of ChatGPT to Screen Articles for Systematic Reviews</article-title>
          ,
          <source>CoRR abs/2307</source>
          .06464 (
          <year>2023</year>
          ). doi:
          <volume>10</volume>
          .48550/ARXIV.2307.06464. arXiv:
          <volume>2307</volume>
          .
          <fpage>06464</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>H.</given-names>
            <surname>Touvron</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Stone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Albert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Almahairi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Babaei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bashlykov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Batra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bhargava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bhosale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bikel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Blecher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Canton-Ferrer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cucurull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Esiobu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fernandes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Fuller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Goswami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hartshorn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hosseini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Inan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kardas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kerkez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Khabsa</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Kloumann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Korenev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Koura</surname>
          </string-name>
          , M.
          <article-title>-</article-title>
          <string-name>
            <surname>A. Lachaux</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <string-name>
            <surname>Lavril</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Liskovich</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Lu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Mao</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          <string-name>
            <surname>Martinet</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <string-name>
            <surname>Mihaylov</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Mishra</surname>
            , I. Molybog,
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Nie</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Poulton</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Reizenstein</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Rungta</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Saladi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Schelten</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Silva</surname>
            ,
            <given-names>E. M.</given-names>
          </string-name>
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Subramanian</surname>
            ,
            <given-names>X. E.</given-names>
          </string-name>
          <string-name>
            <surname>Tan</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Tang</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Taylor</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>J. X.</given-names>
          </string-name>
          <string-name>
            <surname>Kuan</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          <string-name>
            <surname>Yan</surname>
            , I. Zarov,
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Fan</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Kambadur</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Narang</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Rodriguez</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Stojnic</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Edunov</surname>
          </string-name>
          ,
          <source>T. Scialom, Llama</source>
          <volume>2</volume>
          :
          <string-name>
            <given-names>Open</given-names>
            <surname>Foundation</surname>
          </string-name>
          and
          <string-name>
            <surname>Fine-Tuned Chat</surname>
            <given-names>Models</given-names>
          </string-name>
          ,
          <source>CoRR abs/2307</source>
          .09288 (
          <year>2023</year>
          ). doi:
          <volume>10</volume>
          .48550/ARXIV.2307.09288. arXiv:
          <volume>2307</volume>
          .
          <fpage>09288</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kanoulas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Azzopardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Spijker</surname>
          </string-name>
          ,
          <article-title>CLEF 2017 technologically assisted reviews in empirical medicine overview</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          <year>1866</year>
          (
          <year>2017</year>
          )
          <fpage>1</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kanoulas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Azzopardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Spijker</surname>
          </string-name>
          ,
          <article-title>CLEF 2018 technologically assisted reviews in empirical medicine overview: 19th Working Notes of CLEF Conference and Labs of the Evaluation Forum</article-title>
          ,
          <string-name>
            <surname>CLEF</surname>
          </string-name>
          <year>2018</year>
          ,
          <source>CEUR Workshop Proceedings</source>
          <volume>2125</volume>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kanoulas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Azzopardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Spijker</surname>
          </string-name>
          ,
          <article-title>CLEF 2019 technology assisted reviews in empirical medicine overview</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          <volume>2380</volume>
          (
          <year>2019</year>
          )
          <fpage>9</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , T. Lu, T. J.
          <string-name>
            <surname>-J. Li</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Human Still Wins over LLM: An Empirical Study of Active Learning on Domain-Specific Annotation Tasks</article-title>
          ,
          <year>2023</year>
          . doi:
          <volume>10</volume>
          .485 50/arXiv.2311.09825. arXiv:
          <volume>2311</volume>
          .
          <fpage>09825</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>N.</given-names>
            <surname>Kholodna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Julka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Khodadadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Gumus</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          <article-title>Granitzer, LLMs in the Loop: Leveraging Large Language Model Annotations for Active Learning in Low-Resource Languages</article-title>
          ,
          <year>2024</year>
          . doi:
          <volume>10</volume>
          .48550/arXiv.2404.02261. arXiv:
          <volume>2404</volume>
          .
          <fpage>02261</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Wagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Behrendt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ziegele</surname>
          </string-name>
          , S. Harmeling,
          <article-title>SQBC: Active Learning using LLM-Generated Synthetic Data for Stance Detection in Online Political Discussions</article-title>
          ,
          <year>2024</year>
          . doi:
          <volume>10</volume>
          .48550/arXiv .2404.08078. arXiv:
          <volume>2404</volume>
          .
          <fpage>08078</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schuurmans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bosma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ichter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Chi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhou</surname>
          </string-name>
          , Chain-ofThought
          <source>Prompting Elicits Reasoning in Large Language Models, Advances in Neural Information Processing Systems</source>
          <volume>35</volume>
          (
          <year>2022</year>
          )
          <fpage>24824</fpage>
          -
          <lpage>24837</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>H.</given-names>
            <surname>Chase</surname>
          </string-name>
          , LangChain,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wolf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Debut</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sanh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chaumond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Delangue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Moi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cistac</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Rault</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Louf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Funtowicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Davison</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shleifer</surname>
          </string-name>
          , P. von Platen, C. Ma,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jernite</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Plu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. L.</given-names>
            <surname>Scao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gugger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Drame</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Lhoest</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Rush</surname>
          </string-name>
          ,
          <article-title>HuggingFace's Transformers: State-of-the-art</article-title>
          <source>Natural Language Processing</source>
          ,
          <year>2020</year>
          . doi:
          <volume>10</volume>
          .48550/arXiv.
          <year>1910</year>
          .
          <volume>03771</volume>
          . arXiv:
          <year>1910</year>
          .03771.
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>R.</given-names>
            <surname>van de Schoot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Coimbra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Evenhuis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lombaers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>van Zuiden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Grandfield</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>de Bruin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Teijema</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>de Bruin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Neeleman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Jalsovec</surname>
          </string-name>
          ,
          <article-title>Trajectories of PTSD following traumatic events: A systematic and multi-database review</article-title>
          ,
          <source>PROSPERO</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>J.</given-names>
            <surname>de Bruin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Ferdinands</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Teijema</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>van de Schoot</surname>
          </string-name>
          ,
          <source>SYNERGY - Open machine learning dataset on study selection in systematic reviews</source>
          ,
          <year>2023</year>
          . doi:10.34894/HE6NAQ.
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>R.</given-names>
            <surname>van de Schoot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sijbrandij</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Depaoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. D.</given-names>
            <surname>Winter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Olff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. E.</given-names>
            <surname>van Loey</surname>
          </string-name>
          ,
          <article-title>Bayesian PTSD-Trajectory Analysis with Informed Priors Based on a Systematic Literature Search and Expert Elicitation</article-title>
          ,
          <source>Multivariate Behavioral Research</source>
          <volume>53</volume>
          (
          <year>2018</year>
          )
          <fpage>267</fpage>
          -
          <lpage>291</lpage>
          . doi:10.1080/00273171.2017.1412293.
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hossain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Pearson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>McAlpine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. J.</given-names>
            <surname>Bacchus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Spangaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Muthuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Muuo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Franchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hess</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bangha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Izugbara</surname>
          </string-name>
          ,
          <article-title>Gender-based violence and its association with mental health among Somali women in a Kenyan refugee camp: A latent class analysis</article-title>
          ,
          <source>Journal of Epidemiology and Community Health</source>
          <volume>75</volume>
          (
          <year>2021</year>
          )
          <fpage>327</fpage>
          -
          <lpage>334</lpage>
          . doi:10.1136/jech-2020-214086.
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>N.</given-names>
            <surname>Reimers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Gurevych</surname>
          </string-name>
          ,
          <article-title>Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks</article-title>
          , in:
          <string-name>
            <given-names>K.</given-names>
            <surname>Inui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wan</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)</source>
          , Association for Computational Linguistics, Hong Kong, China,
          <year>2019</year>
          , pp.
          <fpage>3982</fpage>
          -
          <lpage>3992</lpage>
          . doi:10.18653/v1/D19-1410.
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          [47]
          <string-name>
            <given-names>C.</given-names>
            <surname>Shama Sastry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. E.</given-names>
            <surname>Milios</surname>
          </string-name>
          ,
          <article-title>Active neural learners for text with dual supervision</article-title>
          ,
          <source>Neural Computing and Applications</source>
          <volume>32</volume>
          (
          <year>2020</year>
          )
          <fpage>13343</fpage>
          -
          <lpage>13362</lpage>
          . doi:10.1007/s00521-019-04681-0.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>