<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>AI Enhanced Intelligent Texts and Learning Gains</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Scott Crossley</string-name>
          <email>scott.crossley@vanderbilt.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joon Suh Choi</string-name>
          <email>joon.suh.choi@vanderbilt.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Wesley Morris</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Langdon Holmes</string-name>
          <email>langdon.holmes@vanderbilt.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David Joyner</string-name>
          <email>djoyner3@gatech.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Georgia Institute of Technology</institution>
          ,
          <addr-line>225 North Avenue NW, Atlanta, GA 30313</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Vanderbilt University</institution>
          ,
          <addr-line>2201 West End Ave, Nashville, TN 37235</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
<p>This study provides evidence that students in an introductory computer science class benefitted from the use of an intelligent text that enhanced the reading experience by making it more interactive through the use of large language models (LLMs), as compared to the use of a static digital textbook. Results indicate that higher performing students score higher on a post-test when reading an intelligent text. Secondary results show that gains from the pre-test to the post-test were greater for students using the intelligent text. Overall, the study indicates that AI-enhanced texts can lead to learning gains in computer science classrooms.</p>
      </abstract>
      <kwd-group>
        <kwd>Intelligent texts</kwd>
        <kwd>Natural Language Processing</kwd>
        <kwd>Reading</kwd>
        <kwd>Computer Science</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        As computers continue to play an important role in business, education, and the
arts, computational thinking remains a critical skill for solving important, real-world problems with
complex solutions [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. However, acquiring computational thinking skills is difficult and requires sustained
effort, diverse abilities, and specialized teaching environments. Unfortunately, there is a consensus
that traditional textbooks are not as effective for teaching computational thinking because it requires
procedural knowledge that is difficult to demonstrate in static texts [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        To assess whether students learned better from a traditional or intelligent text, Crossley et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]
examined the efficacy of an interactive intelligent text in a computer science class. The intelligent
text was developed using Intelligent Texts for Enhanced Lifelong Learning (iTELL), which is a
framework that simplifies the creation and deployment of intelligent texts with integrated interactive
features. iTELL leverages Large Language Models (LLMs) to develop and deploy interactive content
including constructed response items and summaries that are automatically scored. The LLMs
simultaneously provide feedback to users to help with revisions.
      </p>
      <p>
        iTELL was designed to provide students with interactive read-to-write tasks known to lead to
increased learning gains [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4-6</xref>
        ] based on generation effect theories [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Read-to-write tasks require
readers to extract and integrate text information into their writing, which helps them
construct knowledge during the reading process [
        <xref ref-type="bibr" rid="ref10 ref11 ref8 ref9">8-11</xref>
        ]. Read-to-write tasks like constructed
responses [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and summaries [
        <xref ref-type="bibr" rid="ref4 ref5">4-5</xref>
        ] are effective learning strategies that can increase learning gains.
      </p>
      <p>The iTELL text was deployed within a Python-based computer science course designed to help
students understand and process information about computational thinking and programming.
Exit survey results indicated that students were satisfied with the intelligent text and that
students felt the interactive tasks were easy to work with, were accurate, and improved learning.
Comparisons of learning gains between students who used the intelligent text and students who used
the traditional digital text revealed a small effect. Specifically, students who used the intelligent
text showed increased gains of ~5% between a pre-test and a post-test. Overall, the study found that
intelligent texts seem to improve student learning through greater interactivity, student engagement,
and dynamic assessments.</p>
      <p>
        However, there were limitations to the original analysis conducted by Crossley et al. The study
used a quasi-experimental design where students had the option to use a digital or an intelligent text.
Students were also provided with extra credit to use the intelligent text, which may have introduced a
self-selection bias [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Self-selection also led to fewer students in the intelligent text condition (n = 79) than in
the digital text condition (n = 277). Lastly, the analysis included no co-variates of overall student
performance in the class to account for student differences.
      </p>
      <p>
        The goal of the current study is to provide a re-analysis of the data reported in Crossley et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]
by selecting participants for the intelligent text and digital text conditions using propensity score
matching. Propensity score matching can be used to develop balanced samples of participants
between experimental and control groups in observational studies. Specifically, propensity scores are
used to match each individual in the experimental group to an individual in the control group based
on the probability of participants being in the treatment group as calculated from a range of
covariates including demographic and/or individual differences. This matching helps to remove overt
bias caused by self-selection and outcomes from matched participants can be used to better estimate
the effect of the experimental condition.
      </p>
      <sec id="sec-1-0">
        <title>2. Method</title>
        <sec id="sec-1-0-1">
          <title>2.1. iTELL</title>
          <p>iTELL transforms conventional educational resources into interactive texts using a sophisticated
content authoring system. Within the content authoring system, instructional materials are
segmented into pages and smaller sections known as chunks. A chunk typically falls under a single
sub-header and includes 1-3 paragraphs of text or one instructional video. iTELL then creates
learning activities for these divisions, leveraging reading comprehension theories to help users
develop a deeper understanding of the material through constructed response items (CRIs) and
summary writing tasks.</p>
        </sec>
      </sec>
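      <p>The chunking scheme described above can be sketched in a few lines. This is a hypothetical illustration only: the iTELL authoring system's actual code, function names, and data structures are not described in the paper.</p>

```python
# Hypothetical sketch of iTELL-style content chunking. The real authoring
# system is not public; names and rules here are illustrative only.

def chunk_pages(pages, max_paragraphs=3):
    """Split each page's sub-header sections into chunks of 1-3 paragraphs."""
    chunks = []
    for page in pages:
        for header, paragraphs in page["sections"]:
            # A chunk sits under a single sub-header and holds at most
            # `max_paragraphs` paragraphs (or one instructional video).
            for i in range(0, len(paragraphs), max_paragraphs):
                chunks.append({
                    "page": page["title"],
                    "header": header,
                    "paragraphs": paragraphs[i:i + max_paragraphs],
                })
    return chunks

pages = [{
    "title": "Control Structures",
    "sections": [("Loops", ["p1", "p2", "p3", "p4"]), ("Conditionals", ["p1"])],
}]
result = chunk_pages(pages)
# Four paragraphs under "Loops" become two chunks (3 + 1);
# "Conditionals" becomes a single one-paragraph chunk.
```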
      <p>
        For each chunk, a CRI is crafted using GPT-3.5-turbo with human oversight. In any given iTELL
volume, each chunk has a 1/3 probability of generating a constructed response item for the user,
ensuring at least one CRI per page. Users must complete at least one CRI before advancing to
subsequent chunks. The submitted CRIs are evaluated for accuracy, and feedback is provided by two
distinct fine-tuned language models [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
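      <p>The selection rule above (a 1/3 chance per chunk, with at least one CRI guaranteed per page) can be sketched as follows; the fallback logic for pages with no draws is our assumption about how the guarantee is enforced.</p>

```python
import random

# Illustrative sketch of the CRI selection rule: each chunk draws a CRI with
# probability 1/3, and a page with no draws is assumed to be forced to show
# at least one CRI (the exact enforcement mechanism is not documented).

def assign_cris(chunk_ids, rng, p=1/3):
    """Return the subset of chunks on a page that display a CRI."""
    selected = [c for c in chunk_ids if rng.random() < p]
    if not selected:
        # Guarantee at least one CRI per page.
        selected = [rng.choice(chunk_ids)]
    return selected

rng = random.Random(42)
page = ["chunk-1", "chunk-2", "chunk-3", "chunk-4"]
cris = assign_cris(page, rng)
```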
      <p>
        A summary must be completed at the conclusion of each page, which is then screened
automatically for plagiarism, relevance, length requirements, offensive language, and appropriate
language use. Summaries that pass these criteria are evaluated by a single language model for content
accuracy, assessing whether they capture the essential elements of the original material.
Additionally, iTELL incorporates an AI-driven chatbot based on Llama 3, enhanced with retrieval
augmented generation (RAG) to ensure responses remain relevant. The chatbot also includes
safeguards against academic dishonesty and misuse. The chatbot is accessible anytime to assist users
with inquiries regarding text-related content or the iTELL platform [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. See Figure 1 for screenshots
of the iTELL interface.
      </p>
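      <p>The gating flow for summary screening can be sketched as below. The actual iTELL checks (plagiarism detection, relevance and toxicity models) are not public, so these rule-based stand-ins only illustrate how cheap filters gate access to the content-scoring model.</p>

```python
# Hypothetical sketch of the automatic summary screening described above;
# the thresholds and the verbatim-overlap plagiarism check are stand-ins,
# not the production iTELL criteria.

def screen_summary(summary, source_text, min_words=50, max_words=200):
    """Run cheap gate checks before the content-scoring model is called."""
    words = summary.split()
    if not (min_words <= len(words) <= max_words):
        return "fail: length"
    # Crude plagiarism stand-in: reject verbatim overlap with the source.
    if summary.strip() and summary.strip() in source_text:
        return "fail: plagiarism"
    return "pass"
```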
      <sec id="sec-1-1">
        <title>2.2. Class and textbook</title>
        <p>Data was collected from an Introduction to Computing class taught at a large technology university
in the southeastern United States. The course covered the basics of computing, presupposing no prior
programming ability. The course began with the basics of procedural programming, moved through
control structures and data structures, and concluded with object-oriented programming and
algorithm development. After each unit, students were required to take a unit test. The tests
accounted for 40% of students' grades in the class, 10% for each of four tests.</p>
        <p>Demographic and individual difference information was collected through surveys at the
beginning of the class. The survey collected demographic data such as age, gender, race or ethnicity,
country of origin, and first language background. Students also provided information related to
programming experience (from no experience to successful completion of programming classes) and
their year of study (freshman, sophomore, junior, or senior).</p>
        <p>
          Data was collected from the third unit of the course textbook [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], which covered Control
Structures. This unit was ingested into an iTELL volume that comprised an overview page
introducing iTELL and 5 additional pages with each page referencing a chapter from the unit. Each
page faithfully represented the original text. However, screenshots of integrated development
environments (IDEs) in the textbook used to demonstrate Python code and output were replaced with
a Python interactive sandbox. Student test scores from the second unit (Procedural Programming)
were used as a pre-test, and student test scores from the Control Structure unit (the third unit) were
used as post-test scores.
        </p>
      </sec>
      <sec id="sec-1-2">
        <title>2.3. Participants</title>
        <p>
          We used the same 476 students enrolled in the class as reported by Crossley et al. [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Most of the
students were freshmen or juniors. Of these, 121 students self-selected to use the iTELL volume and
356 used the traditional digital version of the text (i.e., a pdf version). The students that used iTELL
were given 1% extra credit to their overall course grade. Of the 121 students that elected to use iTELL,
79 students completed all requirements for inclusion in the analyses, including age and consent
requirements and completion of the pre-test and the post-test. Of the 79 students in the iTELL
condition, 72 had complete demographic and individual difference survey data while 238 of the 356
students in the digital text condition had complete survey data. All these students completed the class
and thus received a final class score, which indicated their overall performance in the class.
        </p>
      </sec>
      <sec id="sec-1-3">
        <title>2.4. Propensity matching</title>
        <p>
          We used the MatchIt package [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] in R for propensity matching for the students that had complete
data. The package implements the suggestions of Ho, Imai, King, and Stuart [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] for preprocessing
data using semi-parametric and non-parametric matching methods. The purpose of matching data is
to reduce bias in observational or quasi-experimental studies by simulating the balance one would
expect to find in a randomized design. The end goal of matching is to create a dataset that has
covariate balance among participants where the distribution of covariates in groups is similar to that
which would be found in a randomized control trial. Covariate balance allows for increased
confidence and robustness in subsequent statistical analyses. Matching has three key steps: 1)
planning, 2) matching the data, 3) examining the quality of matches [15].
        </p>
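          <p>Propensity scores are simply fitted probabilities of treatment given covariates. The study estimated them with the MatchIt package in R; the pure-Python logistic regression below is only an illustrative stand-in with toy data, not the study's pipeline.</p>

```python
import math

# Minimal illustration of propensity score estimation: fit
# P(treatment | covariates) by logistic regression via gradient descent.
# The study itself used MatchIt in R; this sketch is for exposition only.

def propensity_scores(X, treated, lr=0.1, epochs=2000):
    """Return fitted treatment probabilities for each participant."""
    n, k = len(X), len(X[0])
    w, b = [0.0] * k, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * k, 0.0
        for xi, ti in zip(X, treated):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - ti
            for j in range(k):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return [1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            for xi in X]

# Toy covariate (e.g. scaled programming experience); treatment = iTELL use.
X = [[0.0], [0.2], [0.4], [0.6], [0.8], [1.0]]
treated = [0, 0, 0, 1, 1, 1]
scores = propensity_scores(X, treated)
```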
      </sec>
      <sec id="sec-1-4">
        <title>2.4.1. Planning</title>
        <p>Planning involves the selection of an outcome variable and selection of covariates that need to be
balanced to increase the likelihood of an unbiased estimate of the treatment effect. Here, the outcome
variables are performance on the class tests from Unit 2 (pre-test) and Unit 3 (post-test). The
treatment effect is whether students read Chapter 3 text in either digital or iTELL format.</p>
        <p>Our selection of covariates included both demographic variables and individual differences.
Demographic variables were selected to ensure that students’ experiences across conditions were
strongly matched. The selected demographic variables were gender, age, country of birth, and
race/ethnicity. To control for background knowledge, we selected programming experience and
years of study as individual differences.</p>
      </sec>
      <sec id="sec-1-5">
        <title>2.4.2. Matching</title>
        <p>We selected a distance parameter that used a generalized linear model to calculate propensity scores.
We used the “nearest” method parameter to select nearest matches and selected a one-to-one ratio for
matching so that the number of matches was the same between the digital and iTELL text conditions.
A one-to-one ratio helps to ensure stronger matches, but it does discard more potential matches.
These matching parameters led to the selection of 72 participants from the digital text condition (i.e.,
the control condition) that matched the 72 participants from the iTELL condition (i.e., the treatment
condition).</p>
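        <p>Greedy nearest-neighbor 1:1 matching on propensity scores can be sketched as below. This approximates MatchIt's "nearest" method for exposition; the actual implementation offers further options (e.g. calipers and tie-breaking rules) not shown here.</p>

```python
# Sketch of greedy 1:1 nearest-neighbor matching on precomputed propensity
# scores (an approximation of MatchIt's "nearest" method, for illustration).

def nearest_match(treated_scores, control_scores):
    """Pair each treated unit with its closest unused control unit."""
    available = dict(enumerate(control_scores))
    pairs = {}
    for t_idx, t_score in enumerate(treated_scores):
        # Greedily take the closest remaining control; unmatched controls
        # are discarded, which is why 1:1 matching shrinks the sample.
        c_idx = min(available, key=lambda c: abs(available[c] - t_score))
        pairs[t_idx] = c_idx
        del available[c_idx]
    return pairs

pairs = nearest_match([0.30, 0.70], [0.10, 0.32, 0.68, 0.90])
# 0.30 pairs with 0.32 and 0.70 pairs with 0.68;
# controls at 0.10 and 0.90 are discarded.
```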
      </sec>
      <sec id="sec-1-6">
        <title>2.4.3. Examine quality of matching</title>
        <p>
          The MatchIt package reports the empirical cumulative distribution function (eCDF) mean as a
measure of balance between the selected participants in the control and treatment groups. The eCDF
mean is the mean difference in the cumulative distribution functions of the propensity scores where a
small value indicates a better balance (and an eCDF mean of 0 representing perfect balance). The
eCDF mean for the propensity matches between the digital and iTELL conditions was M = .023,
which indicates strong balance [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. This distribution is plotted in Figure 2 below.
        </p>
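        <p>The balance statistic above can be computed directly: it is the mean absolute difference between the two groups' empirical CDFs, evaluated here at the pooled sample points (a simplifying assumption; MatchIt's exact evaluation grid may differ).</p>

```python
# Sketch of the eCDF mean balance statistic: 0 means identical
# distributions; small values indicate good covariate balance.

def ecdf(sample):
    """Return the empirical cumulative distribution function of a sample."""
    ordered = sorted(sample)
    def f(x):
        return sum(1 for v in ordered if v <= x) / len(ordered)
    return f

def ecdf_mean_diff(group_a, group_b):
    """Mean absolute eCDF difference over the pooled sample points."""
    fa, fb = ecdf(group_a), ecdf(group_b)
    points = sorted(set(group_a) | set(group_b))
    return sum(abs(fa(x) - fb(x)) for x in points) / len(points)

same = ecdf_mean_diff([0.2, 0.5, 0.8], [0.2, 0.5, 0.8])     # perfectly balanced
shifted = ecdf_mean_diff([0.2, 0.5, 0.8], [0.3, 0.6, 0.9])  # shifted group
```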
      </sec>
      <sec id="sec-1-7">
        <title>2.5. Statistical Analyses</title>
        <p>Our first analysis reported descriptive statistics for the two conditions across tests scores. The second
analysis examined difference between pre-tests and post-tests across conditions using a linear mixed
effects (LME) model. For this analysis, we used R [16] and the lme4 package [17]. The outcome
variable was all test scores, with the timing of the test administration (pre or post) coded as a fixed
effect (Time). Other fixed effects were Condition (Digital or iTELL text) and Final Class Score. We
included final class score as a measure of performance to control for potential effects of background
knowledge, motivation, and/or ability during data collection. We did not include Class Score as a
variable during propensity matching because we wanted to examine potential interaction effects
during learning (specifically, was learning differential based on class performance). All numeric
variables were scaled. Participants were included as a random effect. We developed an LME using a
maximum three-way interaction that included Time, Condition, and Final Class Score.</p>
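      <p>The data layout this model implies can be sketched as follows: each student contributes a pre row and a post row, numeric predictors are z-scaled, and the student id serves as the random-effect grouping factor. Column names here are our own illustration, not the study's actual variable names.</p>

```python
import statistics

# Sketch of the long-format data implied by the LME specification, roughly
# lme4's  score ~ time * condition * class_score_z + (1 | student).
# All names below are illustrative assumptions.

def scale(values):
    """z-scale a numeric variable (mean 0, sd 1)."""
    mu, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def to_long(students):
    scores = scale([s["final_class_score"] for s in students])
    rows = []
    for s, z in zip(students, scores):
        for time in ("pre", "post"):
            rows.append({
                "student": s["id"],           # random intercept grouping
                "time": time,                 # fixed effect: Time
                "condition": s["condition"],  # fixed effect: Condition
                "class_score_z": z,           # fixed effect: Final Class Score
                "score": s[time],             # outcome: test score
            })
    return rows

students = [
    {"id": 1, "condition": "iTELL", "final_class_score": 0.9, "pre": 0.78, "post": 0.83},
    {"id": 2, "condition": "digital", "final_class_score": 0.8, "pre": 0.87, "post": 0.87},
]
long_rows = to_long(students)
```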
      </sec>
    </sec>
    <sec id="sec-2">
      <title>3. Results</title>
      <p>Descriptive statistics for test scores by pre-test and post-test for the matched participants in the
digital and iTELL conditions indicated that students in the digital text condition (M = .869, SD = .157)
scored higher than students in the iTELL condition in the pre-test (M = .783, SD = .189). This trend
held in the post-test, but students in the iTELL condition (M = .834, SD = .263) showed learning gains
(Delta = .051) while students in the digital text condition (M = .867, SD = .215) did not (Delta = -.002).</p>
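      <p>As a quick check, the reported deltas follow directly from the pre- and post-test means:</p>

```python
# Recomputing the learning gains (deltas) from the reported condition means.
pre = {"digital": 0.869, "iTELL": 0.783}
post = {"digital": 0.867, "iTELL": 0.834}
gains = {cond: round(post[cond] - pre[cond], 3) for cond in pre}
# gains == {"digital": -0.002, "iTELL": 0.051}
```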
      <p>Results from the linear mixed effects models are presented in Table 1. The model indicated a
significant three-way interaction between Test, Condition, and Final Class Score (see Figure 1 for
interaction plot). The plot shows that students with high Final Class Score showed greater learning
gains from reading in the iTELL condition as compared to the Digital condition. The model also
yielded a two-way interaction between Time and Final Class Score. The interaction indicates that
students with higher final class scores showed greater gains in the post-test. The model also showed
a significant two-way interaction between Time and Condition. The interaction indicates that
students in the iTELL condition showed greater learning gains between the pre- and post-test than
students in the Digital condition. Lastly, the results showed a suppression effect for pre- and post-test
where the estimate is negative (whereas mean scores show it should be positive). The main effect for
Final Class Score approached significance and indicated that students with higher Final Class Scores
showed higher test scores in general.</p>
    </sec>
    <sec id="sec-3">
      <title>4. Discussion</title>
      <p>This study provides evidence that students in an introductory computer science class benefitted from
the use of an intelligent textbook that enhanced the reading experience by making it more
interactive. Our main result indicates that higher performing students (as indicated by final class
scores) score higher in the post-test in the iTELL condition than in the digital text condition.
Secondary results show that test scores were greater from the pre-test to the post-test for the iTELL
condition compared to the digital text condition. From a pedagogical perspective, these findings show
that intelligent texts may help beginning computer scientists develop computational thinking and
computer programming skills. These skills may help students perform better in subsequent courses
and reduce the high failure and dropout rates [18] found in computer science programs.</p>
      <p>
        Overall, the results indicate that intelligent texts that leverage LLMs to provide interactive reading
environments that capitalize on read-to-write tasks [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4-6</xref>
        ] to increase the likelihood of generation
effects in readers [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] lead to learning gains, at least in this setting. It is important to note that the
three-way interaction showed that learning gains were greatest for students with higher class scores
at the end of the semester, indicating that iTELL may work best for stronger students. There are a few
plausible reasons that this may be the case. First, stronger students may be more determined and thus
work through the text more thoroughly, including spending more time reading. Second, stronger
students may be more diligent and attend to the text generation interventions more completely.
      </p>
      <p>In all cases, we presume that iTELL helps readers construct better mental models of the text
leading to greater learning gains, but more evidence is needed to support this assertion. Key to this is
addressing many of the limitations found in this study. First, this study only focused on a single
course within a single academic domain (computer science). More courses across a wider range of
topics are needed. We also offered students an incentive to use iTELL, which may attract certain
types of students over others. While propensity matching can control for some differences, it may not
capture individual differences that explain selecting the iTELL condition. We do not know if students
selecting to use the iTELL system wanted the extra credit because they were performing poorly, were
struggling in other ways, or were highly motivated to succeed regardless of performance.
Additionally, the pre-test (Procedural Programming) and post-test (Control Structures) assessed
nonidentical skills, so raw gains may partly reflect differences in quiz difficulty rather than learning. We
tried to address this by co-varying class grade, but a better developed pre- and post-test design could
provide stronger evidence of learning. Lastly, propensity matching removed around 40% of the
original data leaving us with 72 students in each condition. While this sample is large enough for
statistical analysis, it is unlikely to generalize past the current population. In general, we suggest that
future studies focus on different topic areas and different student populations with larger sample
sizes. Additionally, future studies should include multiple experimental conditions to disaggregate
simple text production from text production that includes AI feedback and should include multiple
measures of reading skill and background knowledge to help control for these differences during
learning.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>This material is based upon work supported by the National Science Foundation under Grant
2112532. Any opinions, findings, and conclusions or recommendations expressed in this material are
those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We
are thankful to suggestions made by Ken Koedinger about using propensity matching at Educational
Data Mining, 2024.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
      <p>[15] N. Greifer. MatchIt: Getting Started. https://cran.r-project.org/web/packages/MatchIt/vignettes/MatchIt.html, last accessed 2025/1/28</p>
      <p>[16] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. Available at: https://www.R-project.org/. (2021)</p>
      <p>[17] D. Bates, M. Maechler, B. Bolker, S. Walker, R. H. B. Christensen, H. Singmann, B. Dai, F. Scheipl, G. Grothendieck, P. Green, J. Fox, A. Bauer, P. N. Krivitsky, E. Tanaka, M. Jagan. lme4: Linear mixed-effects models using 'Eigen' and S4 (Version 1.1-36) [R package]. (2015)</p>
      <p>[18] A. Robins, J. Rountree, N. Rountree. Learning and Teaching Programming: A Review and Discussion. Computer Science Education, 13(2), 137-172. (2003)</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Wing</surname>
          </string-name>
          .
          <article-title>Computational thinking</article-title>
          .
          <source>Communications of the ACM 49, 3 (March</source>
          <year>2006</year>
          ),
          <fpage>33</fpage>
          -
          <lpage>35</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gomes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Mendes</surname>
          </string-name>
          .
          <article-title>Learning to program difficulties and solutions</article-title>
          .
          <source>In International Conference on Engineering Education-ICEE (Vol. 7)</source>
          , (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Crossley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Morris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Holmes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Joyner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gupta</surname>
          </string-name>
          .
          <article-title>Using Intelligent Texts in A Computer Science Classroom: Findings from an iTELL Deployment</article-title>
          .
          <source>Proceedings of 8th Educational Data Mining in Computer Science Education Workshop (CSEDM</source>
          <year>2024</year>
          )
          <article-title>at the 17th International Conference on Educational Data Mining (EDM). Atlanta, GA (</article-title>
          <year>2024</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Silva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Limongi</surname>
          </string-name>
          .
          <article-title>Writing to learn increases long-term memory consolidation: A mentalchronometry and computational-modeling study of “Epistemic writing”</article-title>
          .
          <source>Journal of Writing Research</source>
          ,
          <volume>11</volume>
          (
          <issue>1</issue>
          ),
          <fpage>211</fpage>
          -
          <lpage>243</lpage>
          , (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Campione</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Day</surname>
          </string-name>
          .
          <article-title>Learning to learn: On training students to learn from texts</article-title>
          .
          <source>Educational researcher</source>
          ,
          <volume>10</volume>
          (
          <issue>2</issue>
          ),
          <fpage>14</fpage>
          -
          <lpage>21</lpage>
          , (
          <year>1981</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bensoussan</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Kreindler.</surname>
          </string-name>
          <article-title>Improving advanced reading comprehension in a foreign language: Summaries vs. short‐answer questions</article-title>
          .
          <source>Journal of Research</source>
          in Reading,
          <volume>13</volume>
          (
          <issue>1</issue>
          ),
          <fpage>55</fpage>
          -
          <lpage>68</lpage>
          , (
          <year>1990</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bertsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. J.</given-names>
            <surname>Pesta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wiscott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>McDaniel</surname>
          </string-name>
          .
          <article-title>The generation effect: A meta-analytic review</article-title>
          .
          <source>Memory &amp; Cognition</source>
          .
          <volume>35</volume>
          (
          <issue>2</issue>
          )
          <fpage>201</fpage>
          -
          <lpage>210</lpage>
          , (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Y. A.</given-names>
            <surname>Delane</surname>
          </string-name>
          .
          <article-title>Investigating the reading-to-write construct</article-title>
          .
          <source>Journal of English for academic purposes</source>
          ,
          <volume>7</volume>
          (
          <issue>3</issue>
          ),
          <fpage>140</fpage>
          -
          <lpage>150</lpage>
          , (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>W.</given-names>
            <surname>Grabe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. L.</given-names>
            <surname>Stoller</surname>
          </string-name>
          . Teaching and researching reading. Routledge. (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>N.</given-names>
            <surname>Nelson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Calfee. The</surname>
          </string-name>
          Reading-Writing Connection Viewed Historically. Teachers College Record,
          <volume>99</volume>
          (
          <issue>6</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>52</lpage>
          , (
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>N.</given-names>
            <surname>Nelson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>King</surname>
          </string-name>
          .
          <article-title>Discourse synthesis: Textual transformations in writing from sources</article-title>
          .
          <source>Reading and Writing</source>
          ,
          <volume>36</volume>
          (
          <issue>4</issue>
          ),
          <fpage>769</fpage>
          -
          <lpage>808</lpage>
          , (
          <year>2023</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G.</given-names>
            <surname>Tripepi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. J.</given-names>
            <surname>Jager</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. W.</given-names>
            <surname>Dekker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zoccali</surname>
          </string-name>
          .
          <article-title>Selection bias and information bias in clinical research</article-title>
          .
          <source>Nephron Clinical Practice</source>
          ,
          <volume>115</volume>
          (
          <issue>2</issue>
          ),
          <fpage>c94</fpage>
          -
          <lpage>c99</lpage>
          , (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D.</given-names>
            <surname>Joyner</surname>
          </string-name>
          . Introduction to Computing.
          <string-name>
            <surname>McGraw-Hill Education</surname>
            <given-names>LLC</given-names>
          </string-name>
          . (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Imai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <surname>E. Stuart.</surname>
          </string-name>
          <article-title>MatchIt: Nonparametric Preprocessing for Parametric Causal Inference</article-title>
          .
          <source>Journal of Statistical Software</source>
          ,
          <volume>42</volume>
          (
          <issue>8</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>28</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>