                  The Business Case for the Learning Sciences
                             Daniel Matthew Belenky, Kristen DiCerbo, Pearson,
                          Dan.Belenky@pearson.com, Kristen.DiCerbo@pearson.com

         Abstract: Designing commercial educational technologies requires balancing many different
         considerations. Specifically, the needs of instructors, students, and institutions need to be
         addressed in ways that will allow all to meet their educational goals. Designers of educational
         technologies may sometimes focus intently on meeting those needs without necessarily
         ensuring that the product also aligns with what researchers have found best supports important
         learner outcomes, like learning material well and progressing to subsequent courses. The
         Learning Research and Design team at Pearson helps ensure that our products are likely to
         deliver those outcomes by providing expert guidance on how to integrate research-based insights
         from the learning sciences into educational products and services. In this paper, we describe
         our five-stage model for supporting this effort: 1) reviewing research, 2) iteratively testing
         new designs, 3) providing implementation support, 4) measuring impact on learner outcomes, and
         5) using evidence to help grow market share.


Background of our model
In 2013, Pearson announced that we would publicly report on learner outcomes for products, similar to how we
report on our financial results. The first set of these audited reports was published in 2018 (1). Pearson made a bet
that sharing rigorous evidence that our products improve student learning would be good for business. To help
achieve the goal of delivering impactful educational technologies, our Learning Research and Design team has
devised a five-step approach to help ensure that Pearson products, informed by learning science, are designed
and supported to help deliver learner outcomes (see Figure 1). This approach makes explicit why learning
science is important to integrate into our design process. While there are many influences on decisions about
how a given digital learning experience is designed (including cost, time, learner preference, teacher preference,
and many others), our model makes clear how building on a foundation of learning science research is the right
thing to do, both to help improve learner outcomes and to contribute to growing our business. In this way,
we are working towards helping more learners to learn more. In this paper, we will describe each of the five
stages and highlight key challenges we have observed, as well as ways we have begun to overcome them. Note
that although the model is depicted linearly, it describes an ongoing and iterative process, both within individual product
development cycles and across the organization, with insights gained from one cycle informing future
development.




                   Figure 1. Our model for connecting learning science to impact and growth.
Conducting and synthesizing research
The first stage is focused on “doing research,” which includes synthesis of existing research, as well as new
learning experiments and creative design work. That is, to help products achieve particular outcomes, we need
to understand the research on a) defining the knowledge and skills underlying those outcomes, b) instructional
approaches that may be able to impact those outcomes, and c) how different learners may be best supported in
achieving those outcomes.
          There are a number of challenges to finding and applying relevant research. A critical one has to do
with the “grain-size” of research published in academic journals, which is frequently conducted at the level of single
experiments, whether in laboratory or classroom settings. Given the large variability in learners’ prior
knowledge and skills, educational contexts, particular courses, and many other factors, it is not clear how
replicable and applicable prior research will be. One approach we have focused on is conducting “learning
experiments” where we directly test an innovative feature, designed in alignment with prior research, embedded
in our products and used by learners in their courses. This allows us to test whether these research-based
insights hold when scaled and implemented directly in digital learning technologies; this “in vivo” approach
has been championed by Koedinger, Booth, & Klahr (2013) as a fruitful avenue for education researchers to
explore.
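          To make the idea concrete, the sketch below shows, in outline, how the results of such a learning experiment might be analyzed once learners in a live course have been randomly assigned to a research-based feature or to the standard experience. The data, column names, and comparison are purely illustrative assumptions, not a description of any actual Pearson study or platform export.

```python
# Illustrative sketch only: analyzing a hypothetical "in vivo" learning
# experiment in which learners were randomly assigned to a research-based
# feature or to the standard experience. All data and names are invented.
import pandas as pd
from scipy import stats

# Hypothetical export: one row per learner, with condition and post-test score
data = pd.DataFrame({
    "condition": ["feature", "control", "feature", "control", "feature", "control"],
    "post_test": [0.82, 0.74, 0.91, 0.68, 0.77, 0.71],
})

feature = data.loc[data["condition"] == "feature", "post_test"]
control = data.loc[data["condition"] == "control", "post_test"]

# Welch's t-test: did learners who saw the feature score higher on the post-test?
t_stat, p_value = stats.ttest_ind(feature, control, equal_var=False)
print(f"Mean difference = {feature.mean() - control.mean():.3f}, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```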
          Another issue pertaining to grain size is the integration of research findings into larger-scale units of
instruction. For example, while individual, short-term experiments may find a benefit for certain instructional
approaches, larger efforts to integrate these ideas into semester-long instructional sequences are infrequent, and
design characteristics (e.g., sequences, timing, etc.) are not always reported in adequate detail. Even when
specific designs are reported, they may be applicable for an individual unit of instruction but may not work as
well over long periods, or for all types of content a learner needs to encounter throughout a semester. For
instance, worked examples have been found to be quite useful for novices learning new procedures (Atkinson,
Derry, Renkl, & Wortham, 2000), but detrimental to students who have achieved some level of mastery already
(Kalyuga, Chandler, Tuovinen, & Sweller, 2001), and may in some cases reduce students’ deep cognitive
engagement (Schworm & Renkl, 2006). To address this issue, we have focused on generalizable principles that
underlie diverse findings and research areas. Various reviews and frameworks have been published that prove
useful in those efforts, such as Chi’s Interactive-Constructive-Active-Passive (ICAP) framework (2009). We
have developed and shared a set of Learning Design Principles, based upon a variety of research areas, which
provide a common grounding for conversations around optimizing design for learning. We have made these
freely available under a Creative Commons license (2).
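         As a simple illustration of how a principle of this kind might be carried into a product design, the sketch below encodes an expertise-reversal-style rule that fades worked examples as a learner’s estimated mastery grows. The mastery estimate, thresholds, and activity types are hypothetical and are not taken from any Pearson product.

```python
# Illustrative sketch only: one way an expertise-reversal-style principle
# could be encoded as a content-selection rule. The mastery estimate,
# thresholds, and activity types are hypothetical.

def select_activity(mastery_estimate: float) -> str:
    """Choose an activity type from a running mastery estimate in [0, 1]."""
    if mastery_estimate < 0.4:
        # Novices benefit from studying full worked examples (Atkinson et al., 2000)
        return "worked_example"
    elif mastery_estimate < 0.7:
        # Intermediate learners: partially completed problems fade the support
        return "completion_problem"
    else:
        # More experienced learners do better solving problems on their own
        # (Kalyuga et al., 2001)
        return "problem_solving"

for estimate in (0.2, 0.5, 0.9):
    print(f"mastery {estimate:.1f} -> {select_activity(estimate)}")
```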

Iterative learning design
Using existing research as a guide, we can put forth general principles that we are confident will help improve
learner outcomes. Translating these general ideas into specific designs requires a unique blend of activities and
skill sets. Our learning designers work collaboratively with many stakeholders, including user experience
designers, content developers, and even authors, helping to ensure that learning science informs those
conversations. It can be a challenge to balance the competing visions that these stakeholders have, but one
critical contribution of the Learning Research and Design team has been to help provide common frameworks,
terminology, and background research with which to begin those conversations. The Learning Design
Principles, described above, represent one such touchpoint, but numerous materials (reports, presentations,
annotated designs, etc.) have been created by our team to help in that effort.
          Another key activity that our team engages in is “design-based research.” Our DBR team works to
constantly get input from learners through a variety of methods, including surveys, focus groups, and co-design
sessions. Through these interactions, we are able to validate whether new designs align with learners’ needs, to see
whether learners would use new features as intended or in other ways, and to learn more about what kinds of supports
students feel they need to succeed in their courses. We are then able to iterate quickly upon early-stage ideas
until they are ready for a more comprehensive design and testing process.

Implementation support
Once a curriculum or tool is released, it is then used in a variety of ways, some of which designers may have
intended, and others they did not. Many product designers and developers seek to influence this implementation
through user manuals, nudges in products, and professional development, as the ultimate effectiveness of a
product is highly dependent on how it is implemented. We have experienced two main challenges in this area:
1) scaling support for implementation and 2) coaching teachers who are experts in teaching.
         When releasing a learning product to thousands of classrooms, it is difficult to find ways to support
implementation at scale. In the K-12 environment, major education technology adoptions are often accompanied
by one-day, in-service trainings. These often focus primarily on the mechanics of onboarding and navigating the
system and only secondarily on the underlying pedagogy and interactions between the teachers, students, and
technology. There is a substantial research literature suggesting that one-time professional development
opportunities do less to impact student outcomes than more extended programs (Garet, Porter, Desimone,
Birman, & Yoon, 2001). However, even one-day trainings are more than most university instructors receive.
Instead, instructors are often given enormous PDF files containing some combination of technological and pedagogical
guidance. From the industry standpoint, when curricula and tools are distributed at scale, the personnel
requirements to individually support every institution become large. At the same time, we continue to see that
the impact of products is highly dependent on how they are used pedagogically in the classroom. Everything
from when students take quizzes to the weight given to online homework assignments in the final grade impacts
the relationship between the use of a program and student learning outcomes. There are not easy solutions to
this challenge, but we are currently exploring several potential options, including building tooltips and “nudges”
for instructors into products. However, this work is primarily at the discussion stage and has a long way to go
before its effectiveness is clear.
          Related to the challenge of scaling support is finding the right tone in the guidance offered to
instructors. On one hand, many instructors view themselves as the masters of their classroom and believe they
do not need to be told how to teach their subject area. So, even offering “nudges” might be interpreted as
insulting, unless the tone is correct. On the other hand, many higher education instructors have had little to no
training in pedagogy and learning science research. On top of this, in some areas there is concern that education
technology is an attempt to replace teachers in the classroom, setting up an adversarial relationship from the
start of the engagement. To address these issues, we try to use a coaching analogy. Professional athletes are
generally better players than their coaches, but that does not mean their coaches cannot offer advice that
improves their game. However, as with coaches, that improvement has to become clear to the players for them
to continue to accept coaching.

Measuring impact
A thorough discussion of measuring impact of learning tools and curricula is beyond the scope of this paper.
Numerous organizations, including the What Works Clearinghouse (part of the U.S. government’s Institute of
Education Sciences), have provided guidance on measuring impact. Pearson
essentially tries to answer the questions: Does this work? For whom? Under what conditions? We have defined
various levels of evidence and the claims we can make based on the kind of evidence available (3). One of the
challenges for learning science is getting beyond the “does it work?” question when measuring impact.
          There are many goals for impact evaluation beyond a simple estimation of whether a learning product
“works.” From a learning science perspective, the information from a richer investigation of the impact of
different features of the product and their interaction can potentially advance our understanding of learning.
From a product perspective, obtaining information about how to improve the product is nearly as important as
understanding whether it works as currently created. However, basic impact evaluation studies compare a group
using the new product with a group not using it on a measure of achievement; this yields a result that indicates
only whether students using the new product scored significantly better than the control group, or not, and
averages over many different students, teachers, classrooms, and schools whose characteristics may influence
the effectiveness of the product. While statistical techniques can help understand how effects may differ for
specific types of learners, instructors, or institutions, many other issues that may influence the impact of a
product are not easily quantified in a single variable. In order to address the challenge of learning more from
impact evaluations, we have developed a number of activities that work in combination with the traditional
impact evaluation procedure. For example, we conduct an implementation study prior to the impact study to
understand the variability in implementation that should be captured in the impact study. We have also
expanded our use of analytics using large-scale data from learning platforms to understand patterns of usage and
their relationships to engagement and persistence.
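          The sketch below illustrates the general idea of moving beyond a single average effect toward the “for whom?” question, using an interaction term between product use and a learner characteristic. The variables and data are invented for illustration; real evaluations involve many more covariates, nesting of students within classes and institutions, and validated outcome measures.

```python
# Illustrative sketch only: estimating whether a product's effect varies with
# a learner characteristic, rather than reporting one average effect.
# Variable names and data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical study data: treatment indicator, prior achievement, and outcome
data = pd.DataFrame({
    "used_product": [1, 0, 1, 0, 1, 0, 1, 0],
    "prior_score":  [0.3, 0.4, 0.8, 0.7, 0.5, 0.6, 0.9, 0.2],
    "outcome":      [0.62, 0.48, 0.91, 0.74, 0.70, 0.60, 0.95, 0.35],
})

# The interaction term asks the "for whom?" question: does the estimated
# benefit of the product depend on prior achievement?
model = smf.ols("outcome ~ used_product * prior_score", data=data).fit()
print(model.summary())
```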

Adoption
The final step, from an industry perspective, is to translate positive impact evaluation results into market success
via increased adoptions. This step is outside the expertise of most research scientists, while most people with
marketing and sales expertise have less understanding of learning science and research; as such, this step
requires close collaboration between people who speak different languages.
         We have started a program to make evidence a core component of conversations between Pearson sales
representatives and instructors deciding to use our content and digital services. For example, we have a person
whose role is fully dedicated to the effort of translating impact evaluation results into methods for selling with
evidence, and have developed a number of activities that help sales staff talk about evidence. These include
scripted statements we have written for salespeople in the field to use evidence from our research in response to
concerns raised by instructors. This has allowed our sales teams to learn about evidence we have been able to
collect, and to feel confident talking about how our product can address particular classroom problems that
instructors and administrators may have. In addition, we have written stories of individual instructors’ success
that are supported by impact evaluation results. Our experience suggests that instructors’ decisions about which products to
use are often influenced by what their contacts at similar schools have reported as successful. The stories
we have (from real instructors) allow us to articulate how the product has been implemented with positive
results at different types of schools with different implementation models. These tools have helped us move “the
last mile” in the chain to making sales based on evidence.
          We have observed the power of this approach, moving rapidly from a successful pilot program with a
few members of the sales team responsible for developmental math to deployment with all our largest
sales forces in the US higher education courseware market. Over a five-month period in early 2018, this
approach contributed to over $15m in adoptions, as reported by the sales force themselves. As the program
continues to grow, we are building in opportunities to understand how to maximize the potential impact of our
“evidence-based selling” approach, both by looking at the data generated in our customer relationship
management tools, as well as through regular communication with the sales teams.

Summary
It is clear that integrating learning science more fully into the design and experience of commercial digital
learning technologies is challenging, but the potential rewards, in terms of increased impact on learner
outcomes, are great. We feel we have made great progress since the efficacy mission began
in 2013, in terms of improving how we use learning science research to inform internal decision making.
However, in industry we must show a relationship between learning science and economic impact. We have
found that laying out our envisioned path from learning science research to ultimate impact on sales has
clarified our assumptions about how the impact occurs, as well as allowed us to test those assumptions. At each
stage we can examine our challenges and identify whether we are successful with that stage. Did we
successfully get the research into the designed and developed product? Is the product being implemented as
intended? Are sales conversations that include evidence of impact more successful? While each step may seem
obvious, we have found that their explication has also helped in our internal communication. Having a clear and
consistent approach that describes why using learning science research will ultimately be better for students and
better for business has helped us be more effective. In particular, our educational technologies are created with
input from numerous stakeholders, and our model has helped us align our efforts in ways that make it more
likely our recommendations are adopted.

References
Atkinson, R. K., Derry, S. J., Renkl, A., & Wortham, D. (2000). Learning from examples: Instructional
        principles from the worked examples research. Review of Educational Research, 70(2), 181-214.
Chi, M. T. (2009). Active‐constructive‐interactive: A conceptual framework for differentiating learning
        activities. Topics in Cognitive Science, 1(1), 73-105.
Garet, M. S., Porter, A. C., Desimone, L., Birman, B. F., & Yoon, K. S. (2001). What makes professional
        development effective? Results from a national sample of teachers. American Educational Research
        Journal, 38(4), 915-945.
Kalyuga, S., Chandler, P., Tuovinen, J., & Sweller, J. (2001). When problem solving is superior to worked
        examples. Journal of Educational Psychology, 93(3), 579-588.
Koedinger, K. R., Booth, J. L., & Klahr, D. (2013). Instructional complexity and the science to constrain it.
        Science, 342, 935-937.
Schworm, S., & Renkl, A. (2006). Computer-supported example-based learning: When instructional
        explanations reduce self-explanations. Computers & Education, 46, 426-445.


Endnotes
(1) See https://www.pearson.com/corporate/efficacy-and-research.html
(2) See https://www.pearson.com/corporate/efficacy-and-research/our-methods/learning-design-principles.html.
(3) See https://www.pearson.com/content/dam/one-dot-com/one-dot-com/global/Files/efficacy-and-
    research/methods/Efficacy-Framework-Slide.pdf