=Paper=
{{Paper
|id=Vol-2676/paper1
|storemode=property
|title=Card-based approach to coordinate Learning Analytics policymaking and implementation at scale
|pdfUrl=https://ceur-ws.org/Vol-2676/paper1.pdf
|volume=Vol-2676
|authors=Tom Broos,Martijn Millecamp,Katrien Verbert,Tinne De Laet
|dblpUrl=https://dblp.org/rec/conf/ectel/BroosMVL20
}}
==Card-based approach to coordinate Learning Analytics policymaking and implementation at scale==
Card-based approach to coordinate Learning Analytics policy making and implementation at scale

Tom Broos [0000-0002-4139-2608], Martijn Millecamp [0000-0002-5542-0067], Katrien Verbert [0000-0001-6699-7710], and Tinne De Laet [0000-0003-0624-3305]

KU Leuven, Leuven, Belgium
firstname.lastname@kuleuven.be

Abstract. Over a decade of work in the domain of learning analytics (LA) is available to inspire higher education institutions. While several have launched smaller experiments, examples of LA implemented at scale remain scarce. In recent years, several guidelines and frameworks have been published to support institutions in the development of their LA policies. Likewise, multiple implementation project methodologies are available. However, little guidance is provided on how policy making and implementation for LA projects at scale can be matched. In this work, we start from a recently published approach to coordinate a concrete implementation timeline with policy making efforts supported by the SHEILA framework. We extend this method, originally aimed at LA experts, to larger audiences of educational and institutional professionals with less LA experience. This is accommodated by a reusable workshop format using a set of cards and worksheets. In this paper we share the methodology and particular approach of the workshop and describe our findings from two runs of the workshop: one in the context of an LA scale-up project and one focusing on a broader scope of innovative TEL projects. Our main observation is that the workshop format fosters constructive discussion and improves awareness and buy-in, while also delivering valuable input for the project team. With this paper, we share a pragmatic approach and instrument to engage stakeholders in the realization of projects aiming at educational technology innovation at institutional scale.

Keywords: Learning Analytics, Policy, Scalability, Stakeholders

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1 Introduction

Learning Analytics was defined by Duval & Verbert (2012) [1] as "collecting traces that learners leave behind and using those traces to improve learning". Despite the maturing of the Learning Analytics research field and the reports around many experimental Learning Analytics projects, at-scale implementations of Learning Analytics remain scarce. The review of Viberg et al. [2] indicated that 94% of the papers focusing on Learning Analytics in higher education propose solutions that do not scale. The lack of scientific reports focusing on at-scale implementations of learning analytics [3,4] is believed to originate from particular difficulties, such as resistance to change, that hinder scaling. Researchers have developed models to increase their understanding of the acceptance of technology, such as the Technology Acceptance Model [5], and of academic resistance to change, such as the Academic Resistance Models [6]. For Learning Analytics in particular, Tsai et al. [7] identified different barriers to adoption at institutional scale: stakeholder engagement and buy-in, weak pedagogical grounding, resource demand, and ethics and privacy. At the same time, guidelines and frameworks
have been published to support higher education institutions in the development of Learning Analytics policies, which are believed to be essential to realize sustainable at-scale Learning Analytics implementations and to help address the former challenges. The best-known policy framework, and the one that this paper builds on, is the SHEILA framework [8,9]. SHEILA proposes an iterative process to develop evidence-based policy through active engagement with relevant stakeholders. The process builds on 181 items, believed to be essential to realize Learning Analytics at institutional scale, that were composed after different consultations with a wide variety of stakeholders. The third iteration of SHEILA [10] resulted in an interactive web implementation where the SHEILA items can be structured and ordered, forming the basis of an institutional policy for Learning Analytics. Broos et al. [11] recently criticized that policy building frameworks such as SHEILA lack a connection with actual Learning Analytics implementation. At the same time, they show, using two case studies, that cross-fertilization between policy making and actual Learning Analytics implementation is essential to realize Learning Analytics at scale [11]. Based on these experiences, they propose a coordination model that provides additional guidance for Learning Analytics implementation at scale, by complementing existing Learning Analytics policy frameworks with an approach to orchestrate the interaction between policy making and implementation [11].

In this paper, we present a workshop format called WETS (Workshop for obtaining Educational Technology at Scale) that can serve as a particular instrument to support the realization of Learning Analytics at scale through the engagement of stakeholders. Hereby, it primarily addresses the first barrier, stakeholder engagement and buy-in, that Tsai et al. identified for Learning Analytics at scale [7]. WETS builds on the coordination model [11] and the SHEILA framework [8,9] to let stakeholders collaboratively develop a concrete implementation timeline with policy making efforts that can guide a particular educational technology project. As SHEILA, on which WETS builds, includes items relating to pedagogical grounding, resource demand, and ethics and privacy, WETS also helps to address the three remaining barriers for Learning Analytics at institutional scale identified by Tsai et al. [7]. In Section 2 we share the methodology and the approach of the workshop. Section 3 describes our findings from two runs of the workshop: one in the context of a Learning Analytics scale-up project and one focusing on a broader scope of innovative educational technology projects for feedback and permanent evaluation. Finally, Section 4 provides a short discussion and states the most important conclusions.

2 Card-based workshop format

2.1 Introduction

Active engagement and consulting of stakeholders is considered key to ensure acceptance of institution-wide technological innovations. Based on the coordination model for Learning Analytics policy making and implementation at scale, proposed by Broos et al. [11], and the SHEILA framework for building Learning Analytics policy [8,9],
we developed a workshop format that allows us to actively engage a wide variety of stakeholders, including experts in educational technology, end-users (students, teachers, and advisors), IT staff, policy makers, … Participants can therefore have a wide variety of experience with Learning Analytics. We name this workshop format WETS: Workshop for obtaining Educational Technology at Scale. The goal of WETS is threefold:

1. to inform stakeholders, ranging from policy makers to practitioners and involving both people experienced and less experienced in educational technology, about the educational technology project,
2. to create awareness around the typical steps an educational technology project has to take, and
3. to collect feedback from stakeholders regarding both the priorities and timeline for the particular project, and additional stakeholders that could or should be involved.

In WETS the stakeholders jointly work towards a common goal: a timeline of priorities that must be handled to realize the institution-wide implementation of a particular Learning Analytics innovation. To this end WETS builds on the coordination model [11] and the priorities for Learning Analytics policy building of SHEILA [8,9]. The next paragraphs provide the required background regarding both.

Fig. 1. Coordination model as proposed by Broos et al. [11]

The coordination model of Broos et al. [11] introduced a specific implementation timeline to provide support in the realization of institutional Learning Analytics adoption, and in particular to provide structure to the interplay between policy making and actual Learning Analytics implementation, which in the coordination model are advised to happen concurrently (Fig. 1). As WETS strongly builds on the four-phased implementation timeline of the coordination model, we summarize the four phases here:

"Initialisation phase: in the first phase it is important to create a common understanding of which problems will be targeted and what will be the basic needs for the project. [..] This phase may include a consideration of the current state of the art, as a source for inspiration and potential reuse.

Prototyping phase: When prototyping, one or more artefacts are being produced, not with the intention of finality, but as an instrument to support design activities, discussion and improvement through iteration. […]

Piloting phase: This phase aims at testing the solution design in a natural setting. It involves the use of real data that will be accessed by real users in a context that is representative for the intended end goal of the solution. It differs from the subsequent scaling phase in that only a subset of the intended user population is targeted. [..]

Scaling phase: This last phase starts from what was learned from the previous phases, especially from the piloting phase, to re-implement, or at least re-deploy, the envisioned solution at scale. Here the full population is targeted: all intended courses, programs, and faculties. Several challenges related to the scalability of the solution will need to be tackled. This includes, but is not limited to, technical issues, e.g., the requirement for system resilience, and maintenance. […]" [11]

The SHEILA framework is a process model [4] that presents Learning Analytics policy building as an iterative and continuous process.
SHEILA provides a set of 181 concrete items (49 action points, 69 challenges, and 63 policy questions), which are believed to support an iterative development of institutional policies and strategic planning for Learning Analytics. The SHEILA items are structured along six dimensions: map political context, identify key stakeholders, identify desired behaviour changes, develop engagement strategy, analyse internal capacity to effect change, and establish monitoring and learning frameworks. As criticized by Broos et al. [11], however, SHEILA does not provide guidelines on how to structure the iterations inherent to the policy building.

2.2 Tangible artefacts to build a timeline of priorities

WETS challenges the stakeholders to jointly structure priorities, consisting of a subset of the 181 SHEILA items, along a timeline consisting of the four implementation phases. The format of the workshop consists of structured discussions around tangible representations of both the four implementation phases and the SHEILA items. Tangible objects or artefacts are believed to facilitate and enhance the discussion in the workshop. The physical representation of the four implementation phases consists of four large A3 papers, each containing the title of a phase and a short description. The physical representation of the 181 SHEILA items is a set of cards: each card represents one SHEILA item (Fig. 2). Color coding is used to represent the six SHEILA dimensions, and an icon on the background of each card indicates whether the item is an action point, challenge, or policy question.

The final product of the workshop is one timeline with, for each of the four implementation phases, the most important SHEILA items to be handled. All workshop activities, elaborated in the next section, work towards that final goal.

Fig. 2. Visualization of how the 181 physical cards, one for each SHEILA item, were composed. The color represents the dimension of the item (in the example card, "Analyse Internal Capacity"); the icon on the background represents the type of item (in the example card, "Challenge").
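To make the structure of the card set concrete, the sketch below shows one possible way to represent a SHEILA item as printable card data. Only the six dimensions, the three item types, the card elements (colour per dimension, background icon per type), and the item number later added to the cards (see Section 3.1) come from the paper; the class name, field names, colour codes, and the example item number are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class ItemType(Enum):
    ACTION = "Action point"        # 49 action points
    CHALLENGE = "Challenge"        # 69 challenges
    POLICY = "Policy question"     # 63 policy questions

# The six SHEILA dimensions; the colour values are illustrative assumptions,
# the paper only states that every dimension has its own colour.
DIMENSION_COLOURS = {
    "Map political context": "#1f77b4",
    "Identify key stakeholders": "#ff7f0e",
    "Identify desired behaviour changes": "#2ca02c",
    "Develop engagement strategy": "#d62728",
    "Analyse internal capacity to effect change": "#9467bd",
    "Establish monitoring and learning frameworks": "#8c564b",
}

@dataclass
class SheilaCard:
    number: int            # item number printed on the card (added after the first run)
    dimension: str         # one of the six SHEILA dimensions
    item_type: ItemType    # action point, challenge, or policy question
    text: str              # the item text, possibly translated or simplified

    def print_data(self) -> dict:
        """Data needed to lay out one physical card."""
        return {
            "number": self.number,
            "colour": DIMENSION_COLOURS[self.dimension],  # card colour = dimension
            "icon": self.item_type.value,                 # background icon = item type
            "text": self.text,
        }

# Example using one real item from Table 1; the item number is made up.
card = SheilaCard(42, "Analyse internal capacity to effect change",
                  ItemType.CHALLENGE, "Institution-wide buy-in is hard to reach.")
print(card.print_data())
```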
2.3 Workshop structure and format

The basic workshop structure, assuming a duration of three hours, consists of four parts:

1. plenary introduction (±25 minutes), where the goal of the workshop is introduced, the particular educational technology project is presented, and the workshop format is clarified;
2. group work in jigsaw format (±1 hour 50 minutes), where the participants work in smaller groups to structure a subset of the cards along the timeline;
3. plenary discussion (±40 minutes, Fig. 6), where the results of the different groups are collected on one joint timeline and discussed; and
4. closing (±5 minutes).

The group work and plenary discussion are detailed below. The format is based on a basic number X, accommodating 2X² participants, who are engaged in X jigsaw rounds. Examples and figures are given with X=3. As elaborated in Section 2.4, changing the basic number X is one of the ways to up- or downscale the WETS format flexibly, depending on the number of participants or the available workshop space.

Fig. 3. Set of three tables "A", "B", and "C" with, on each table, four papers representing the timeline with the four implementation phases. At each table three participants take a seat (green chairs). On the chairs, the participant cards are visualized. These cards contain three letters indicating to which subsequent tables the participants have to go in the three jigsaw rounds.

The jigsaw format consists of X jigsaw rounds, structured around sets of X tables (in the example: A, B, and C) (Fig. 3). Each set of X tables iterates through X subsequent jigsaw rounds; in each round, every table structures and discusses its own share of 1/(2X) of the SHEILA items. Therefore, if the goal is to handle all SHEILA items, at least two such sets of X tables should be organized. For each particular workshop, depending on the experience and job description of the participants and their involvement in the educational technology project, the different roles and, if needed, experience levels are defined. As the goal of the workshop is to trigger discussion between stakeholder groups, the groups are formed to be as heterogeneous as possible, not only in the first round but also in the subsequent rounds. When arriving, all participants receive a pre-printed card with their name and a set of three letters, representing the sequence of tables they should go to in the X jigsaw rounds. The individual table sequences are defined to ensure that individuals are regrouped in each round. One experienced participant (we advise using people familiar with the project and the workshop format) stays at one table during all X jigsaw rounds and is responsible for the coordination. Moreover, we advise to start with a diverse set of roles at each table. A sketch of how such table sequences can be generated is shown below.
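The assignment of table letters to the participant cards can be prepared in advance with a simple rotation scheme. The sketch below is one possible way to generate such sequences; the function name, labels, and modular rotation rule are our own illustration, not part of the published workshop material. It reproduces the behaviour described above: one moderator per table stays put, and the remaining participants are regrouped every round.

```python
from string import ascii_uppercase

def jigsaw_sequences(x: int) -> dict[str, list[str]]:
    """Generate table sequences for one set of x tables with x seats each."""
    tables = ascii_uppercase[:x]          # "A", "B", "C" for x = 3
    sequences = {}
    for t in range(x):                    # starting table
        for s in range(x):                # seat at that table; seat 0 = moderator
            name = f"participant_{tables[t]}{s}"   # placeholder label for the name card
            # In round r the participant moves to table (t + s*r) mod x.
            # Seat 0 never moves (the moderator stays for all rounds); for prime x
            # (e.g., 3 or 5) every other participant visits each table exactly once
            # and no two non-moderators share a table twice, so groups are
            # reshuffled in every round, as the format requires.
            sequences[name] = [tables[(t + s * r) % x] for r in range(x)]
    return sequences

if __name__ == "__main__":
    for person, seq in jigsaw_sequences(3).items():
        print(person, "->", "-".join(seq))
    # One set of x tables seats x*x participants; the basic format uses two sets
    # (2*x**2 participants), and N parallel series scale this further (Section 2.4).
```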
In the first, most elaborate round (50 minutes) each table gets the instruction to place each card assigned to their table on one of the four phases, according to the phase during which most attention should be paid to it. If they strongly believe the card belongs to multiple phases or if they have additional remarks regarding an item, they can add an amendment (post-it) to the item. In the final five minutes of the first round, all participants are requested to individually add one (red) priority sticker to each of the five items at their table they believe to be most important. Fig. 4 presents the result of the first round at one table. The placement of the cards on the timeline does not change in the next rounds.

Fig. 4. Result after the first jigsaw round at one of the tables.

For the second round (30 minutes) participants move to their next table, indicated by the second letter on their name cards. The moderator who stayed at the table introduces the now fixed timeline obtained in the previous round and initiates discussion around this timeline and its items. Amendments (post-its) can be added to a card if the participants do not agree with the card's placement or if they have other remarks concerning the item. The second round again concludes with an individual 5-minute prioritization. The next X-2 rounds are a repetition of the second round, every time at new tables.

Fig. 5. Example of output of the workshop for the initialization phase.

At the end of these X rounds all participants, except the moderators at each table, have attended a discussion at each of the X tables in one series.

The plenary discussion, which starts from the timelines at the tables, is organized in three steps. First, each of the participants returns to their initial table and the moderator is asked to introduce the final result, focusing on the cards that received the highest number of priority votes. Next, one participant per table brings the 10 highest-priority items to the plenary timeline (Fig. 5). The workshop moderator initiates a discussion around the results and asks the participants to clarify particular choices and to comment (Fig. 6). Finally, each of the participants is requested to add names of experts or stakeholders that could or should be consulted for particular prioritised items.

Fig. 6. Picture illustrating the plenary discussion round.

2.4 Up- or downscaling of the workshop format

The proposed workshop format can be up- or downscaled to better accommodate the actual number of participants and the available workshop space:

- The number of participants (X) at one table can be adapted. We advise to aim for between 3 and 6 participants. Together with the number of participants at one table, the number of tables and the number of jigsaw rounds change accordingly. One could, for example, accommodate five people at each table, divide the SHEILA items over five tables, and organize five jigsaw rounds.
- When accommodating many participants in one workshop, one could also organize N parallel series, hereby allowing for N·X² participants.

3 Results

A webpage [12] containing instructions for organizing WETS workshops and the material needed to this end was constructed. Two WETS workshops were organized; their context, the course of the workshop, and the results are discussed below. A planned third round of WETS involving over 80 participants was postponed to a later date due to the COVID-19 pandemic.

3.1 Learning Analytics scale-up project

Context

KU Leuven, a general university in Flanders, Belgium, is currently up-scaling three Learning Analytics dashboards that were developed in two European projects: (i) LISSA, a dashboard supporting the dialogue between student advisors and students [13,14], (ii) REX, a student-facing learning dashboard to trigger reflection around academic achievement [15], and (iii) LASSI, a student-facing dashboard to trigger reflection around learning and study skills [16]. After a successful experience within 26 programs, KU Leuven started a scale-up project [17] to institutionalize these dashboards. The goal of the scale-up project is to embed these dashboards within the existing university systems and processes. A project core team was assembled consisting of researchers, students, teachers, ICT staff, student advisors, and policy makers.

Actual workshop

To prioritize the activities of the KU Leuven scale-up project, a workshop was organized with the project's multi-disciplinary core team, including policy makers (a delegate of the vice-rector of education and the head of the study advice service of KU Leuven), IT staff, data managers, student advisors (practitioners), researchers, and a student. This was also the first WETS workshop organized and was therefore used to gather experience in order to improve the workshop format. Eleven people attended the workshop. The workshop was organized with a setup of X=3 and half of the SHEILA items.
Results

The 11 participants succeeded in identifying 23 high-priority SHEILA items and placing them on the implementation timeline (Table 1). They placed ten items in the initialization phase, four each in the prototyping and piloting phases, and five in the scaling phase. All SHEILA dimensions were present, in order of prevalence: develop engagement strategy (8 items), analyse internal capacity to effect change (6 items), identify key stakeholders (5 items), establish monitoring and learning frameworks (2 items), identify desired behaviour changes (1 item), and map political context (1 item).

Table 1. Results of WETS for the KU Leuven scale-up project. Each row lists the type, the dimension, and the item. The type of the item is indicated with the following abbreviation: C(hallenge), A(ction), P(olicy question).

Initialization Phase
- C | Identify key stakeholders | Define ownership and responsibilities among diverse professional groups within the university.
- A | Map political context | Identify internal and external drivers for learning analytics (e.g., problems to solve or areas to enhance).
- P | Develop engagement strategy | Will learning analytics be used as a management tool to monitor students or staff?
- A | Identify key stakeholders | Identify primary users of learning analytics (e.g., students, teaching staff, and senior managers).
- C | Develop engagement strategy | Feedback is provided without proper support, which leaves students in anxiety or complacency, thereby demotivating them.
- P | Establish monitoring and learning frameworks | What defines success or failure? How will success be measured? What are success indicators?
- P | Identify desired behaviour changes | What positive changes will learning analytics bring to the current situation (e.g., learning and teaching landscapes)?
- P | Identify key stakeholders | Will learning analytics exclude certain groups of students? Will there be mechanisms to address inequality?
- P | Analyse internal capacity to effect change | Will the design of selected learning analytics tools address teaching and learning needs?
- C | Analyse internal capacity to effect change | Institution-wide buy-in is hard to reach.

Prototyping Phase
- P | Develop engagement strategy | How will students' responsibility for learning be highlighted and considered in the design and implementation of learning analytics?
- A | Identify key stakeholders | Identify required expertise (e.g., learning analytics expertise, IT expertise, statistical expertise, educational expertise, psychological expertise).
- P | Develop engagement strategy | How will the results of analytics be communicated in a way that motivates learning?
- A | Develop engagement strategy | Decide forms of interventions (e.g., automatic systems, personal contacts, learning resources).

Piloting Phase
- C | Develop engagement strategy | Peer comparison may demotivate students.
- A | Analyse internal capacity to effect change | Evaluate technological infrastructure.
- P | Identify key stakeholders | Whose data will be collected?
- P | Establish monitoring and learning frameworks | Are there any measures to ensure that students are equipped with sufficient knowledge to make opt-in/out decisions?

Scaling Phase
- C | Develop engagement strategy | Learning analytics is used as a metric to judge students and teachers rather than evidence to support learning and teaching.
- A | Develop engagement strategy | Raise awareness and understanding of learning analytics among teaching staff and students through publicity and meetings/workshops/conferences.
- P | Analyse internal capacity to effect change | What communication channels or feedback mechanisms will be in place? How will the implementation address the problem of time-poor teaching staff?
- A | Analyse internal capacity to effect change | Evaluate institutional culture (e.g., trust in data and openness to changes and innovation).
- A | Analyse internal capacity to effect change | Establish indicators of data quality and system efficacy.

The participants believed that the workshop format was well-suited to realize the goals of the workshop. They suggested that the moderator at each table should be someone who is well-informed about the project and well-briefed about the workshop format and goals. This moderator should focus on moderating the discussion, managing time, clarifying in case of doubt, and repeating the practical arrangements to participants. The workshop participants suggested translating the SHEILA items to the native language of the participants (Dutch) for subsequent workshops and asked to simplify overly complex language wherever possible. To ease the processing of the results it was decided to add the item number to the cards.

3.2 Educational Technology SEED projects

Context

After the successful first run of WETS we were invited to organize a second run for project teams working on a KU Leuven SEED project. SEED projects are part of the KU Leuven approach to stimulate innovation around educational technology [17]. The projects involved in the workshop were the eight projects selected in the 2019 call focusing on feedback and permanent evaluation. The projects had been running for eight months at the time of the workshop.

Actual workshop

The workshop was adapted to a shorter two-hour format by removing the final jigsaw round. Based on the feedback from the previous workshop, all SHEILA items were translated to Dutch and the language was simplified wherever possible. Fifteen project collaborators participated in the workshop. It was clarified that although some of the items referred to Learning Analytics, participants should interpret this broadly and think about educational technology for feedback.

Results

The participants found the workshop format useful to get a picture of the many things educational technology projects should consider. They recognized many of the items from their experiences in the projects, including the hard experience of realizing that something was omitted or handled too late in the project. Therefore, the attendees stated that such a workshop should be organized earlier in a project and that WETS should be part of the support package developed by the university's support teams. They also believed that WETS could be useful to engage and consult their stakeholders. On the negative side, they indicated that the items were too strongly focused on Learning Analytics, which is not surprising as the SHEILA items were designed with this domain in mind. Therefore, in the future the workshop could be organized with items adapted to a broader range of educational technologies.
4 Discussion and conclusion

In this paper we introduced a reusable hands-on workshop approach, WETS, to support higher education institutions in the realization of innovative educational technology projects at scale. The WETS approach builds on the recently proposed coordination model [11], coordinating an implementation timeline with policy making efforts, and on the SHEILA framework, providing 181 particular items to discuss to realize Learning Analytics at scale. WETS is conceptualized to engage a large group of stakeholders with diverse backgrounds, including both Learning Analytics experts and larger audiences of educational and institutional professionals with less Learning Analytics experience. The format relies on a set of cards as physical artefacts to enhance discussion during the collaborative construction of a timeline of priorities for realizing a particular educational technology project at scale. Two runs of the workshop were organized: one in the context of a Learning Analytics scale-up project and one focusing on a broader scope of innovative educational technology projects. Based on the first observations, we found that WETS fosters constructive discussion and improves awareness and buy-in. The output of WETS, a timeline of priorities, is valuable input for the project team. Further workshops will be organized to further improve and evaluate the WETS format, as the current evaluation is still limited to observations made during the two try-outs. Future work should, besides setting up a thorough evaluation of the approach, study the impact of a WETS workshop on projects aiming at institutional adoption of educational technology. Additionally, if WETS is used for educational technology projects not focusing on learning analytics, more general items should be developed that can be included in the WETS approach. Finally, the outcomes of WETS workshops should be analysed in more detail to look for particular trends that exist over different institutional projects and in different contexts.

The output of the workshops also acts as additional supporting evidence for the finding of Broos et al. [11] that policy development can be distributed over time and could therefore partly or completely coincide with implementation.

References

1. Duval, E. (2012). Learning analytics and educational data mining. Erik Duval's weblog, 30 January.
2. Viberg, O., Hatakka, M., Bälter, O., and Mavroudi, A. (2018). The current landscape of learning analytics in higher education. Computers in Human Behavior, 89:98–110.
3. Ferguson, R., Clow, D., Macfadyen, L., Essa, A., Dawson, S., and Alexander, S. (2014). Setting learning analytics in context: Overcoming the barriers to large-scale adoption. In Proceedings of the Fourth International Conference on Learning Analytics and Knowledge, pages 251–253. ACM.
4. Dawson, S., Poquet, O., Colvin, C., Rogers, T., Pardo, A., and Gasevic, D. (2018). Rethinking learning analytics adoption through complexity leadership theory. In Proceedings of the 8th International Conference on Learning Analytics and Knowledge, pages 236–244. ACM.
5. Davis, F. D., Bagozzi, R. P., and Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models.
Management Science, 35, 982–1003. URL: http://www.jstor.org/stable/2632151.
6. Piderit, S. K. (2000). Rethinking resistance and recognizing ambivalence: A multidimensional view of attitudes toward an organizational change. The Academy of Management Review, 25, 783–794. doi:10.2307/259206.
7. Fernandez, A. R., Kloos, C. D., Scheffel, M., Jivet, I., Drachsler, H., et al. (2018). SHEILA: Support higher education to integrate learning analytics. Brussels, Belgium: European Commission.
8. Tsai, Y.-S., Rates, D., Moreno-Marcos, P. M., Jivet, I., Scheffel, M., Drachsler, H., Delgado Kloos, C., and Gašević, D. (2020). Learning analytics in European higher education - Trends and barriers. Computers & Education, 155. https://doi.org/10.1016/j.compedu.2020.103933.
9. Tsai, Y.-S., Moreno-Marcos, P. M., Jivet, I., Scheffel, M., Tammets, K., Kollom, K., and Gasevic, D. (2018). The SHEILA framework: Informing institutional strategies and policy processes of learning analytics. Journal of Learning Analytics, 5(3):5–20.
10. SHEILA framework, third iteration, https://sheilaproject.eu/sheila-framework/create-your-framework/
11. Broos, T., Hilliger, I., Pérez-Sanagustín, M., Htun, N.-N., Millecamp, M., Pesántez-Cabrera, P., Solano-Quinde, L., Siguenza-Guzman, L., Zuñiga-Prieto, M., Verbert, K., and De Laet, T. (2020). Coordinating learning analytics policymaking and implementation at scale. British Journal of Educational Technology, 51:938–954. doi:10.1111/bjet.12934.
12. WETS homepage, https://set.kuleuven.be/LESEC/projecten/workshop-for-obtaining-educational-technology-at-scale-wets/
13. Charleer, S., Vande Moere, A., Klerkx, J., Verbert, K., and De Laet, T. (2018). Learning analytics dashboards to support adviser-student dialogue. IEEE Transactions on Learning Technologies, 11(3):389–399.
14. Millecamp, M., Gutierrez, F., Charleer, S., Verbert, K., and De Laet, T. (2018). A qualitative evaluation of a learning dashboard to support advisor-student dialogues. In Proceedings of the 8th International Conference on Learning Analytics and Knowledge, pages 56–60. ACM.
15. Broos, T., Verbert, K., Langie, G., Van Soom, C., and De Laet, T. (2017). Small data as a conversation starter for learning analytics: exam results dashboard for first-year students in higher education. Journal of Research in Innovative Teaching & Learning, 10(2):94–106.
16. Broos, T., Peeters, L., Verbert, K., Van Soom, C., Langie, G., and De Laet, T. (2017). Dashboard for actionable feedback on learning skills: scalability and usefulness. In International Conference on Learning and Collaboration Technologies, pages 229–241. Springer.
17. De Laet, T., Broos, T., Wullaert, I., Cosemans, A., and Craenen, K. (2020). Fertile breeding ground for learning analytics at scale: the KU Leuven approach. In Companion Proceedings of the 2020 Learning Analytics and Knowledge Conference.