Learning Analytics and Open Learning Modelling for Professional Competence Development of Firefighters and Future Healthcare Leaders

Cecilie Hansen, Technology for Practice - Uni Research Health, cecilie.hansen@uni.no
Grete Netteland, Department of Social Sciences, Sogn and Fjordane University College, grete.netteland@hisf.no
Barbara Wasson, Centre for the Science of Learning and Technology, InfoMedia UiB, barbara.wasson@uib.no

Abstract: One challenge faced by workplaces is having enough relevant data to support data-driven decision making related to the further education/training of their employees, to identify competence gaps within their work force, and to contribute to organisational learning. We propose that data-driven decision-making can be supported by the combination of competence mapping, collection of assessment and performance data from various sources, learning analytics of the data, and visualisation of the results in an open learner model (OLM). In the iComPAss project we study this challenge through two learning scenarios: health workers studying for a Master's in health leadership, and the on-the-job training of firefighters. The project builds on and extends ideas from the former EU project Adapt-It, and on a learning analytics approach and an OLM developed in the EU NEXT-TELL project. This paper elaborates on our approach.

Keywords: Learning Analytics; Competence development; 4C/ID; Open learner model; Visualization

Introduction

In order to ensure active and participating citizens and a competitive workforce, modern societies need a continuous focus on education and training. The challenges are many. One challenge for educational and training organisations is to maintain an overview of their competence needs and of the competence development of individual learners and groups of learners (Bull et al., 2013) in order for decision-makers (e.g., students, employees, instructors, leaders)
to make informed decisions about learning, teaching, management, and organisational development. To address these challenges, the iComPAss project draws on competence modelling, assessment and performance evaluation, learning analytics, and open learner models. In this paper we describe our approach to professional competence development, which draws on these methods and models.

Background needs

The iComPAss project arises from the needs of two professional education/training situations that want to raise the quality of their education and training, and to improve decision-making related to their students/employees and their organisations. Scenario 1 is a Master programme in Organization and Management (Health and Welfare line) at a University College. Scenario 2 is the education and training of firefighters, organised by a Fire and Rescue Service (FRS). Both scenarios are situated in Norway.

The Master programme (Scenario 1) combines formal learning with on-the-job competence development. The professions for which it educates its students demand highly qualified professionals. During the part-time study the students attend lectures, write a master thesis, and draw on their professional background (i.e., their current workplace) in order to complete their education. On completion, the students are qualified to assume 1) a leadership position within their field of specialisation, 2) administrative positions within the healthcare or educational sectors, or 3) independent positions with leadership responsibilities. During the four-year study programme, most students combine the part-time study with family obligations and practical health care work. This combination creates logistical challenges and a large need for individual adjustment of the study programme.
Furthermore, in the latest NOKUT Study Barometer (Lid, 2012), which examines student opinions about many aspects of their studies, two issues received low scores from the students and need to be improved: 1) the quality of the teachers' feedback on the students' learning tasks, and 2) student involvement in their studies. In addition, an internal evaluation of the programme detected a need to review the competences that students are to develop during their studies, and to ensure that these are relevant for their future work practice. Specifically, the defined goals of the education should be more closely connected to future professional practice. HSF needs to rework the competence profile of its programme, needs a way to measure (i.e., assess) and keep track of the competence development of individual students in order to improve the individual feedback to the students, and needs a way of engaging students in self-reflection on their own competence development.

Copyright © 2016 for this paper by its authors. Copying permitted for private and academic purposes.

In Norway, potential firefighters are recruited from vocational schools with a focus on careers such as carpenter or mason (Scenario 2). The Fire and Rescue Service (FRS) is then responsible for the education and training of these potential firefighters, and is further required to ensure that the firefighters maintain a high level of skill and meet demanding fitness standards. FRS personnel are emergency response professionals and specialists in an enormously wide range of tasks, from the rescue and protection of people and property during fires, smoke, and accidents, to giving courses and informing the public about fire safety. For example, firefighters have to use a variety of techniques to control or suppress fires, but they may also perform initial medical treatment related to fires and accidents.
The FRS has a high demand for employees with competence in several fields of rescue and fire prevention, and needs an overview of these competences at the individual employee level, at the team level, and at the organisational level. Another challenge within the FRS is how to combine on-the-job performance of skills with performance during formal training as evidence of competence development. Although the two cases at first glance seem very different, they have many similarities. Both units in which the cases are anchored need to address quality issues, to focus on student/employee learning and work, and to ensure that competence goals within their education and training are tightly tied to their professional practices. To be more specific, our first workshop with the FRS revealed that the need for an overview of the organisation's competence exists on three levels:

1. Organisation: The organisation's competence needs related to current manpower. The leadership needs an overview of the entire expertise situation of their manpower. For example, does their aggregated competence profile match the national rules and regulations for a fire corps? Another example is that the leader needs to plan for competence development within the corps a few years before a veteran firefighter retires (e.g., is there a firefighter who, with a little training, could acquire new competence to cover that of a retiring firefighter?).

2. Team: Competence within a team. Do the team members (each firefighter belongs to a specific team) have the competence and updated certificates required for the team as a collective (for example, if there is a requirement for two smoke divers, does the team have this competence)? Does the team have the competence required when a specific emergency arises (e.g., a heathland fire)?

3. Individual: Each firefighter's competence. The leadership and the individual firefighter need an overview of the firefighter's competence profile and certifications.
For example, the individual firefighter could use their own profile for self-reflection on where they stand on individual competences, and on how they fit into the overall profile of the corps. The leadership could use the individual competence profiles to determine who needs training to meet certification standards, who might be assigned to a new team, and whom to develop further with a view to replacing a firefighter who has resigned or retired. By combining the competence profiles of the individuals, the leadership can see the needs of each team and of the organisation as a whole. In the FRS, at the current time, this is done manually. It is also a challenge for the FRS to maintain a competence profile for each individual firefighter, as competences must be continually updated through theory, training, and on-the-job practice, and certifications expire at certain dates. Data from multiple sources, such as emergency log files, experience reports, and national and local authorities' requirements and certifications, needs to be combined to determine competence for an individual, for teams (did the team perform well at the latest emergency situation, did one firefighter underperform (e.g., if a firefighter shows fear this needs to be recorded)), and for the organisation as a whole.

Approach

While these needs are not unique to the FRS, our approach is, as far as we have ascertained, unique in professional competence development: it investigates how to support data-driven decision-making by individuals, instructors, and leadership through a combination of competence mapping, data from multiple assessment and performance sources, learning analytics, and visualisation of the learning data in an Open Learner Model (OLM), and it investigates how this supports learning and managerial decision-making.
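To make the three-level overview from the firefighter scenario concrete, the sketch below shows one possible representation of individual competence profiles with expiring certifications, aggregated to answer a team-level question such as the two-smoke-diver requirement mentioned above. This is purely illustrative; the data structures, names, and data are our own assumptions, not an FRS system.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record of one firefighter's competences and certifications.
@dataclass
class Profile:
    name: str
    competences: set = field(default_factory=set)
    certifications: dict = field(default_factory=dict)  # certificate -> expiry date

def valid_certs(p: Profile, today: date) -> set:
    """Certifications that have not yet expired."""
    return {c for c, expiry in p.certifications.items() if expiry >= today}

def team_meets_requirement(team, cert: str, minimum: int, today: date) -> bool:
    """Does the team collectively hold at least `minimum` valid certificates?"""
    holders = [p for p in team if cert in valid_certs(p, today)]
    return len(holders) >= minimum

# Illustrative team: one valid smoke-diver certificate, one expired.
team = [
    Profile("A", {"smoke diving"}, {"smoke diver": date(2026, 1, 1)}),
    Profile("B", {"smoke diving"}, {"smoke diver": date(2024, 1, 1)}),  # expired
    Profile("C", {"first aid"}, {}),
]
print(team_meets_requirement(team, "smoke diver", 2, date(2025, 6, 1)))  # False
```

Combining such profiles across all teams would give the organisation-level view, replacing the manual bookkeeping described above.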
iComPAss builds on our previous work (in the Adapt-It project) using 4C/ID, an instructional design method for professional learning (van Merriënboer & Kirschner, 2013), which guides the identification of competences and performance measures within our domains, starting with the practices of health leadership and firefighting. An existing tool (from partner Enovate AS) supports this process and will be extended to export data that can be used together with assessment, learning analytics, and open learner modelling. To date there are no general instruments connected to 4C/ID, except for two loosely coupled portfolios, Perflect (Beckers et al., 2014) and e-pass (EPASS, 2016), and these do not fit our purposes. Thus, existing and new assessment methods will be supported, electronically if possible, and linked to the relevant competences. For example, a tool that supports team leaders in reporting the performance of their team members during an incident (e.g., a house fire), generating performance data related to particular competences, will be developed. Similarly, such a tool can be used to report on healthcare workers' performance of leadership skills. As evidence related to a single competence can come from several sources, we build on experience from the NEXT-TELL project in extending learning analytics and educational data mining with features from the psycho-pedagogical framework Competence-based Knowledge Space Theory, known as CbKST (Albert & Lukas, 1999). The NEXT-TELL ProNIFA tool (Kickmeier-Rust & Albert, 2013) took this approach to supporting teachers in the assessment process. Others are adopting this approach, and leading the way is the EU project LEA'S BOX (www.leas-box.eu; the project leader is an advisor to iComPAss). The learning analytics output will feed a learner model and be visualised in an Open Learner Model (OLM) (Bull & Kay, 2007), which visualises competence based on inferences about learning.
As Bull and Wasson (2016) explain, “the OLM visualisations differ from many of the currently popular learning analytics dashboards (Verbert et al, 2013), in that they are based on an underlying inferred model of the learner’s current competences or understanding, rather than behaviour or performance data logged”. Bull and Wasson also give examples of visualisations such as skill meters, networks of competences, radar plots, word clouds, and tree maps (see also the LEA’S BOX video for examples (LEA’S BOX OLM visualisations, 2016)). An OLM traditionally represents a learner’s competences and abilities while the learner interacts with learning material in an intelligent tutoring system (ITS); state-of-the-art research, however, has extended the use of OLMs so that multiple data sources can feed the learner model (Morales et al., 2009; Mazzola & Mazza, 2010; Reimann et al., 2011). One challenge in this work, however, has been to have the data available for the learner model underlying the OLM. In iComPAss we will carry out new research on how to automatically provide data to a learner model in our use domains, and investigate how best to visualise the competences in an OLM for the individual learner (e.g., a firefighter), for decision-makers interested in a subset of learners (e.g., a firefighting team), and for the entire group of learners (e.g., the entire firefighter corps). Furthermore, we will draw on visual analytics research (Thomas & Kielman, 2009; Thomas & Cook, 2005) to enable decision-makers to see and explore learner data as evidence for decision-making.
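The distinction Bull and Wasson draw, visualising the inferred model rather than raw logs, can be illustrated with the simplest OLM view they mention, the skill meter. The sketch below renders inferred competence estimates (such as those a learner model would hold) as text meters; the rendering and data are hypothetical, not any project tool.

```python
def skill_meter(label: str, value: float, width: int = 20) -> str:
    """Render one inferred competence estimate (0..1) as a text skill meter."""
    filled = round(value * width)
    return f"{label:<16} [{'#' * filled}{'-' * (width - filled)}] {value:.0%}"

# Inferred competence estimates from the learner model (illustrative values).
model = {"smoke diving": 0.95, "pump operation": 0.35, "first aid": 0.60}
for competence, estimate in sorted(model.items()):
    print(skill_meter(competence, estimate))
```

Richer views (radar plots, competence networks, tree maps) would draw on the same underlying estimates; only the presentation changes.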
In summary, in our approach there is a tight coupling between research on the pedagogical design of competence assessment and visualisation for learning, the technical development of these assessments and visualisations, the extraction of data from multiple sources, learning analytics, and an evaluation of their impact on student learning and managerial decision-making.

Challenges and impact

As pointed out earlier, the two scenarios, while different, share a number of similarities. While Scenario 1 is based on formal learning organised during the sessions and self-directed learning between them, Scenario 2 is a mix of theoretical and practical learning at the workplace. Both, however, need support in gaining an overview of the competence development of their learners/employees. In our approach there are several challenges we have to address. The first challenge lies in the identification of skill/competence hierarchies for the two domains. The second is the identification of relevant theoretical and performance data (i.e., assessment) and how to collect it from multiple sources, and the third is how the interaction between data from multiple sources provides evidence of competence. A fourth challenge is how to use learning analytics to analyse and interpret this data for the learner model, and a fifth is how best to visualise the complex patterns and relations in an OLM to support decision-making by professionals with high competence demands.
Though there are many challenges, conceptually and technically, both scenarios have the potential for valuable impact and improved decision-making concerning:

• Training needs
• The ability to assess and identify competence gaps (individuals, teams, workforce)
• The identification of necessary training and competence development before the need becomes problematic

We will be developing a number of tools to support this research, and in order to determine the impact of our approach we ask questions such as: Do the tools provide a better overview of the competence and training needs than current practice? What new insights can be gleaned? Do our OLM visualisations support individual or leadership decision-making? Can the visualisations provide an overview of both individual and collective expertise? The ability to develop practice through the inquiry of learner data should have impact and be of interest for a multitude of institutions and organisations, as well as for teachers, health and school leaders, and policy makers. Understanding the relationship between capturing data, visualising competences, and professional development can have major impact for these institutions.

References

Albert, D. & Lukas, J. (1999). Knowledge spaces: Theories, empirical research, and applications. Mahwah: Lawrence Erlbaum Associates.
Beckers, J., Dolmans, D. & van Merriënboer, J. (2014). Design and Development Showcase Proposal, Design and Development Division. Poster at AECT, Jacksonville FL, November 4-8.
Bull, S. & Wasson, B. (2016). Competence Visualisation: Making Sense of Data from 21st Century Technologies in Language Learning. ReCALL 28(2), 147-165.
Bull, S., Johnson, M.D., Alotaibi, M., Byrne, W. & Cierniak, G. (2013). Visualizing Multiple Data Sources in an Independent Open Learner Model. In H.C. Lane, K. Yacef, J. Mostow & P. Pavlik (Eds.), Proceedings of the 16th International Conference on Artificial Intelligence in Education (AIED).
Lecture Notes in Computer Science, Vol. 7926, 199-208. Berlin: Springer-Verlag.
Bull, S. & Kay, J. (2007). Student Models that Invite the Learner In: The SMILI Open Learner Modelling Framework. International Journal of Artificial Intelligence in Education 17(2), 89-120.
EPASS. http://www.epass.eu/en/ (Accessed 23.03.16).
Kickmeier-Rust, M. & Albert, D. (2013). Learning Analytics to Support the Use of Virtual Worlds in the Classroom. In A. Holzinger & G. Pasi (Eds.), Human-Computer Interaction and Knowledge Discovery in Complex, Unstructured, Big Data. Berlin Heidelberg: Springer-Verlag, 358-365.
LEA'S BOX OLM visualisations (2016): https://www.youtube.com/watch?v=Snfi_qsutxc&ebc=ANyPxKraCAnohXPoOOFEs3mIiNRnNAX-hVlg7r952knUBvyqczVM9qGpHpITCNe3rDlIOCLVNe11ilYis8QaVnFhnB7-asUzBA (Accessed 23.03.16).
Lid, S.E. (2012). R&D-based professional education – experiences from evaluations of teacher, engineering and pre-school teacher education. NOKUT Report 2012-1. ISSN 1892-1604.
Mazzola, L. & Mazza, R. (2010). GVIS: A Facility for Adaptively Mashing Up and Representing Open Learner Models. In M. Wolpers et al. (Eds.), EC-TEL 2010. Berlin: Springer-Verlag, 554-559.
Morales, R., van Labeke, N., Brna, P. & Chan, M.E. (2009). Open Learner Modelling as the Keystone of the Next Generation of Adaptive Learning Environments. In C. Mourlas & P. Germanakos (Eds.), Intelligent User Interfaces, 288-312. London: ICI.
Reimann, P., Bull, S., Halb, W. & Johnson, M. (2011). Design of a Computer-Assisted Assessment System for Classroom Formative Assessment. CAF11, IEEE Computer Society.
Thomas, J. & Kielman, J. (2009). Challenges for visual analytics. Information Visualization, 8(4), 309-314.
Thomas, J.J. & Cook, K.A. (Eds.) (2005). Illuminating the path: The research and development agenda for visual analytics. IEEE.
van Merriënboer, J.J. & Kirschner, P.A. (2013). Ten steps to complex learning: A systematic approach to four-component instructional design. Second edition. London: Routledge.
Verbert, K., Duval, E., Klerkx, J., Govaerts, S. & Santos, J.L. (2013). Learning Analytics Dashboard Applications. American Behavioral Scientist. DOI: 10.1177/0002764213479363.

Acknowledgments

The work is supported by the Norwegian Research Council, grant number 246765/H20. The authors would like to thank the researchers with whom we have collaborated in the previous projects upon which this project builds. The authors would also like to thank the reviewers for valuable feedback that resulted in a much improved paper.