Please refer to these proceedings as:

Adam Moore, Viktoria Pammer, Lucia Pannese, Michael Prilla, Kamakshi Rajagopal, Wolfgang Reinhardt, Thomas D. Ullmann & Christian Voigt (Eds.): Proceedings of the 2nd European Workshop on Awareness and Reflection in Technology Enhanced Learning. In conjunction with the 7th European Conference on Technology Enhanced Learning: 21st Century Learning for 21st Century Skills. Saarbrücken, Germany, September 18, 2012. Available online at http://ceur-ws.org/Vol-931/.

© 2012 for the individual papers by the papers' authors. Copying permitted for private and academic purposes. Re-publication of material from this volume requires permission by the copyright owners.

The front cover was created by Harriett Cornish (The Open University, KMi).

Addresses of the editors:

Wolfgang Reinhardt
Computer Science Education Group
Department of Computer Science
University of Paderborn
Fürstenallee 11
33102 Paderborn
Germany
wolle@upb.de

Thomas Daniel Ullmann
Knowledge Media Institute
The Open University
Walton Hall
Milton Keynes
MK7 6AA
United Kingdom
t.ullmann@open.ac.uk

Viktoria Pammer
Know-Center
Inffeldgasse 21A
8010 Graz
Austria
vpammer@know-center.at

Adam Moore
Knowledge and Data Engineering Group
School of Computer Science and Statistics
Trinity College
Dublin, D2
Ireland
adam.more@cs.tcd.ie

Michael Prilla
Institute for Applied Work Science
Ruhr University of Bochum
Universitaetsstr. 150
44780 Bochum
Germany
michael.prilla@rub.de

Christian Voigt
Centre for Social Innovation
Linke Wienzeile 246
1150 Wien
Austria
voigt@zsi.at

Kamakshi Rajagopal
CELSTEC
Open Universiteit
Valkenburgerweg 177
6419 AT Heerlen
The Netherlands
kamakshi.rajagopal@ou.nl

Lucia Pannese
imaginary srl
Innovation Network Politecnico di Milano
Via Mauro Macchi, 50
20124 Milano
Italy
lucia.pannese@i-maginary.it

Editorial: Awareness and Reflection in Technology Enhanced Learning

Considering the multitude of views on awareness and reflection distributed over a wide range of disciplines (CSCW, psychology, educational sciences, computer science...), the workshop's theme is encapsulated in the following question: "What do awareness and reflection mean in the context of TEL, and how can technologies support either?"

The ARTEL12 workshop was a direct follow-up to the 2011 EC-TEL workshops ARNETS11 (Awareness and Reflection in Learning Networks, Vol. 790 of CEUR) and ALECR11 (Augmenting the Learning Experience with Collaborative Reflection). ARTEL12 pulled together research on awareness and reflection in Technology Enhanced Learning across disciplines (psychology, educational science, computer science) and across European TEL projects (MIRROR, ImREAL, STELLAR, MATURE, TellNET and TelMap as co-organising projects). The main audience of ARTEL12 consisted of researchers and practitioners in the field of TEL.

The objective of this workshop was i) to provide a forum for presenting and discussing research on awareness and reflection in TEL and ii) to provide an interactive experience that connects participants' research and the co-organising projects' latest prototypes and models with real end users' learning experiences and needs regarding reflection technology.

We received 12 submissions, of which 6 were accepted as full papers. The workshop was held on September 18, 2012.
The workshop was organised in three sessions: in the first session, papers dealing with the topic of awareness were presented and discussed, while the second session was devoted to papers on reflection. The final session was an interactive one, in which the participants collaboratively brainstormed about the connections between awareness and reflection. Moreover, the participants played educational games and worked with simulations, which were then discussed with regard to their particular impact on awareness and reflection.

Papers on Awareness

As indicated by its title, the paper "Understanding the meaning of awareness in Research Networks" by Reinhardt et al. provides a theoretically and empirically informed exploration of 'awareness'. Grounded in the analysis of 42 interviews, the authors suggest six forms of awareness, including being aware of others' activities, of disciplinary differences in doing research, or of the geographical whereabouts of peers. A convincing argument outlines how these forms of awareness impact each other and lead to a layered model of awareness in research networks (LMARN). Although the LMARN is primarily presented as a heuristic device meant to guide the design of new tools supporting the formation of awareness, the paper also contributes to the wider discussion regarding novel forms of measuring the impact of scientific publications in Science 2.0 media.

Reinhardt and colleagues' work, titled "Supporting Scholarly Awareness and Researchers' Social Interactions using PUSHPIN", examines an application designed to empower Research 2.0. Taking the scientific publication as its central raison d'être, it creates a unifying layer on top of researchers' often fragmented communication and storage structures, creating recommendations using Big Data analytics and the social graph.
PUSHPIN attempts to build a system that recommends related reading based both on what members of the social graph are also interested in and, crucially, on content awareness of the publications within the system.

Kurapati et al.'s paper "A Theoretical Framework for Shared Situational Awareness in Complex Sociotechnical Systems" develops a framework to categorise socio-technical systems according to their purpose with respect to shared situational awareness. Socio-technical systems may support Perception (being aware of surroundings etc.), Prescription (being able to modify existing plans) and Participation (being able to carry out joint actions). These levels of 'maturity', as they are called in the paper, are discussed for the individual, team and organisational levels. The paper thus provides a way to categorise, analyse, and understand socio-technical systems with respect to shared situational awareness.

In their paper on "Exploiting awareness to facilitate the orchestration of collaborative activities in physical spaces", Hernandez-Leo et al. discuss how the Signal Orchestration System (SOS) can be used in the classroom to raise awareness in dynamic group work situations. The paper introduces the wearable technology and discusses how the adoption of SOS leads to improved ambient awareness for the teacher.

Papers on Reflection

Krogstie and Prilla's contribution entitled "Tool support for reflection in the workplace in the context of reflective learning cycles" presents a model for Computer Supported Reflective Learning (CSRL), created in the MIRROR project. The authors argue for a 3-step approach to the analysis and design of support for reflective learning in the workplace, which is illustrated with a case of physicians in a hospital setting. They also present the results of the evaluation of the CSRL model.
Santos, Verbert, and Duval's paper on "Empowering students to reflect on their activity with StepUp!" advances their interest in using Learning Analytics to build dashboards that visualise learners' traces through learning material in ways that help learners and/or teachers steer the learning process. Studies of two use cases reveal complex issues surrounding implicit and explicit tracking and the influence of complexity on comprehension and goal setting, and evaluate time spent as an indicator of depth of study. They conclude that these issues remain complex and recommend further work on both measuring instruments and visualisation, proposing further deployments of visualisations that embed both individual achievement and its reflection within the wider learning community.

In "Fostering reflective practice with mobile technologies", Tabuenca et al. report on a study carried out over 4 days with 37 college students, in which students were reminded via SMS to reflect on their learning and entered their responses into a dedicated response system. The idea was that students train the "self-as-a-learner", in line with the EU goals of fostering lifelong learning. The study suggests that while students are ready to reflect on their learning activities, they are not used to seeing themselves as active learners.

Thomas Ullmann's paper on "Comparing Automatically Detected Reflective Writings in a Large Blog Corpus" presents work done to identify reflective elements in written text, using the example of analysing a corpus of blogs. It uses sophisticated methods of text analysis and shows how the results of this detection compare to the same task assigned to humans. The mechanisms presented in this paper are very promising and can be a valuable means to detect and support reflection in organisations, as well as to identify current issues that need to be known at the organizational level.
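Ullmann's detection pipeline is not reproduced here, but the general workflow it describes — automatically labelling texts as reflective and then comparing those labels against human judgements — can be sketched in a few lines. The following is a deliberately minimal, hypothetical illustration: the marker phrases, the function names and the simple agreement measure are our own assumptions for the sketch, not the paper's actual method, which uses considerably more sophisticated text analysis.

```python
# Illustrative sketch only -- NOT the method from Ullmann's paper.
# It flags reflective writing via surface marker phrases and compares
# the automatic labels with (hypothetical) human judgements.

REFLECTIVE_MARKERS = {"i realised", "i learned", "looking back",
                      "in hindsight", "i should have", "i felt"}

def looks_reflective(text: str) -> bool:
    """Label a text as reflective if it contains any marker phrase."""
    lowered = text.lower()
    return any(marker in lowered for marker in REFLECTIVE_MARKERS)

def agreement(auto_labels, human_labels):
    """Fraction of texts on which the detector and the humans agree."""
    matches = sum(a == h for a, h in zip(auto_labels, human_labels))
    return matches / len(auto_labels)

posts = [
    "Looking back, I should have planned the experiment differently.",
    "The workshop takes place on September 18 in Saarbruecken.",
]
auto = [looks_reflective(p) for p in posts]
human = [True, False]  # hypothetical human judgements
print(auto, agreement(auto, human))  # prints [True, False] 1.0
```

A real detector of this kind would replace the marker list with trained text-classification features and report chance-corrected agreement (e.g. Cohen's kappa) rather than raw accuracy.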
In their paper "The Functions of Sharing Experiences, Observations and Insights for Reflective Learning at Work", Pammer, Prilla and Divitini present preliminary work that investigates several apps in order to extract sharing functions that have an impact on self-reflective learning. The three presented apps may assist knowledge workers in improving their work performance by critically reflecting on their past activities.

Nussbaumer et al. describe in their discussion paper "Detecting and Reflecting Learning Activities in Personal Learning Environments" several building blocks which have the potential to make learners aware of their self-regulated learning. The research challenge is to infer the high-level constructs of self-regulated learning from measurable low-level data. The goal is to obtain a mapping between key actions extracted from Contextualized Attention Metadata (CAM) and a learning ontology, which consists of several cognitive and metacognitive learning activities.

Degeling and Prilla report on their experiences implementing articulation support for collaborative reflection. A theoretical introduction to reflection at the workplace sets the scene for the actual case studies describing their findings. The central piece of their analysis relies on the reflections carried out by physicians in a hospital. The paper demonstrates the potential benefits of sharing experiences, especially in areas where learning is more the product of past work experience than of formal education. However, from a design point of view, the paper also highlights the need for contextual design and frequent end-user interactions, as multiple corrective actions were needed to adapt the technology support to the conditions on site.
You can find more information about the workshop and related workshops in the "Awareness and Reflection in Technology-Enhanced Learning" group on TELeurope.eu: http://teleurope.eu/artel

We want to use this opportunity to thank the authors for their contributions and the program committee for their support and reviewing activity.

November 2012

Adam Moore, Viktoria Pammer, Lucia Pannese, Michael Prilla, Kamakshi Rajagopal, Wolfgang Reinhardt, Thomas D. Ullmann, Christian Voigt

Organization Committee

Adam Moore, Trinity College Dublin (Ireland), @adam moore
Viktoria Pammer, Know-Center (Austria), @contextgroupkc
Lucia Pannese, imaginary (Italy), @lpannese
Michael Prilla, University of Bochum (Germany)
Kamakshi Rajagopal, Open Universiteit (Netherlands), @krajagopal
Wolfgang Reinhardt, University of Paderborn (Germany), @wollepb
Thomas Ullmann, The Open University (UK), @thomasullmann
Christian Voigt, Centre for Social Innovation (Austria), @chrvoigt

Figure 1: Parts of the organizing committee of the #ARTEL12 workshop (from left to right: Thomas Ullmann, Michael Prilla, Wolfgang Reinhardt, Viktoria Pammer, Lucia Pannese, Adam Moore)

Program Committee

Eileen O'Donnell, Trinity College Dublin, Ireland.
Martin Wolpers, Fraunhofer FIT, Germany.
Daniel Wessel, Knowledge Media Research Center, Germany.
Angela Fessl, Know-Center, Austria.
Carsten Ullrich, Jiao Tong University, China.
Victoria Macarthur, Trinity College Dublin, Ireland.
Peter Sloep, Open University, Netherlands.
Rebecca Ferguson, The Open University, United Kingdom.
Kristin Knipfer, Technische Universität München, Germany.
Milos Kravcik, RWTH Aachen, Germany.
Elizabeth FitzGerald, The Open University, United Kingdom.
Fridolin Wild, The Open University, United Kingdom.
Rory Sie, Celstec, Netherlands.
Supporting FP7 Projects

http://stellarnet.eu
http://www.mirror-project.eu
http://www.imreal-project.eu
http://telmap.org/
http://www.tellnet.eun.org
http://mature-ip.eu

Contents

Editorial: Awareness and Reflection in Technology Enhanced Learning 3
  Papers on Awareness 3
  Papers on Reflection 4
  Organization Committee 6
  Program Committee 7
  Supporting FP7 Projects 8
Understanding the meaning of awareness in Research Networks
  Wolfgang Reinhardt, Christian Mletzko, Peter B. Sloep, and Hendrik Drachsler 13
Supporting Scholarly Awareness and Researchers' Social Interactions using PUSHPIN
  Wolfgang Reinhardt, Pranav Kadam, Tobias Varlemann, Junaid Surve, Muneeb I. Ahmad, and Johannes Magenheim 31
A Theoretical Framework for Shared Situational Awareness in Sociotechnical Systems
  Shalini Kurapati, Gwendolyn Kolfschoten, Alexander Verbraeck, Hendrik Drachsler, Marcus Specht, and Frances Brazier 47
Exploiting awareness to facilitate the orchestration of collaborative activities in physical spaces
  Davinia Hernandez-Leo, Mara Balestrini, Raul Nieves, and Josep Blat 55
Tool support for reflection in the workplace in the context of reflective learning cycles
  Birgit R. Krogstie, and Michael Prilla 57
Empowering students to reflect on their activity with StepUp!: Two case studies with engineering students
  Jose Luis Santos, Katrien Verbert, and Erik Duval 73
Fostering reflective practice with mobile technologies
  Bernardo Tabuenca, Dominique Verpoorten, Stefaan Ternier, Wim Westera, and Marcus Specht 87
Comparing Automatically Detected Reflective Texts with Human Judgements
  Thomas Daniel Ullmann, Fridolin Wild, and Peter Scott 101
The Functions of Sharing Experiences, Observations and Insights for Reflective Learning at Work
  Viktoria Pammer, Michael Prilla and Monica Divitini 117
Detecting and Reflecting Learning Activities in Personal Learning Environments
  Alexander Nussbaumer, Maren Scheffel, Katja Niemann, Milos Kravcik, and Dietrich Albert 125
Improving Social Practice: Enhancing Learning Experiences with Support for Collaborative Reflection
  Martin Degeling, and Michael Prilla 133

Understanding the meaning of awareness in Research Networks

Wolfgang Reinhardt¹, Christian Mletzko¹, Peter B. Sloep², and Hendrik Drachsler²

¹ University of Paderborn, Department of Computer Science, Computer Science Education Group, Fuerstenallee 11, 33102 Paderborn, Germany
{wolle,letris}@uni-paderborn.de
² Open University of the Netherlands, Centre for Learning Sciences and Technologies, 6401 DL Heerlen, The Netherlands
{peter.sloep,hendrik.drachsler}@ou.nl

Abstract. The term awareness is often used in the context of CSCW research and connotes re-establishing face-to-face situations in so-called groupware applications. No understanding of it yet exists in the context of networked learning and networks of researchers. In this article we present a succinct description of awareness in Research Networks. It is grounded in guided, semi-structured interviews with 42 researchers who have extensive knowledge of cooperation in networked communities and the awareness issues it raises.
From the analysis of the interview data we present six forms and five aspects of awareness in Research Networks. Finally, we present a layer model of awareness that describes how researchers' awareness is typically spread.

Keywords: awareness, CSCW, research networks, knowledge work, research 2.0

1 Introduction

As early as 1959 Peter Drucker identified that society was moving into a post-industrial age, which went hand in hand with a shift from manual to non-manual work [7]. While all kinds of jobs involve a mix of physical, social and mental work, it is the perennial processing of non-standardized and non-linear tasks that characterizes knowledge work; knowledge workers carry out these knowledge-intensive tasks during their daily work, and researchers are the role models of knowledge workers. Looking at the work descriptions of researchers reveals that they have to analyze existing knowledge, deconstruct it, and de- and re-contextualize it in order to create new knowledge that is then disseminated in their Research Networks. They therefore need to be constantly aware of the latest research results, scientific trends and new technological developments that they can take into consideration in their own work. While research is often deemed to be solitary work, international cooperation has become the de facto standard. Large funding programs often even require transnational, interdisciplinary project consortia, as these are believed to foster innovation and multiple views on a research topic and to promote dissemination in the appropriate Research Networks. Such Research Networks may be viewed as a special kind of Learning Networks [23,28]: online social networks whose members are researchers that use various learning services in order to reach individual and shared (learning) goals.
Sometimes these goals are externally prescribed; at other times they are formed by the intrinsic motivation to know more about a topic. Research Networks are made up of people that interact with each other. Moreover, they contain many relevant objects (e.g., publications, events, projects, people) that influence learning, knowledge gain and cooperation, and researchers aim to be aware of these. Despite the massive impact that Social Media have on the way research is conducted and communicated [17,27,31], it is still scientific conferences, fairs, journals and books that are most often used for the dissemination of research results. Research is currently shifting from closed to open, from hidden to visible and from passively consumptive to actively co-determinative (also see [17]). Even though scientific publishing has not changed much in the last 300 years, it is currently changing and will change massively over the course of the next 10 years. Not only has the number of high-quality publication outlets increased enormously; the common understanding of authority in research has also changed considerably. Scientific results no longer need to be published in access-controlled journals in order to receive notable attention. The number and citations of peer-reviewed publications are still the de facto currency when it comes to the professional evaluation of researchers' work. However, this supremacy is beginning to crumble as an increasing number of researchers, as well as society at large, are digesting premature results that researchers share in blog posts, presentations or tweets. Thus, while there are well-known metrics for the impact of classic publications, new metrics are needed that factor in impact and buzz in the Science 2.0 reality. Lately, many researchers have been trying to establish alternative metrics that are able to assess the impact and reach of scientific publications in Science 2.0 media (see the #altmetrics movement and their manifesto [18]).
Moreover, open access to scientific publications is gaining significant ground, and an ever-increasing number of institutions are urging their employees not to publish research findings in closed, pay-to-access outlets or to give the full copyright to publishers [4].

Traditionally, the concept of awareness is used in the research field of Computer Supported Cooperative Work (CSCW) to re-establish the conditions of face-to-face situations in the online realm, with visual cues showing, for example, who is online or working on a document. Research on awareness support in the CSCW context has often been related to the direct improvement of cooperative practices and to measurable improvements in task performance.

This paper presents parts of a larger study that deals with awareness issues in the context of Research Networks. In particular, we report on our findings on how to properly understand the notion of awareness in Research Networks. We hypothesize that the term awareness is more complex and touches on broader contexts than we know from existing CSCW research. The results of our study go beyond the perception of awareness as being a mere enabler and enhancer of collaborative work processes. The results are based on interviews with 42 researchers that took place in October and November 2010.

First, we introduce the three research questions as well as the method of data gathering, data processing and analysis we applied. After that, we present a definition of awareness in the context of Research Networks that integrates the results of our interviews with established awareness research results. This includes the introduction of various forms and aspects of awareness in Research Networks. Synthesizing these results, we propose a layered model of awareness in Research Networks, which incorporates five layers of awareness.
Finally, we summarize the results of our study, give an outlook on future research and discuss important side effects of awareness in Research Networks and practical applications of the introduced model.

1.1 Research questions

Three research questions were addressed in the research presented here:

1. How do researchers define awareness in the context of Research Networks?
2. What different forms and aspects of awareness in Research Networks are there?
3. What could a model of awareness in Research Networks look like?

1.2 Method

We used open, in-depth, semi-structured interviews as our method of data collection. An interview manual provided the basis for open-ended questioning. Each interview was carried out by one of three different interviewers. In three cases the manual was sent to the interviewees via email beforehand. All participants were interviewed in their normal working context. The participants of the study were asked explicitly for their approval to record the interview. In most cases the interviews were conducted remotely and recorded using the FlashMeeting service [29].

1.3 Sampling

The total population sampled consisted of all researchers who had been authors at the European Conference on Technology Enhanced Learning or were members of Technology Enhanced Learning (TEL) projects funded within Framework Programme 7 (FP7). 82 researchers from different research disciplines and different countries were asked via email for voluntary participation in the interview series. More than half of the invitees (43 researchers) agreed to be interviewed. Although 43 interviews were conducted, one recording was not suitable for further analysis due to technical problems. 30 interviews were conducted in German, 12 in English. The age of the interviewees was between 27 and 61 years, 32.5 years on average. 35 of the 42 participants are male (83%), 7 female (17%).
The interviews lasted between 28 and 126 minutes, 51 minutes on average. Table 1 gives the job locations of the interviewees.

Table 1. Job locations of the interviewees

  Country    No   Country     No   Country            No
  Austria     6   France       1   Sweden              1
  Belgium     2   Germany     15   Switzerland         2
  Canada      1   Ireland      1   The Netherlands     4
  China       1   Israel       1   United Kingdom      5
  Ecuador     1   Spain        1

Most of the participants are involved in the field of TEL and hold a PhD (44%) or Master (53%) as their highest degree. The extent of professional experience ranges from 1 to 30 years. The research fields of the interviewees include Computer Science Education, Recommender Systems, Knowledge Management, Human Computer Interaction, the Semantic Web, as well as Model-based Testing, Social Research and Psychology.

1.4 Analysis

The coding of the transcribed interview data took place in multiple iterations and was supported by the Atlas.ti [26] qualitative data analysis software. The continuous process of close reading of the transcripts allowed the identification of concepts and labels, which were then coded in Atlas.ti in constant comparison with previous codes. Atlas.ti supported the merging and renaming of codes. Co-occurrence tests built into Atlas.ti helped to spot inconsistencies in the coding, and automatically generated visualizations of code relationships were used to identify patterns. In the following we will quote from the interview transcripts. Each quotation will be followed by a 3-tuple denoting the primary document number in the hermeneutic unit of Atlas.ti, the code number within the document, and the line numbers for the precise reference. Where needed, the authors translated quotes from German to English.

2 Approaching a definition of awareness in Research Networks

Awareness is an integral component of CSCW research.
Dourish defined it as follows: "awareness is an understanding of the activities of others, which provides a context for your own activity" [6]. In 2002, the influential CSCW researcher Kjeld Schmidt criticized the term for its fuzziness, pointing out that it is found both "ambiguous and unsatisfactory" and that the notion of awareness is "hardly a concise concept by any standard" [25]. He outlines the different awareness research strands by reviewing most of the existing literature and stresses the need for strong ties between awareness support and support for cooperative processes. In his understanding, any effort towards awareness support should result in enhanced individual or group task performance. Gutwin likewise stresses that awareness' first mission should be to boost collaboration, particularly the aspects of coordination, communication and assistance [12].

Awareness in Research Networks, however, is not solely concerned with re-establishing face-to-face situations and directly improving task performance. In Research Networks, awareness has a broader meaning and is related to trend-spotting, alerts to research results in a certain domain, changes in the structure of a network, personnel changes within a project, as well as knowledge about objects that may help in carrying out one's task (research question 1). The interviewees pointed out that awareness in Research Networks "is mainly to know what sort of people in the same field are doing" (P13, 15, ll. 9-10) or "is to know what is important to me and filter out what is not important to me" (P27, 36, ll. 40-42). Another researcher stresses: "If I have to search for something, that means for me, it's an active action from my part. That's not what I think about awareness. Awareness is something that is keeping remind me about something, without me actually trying actively to search that information" (P27, 30, ll. 12-17). Moreover, "awareness ...
can have impact on the individual method of operation ... as it triggers reflection" (P16, 58, ll. 306-320). Research shows that the availability of awareness support improves the effectiveness of how information is spread in communities [14] and positively influences the social interactions taking place in those communities [11]. Most importantly, most of the interviewees stressed that they require "awareness functionality to be embedded in [their] regular workflow" (P9, 21, ll. 174-175).

It is quite difficult to keep up with who is doing what in the field, though many researchers make quite an effort to monitor the data that is being spread on the Web by colleagues. In past years, research has explored the collaboration of scientists by means of co-authorships of publications. In the TEL community, Henry et al., Wild et al. and Reinhardt et al. undertook such endeavors [13,21,32]. These have proven to be quite insightful, though they give only a snapshot of information and collaboration at a given moment, namely during the co-authorship of a conference paper.

2.1 Relevant objects in Research Networks

Scholarly communication is often understood to refer primarily to the publication of scientific papers. Building on Thorin [30] and in line with Procter et al. [19], we understand scholarly communication to be broader in scope, incorporating all communicative activities carried out by researchers on a regular basis. In particular, we include the joint development of ideas, conducting research and carrying out experiments, discussing ideas with one's Research Network, as well as information seeking and the formal and informal dissemination of research outputs.
Thus, researchers are confronted with a large number of objects that they either need to be or should be aware of: there are projects the researcher is directly affiliated with, interested in, or that are somehow related to the researcher. Documents in any form are one core product of researchers' labor: notably publications written by the researcher herself and publications written by other researchers, as well as project deliverables, (micro-)blog entries, rules and regulations, and best practice reports. People and groups of people are further objects of which having awareness is paramount. Awareness of people is relevant in multiple aspects at the same time, and while it may be important to be highly aware in one particular aspect and not so in others, at other times the situation may be reversed. As researchers are often limited to a fixed domain, awareness of the latest trends and new research findings in that domain and associated topics helps researchers stay informed and up to date. Researchers often need to show that they are well informed about the state of the art in their research domain and that they know about the key people, events and projects in that domain.

Grounded in the conducted interviews, this article discerns six different forms of awareness, which are partly known from CSCW research, as well as five different aspects of awareness (research question 2). Whereas forms describe generic areas of awareness, aspects focus on specific awareness characteristics relevant for the awareness of different objects.

2.2 Six different forms of awareness

1. Activity awareness. Activity awareness deals with the past, present and future of an object. For people this could be realized with "an activity stream about people that I am connected to" (P30, 82, ll. 438-439), which would hold the latest information about their work in general, planned event participations, new collaborations or published content.
From a broader perspective, activity awareness for a research domain is concerned with the "state-of-the-art in a particular research area [...] where things are at the moment, who is contributing to that area, what is the latest thinking in that area" (P1, 37, ll. 13-16). Activity streams and awareness dashboards seem to be helpful tools to support awareness if they can provide historical data, trend detection and forecasting, in order to make claims like "this author was very nice 10 years ago, but now is not any more. To know whose ideas are the current ones, it's difficult" (P27, 56, ll. 186-191).

2. Cultural awareness. Cultural awareness refers to a person's knowledge and perceptions of foreign cultures and their values, beliefs and perceptions. Cultural awareness is crucial when interacting with people from other cultures [20]. At the same time, research cultures differ massively between research domains. Some interviewees explicitly referred to this by calling it "culturally informed awareness, e.g. where computer scientists have another focus than educational scientists" (P39, 64, ll. 337-339). Differences exist both implicitly and explicitly in shared knowledge, social aspects of the research community, practices and conventions, common theories and cognitive processes, and with respect to theoretical assumptions. Awareness of those differences becomes increasingly important as research projects are ever more multi-cultural and multi-disciplinary. Whereas training for intercultural competence and sensitization is very common in business, academia is slow at offering it.

3. Social awareness. Social awareness describes the things people become conscious of in a social context. This includes information about the attentiveness of others, gestures and facial expressions that mirror the emotional state of a person, as well as clues about a person's interest in a topic.
Whereas social awareness is easily achieved when workers are co-located, it has to be mediated in distributed working environments. Bardram and Hansen [2] point out that supporting social awareness helps to minimize unwanted interruptions and disturbances of individual work, as co-workers are supported in “knowing that they’re available to talk, when they’re available to talk” (P8, 24, ll. 15-16). Social awareness also helps co-workers to align their work and alerts them to “what we can contribute to each other and how we can assist each other” (P1, 43, ll. 26-27).

4. Workplace awareness
Workplace awareness refers to knowledge about the workplace design and job characteristics of co-workers and is strongly related to other forms and aspects of awareness. For example, it is important to know about the affiliation of a colleague and about the people working there. Workplace awareness is strongly related to knowing what colleagues in one’s own research organization are working on, with whom they collaborate and “where are possibilities to collaborate” (P36, 39, ll. 294-295). Moreover, the interviewees expressed the need for background information about the job descriptions and responsibilities that their co-workers have within their affiliation and projects, in order to enhance workplace awareness and subsequently improve collaborative work. Information about the number of projects they are involved in, the thematic priority they have in their research projects, and whether they are involved in teaching activities and the supervision of PhDs would contribute to assessing their institutional involvement and engagement.

5. Location awareness
Location awareness refers to knowing the physical location of an object. It can be related to one’s own location – “where am I right now” (P17, 26, l. 40) – as well as to the locations of others: “where is the other one right now” (P40, 20, ll. 33-34).
Location-aware applications support the user with contextual access to information and user-specific recommendations. Location-based information systems help in becoming aware of spatial collaboration patterns [16] and may support location-based task execution [24,1]. Many researchers directly referred to “a location-based awareness, like offered by services like Dopplr, TripIt etc.” (P19, 42, ll. 187-194). They also underlined how such awareness impacts on social interaction opportunities: “It is relatively trivial but sometimes also very helpful to know that someone from my Research Network is accidentally in the same city or at the same conference at the same time. That way it is easy to find connections” (ibid.).

6. Knowledge awareness
Knowledge awareness refers to the ability of a person to judge another person’s knowledge about a given object [8,5]. Moreover, knowledge awareness may refer to the knowledge about someone else’s competencies and skills as well as his method of operation. The interviewees would have liked support to assess “which expertise has a person?” (P16, 48, l. 227). Traditionally, knowledge awareness is created through intensive social interactions such as working on a joint artifact, working in a common project, or sharing an office. With the advent of Social Media, knowledge awareness can increasingly be gained by following someone’s activities on the Web and the objects created and shared by him. Regarding the scientific publications of a researcher, knowledge awareness may be supported through “awareness of references, so that you can see what the person also published. So you would further narrow it down and understand how the author works” (P26, 26, ll. 93-95).

Besides these forms of awareness, the interviews pointed towards the existence of five aspects of awareness that are relevant in the context of Research Networks.
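The knowledge-awareness question “which expertise has a person?” could, for instance, be approximated from the titles of that person's publications. A deliberately naive sketch; the titles, stopword list and counting approach are illustrative assumptions, not a method proposed by the interviewees:

```python
from collections import Counter

STOPWORDS = {"a", "an", "and", "in", "of", "on", "the", "for", "with"}

def expertise_profile(publication_titles: list) -> Counter:
    """Approximate a person's expertise by counting the terms that
    recur across the titles of their publications."""
    terms = Counter()
    for title in publication_titles:
        terms.update(w for w in title.lower().split() if w not in STOPWORDS)
    return terms

titles = [
    "Awareness support in Research Networks",
    "Visualizing Research Networks",
]
profile = expertise_profile(titles)
print(profile.most_common(2))  # 'research' and 'networks' dominate
```

A serious system would use abstracts or full texts and proper term weighting; the counting step only illustrates how following someone's shared objects can feed knowledge awareness.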
2.3 Five different aspects of awareness

The five different aspects of awareness are relevant in any of the above forms of awareness. The importance of a single aspect, however, strongly depends on the object of interest.

A. The technological aspect of awareness
The technological aspect of awareness is strongly affiliated with the tools and techniques that are relevant for carrying out tasks. On the one hand there is always the question: “where do I get the information from? Now we’re on a technological level, which is more or less push or pull” (P24, 28, ll. 32-34). On the other hand, different technologies support different forms of awareness. Answering the question “Which tool was used to create this object?” may help in reproducing research results and understanding the methodology used. Moreover, answers to the questions “Which tools could I use to accomplish this collaborative task?” and “How can I reach this person?” are direct enablers of social interactions and cooperative work. With the increasing number of tools that are used for consuming, producing and sharing information, awareness of one’s own digital representations and those of others becomes crucial. Being able to easily find out through which services one is connected to colleagues, or which username someone uses in a given tool, constitutes support for the technological aspect of awareness. This aspect of awareness is also related to the current trend of giving more people access to scientific resources.

B. The relationship aspect of awareness
Awareness in Research Networks is strongly enhanced by providing information about the existing relations between objects, their status and dynamics. Researchers mention the need to know about “the relations to people and groups of people that dealt with [an] artifact or where the artifact comes from” (P30, 35, ll.
35-38), but also how they are affiliated with other researchers, which of their colleagues may help them in contacting a yet unknown person, or with which institutions and projects some researcher is affiliated. Automated notification of the fact “that someone is leaving an institution and someone new steps in” (P19, 22, ll. 65-67) would help researchers stay aware of changes in affiliations. Relationship awareness is also about connections between objects (e.g. by co-authorship or co-citation in the case of scientific documents, but also by semantic relatedness or collaborative filtering in other cases) and between people more specifically (what do these people have in common and what connects them?).

C. The content aspect of awareness
The content aspect of awareness in Research Networks is very important, as most objects researchers deal with are at least partly textual. This awareness aspect deals with assistance in more easily grasping the content of an object, e.g. by providing visual analytics, content aggregations or metrics about the content. One interviewee said, “Speaking about artifacts; in the case of research networks those artifacts are very often scientific papers, blog posts, presentations or even demonstrations that are available as video. If I take such an object, such an artifact, awareness means to me to get an overview about it. How is this artifact connected to others? What is the content? I mean an aggregation of the content, so I can more easily understand what it is about.” (P30, 105, ll. 26-35). The content aspect of awareness is also about support in easily grasping the essence of a document, and the topics, theories and concepts that scientific work and projects are based on. Moreover, it is related to detecting and presenting trends, discerning which topic someone is working on and which sources he is using to do so. Another perspective is on the timeliness and quality of information.

D. The personal aspect of awareness
The personal aspect of awareness is mostly relevant for people and groups of people. It is closely related to workplace awareness and refers to background knowledge about the persons one interacts with. Awareness of approaching deadlines or of the family status contributes to better collaboration with other people, as it helps in understanding and judging certain activity patterns. Similarly, awareness of other people’s job status (full-time, part-time, student assistant), their possible teaching obligations and their involvement in other projects enhances mutual understanding and strengthens the ties between collaborators. Often, awareness on a personal level is also part of the more generic form of knowledge awareness, e.g. when “looking at how long they have been in the field” (P37, 56, ll. 117-118).

E. The contextual aspect of awareness
The contextual aspect of awareness is complementary to location awareness. Whereas location looks at physical environments, context refers to other objects as well. Contextual awareness seems to be very relevant for people and groups of people, as the interviewees repeatedly expressed their “need for context-dependent awareness information” (P35, 47, ll. 236-243). Contextual awareness information for researchers would include information about where and when they last met or who is taking part in the same event or project. Moreover, this awareness aspect matters both to classic scientific media – “If one of my colleagues publishes today a paper on something that I’m also working on” (P9, 13, ll. 12-14) – and to more recent scientific objects – “in which context have those [Twitter] messages spread or haven’t spread” (P39, 54, ll. 284-285). Finally, in Research Networks it is strongly related to one’s own writing and that of others.
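The context-dependent matching the interviewees ask for can be sketched as a simple bag-of-words similarity between the current working context and stored artifacts. A minimal illustration; all artifact names and texts are invented, and this is not the approach of any particular tool discussed in the paper:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest(context: str, artifacts: dict, k: int = 2) -> list:
    """Rank stored artifacts (e.g. old slides) against the current writing context."""
    ctx = Counter(context.lower().split())
    ranked = sorted(
        artifacts,
        key=lambda name: cosine(ctx, Counter(artifacts[name].lower().split())),
        reverse=True,
    )
    return ranked[:k]

artifacts = {
    "slides-awareness": "awareness support in research networks",
    "slides-teaching": "lecture introduction to databases",
}
print(suggest("writing about awareness in research networks", artifacts, k=1))
```

This prints the awareness slides first, mirroring the “here are slides that you did earlier that you may want to reuse” scenario quoted below in spirit only; production systems would add stemming, weighting and privacy controls.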
Recommendations for matching content are needed both when consuming existing writings and when producing new ones: “based on your context and being aware of what you’re doing, we’ll suggest you, "Hey, here are actually slides that you did earlier that you may want to reuse now. And here are two slides that someone else has done and made available for reuse, etc." And so it becomes part of your workflow” (P9, 24, ll. 194-199).

Table 2. Overview of forms of awareness versus aspects of awareness. Asterisks (ranging from 1 to 5) indicate the relevance of particular aspects to a particular form of awareness. (Rows: the six forms of awareness, 1. Activity to 6. Knowledge; columns: the five aspects, A. Technological to E. Contextual.)

Table 2 presents a matrix of forms of awareness versus relevancies of awareness aspects. The analysis of the interview data reveals that relevancies very much depend on the object of interest. While some aspect might be highly relevant for a publication, it may be pointless for a scientific event.

Besides the above forms and aspects of awareness, the interviewed researchers discern different layers or circles of awareness. The next section introduces a layered model of awareness in Research Networks that reflects their distinctions.

3 A Layer Model of Awareness in Research Networks

The Layer Model of Awareness in Research Networks (LMARN) describes how the overall awareness of objects declines the farther an object is away from oneself (Figure 1). Answering research question 3, the conducted interviews reveal five layers of awareness in Research Networks: 1. Self-awareness, 2. Awareness of current projects, 3.
Awareness of the local research organization, 4. Awareness of the personal research network, and 5. Awareness of a research domain. The remainder of the research world surrounds the five layers.

The LMARN also reflects the continuous competition for time that most researchers are faced with. They use a plethora of different tools and are often part of multiple projects, communities and sometimes even different research domains. Even though researchers are trained to work with multiple heterogeneous information sources, the advent of Research 2.0 has marked a new era of complexity, connectedness and information usage. The war for attention [10] as part of the attention economy [9] underscores the need for individual awareness support for researchers. Knowledge workers can only give their attention to objects and circumstances that they are aware of. Because attention is a good in very short supply, objects that they have stronger personal ties to, or that are perceived as more appropriate to their own identity and task, are more likely to get the knowledge worker’s attention than objects whose usefulness cannot be assessed easily.

The LMARN is centered on an individual researcher, for whom the model presents his individual reality. The t-axis of the model indicates that the socio-technical system surrounding the researcher is continuously changing, together with the information he should be aware of. Objects may change their position within the model at any time. A spontaneous talk with a colleague from another research group, for example, will have an immediate effect on the researcher’s awareness of that colleague. The LMARN is grounded in empirical data and aims at providing a reference scheme of how the overall awareness of an object increases with its physical proximity. Any object in the awareness space of a researcher can be placed in one of the layers of the LMARN.
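Read operationally, the model assigns each object a baseline awareness that falls off with layer distance. A toy sketch; the decay function and numeric weights are our own illustrative assumptions, as the paper gives no quantities:

```python
# Layers of the LMARN, from the researcher outwards.
LAYERS = [
    "self-awareness",
    "current projects",
    "local research organization",
    "personal research network",
    "research domain",
]

def baseline_awareness(layer: str) -> float:
    """Overall awareness declines the farther an object's layer is from oneself.
    Any monotonically decreasing function fits the model; 1/(1+d) is arbitrary."""
    distance = LAYERS.index(layer)
    return 1.0 / (1 + distance)

# The ordering, not the absolute values, is what the model asserts:
assert baseline_awareness("current projects") > baseline_awareness("research domain")
print(baseline_awareness("self-awareness"))  # 1.0
```

Such a baseline would need per-object corrections, since the model explicitly allows strong ties to outweigh distance.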
However, there are exceptions in which the overall awareness of an object in a more distant layer is higher than that of an object in a closer layer. For instance, there are examples of researchers who have a much higher overall awareness of a colleague in their Personal Research Network than of a colleague working in the same working group. Also, researchers will not be highly aware of all objects in their local research organization, especially if it is a large institution. The stronger personal ties become, the more personal details the collaborating partners have about each other, and thus the higher the overall awareness in the different aspects and forms of awareness described above.

Figure 1. A Layer Model of Awareness in Research Networks (the nested layers around the researcher, from self-awareness out to the research domain, shown at three points in time t-2, t-1 and t, with distance increasing outwards)

We will now describe the five layers of the LMARN that were derived from the interviewees’ descriptions and discuss what impacts the overall awareness of objects in the respective layers.

3.1 Self-awareness

Self-awareness refers to a researcher’s consciousness of his own identity as a researcher and of how colleagues assess his work. The critical approach to one’s own strengths and weaknesses, skills and competencies is also part of self-awareness, as is the estimation of one’s research opportunities and connections. Self-awareness is heavily related to reflection about one’s own practices and how others perceive one’s work.
Based on a clear understanding of one’s identity in a Research Network it becomes feasible to value recommendations, contextualize them and connect them to one’s own work (see Berlanga and Sloep [3] for related work on learner identities).

3.2 Awareness of the local research organization

The first layer of awareness that we could derive from the interviews is awareness of the local research organization. This refers to the knowledge “about [one’s] own workplace, what is really happening in [one’s] own group” (P10, 23, ll. 251-253). Depending on the size of the organization there might be additional nuances of awareness for one’s own small team, the group in which the team is located, as well as the institute or department in which the group resides. The interviewees were also very clear about the fact that “the research organization [they] work in, is itself distributed and that’s quite a complex social and organizational network for awareness of what [they] are all doing with regard to [their] work together” (P1, 39, ll. 32-38).

3.3 Awareness of current projects

Also within the first layer of awareness is the awareness of the current projects a researcher is involved in. Regardless of the specific role and position of the single researcher, being an active part of a project has a major impact on the awareness of the activities, people and decisions within that project. Through regular meetings and intensive collaborative work, project members are able to develop mutual awareness in multiple aspects to an extent that outsiders could hardly gain. This awareness often goes beyond purely project-related issues and spans social, personal and relational issues; it also strengthens the personal ties between project members and participating affiliations.
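The personal ties built up during such joint work can be modelled, purely for illustration, as a strength value that decays over time and is refreshed by interaction. A minimal sketch; the half-life of two years is an assumed parameter, not an empirical result from the interviews:

```python
def tie_strength(initial: float, years_since_contact: float,
                 half_life: float = 2.0) -> float:
    """Exponential decay of a collaboration tie; any interaction resets the
    clock by setting years_since_contact back to zero."""
    return initial * 0.5 ** (years_since_contact / half_life)

# Two assumed half-lives after a common project ended, a colleague has
# faded to a quarter of the original awareness-supporting tie strength:
print(round(tie_strength(1.0, 4.0), 2))  # 0.25

# A spontaneous talk restores full strength:
print(tie_strength(1.0, 0.0))  # 1.0
```

The exact decay curve is unknowable from the data; the sketch only captures the paper's qualitative claim that unfueled ties weaken and awareness declines with them.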
3.4 Awareness of the Personal Research Network

The Personal Research Network is composed of the people and objects that a researcher is interested in, has worked with in the past or plans to work with in the future. “Awareness of what people are doing within the broader [...] very distributed community” (P1, 36, ll. 9-39) that they operate in, which is “akin to [their] personal learning network” (ibid.), seems to be crucial in order to keep track of the work of close-by researchers. Often, ties to fellow researchers lose their strength once a common project has finished, and thus the overall awareness of their activities declines. It also often requires much personal engagement to keep the mutual awareness alive. If this effort is not fueled, it may happen that colleagues vanish into the outer, less-attended layers of a research domain.

3.5 Awareness of a research domain

A research domain is the most abstract layer in the LMARN. Here, insight into the general connections, experts, projects and trends in a domain like TEL, Recommender Systems or Microbiology is relevant. Being able to trace “what projects are being started” (P22, 33, l. 76) and “what are the latest, the hottest trends” (P39, 44, ll. 205-207) in a domain is deemed of great importance to stay updated. Many researchers said they serve as reviewers for conferences, journals and books on a regular basis in order to “get, you know, early copy of what the people are working on” (P13, 30, ll. 106-108). Researchers stated that they are “trying to follow what is done in the other research projects” (P34, 40, ll. 122-123) in order to keep up-to-date about progress being made in their domain. Having awareness of a research domain is important for contextualizing one’s own ideas, approaches and methods, but also matters when it comes to bids for funding.
Then researchers need to know what has been done in the past, what is in the making presently and where the challenges for future research lie. Being aware of where the research domain is moving and who is working on what then enables researchers to approach colleagues saying “I’m working on a similar thing, perhaps we could write a grant together” (P15, 23, ll. 90-95).

Based on the above elaborations and the empirical results of the conducted interviews, and contributing to the answer to research question 1, we propose a succinct description of awareness in the context of Research Networks:

Awareness in the context of Research Networks is an understanding of one’s own work and that of others in a given research domain. It bears on many different objects and supports the perception of how one is connected to others, what they are doing and how those activities shape the Research Network as a whole. Awareness in Research Networks involves multiple forms and aspects and is dependent on the physical location and the strength of the relational ties of objects in the individual awareness space. Generally, the overall awareness of objects declines gradually the farther an object is away from someone’s current working focus and personal interest. Awareness is an enabler of social interactions, provides a framework for collaborative activities and may positively influence information sharing.

4 Discussion

In this paper we presented the results of an interview study with 42 researchers that led to the empirical identification of six different forms and five aspects of awareness. Some of the identified forms are also commonly used in CSCW research. Knowledge and cultural awareness, however, have not yet been discussed within the CSCW community, as they do not directly impact the productivity of knowledge workers, which is an important criterion in that research community.
The derived aspects of awareness, on the other hand, indicate areas in which researchers’ awareness could be further supported with future developments and specifically tailored tools. Awareness requires a general interest in others and their work, and even the best tools to support scholarly awareness will not overcome narrow-mindedness and egocentrism.

The layer model of awareness in Research Networks is derived directly from the interview data, from the experienced researchers’ gradations of awareness combined with the decrease in overall awareness with distance. We acknowledge that this model is not universally valid but serves as a general heuristic of the awareness of objects in Research Networks. The applied method, modeled after Mayring [15], limited our possibilities for interpretation, as it only allows categories to be formed inductively and the interviewees’ statements to be reported. While it is generally true that researchers will be less aware of more distant objects, we also presented counterexamples. Moreover, we know that the presented layers will often overlap and thus blur the strict separation of the five layers.

The presented succinct description of awareness in the context of Research Networks may help researchers to better grasp the complexity of the term in the networked collaboration of researchers, which is heavily entangled with staying up-to-date about activities, trends and social interactions. In contrast to CSCW research, awareness support in Research Networks should therefore be broader in its social, methodological and technological scope. Moreover, the metrics for evaluating the success of awareness support have to be fundamentally different from those in CSCW research.
Now that we have discerned various forms, aspects and layers of awareness in Research Networks, further research should investigate how the complex networks of different objects can be visualized in a way that respects the privacy of individual researchers and prevents the unwanted sharing of personal information. It could also seek to support researchers in identifying how their networks overlap with those of other researchers (P36, 34). Such representations need to allow for interactive changes of the level of detail and would best be integrated into awareness dashboards for researchers. Such dashboards would allow access to relevant objects in the researchers’ Personal Research Network, from their Local Research Organization and from their current projects. Moreover, they would help researchers to retrieve their own objects and those from the overall research domain [22].

Finally, and paraphrasing one of our interviewees, it is important to state that awareness can be a problem when there is too little of it, as this may lead to duplicated work and delayed innovation. On the other hand, awareness can also be a problem if there is too much of it, as it may overburden the individual with too much allegedly relevant information. The key to creating added value with awareness support is to find the optimal balance.

References

1. Apple Inc. Siri. The intelligent assistant that helps you get things done. Available online at http://www.apple.com/iphone/features/, accessed 18.11.2011, 2011.
2. J. E. Bardram and T. R. Hansen. Context-Based Workplace Awareness. Computer Supported Cooperative Work, 19:105–138, 2010.
3. A. J. Berlanga and P. B. Sloep. Towards a Digital Learner Identity. In F. Abel, V. Dimitrova, E. Herder, and G.-J. Houben, editors, Augmenting User Models with Real World Experiences Workshop (AUM). In conjunction with UMAP 2011, July 2011.
4. S. Creagh. Princeton goes open access to stop staff handing all copyright to journals – unless waiver granted.
Available online at http://bit.ly/princeton-goes-open-access, accessed 18.11.2011, 2011.
5. J. Dehler-Zufferey, D. Bodemer, J. Buder, and F. W. Hesse. Partner knowledge awareness in knowledge communication: Learning by adapting to the partner. The Journal of Experimental Education, 79(1):102–125, 2011.
6. P. Dourish and V. Bellotti. Awareness and coordination in shared workspaces. In Proceedings of the 1992 ACM conference on Computer-supported cooperative work, CSCW ’92, pages 107–114, New York, NY, USA, 1992. ACM.
7. P. Drucker. Landmarks of Tomorrow: A Report on the New ’Post-Modern’ World. Harper and Row, New York, 1959.
8. T. Engelmann, J. Dehler, D. Bodemer, and J. Buder. Knowledge awareness in CSCL: a psychological perspective. Computers in Human Behavior, 25(4):949–960, 2009.
9. M. H. Goldhaber. The Attention Economy and the Net. First Monday, 2(4), 1997.
10. S. Goldstein. The war for attention: Summer 2006. Available online at http://majestic.typepad.com/seth/2006/07/media_futures_s.html, accessed 18.11.2011, July 2006.
11. T. Gross, C. Stary, and A. Totter. User-Centered Awareness in Computer-Supported Cooperative Work-Systems: Structured Embedding of Findings from Social Sciences. International Journal of Human-Computer Interaction, 18(3):323–360, 2005.
12. C. Gutwin and S. Greenberg. The effects of workspace awareness support on the usability of real-time distributed groupware. ACM Trans. Comput.-Hum. Interact., 6:243–281, September 1999.
13. N. Henry, H. Goodell, N. Elmqvist, and J.-D. Fekete. 20 Years of Four HCI Conferences: A Visual Exploration. Intl. Journal of Human-Computer Interaction, 23(3):239–285, 2007.
14. L. Loevstrand. Being Selectively Aware with the Khronika System. In Proceedings of the Second European Conference on Computer-Supported Cooperative Work - ECSCW’91, 1991.
15. P. Mayring. Qualitative Inhaltsanalyse - Grundlagen und Techniken.
Beltz Verlag, Weinheim, Basel, 2010.
16. T. Nagel, E. Duval, and F. Heidmann. Visualizing Geospatial Co-Authorship Data on a Multitouch Table. In Proceedings of Smart Graphics 2011, 2011.
17. M. Nielsen. The Future of Science: Building a better collective memory. Available online at http://michaelnielsen.org/blog/the-future-of-science-2/, accessed 31 December 2010, July 2008.
18. J. Priem, D. Taraborelli, P. Groth, and C. Neylon. Alt-metrics: A manifesto (v.1.0). Available online at http://altmetrics.org/manifesto, accessed 18 August 2011, October 2010.
19. R. Procter, R. Williams, J. Stewart, M. Poschen, H. Snee, A. Voss, and M. Asgari-Targhi. Adoption and use of Web 2.0 in scholarly communications. Phil. Trans. R. Soc. A, 368:4039–4056, 2010.
20. S. Quappe and G. Cantatore. What is Cultural Awareness, anyway? How do I build it? Available online at http://www.culturosity.com/pdfs/WhatisCulturalAwareness.pdf, accessed 18.11.2011, 2005.
21. W. Reinhardt, C. Meier, H. Drachsler, and P. B. Sloep. Analyzing 5 years of EC-TEL proceedings. In Carlos Delgado Kloos, Denis Gillet, Raquel M. Crespo García, Fridolin Wild, and Martin Wolpers, editors, Towards Ubiquitous Learning. Proceedings of the 6th European conference on Technology Enhanced Learning, number 6964 in LNCS, pages 531–536. Springer Berlin / Heidelberg, 2011.
22. W. Reinhardt, C. Mletzko, H. Drachsler, and P. B. Sloep. Design and evaluation of a widget-based dashboard for awareness support in Research Networks. Interactive Learning Environments, in print, 2012.
23. W. Reinhardt, A. Wilke, M. Moi, H. Drachsler, and P. B. Sloep. Mining and visualizing Research Networks using the Artefact-Actor-Network approach, chapter 10, pages 233–267. Computational Social Networks: Mining and Visualization. Springer London (in print), 2012.
24. C. Schmandt and N. Marmasse. User-centered location awareness. Computer, 37(10):110–111, 2004.
25. K. Schmidt.
The problem with Awareness. Computer Supported Cooperative Work, 11:285–298, 2002.
26. Scientific Software Development GmbH. Atlas.ti. Available online at http://www.atlasti.com/, accessed 18.11.2011, 2011.
27. B. Shneiderman. Science 2.0. Science, 319(5868):1349–1350, 2008.
28. P. B. Sloep, M. Van der Klink, F. Brouns, J. Van Bruggen, and W. Didderen, editors. Leernetwerken; Kennisdeling, kennisontwikkeling en de leerprocessen [Learning Networks: Knowledge Sharing, Knowledge Development and Learning Processes]. Bohn, Stafleu, Van Loghum, 2011.
29. The Open University. The Flashmeeting Project. Available online at http://flashmeeting.open.ac.uk/home.html, accessed 18.11.2011, 2011.
30. S. Thorin. Global Changes in Scholarly Communication. In Hsianghoo Ching, Paul Poon, and Carmel McNaught, editors, eLearning and Digital Publishing, volume 33 of Computer Supported Cooperative Work, pages 221–240. Springer Netherlands, 2006.
31. M. M. Waldrop. Science 2.0. Scientific American, 298(5):68–73, 2008.
32. F. Wild, X. Ochoa, N. Heinze, R. M. Crespo, and K. Quick. Bringing together what belongs together: A recommender-system to foster academic collaboration. In Proceedings of the 2009 Alpine Rendez-Vous, 2009.

Supporting Scholarly Awareness and Researchers’ Social Interactions using PUSHPIN

Wolfgang Reinhardt, Pranav Kadam, Tobias Varlemann, Junaid Surve, Muneeb I. Ahmad, and Johannes Magenheim
University of Paderborn, Department of Computer Science, Computer Science Education Group, Fuerstenallee 11, 33102 Paderborn, Germany
wolle@upb.de, pdkadam@mail.upb.de, tobiashv@upb.de, jsurve@mail.upb.de, muneeb06@gmail.com, jsm@upb.de

Abstract. With the advent of Research 2.0, the way research is conducted has significantly changed. New tools and methodologies have emerged, and an increasing amount of research is conducted in networked communities, including through the use of social networking tools.
Apart from the well-known general-purpose social networks, smaller, tailored social networks have emerged that are geared towards the specific needs of researchers. As more and more potentially relevant information is being made available, many researchers feel the need for awareness support in order to cope with the available amount of data. In this article we introduce the PUSHPIN application, which aims at supporting researchers’ awareness of publications, peers and research trends. The application is based on an eResearch infrastructure that analyzes large corpora of scientific publications and combines the extracted data with the social interactions in an active social network.

Keywords: research 2.0, eResearch infrastructure, scholarly communication, social networking, hadoop, storm, big data analysis, near-copy detection, object-centered sociality, bibliometrics

1 Introduction

In the early days, the Internet was mostly a top-down information distribution system in which only few people provided information. Users of the Internet merely consumed the information without being able to interact with it or easily create their own. With the rise of Web 2.0, Internet usage has been revolutionized: it has enabled people to participate more easily in the spread of information and in global discourse [9,20,21]. The developments around Web 2.0 have resulted in a wide range of new tools and methodologies, which reshaped social interaction and the distribution of news and other content, and fostered user participation. Applications like Facebook and Twitter have not only had an impact on the worldwide social system but have also influenced researchers to build applications that modernize how research is done. The usage of Web 2.0 tools, practices and methodologies in the context of scholarly communication has recently been labeled Science 2.0 or Research 2.0 [27,29].
Similarly, the term eResearch is used for technologies and infrastructures that support Research 2.0, big data analysis and data sharing on a large scale. Scholarly communication generally refers to the publication and peer review of scientific publications. In line with [22,23], we consider scholarly communication in a broader scope and include every social interaction and communicative activity that is part of the research cycle. Thus, we especially consider the joint development of ideas and the exchange of short texts, such as tweets or status updates, as potentially relevant research information. Moreover, we regard the use of social networks as a very relevant part of modern research methodology. Although Facebook has evolved into the de-facto standard among social networking sites (SNS), there are several SNS that are tailored to researchers and that help them connect to like-minded researchers, publications and other content. Applications like Mendeley1, ResearchGate2, Academia.edu3 or iamResearcher4 compete with Facebook by providing features that cannot be found in the general-purpose social network. Mendeley, for example, focuses on the sharing and annotation of scientific documents in private or public groups. Moreover, it supports researchers in generating bibliographies and recommends publications that the researcher might be interested in. However, the new way of conducting research, communicating research ideas and findings, and sharing data also results in a very scattered network of potentially relevant information. Researchers are in urgent need of awareness support tools and techniques that provide detailed recommendations and hints for possible collaborators.
Many of the existing approaches seem to be based on first-level metadata and collaborative filtering only; this is where PUSHPIN (Supporting Scholarly Awareness in Publications and Social Networks) will enhance the state of the art. Through in-depth publication and citation analysis combined with the power of the social graph, PUSHPIN aims to provide better awareness support for researchers than existing tools.

In the following sections, we present our new application PUSHPIN and its approach to awareness support for researchers (Section 2). In Section 3, we present the implementation details of PUSHPIN and the underlying eResearch infrastructure. We also discuss the three user interfaces for web, mobile and tabletops that PUSHPIN provides for its users. Finally, in Section 4, we give an outlook on future research opportunities and present our evaluation and public release plans.

1 http://www.mendeley.com/
2 http://www.researchgate.net/
3 http://academia.edu/
4 http://www.iamresearcher.com/

2 The PUSHPIN approach for awareness support for researchers

PUSHPIN is an ongoing research project at the University of Paderborn (Germany) that aims to provide awareness support for researchers through the integration of social networking and big data analysis features. While many features of the overall approach have already been implemented and can be used, others are still under development. In this section we give an introduction to how PUSHPIN will help researchers become and stay aware of their connections to other researchers and publications.
In particular, we describe how the social layer and the available social networking features contribute to the overall awareness of researchers (Section 2.1) and discuss the power of email notifications to keep users engaged and returning to the platform (Section 2.6). In Section 2.3, we describe how the automatic analysis of big sets of publications supports object-centered sociality in PUSHPIN and how it gives insight into the relations of people and objects in PUSHPIN. Moreover, we present visualizations (Section 2.5) and recommendations (Section 2.4) that support researchers' awareness, and we discuss how we use mobile devices and interactive displays to access data in our ecosystem (Section 2.7).

2.1 The Social Layer of PUSHPIN

To raise awareness of a research idea and to create a circle of supporters, it is essential for the idea to reach a wide audience. Social networking makes it possible to connect to potential collaborators, thereby supporting the start of an incipient research network. Where social networking tools are often centered on people, social awareness tools tell a story using the various data associated with people and help us build a network based on those data. Often, we also find social networks that assemble around specific objects, which become the hub for social interactions [6]. In PUSHPIN, the objects that realize this object-centered sociality [12] are scientific publications. While PUSHPIN can identify that there is a direct connection between two researchers because they follow each other, we also provide social awareness by stating that there are x publications that both of them have cited in their own writings. This way, the system may make the researchers aware of their shared interest and common knowledge in a certain research area and may trigger a user action.
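The shared-citation signal described above amounts to intersecting the two researchers' reference lists (a form of bibliographic coupling). The sketch below shows the idea in plain Python; the function and data names are illustrative, not PUSHPIN's actual API:

```python
def shared_citations(refs_a, refs_b):
    """Publications cited by both researchers (bibliographic coupling).

    refs_a, refs_b: iterables of publication identifiers (e.g. DOIs)
    extracted from each researcher's papers.
    """
    return set(refs_a) & set(refs_b)

# Toy data: identifiers of works cited by two researchers.
alice = ["doi:10.1/a", "doi:10.1/b", "doi:10.1/c"]
bob = ["doi:10.1/b", "doi:10.1/c", "doi:10.1/d"]

common = shared_citations(alice, bob)
print(len(common))  # 2 publications that both have cited
```

The size of this intersection is exactly the x reported to the two researchers; in a real system the reference lists would come from the citation analysis described in Section 2.2.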
The social layer of PUSHPIN aims to support users in creating an active social network, built by the users themselves through social interactions and conscious activities. The other parts of PUSHPIN contribute to a passive social network that is generated automatically by the system from abstract information and activities such as collaboratively writing publications, working at the same institution or citing similar works [14]. Together, social networking features and social awareness support can provide a powerful framework to support research [14,23]. The following points describe how PUSHPIN supports object-centered sociality and active network construction.

Sign-up and sign-in using existing accounts: To ease the sign-up and sign-in process and to let users reuse their existing social profiles as login, we enable login via Facebook, Twitter and Mendeley. Moreover, PUSHPIN gets access to the respective social graphs and can recommend friends from the other social networks who already use PUSHPIN.

User profile updates: A user's profile plays an important role in getting to know the user. Among other things, it contains information about the user's affiliations, research interests and research disciplines, which highlight the user's research areas. This information has an impact on engagement in social networks, as it reflects the personality of a user. Any changes to a user's profile are presented to the followers of that user in their activity stream.

Following a user: Users can follow other users to get an account of all their activities. The numbers of followers (users following the current user) and followees (users followed by the current user) are shown on a user's dashboard as well as on any user's profile.
The number of followers of a user can be taken as a quantification of that user's popularity and networking efforts on PUSHPIN.

Status updates, likes and comments: Sharing status updates is a common construct in social networking applications that allows users to share their current thoughts or their work progress. In PUSHPIN, status updates can be used not only for sharing ideas or current readings but also for requesting help or simply sharing news. All followers of a user can like and comment on a status message, which may eventually result in a discussion of the shared content. Moreover, users who have connected other social media accounts to their PUSHPIN account can automatically share a status update with all of their other accounts.

Private messaging: To support non-public information exchange, all users on PUSHPIN can exchange private messages with each other. Messages are stored in conversations that multiple users can be part of. Any member of a conversation can add additional users, and each user can leave a conversation at any time.

User's activities: When PUSHPIN users successfully sign in, they are redirected to their personal dashboard. A significant part of the dashboard consists of an activity stream, a sorted summary of activities. These activities consist of stories such as status updates of users, likes and comments on statuses, changes in profile information, users following and tagging other users, and users uploading, bookmarking, rating and tagging publications. In short, it tells the stories of the users' interactions with other users and publications. Users only see updates of users whom they follow. Apart from the dashboard, users can also see the activities of a particular user on that user's profile. This kind of feature is common to most social networking platforms, including Twitter and Facebook, and hence most users are already familiar with it.
Uploading publications: Since scientific publications are the central hub for object-centered sociality in PUSHPIN, users can upload publications to the service5. This may be done by selecting publications on the local computer and uploading them, or by connecting a Mendeley account to PUSHPIN. In the latter case, all PDFs in the user's Mendeley collections are automatically imported into the PUSHPIN infrastructure. All publications that have been uploaded to the system are then automatically analyzed and information is extracted from them (see Section 2.3 for a detailed description of this process).

Interacting with publications: All users have access to dedicated profiles of all publications in PUSHPIN. On a profile, users can rate the publication and share it on other social networking sites. Moreover, users can recommend the publication to other PUSHPIN users or send the recommendation via email. Finally, users can bookmark the publication and put it into one of their collections on PUSHPIN.

Tagging objects: Social tagging is one of the most prominent features of Web 2.0 [15] and is available for all kinds of objects in PUSHPIN. Users can tag publications and institutions, and to classify other users they can also tag users (commonly referred to as people tagging [3,7,19]). When someone explores a keyword, all users tagged with that keyword form part of the search results in the researchers' list.

2.2 Publication analysis

All scientific publications that are uploaded to the PUSHPIN infrastructure6 are automatically analyzed according to several aspects. This automatic analysis is a series of processing steps that are executed after a publication has been uploaded to the PUSHPIN system. The first step is to check whether the publication is already in the publication corpus and whether a full analysis has to be started.
If the publication is new, it is inserted into HBase7. After that, Storm8 is triggered for further analysis of the publication. This analysis involves activating the metadata extraction and reference extraction modules to obtain the metadata and references from the publication. The metadata referred to can be the title, the author(s) and their email addresses, the authors' institutions, the abstract, and keywords. For each reference cited in the publication, the reference extraction module looks for title, author(s), year of publication and publication outlet. The two modules use GROBID9 and ParsCit10 as key software tools. If additional metadata in BibTeX or PLoS XML format is available, the modules make use of this information as well. The extracted data is then compared and combined to obtain the most exact metadata (similar to our approach in [26]). Alongside metadata extraction, Storm also triggers a module that creates thumbnails of each page of the uploaded publication.

5 Due to potential copyright infringements, we will only process the uploaded data in order to extract metadata from the publications. We will not, however, allow the public download of the PDFs shared with the PUSHPIN system.
6 Currently we only process articles in PDF format. In particular, we do not process books or theses.
7 http://hbase.apache.org
8 http://storm-project.net

2.3 Near-copy detection and publication similarities

A problem of modern science is the rising amount of plagiarism. In the digital age it has become much easier to access scientific publications and to copy content. In order to detect conscious or unconscious plagiarism, we added algorithms to PUSHPIN that are capable of near-copy detection (NCD). NCD means that correctly cited paragraphs will also be detected.
To distinguish between full-text quotes and plagiarism, additional algorithms would have to be used to detect plagiarism indicators; this could be done in future projects. The NCD algorithm used in PUSHPIN is inspired by the fuzzy string similarity detection algorithm described in [1]. Each uploaded paper first goes through initial text preprocessing steps before it can be analyzed by our NCD algorithm. These steps remove irrelevant and uninteresting parts of the text and make different texts better comparable:

Text extraction: The papers are uploaded as PDF files. From these files, the text is extracted along with information about its position in the PDF file. This gives the exact location of a copied text in the documents it appears in.

Text cleaning: The extracted text contains information that is uninteresting for the NCD algorithm, like headers and footers of the document. These lines are removed and hyphenated words are joined again.

Language detection: Some algorithms need to know the language of the text, as they work with trained models that are specific to one language.

Part-of-speech tagging: Part-of-speech (POS) tagging determines the grammatical role of a word in a sentence. This information is necessary for detecting synonym groups of words later on. Moreover, POS tagging is also useful in combination with lemmatization for calculating word clouds.

Lemmatization and stemming: For comparing words in our NCD algorithm, it is necessary to reduce all words to their principal form, which is the same for all tenses and for plural and singular forms. Lemmatization transforms words into the principal form using a dictionary algorithm. This algorithm is expensive in time and memory, but the results are real words, which can also be displayed in word clouds. Stemming is an algorithmic transformation of the input word to its stem.
The stem, however, need not be a real word and thus should not be used in word clouds or the like, but its calculation is very fast.

Number and stop word removal: In this step, we remove unimportant elements from the text in order to reduce the complexity of the NCD computation.

Synonym detection: Copiers often try to conceal copies by replacing words with synonyms, which makes it harder to detect parts of a text as copied. This makes it necessary to detect the synonym groups that a given word belongs to and to check all synonyms of the word for potential copies. In this step we make use of the WordNet project [16,8] and a modified Lesk algorithm [2] for distinguishing the different meanings of a word.

After the text preprocessing is finished, the NCD algorithm calculates the similarities between all sentences of the publication and the preprocessed background corpus. This procedure is inspired by [1] but additionally incorporates the similarity between two synonym groups. Whereas the original algorithm uses a similarity of 1 if two words are equal, a similarity of 0.5 if they are in the same WordNet synonym group and 0 in all other cases, we calculate the Wu and Palmer WordNet similarity [33] between two words if they are not equal. In addition to the sentence-level calculation of similarities, we also compute several text-based similarity measures on the full texts of all publications in the PUSHPIN corpus with respect to each other.

This computation needs very large computational power and produces a lot of similarity data. We rely on the Apache Hadoop framework to scale the computation to a cluster of computers (see Section 3 for a detailed inspection of the PUSHPIN eResearch infrastructure).

9 http://grobid.no-ip.org
10 http://aye.comp.nus.edu.sg/parsCit
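To make the word-level scoring concrete, the following sketch implements the baseline scheme from [1] (score 1.0 for equal words, 0.5 for words in the same synonym group, 0 otherwise) on a tiny hard-coded synonym table. PUSHPIN replaces the 0.5 case with the Wu and Palmer WordNet similarity, which is omitted here to keep the example self-contained; the synonym groups and sentence data are purely illustrative.

```python
# Toy synonym groups standing in for WordNet synsets (illustrative only).
SYNONYMS = [{"car", "automobile"}, {"fast", "quick", "rapid"}]

def word_sim(a, b):
    """Baseline word similarity of the fuzzy NCD scheme: 1.0 if equal,
    0.5 if in the same synonym group, else 0.0. (PUSHPIN substitutes the
    Wu-Palmer WordNet similarity for the non-equal case.)"""
    if a == b:
        return 1.0
    if any(a in g and b in g for g in SYNONYMS):
        return 0.5
    return 0.0

def sentence_sim(s1, s2):
    """Symmetric fuzzy similarity of two tokenized sentences: each word is
    scored against its best match in the other sentence, and the two
    directional averages are averaged."""
    def one_way(x, y):
        return sum(max(word_sim(w, v) for v in y) for w in x) / len(x)
    return (one_way(s1, s2) + one_way(s2, s1)) / 2

a = ["the", "car", "is", "fast"]
b = ["the", "automobile", "is", "quick"]
print(sentence_sim(a, b))  # 0.75: two exact matches, two synonym matches
```

In the real pipeline the tokens would already be lemmatized, stop-word-filtered and disambiguated by the preprocessing steps above, so the scores operate on principal forms rather than raw surface words.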
2.4 Recommendations

In PUSHPIN we use a broad set of recommender algorithms, for the following reasons:

1. The system has to take into consideration the networks that result from the extracted co-authorship information as well as the co-citation and bibliographic coupling data of publications.
2. For item-based recommendations, the system also has to employ textual similarities, clustering results, author-assigned and extracted keywords as well as user tags.
3. The system is also capable of tracking user activity on the PUSHPIN web application, storing it, and recommending resources based on it (e.g., users who bookmarked publication X also bookmarked publication Y; mutual followers; you might also assign these tags to the resource because others did so; people who visited this resource also visited that resource).

To sum up, the recommender system takes all of the above information into account. In addition, the recommendations will be textual and visual, and can also be explained to the user.

2.5 Visualizations

Visualizations prove very useful for presenting and understanding large and complex data sets and for mining the hidden patterns within them. They serve as a very useful decision support tool in research networks and help researchers become and stay aware of large data sets [18,23]. Sometimes they also allow interaction with the data in order to enhance understanding [31]. In PUSHPIN, visualizations play an important part in supporting social awareness, using a set of aesthetic visualizations of data related to researchers, affiliations and publications. We briefly look at some of the visualizations that we have, or plan to have, in PUSHPIN.

Usage and statistical visualizations: This category of visualizations will be prevalent throughout PUSHPIN.
For researchers, there will be a simple chart depicting the development of followers, co-authors, publications, etc. Similarly, there will be charts showing how the number of citations and bookmarks of a publication developed over time. Besides that, visualizations based on general statistical data, like typical co-authorship network sizes, most referenced articles and top research disciplines, will have a place in PUSHPIN.

Trend-based visualizations: This category will include trends in numbers as well as trends in the usage of text over time. Trending citations, authors, topics and keywords will be visualized in an appropriate manner.

Similarity-based visualizations: Details of textual similarity between papers and of bibliographic coupling similarity between papers will be explored here. Moreover, appropriate visualizations of paragraphs found during near-copy detection will be developed and provided in PUSHPIN.

Map-based visualizations: Geo-spatial visualizations show the geographical locations of researchers and institutions and help us understand widely spread co-authorship networks and the associations of different institutions (inspired by the works of [17,18]). In particular, we have interactive visualizations that show and link to various information related to a researcher or an institution and the relations between them.

Co-authorship visualizations: For a researcher, there will be a radial visualization with the researcher at the center and his or her co-authors arranged around them in circles. This gives us a chance to explore the co-authors of this researcher. When a user explores a discipline, a research interest, an institution or a tag, there can be sets of co-authorship networks related to the query that may not be connected. Hence, we do not use a radial layout here; instead, we build a graph comprising the different (unconnected) networks to show the various sets effectively.
Besides the above categories, we will also have tag-based visualizations like word clouds, spark lines, etc., as well as circle-based visualizations.

2.6 Email notifications

As Fred Wilson points out, "if you want to drive retention and repeat usage [of your service], there isn't a better way to do it than email" [32]. Instead of making email disappear, social media has created new application fields for email and makes heavy use of it in all kinds of domains. In PUSHPIN, we also use the power of email notifications to keep the users of the system up to date on what is going on in PUSHPIN. Users will receive emails when they have new followers or when someone comments on their publications. PUSHPIN will send alerts if it finds new publications of an author or if someone tags an author's publication. Users who do not want to be bothered with emails can deactivate them or adjust their granularity and frequency.

2.7 Access on mobile devices and interactive displays

In our previous research we found that mobile access to research information, together with context awareness and push notification of relevant information, is very relevant for researchers' overall awareness of their research networks [25,23]. Moreover, research conducted by Nagel et al. [17,18] and Vandeputte et al. [28] shows that interactive tabletop applications are useful for sensemaking of publication data and co-authorship networks. Most of the existing social networks and Research 2.0 applications also make allowance for the immense pervasiveness of mobile devices among all social classes by providing dedicated mobile applications that resemble the features of their web-based counterparts. Often, the mobile applications even make extensive use of the specific technical characteristics of mobile devices, such as the camera, microphone and GPS positioning.
Against this background, we decided to provide a mobile application that can be used by all PUSHPIN users, and a multitouch application intended for special occasions such as conferences. The PUSHPINmobile application resembles a significant part of the features of the web application. Making use of specific mobile interface patterns such as dashboards and multitouch gestures, researchers can access all the information from the social layer and engage in social interactions with their peers. Researchers are also able to view their own and other researchers' profiles, search for nearby researchers depending on their physical location, and explore the different research disciplines, institutions and publications in the system. Moreover, researchers will also be able to tag other researchers and communicate with each other through private messages.

Beyond that, users of the mobile application will be able to authenticate against and exchange data with the multitouch table application (PUSHPINMT). For this, researchers can connect to PUSHPINMT using either Bluetooth or NFC. Additionally, the mobile application can bring up QR codes that can be scanned by the multitouch application. The QR codes can contain information about the researcher's own or other researchers' profiles, institutions or publications. On PUSHPINMT, users will be able to explore their relations to other researchers and publications based on several scientometric measures. Moreover, they can explore the publications in PUSHPIN based on tags and other classifications. Finally, they can scan the QR code of any PUSHPIN object and get a virtual representation of the object on the tabletop.
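The paper does not specify the QR payload format for this mobile-to-tabletop handover; a minimal sketch of how such an object reference could be serialized and parsed is shown below. The field names and the JSON encoding are our own illustrative assumptions, not PUSHPIN's actual protocol.

```python
import json

def encode_object_ref(obj_type, obj_id):
    """Serialize a PUSHPIN object reference for embedding in a QR code.
    The payload format here is hypothetical, not PUSHPIN's documented one."""
    return json.dumps({"type": obj_type, "id": obj_id})

def decode_object_ref(payload):
    """Parse a scanned payload back into (type, id), rejecting unknown types."""
    ref = json.loads(payload)
    if ref.get("type") not in {"researcher", "publication", "institution"}:
        raise ValueError("unknown PUSHPIN object type: %r" % ref.get("type"))
    return ref["type"], ref["id"]

payload = encode_object_ref("publication", 4711)
print(decode_object_ref(payload))  # ('publication', 4711)
```

The same payload could equally travel over the Bluetooth or NFC channels mentioned above; only the transport differs, not the object reference.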
3 PUSHPIN's eResearch infrastructure implementation

In this section we describe the technological underpinnings of PUSHPIN's eResearch and big data analysis infrastructure and the relevant technologies we employ in the realization of the PUSHPIN user interfaces.

3.1 Big data analysis

In modern web-based (social) applications, users create huge amounts of data. This data can be used for analyzing the system or for building recommender systems that advance the user experience. For PUSHPIN, large computational power is needed to analyze uploaded scientific papers: text extraction and manipulation, thumbnail creation, text analysis and similarity analyses, near-copy detection and metadata extraction. Most of the applied algorithms need large computational power and create huge intermediary data. To handle these needs, we decided to use the well-known and massively scalable frameworks Apache Hadoop11 and Twitter Storm12 for batch processing, handling large datasets and real-time analysis. Both frameworks are designed to run on clusters of consumer PCs, are robust against system faults and are optimized for highly parallel computation.

Storm is a distributed realtime computation framework developed by Nathan Marz. It consists of a master daemon called Nimbus, which controls a set of worker nodes called supervisors. The system is coordinated using the Apache ZooKeeper framework13. A processing chain in Storm is described by a topology of steps called bolts and is filled with data by a data source called a spout. The spout and the bolts are distributed over a cluster of computing devices and connected to each other via the messaging queues described in the topology. Each element of the topology is created with a specific parallelism factor, which generates multiple instances of this element on different nodes of the cluster. The framework passes a computing object from the spout to the first bolt and then from bolt to bolt, where the different tasks are executed.

In PUSHPIN, Storm is used for the first computing steps for an uploaded paper, where near real-time responses are required. For this, we use multiple Storm topologies. When a user has uploaded a paper, the first topology receives the paper and extracts the information needed for rendering the next webpage directly after the upload. After that, we can continue to process the paper asynchronously in order to extract information that takes more time to compute, like creating thumbnails of the pages, doing text processing, or updating trend-detection values.

The Apache Hadoop framework consists of two modules that deal with the batch processing of big data: 1) the Hadoop Distributed File System (HDFS) and 2) MapReduce.

11 http://hadoop.apache.org
12 http://storm-project.net/
13 http://zookeeper.apache.org

The Hadoop Distributed File System is an open source implementation of a fault-tolerant, self-healing, distributed filesystem for large datasets, inspired by the Google File System (GFS) [10]. It is designed to store large files, which are split and distributed over several nodes of a cluster, and to achieve high performance while serving the data to computing processes. The processing methodology of Hadoop is an implementation of the MapReduce paradigm [5], which is designed to handle large amounts of data by splitting the input stream into chunks that are computed on several nodes of a cluster. The MapReduce paradigm divides the processing into two stages to reduce the complexity. The first stage (map) processes input key/value pairs and outputs a set of intermediate key/value pairs, which are sorted and transferred to the second stage (reduce).
The reducer eventually merges all intermediate values associated with the same key and outputs results for that key. Hadoop provides batch processing functionality that scales with the number of nodes in a cluster and lends itself excellently to parallelizing a wide range of algorithms, especially in data mining and information retrieval. In PUSHPIN, Hadoop is used for several algorithms that need large computational performance and process big data. Amongst others, these algorithms compute the similarity of texts, cluster the papers, build recommender models and run near-copy detection. Moreover, we use Apache Mahout14 for the calculation of text-based similarities, text clustering, classification and recommender algorithms based on Hadoop MapReduce.

3.2 Text preprocessing

As described in Section 2.3, we perform several text preprocessing steps before a paper can be analyzed by the near-copy detection algorithm. Text extraction and thumbnail generation are done using Apache PDFBox15. Since many algorithms need to know the language of a text, we use a Java-based language detection library16 for this. Part-of-speech tagging is realized using Apache OpenNLP17. Stemming and lemmatization of the extracted texts are implemented on top of the Mate Tools natural language analysis toolkit18. Finally, we make use of Apache Lucene19 in the process of removing numbers and stop words, which we consider not relevant for text similarities or near-copy detection.
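The two-stage MapReduce model from Section 3.1 can be illustrated in miniature by combining it with the preprocessing above: a map stage that emits (term, 1) pairs after stop-word and number removal, and a reduce stage that sums the counts per term. This is a plain-Python sketch of the paradigm, not PUSHPIN's actual Hadoop code; the stop-word list is a toy stand-in for the Lucene-based filtering.

```python
from itertools import groupby

STOP_WORDS = {"the", "a", "is", "of"}  # tiny stop-word list for the sketch

def mapper(doc):
    """Map stage: emit (term, 1) for every token that survives
    stop-word and number removal (cf. the preprocessing in Section 3.2)."""
    for token in doc.lower().split():
        if token not in STOP_WORDS and not token.isdigit():
            yield (token, 1)

def reducer(term, counts):
    """Reduce stage: merge all intermediate values for one key."""
    return (term, sum(counts))

def map_reduce(docs):
    # Shuffle/sort phase: sort and group intermediate pairs by key,
    # as Hadoop does between the map and reduce stages.
    pairs = sorted(kv for doc in docs for kv in mapper(doc))
    return [reducer(k, [v for _, v in group])
            for k, group in groupby(pairs, key=lambda kv: kv[0])]

docs = ["the cluster stores data", "data of the cluster"]
print(map_reduce(docs))  # [('cluster', 2), ('data', 2), ('stores', 1)]
```

On a real cluster, the map and reduce calls run on different nodes and the sorted intermediate pairs are streamed over the network, which is what lets the computation scale with the number of nodes.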
14 http://mahout.apache.org
15 http://pdfbox.apache.org
16 http://code.google.com/p/language-detection
17 http://opennlp.apache.org
18 http://code.google.com/p/mate-tools
19 http://lucene.apache.org

3.3 Metadata and reference extraction

During the metadata and reference extraction processes we try to accurately detect a publication's title, author(s), contact information like email and address data, as well as author-provided keywords and the publication's abstract. Moreover, we are interested in the list of references and all the relevant data from each reference. This metadata is extracted for different purposes, e.g., the attribution of publications to PUSHPIN users, the creation of co-authorship graphs, the calculation of recommendations, and the detection of reference and research trends.

Once a publication has been uploaded to PUSHPIN and inserted into HBase, the metadata and reference extraction modules are triggered by Storm. The process involves triggering ParsCit and GROBID in parallel threads. GROBID (GeneRatiOn of BIbliographic Data) employs the concept of Conditional Random Fields (CRFs) for pattern recognition and data extraction [30]. Using this, "GROBID extracts the bibliographical data corresponding to the header information (title, authors, abstract, etc.) and to each reference (title, authors, journal title, issue, number, etc.). The references are associated to their respective citation contexts" [13]. ParsCit also employs a CRF model at its core for metadata extraction, locating reference strings, parsing them and retrieving their citation contexts. It employs state-of-the-art machine learning models to achieve its high accuracy in reference string segmentation, and heuristic rules to locate and delimit the reference strings and to locate citation contexts [4].
Each tool does an independent metadata and reference extraction, and the two results are then combined with other potentially available metadata like BibTeX data or PLoS XML. This merging is necessary because the metadata extracted by the two tools sometimes differs, and at times either tool misses some important metadata. If available, the data in BibTeX or PLoS XML format is the most accurate source of information, since it has been created manually by people knowledgeable about the publication.

3.4 Sign-up and sign-in using OAuth

In PUSHPIN, we use the Open Authorization (OAuth) protocol20 to allow users to log in to PUSHPIN using their Facebook, Twitter or Mendeley accounts. OAuth "is a security protocol that enables users to grant third-party access to their web resources without sharing their passwords" [11]. Apart from this, PUSHPIN also serves as an OAuth service provider, which means that other websites can use PUSHPIN for the sign-up and sign-in of users. OAuth is also used to connect the three PUSHPIN user interfaces to the backend.

3.5 The PUSHPIN API

In PUSHPIN, we provide a REST (REpresentational State Transfer) API (Application Programming Interface) for communication between the frontends (web-based application, mobile application and multitouch table) and the Java backend. A frontend sends or requests data via the REST API, e.g., information about a certain resource such as a publication. The backend in turn returns a representation of the resource in JSON notation. The reasons for using REST (over other available web service styles such as SOAP) are that it is light-weight, simple and very popular among web applications, and that it provides better performance and scalability.

20 http://oauth.net

3.6 PUSHPIN user interfaces

PUSHPIN currently provides three user interfaces for its users.
The web-based application serves as the main interface to our service and will be used by the average user. Moreover, we provide a mobile application for Android smartphones that allows anytime, anywhere access to PUSHPIN's main features. Finally, we also provide a multitouch application for tabletop displays that supports users in exploring the PUSHPIN data in new ways.

Web-based application. The web-based PUSHPIN front-end is a self-contained application and serves as the primary application for most users (see Figure 1). It is written in PHP5 and builds on the state of the art in HTML5 and CSS3 development. It also makes extensive use of JavaScript to enhance the user experience, and various JavaScript frameworks are used for different visualizations.

Mobile application. The PUSHPINmobile application is developed using the Android 4 SDK and supports all smartphones running Android OS 4.0 and higher. PUSHPINmobile currently provides users with an interface to the social layer of PUSHPIN and lets them flip through their activity stream, like and comment on entries, and post new status updates. The application can scan QR codes of any PUSHPIN object and present the data related to that object. Moreover, users can locate themselves and see relevant researchers around them.

Multitouch application. The main purpose of the PUSHPINMT application is to provide different interactions with the data in PUSHPIN. In [24] we discern four basic modes of data exploration on PUSHPINMT: the 1) people-based, 2) topic-based, 3) event-based and 4) trend-based approach. Users can use the search to bring up researcher or publication profiles, or authenticate themselves using PUSHPINmobile or QR codes. Moreover, they can explore the relations between publications, which can be related by common references or authors, textual similarity or even by copied/cited paragraphs.
Finally, users can explore the trends in reference and publication data as well as the authorship patterns found during the automatic analysis of the publications.

4 Conclusion and future research opportunities

In this paper we have introduced the PUSHPIN approach to awareness support in research networks. In PUSHPIN we combine the best of two worlds: classic features of Facebook-like social networking sites and those of innovative eResearch infrastructures.

Fig. 1. Dashboard in the web-based PUSHPIN application

The integration of these features results in enhanced awareness support for researchers on both a social and a content layer. The recommender systems in PUSHPIN will recommend publications based not only on collaborative filtering but also on the actual content and reference data within the publications. Thus, PUSHPIN goes beyond the state of the art and might help overcome unwanted fragmentation in research networks, connecting researchers who would otherwise have stayed unknown to each other. In the coming months we will continue to improve the implementation of the analytical backend and further enhance the three user interfaces. We will invite selected users to an alpha test of the PUSHPIN web-based application in August and evaluate the existing features with them. The feedback on early versions of the software will help shape the further development. We plan to release the system to public beta in early October 2012.

References

1. Salha Alzahrani and Naomie Salim. Fuzzy Semantic-Based String Similarity for Extrinsic Plagiarism Detection. Lab report, Taif University, Saudi Arabia and Universiti Teknologi Malaysia, 2010.
2. Satanjeev Banerjee and Ted Pedersen. An adapted Lesk algorithm for word sense disambiguation using WordNet.
In Alexander Gelbukh, editor, Computational Linguistics and Intelligent Text Processing, volume 2276 of Lecture Notes in Computer Science, pages 117–171. Springer Berlin / Heidelberg, 2002.
3. Simone Braun, Christine Kunzmann, and Andreas Schmidt. People Tagging & Ontology Maturing: Towards Collaborative Competence Management, pages 133–154. Springer, 2010.
4. Isaac G. Councill, C. Lee Giles, and Min-Yen Kan. ParsCit: An open-source CRF reference string parsing package. In International Conference on Language Resources and Evaluation. European Language Resources Association, 2008.
5. Jeffrey Dean and Sanjay Ghemawat. MapReduce: simplified data processing on large clusters. Commun. ACM, 51(1):107–113, January 2008.
6. Jyri Engeström. Why some social network services work and others don't — or: the case for object-centered sociality. Available online at http://bit.ly/eJA7OQ (accessed 31 December 2010), April 2005.
7. Stephen Farrell, Tessa Lau, Stefan Nusser, Eric Wilcox, and Michael Muller. Socially augmenting employee profiles with people-tagging. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, UIST '07, pages 91–100, New York, NY, USA, 2007. ACM.
8. Christiane Fellbaum. WordNet. In Roberto Poli, Michael Healy, and Achilles Kameas, editors, Theory and Applications of Ontology: Computer Applications, pages 231–243. Springer Netherlands, 2010.
9. Christian Fuchs. Handbook of Research on Web 2.0, 3.0, and X.0: Technologies, Business, and Social Applications, volume II, chapter Social Software and Web 2.0: Their Sociological Foundations and Implications, pages 764–789. IGI-Global, Hershey, PA, 2010.
10. Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung. The Google file system. SIGOPS Oper. Syst. Rev., 37(5):29–43, October 2003.
11. Eran Hammer. Introducing OAuth 2.0, May 2010.
12. K. Knorr Cetina. Sociality with Objects: Social Relations in Postsocial Knowledge Societies. Theory Culture Society, 14(4):1–30, 1997.
13. Patrice Lopez.
GROBID: combining automatic bibliographic data recognition and term extraction for scholarship publications. In Proceedings of the 13th European Conference on Research and Advanced Technology for Digital Libraries, ECDL'09, pages 473–474, Berlin, Heidelberg, 2009. Springer-Verlag.
14. Tamara M. McMahon, James E. Powell, Matthew Hopkins, Daniel A. Alcazar, Laniece E. Miller, Linn Collins, and Ketan K. Mane. Social awareness tools for science research. D-Lib Magazine, 18(3/4), 2012.
15. David R. Millen, Jonathan Feinberg, and Bernard Kerr. Dogear: Social bookmarking in the enterprise. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '06, pages 111–120, New York, NY, USA, 2006. ACM.
16. George A. Miller. WordNet: a lexical database for English. Commun. ACM, 38(11):39–41, November 1995.
17. Till Nagel and Erik Duval. Muse: Visualizing the origins and connections of institutions on co-authorship of publications. In Proceedings of the Science 2.0 for Technology Enhanced Learning Workshop, 2010.
18. Till Nagel, Erik Duval, and Frank Heidmann. Visualizing geospatial co-authorship data on a multitouch tabletop. In Proceedings of the 11th International Conference on Smart Graphics, SG'11, pages 134–137, Berlin, Heidelberg, 2011. Springer-Verlag.
19. Peyman Nasirifard, Sheila Kinsella, Krystian Samp, and Stefan Decker. Social people-tagging vs. social bookmark-tagging. In Philipp Cimiano and H. Pinto, editors, Knowledge Engineering and Management by the Masses, volume 6317 of Lecture Notes in Computer Science, pages 150–162. Springer Berlin / Heidelberg, 2010.
20. Tim O'Reilly. What is Web 2.0. Available online at http://oreilly.com/web2/archive/what-is-web-20.html, 2005.
21. Tim O'Reilly and John Battelle. Web Squared: Web 2.0 Five Years On. Whitepaper, O'Reilly Media Inc., 2009.
22.
Rob Procter, Robin Williams, James Stewart, Meik Poschen, Helene Snee, Alex Voss, and Marzieh Asgari-Targhi. Adoption and use of Web 2.0 in scholarly communications. Phil. Trans. R. Soc. A, 368:4039–4056, 2010.
23. Wolfgang Reinhardt. Awareness Support for Knowledge Workers in Research Networks. PhD thesis, Open University of the Netherlands, 2012. Available online at http://bit.ly/PhD-Reinhardt.
24. Wolfgang Reinhardt, Muneeb I. Ahmad, Pranav Kadam, Ksenia Kharadzhieva, Jan Petertonkoker, Amit Shrestha, Pragati Sureka, Junaid Surve, Kaleem Ullah, Tobias Varlemann, and Vitali Voth. Exploration wissenschaftlicher Netzwerke und Publikationen mittels einer Multitouch-Anwendung [Exploration of Research Networks and Publications using a Multitouch Application]. In Florian Klompmaker, Karten Nebe, and Nils Jeners, editors, Proceedings of the 3rd Workshop Kollaboratives Arbeiten an interaktiven Displays [Collaborative Work on Interactive Displays] at the Mensch & Computer Konferenz 2012, September 2012.
25. Wolfgang Reinhardt, Christian Mletzko, Hendrik Drachsler, and Peter B. Sloep. Design and evaluation of a widget-based dashboard for awareness support in Research Networks. Interactive Learning Environments, 2012.
26. Wolfgang Reinhardt, Christian Mletzko, Benedikt Schmidt, Johannes Magenheim, and Tobias Schauerte. Knowledge Processing and Contextualisation by Automatical Metadata Extraction and Semantic Analysis. In Pierre Dillenbourg and Marcus Specht, editors, Proceedings of the 3rd European Conference on Technology Enhanced Learning (EC-TEL 2008), Maastricht, The Netherlands, volume 5192 of Lecture Notes in Computer Science, pages 378–383. Springer Berlin, 2008.
27. Ben Shneiderman. Science 2.0. Science, 319(5868):1349–1350, 2008.
28. Bram Vandeputte, Erik Duval, and Joris Klerkx. Interactive sensemaking in authorship networks. In Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces, Kobe, Japan, 2011.
29.
M.M. Waldrop. Science 2.0. Scientific American, 298(5):68–73, 2008.
30. Hanna M. Wallach. Conditional random fields: An introduction. Technical Report MS-CIS-04-21, University of Pennsylvania, 2004.
31. Matthew O. Ward, Georges Grinstein, and Daniel Keim. Interactive Data Visualization: Foundations, Techniques, and Applications. Taylor & Francis, 2010.
32. Fred Wilson. Email: Social Media's Secret Weapon. Available online at http://articles.businessinsider.com/2011-05-15/tech/30100968_1_return-path-matt-blumberg-facebook, May 2011.
33. Zhibiao Wu and Martha Palmer. Verbs semantics and lexical selection. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, ACL '94, pages 133–138, Stroudsburg, PA, USA, 1994. Association for Computational Linguistics.

A Theoretical Framework for Shared Situational Awareness in Sociotechnical Systems

Shalini Kurapati1, Gwendolyn Kolfschoten1, Alexander Verbraeck1, Hendrik Drachsler2, Marcus Specht2, and Frances Brazier1

1 Delft University of Technology, Delft 2600 GA, The Netherlands
2 Open Universiteit, Heerlen 6419 AT, The Netherlands

Abstract. Sociotechnical systems are large technical systems comprising many stakeholders (e.g., supply chains, transportation networks, energy distribution systems). Decision making in such systems is complex, as the stakeholders are interdependent and the large size of the systems leads to insufficient Shared Situational Awareness (SSA), which is important for participatory decision making. The aim of this paper is to develop a framework to understand the goals and requirements for designing processes that create SSA in such systems. The framework is based on the Capability Maturity Model (CMM) and a systems thinking perspective. The framework has been initially validated by experts and will be further validated in experiments with stakeholders in several workshop settings.
Keywords: Shared Situational Awareness, Sociotechnical Systems, Decision making

1 Introduction

1.1 Sociotechnical systems and the relevance of SSA

Sociotechnical systems involve both complex physical-technical systems and networks of interdependent stakeholders. These systems consist of technology that drives the system, and stakeholders that design, maintain, operationalize, and implement that system [4]. During a problem situation, as the number of stakeholders increases, the conflicts of interest become greater, making decision making complex and challenging. Eventually, it may become impossible for any one actor to understand the situation in its entirety [4], which can be described as the lack of a 'common operational picture' or a lack of shared situational awareness. For example, in research conducted by IBM among supply chain network managers, more than 70% expressed concern about a lack of visibility, transparency and awareness in the network due to organizational silos, lack of information sharing, coordination issues, and local optimization at the expense of the global view [13]. The aim of this paper is to design a theoretical framework to gain insight into the objectives and requirements for SSA in sociotechnical systems, and thereby to understand the processes towards better participatory decision making in such systems. The relevance and importance of SSA for such systems is introduced in Section 1, followed by a brief theoretical background on SSA. Subsequently, the research gap in the study of SSA is highlighted. A theoretical SSA framework is then presented along with the research methodology. The paper concludes with an outline of future work, in line with its work-in-progress nature.
2 Shared Situational Awareness background

Shared Situational Awareness is described as "shared awareness/understanding of a particular situation", a "common operational picture", or a common relevant picture distributed rapidly about a problem situation [18]. The concept of situational awareness (SA) was developed after World War II to improve the judgment and decision making abilities of fighter pilots. Individual situational awareness is defined as "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future" [10]. The success of SA applications led to their adoption in other areas such as energy distribution, nuclear power plant operational maintenance, process control, maritime operations and tele-operations, and SA is a key topic in the human factors literature [23]. As today's organizations largely consist of teams, the research focus in the human factors community is shifting from individual SA to SSA. However, there is no single definition and theory that explains SSA.

2.1 The theoretical gap: SSA in sociotechnical systems

Existing individual, team and shared SA models, whilst each containing useful elements, may prove impractical when applied to the description and assessment of SA in non-hierarchical environments [23]. Research on SSA has so far not dealt sufficiently with multi-stakeholder networks or organizations. Most current application domains of shared SA have a structural hierarchy of decision making, and their operations are conducted in a command and control environment. Little attention has been paid to shared situational awareness in multi-stakeholder networks such as global supply chain networks or intermodal transportation networks. These are sociotechnical systems in which the stakeholders, though autonomous, are interdependent and have to participate jointly.
Therefore, the following sections describe the design of a framework that aims to close the identified research gap in the study of SSA in sociotechnical systems.

3 Research Methodology

The SSA framework for sociotechnical systems is designed based on deductive theory construction using an iterative design process [2]. First, a comprehensive inventory of literature was gathered to study the topic of interest: SSA in sociotechnical systems. In the second step, the knowledge gaps in the topic were analyzed. Based on the identified gaps, a framework was derived with a novel perspective on SSA, using the systems thinking perspective. The framework was presented to two professors at TU Delft and two professors at OU, Heerlen for expert opinion. With the feedback received and a further literature survey, it was improved in a second iteration. Further improvements will be based on feedback from expert sessions, as well as testing with user groups. The following section describes the SSA framework in detail.

4 The SSA theoretical framework for sociotechnical systems

Sociotechnical systems are frequently affected by wicked problems [22]. Solving wicked problems requires the joint decision making of all the stakeholders. Joint decision making in the system requires an 'overview' of the problem, of the effects of each other's actions, and of planning for the future. In other words, there needs to be SSA among the stakeholders. As sociotechnical systems become large and complex, the actors lose the overview of the problem as well as of the actions and decisions of others needed to handle it jointly [5]. Therefore, it is crucial to understand the concept of SSA in sociotechnical systems, where the actors are autonomous yet interrelated and wield varying degrees of power.
When a problem occurs in present sociotechnical systems, ad-hoc decisions are made by actors without mutual consultation and without shared awareness of each other's plans, leading to conflicts, opportunistic behavior and under-utilization in the system. To address these issues, a framework for SSA was created, analogous to the Capability Maturity Model (CMM) [12], which has five evolutionary process steps towards system organization and capability utilization. The aim of the CMM is to control, measure and improve processes in large organizations and systems where the base situation is chaotic. Therefore, the CMM was chosen as an inspiration for designing the process levels of the SSA framework. The five CMM steps are as follows:

"1. Initial - until the process is under statistical control, no orderly progress in process improvement is possible. 2. Repeatable - a stable process with a repeatable level of statistical control is achieved by initiating rigorous project management of commitments, cost, schedule, and change. 3. Defined - definition of the process is necessary to assure consistent implementation and to provide a basis for better understanding of the process. 4. Managed - following the defined process, it is possible to initiate process measurements. 5. Optimized - with a measured process, the foundation is in place for continuing improvement and optimization of the process" [12].

Of the five CMM levels, only three have been chosen for the SSA framework: levels 1 and 2 of the CMM are merged into level 1 of the SSA framework, as the initial level has no interesting properties from an SSA perspective, and levels 4 and 5 are merged, as the objectives of the SSA framework are closer to collaboration and participation than to optimization. The three maturity levels of the SSA framework are therefore as follows.
1 Perception: The ability to perceive one's (individual, group or system) surroundings, circumstances and function in the system
2 Prescription: The ability to modify existing plans, if a problem affects the system, so as to remain as close as possible to the existing plans
3 Participation: The ability to participate in joint corrective actions, and to adapt while a problem occurs in the system

As described in the section on the theoretical gap, SSA has not been studied in sociotechnical systems. The existing theories and models of SSA have not yet dealt with localized problems in the system that have a wide impact across the entire system. Therefore, a systems thinking viewpoint has been adopted to define the SSA framework, in addition to the individual and group levels, which have already been introduced in the literature. The core aspects of systems thinking are gaining a bigger picture and making decisions while taking the perspectives of other stakeholders in the system into consideration [7]. The systems thinking approach is very useful for understanding SSA in sociotechnical systems, as it offers approaches to understand the interrelationships, differing objectives, and power relations among the stakeholders in a system [20].

The framework is intended to describe the purpose of SSA in sociotechnical systems. SSA is goal oriented, and the requirements for reaching the goals at individual and group levels have been discussed in a command and control environment [11]. Following a similar pattern, this paper introduces goals and requirements for sociotechnical systems that have multiple stakeholders, at individual, group and system levels, along the three SSA maturity levels. The framework also incorporates learning, which, whether associated with individuals, groups or organizations, comprises a set of processes that improve performance [17].
As our main objective is to study SSA in sociotechnical systems with a view to improving participatory decision making, learning and reflection are essential constituents of the processes towards such an improvement. The following sections describe them in detail.

4.1 Objectives

The objectives for the various system decomposition levels of the framework, at all three SSA maturity levels, are defined in Figure 1, with support from the literature [10], [4], [23], [21], [9], [26] in [24], [11], [14], [19], [1].

Fig. 1. Objectives of SSA for sociotechnical systems

4.2 Requirements

Requirements are the necessary conditions for achieving the objectives stated above. The requirements at the individual, team/group and system levels for the three maturity levels of SSA are described in Figure 2, with support from the literature [10], [3], [15], [8], [11], [6], [10], [16], [14], [25].

Fig. 2. Requirements for SSA in sociotechnical systems

5 Conclusion and future work

SSA has rarely been studied in multi-stakeholder systems. A framework has been designed to define the processes, requirements and examples of methodologies to be employed to understand SSA in these networks, towards reducing the theoretical gaps found in the SSA literature. The model has been primarily validated by expert opinion, and the ARTEL workshop will be a platform for further feedback. As future work, experiments will be designed with the stakeholders of multi-stakeholder networks based on the SSA framework, to gain insight into the impact of SSA in theory and practice. The experiments will take the form of serious games, which will be validated for design, content and rigor with both scientific and professional experts in game design.
The effectiveness of the experiments will be discussed in extensive workshop sessions with the participants after the game play, in the form of group interviews and feedback sessions. Based on the results gathered from the experiments, the framework will be refined in several iterations and is intended to become the basis of a measurement tool for the assessment of SSA in sociotechnical systems, as well as to aid in the design of serious games for SSA training in these systems. The final objective of the research is to derive an SSA theory for sociotechnical systems describing the cognitive processes of stakeholders and the factors influencing SSA, to create insight into how SSA comes about in sociotechnical systems.

6 Acknowledgement

The research presented in this paper is conducted under the SALOMO project (Situational Awareness for LOgistic Multimodal Operations) in container supply chains and networks, sponsored by the Dutch Institute of Advanced Logistics (DINALOG). We also acknowledge the input from Christian Glahn (International Relations and Security Network, ETH Zurich, formerly associated with OU, Heerlen) on the SSA framework.

References

1. Alfredson, J.: Differences in Situational Awareness and How to Manage them in Development of Complex Systems. Ph.D. thesis, Linköping University (2007)
2. Babbie, E.: The Basics of Social Research. Thomson Higher Education, Belmont, 4 edn. (2008)
3. Bolstad, C.A., Endsley, M.R.: Shared displays and team performance. pp. 1–6. Human Performance, Situation Awareness and Automation Conference, Savannah, GA (2000)
4. de Bruijn, H., ten Heuvelhof, E.: Management in networks: On multi-actor decision making. Routledge, Oxford, 1 edn. (2008)
5. de Bruijn, H., Herder, P.M.: System and Actor Perspectives on Sociotechnical Systems. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans 39(5), 981–992 (2009)
6. CEN: Disaster and emergency management - Shared situation awareness. Tech.
rep., European Committee for Standardization, Brussels (2009)
7. Chapman, J.: System Failure: Why Governments Must Learn to Think Differently. Demos, London, 2 edn. (2004)
8. Chen, Y., Harper, F.M., Konstan, J., Sherry, X.L.: Social Comparisons and Contributions to Online Communities: A Field Experiment on MovieLens. American Economic Review 100(4), 1358–1398 (2010)
9. Dryzek, J.: Networks and Democratic Ideals: Equality, Freedom, and Communication. In: Theories of Democratic Network Governance, pp. 262–73. Palgrave Macmillan, Basingstoke (2007)
10. Endsley, M.R.: Toward a Theory of Situation Awareness in Dynamic Systems. Human Factors: The Journal of the Human Factors and Ergonomics Society 37(1), 32–64 (1995)
11. Endsley, M.R., Jones, W.M.: Situation Awareness Information Dominance & Information Warfare. Tech. Rep. February, DTIC Document (1997)
12. Humphrey, W.S.: Characterizing the Software Process: A Maturity Framework. Tech. rep., Software Engineering Institute, CMU, Pittsburgh (1987)
13. IBM: The smarter supply chain of the future. Tech. rep., IBM (2010)
14. Juga, J.: Organizing for network synergy in logistics: A case study. International Journal of Physical Distribution & Logistics Management (2010)
15. Klein, G., Woods, D.D., Feltovich, P.J.: Common Ground and Coordination in Joint Activity. Organizational Simulation pp. 1–42 (2004)
16. Locke, E.A., Latham, G.P.: New Directions in Goal-Setting Theory. Current Directions in Psychological Science 15(5), 265–268 (2006)
17. Nevis, E.C., Ghoreishi, S., Gould, J.M.: Understanding Organizations as Learning Systems. Sloan Management Review 36(2), 73–85 (1995)
18. Nofi, A.: Defining and Measuring Shared Situational Awareness. Tech. Rep. 10, Center for Naval Analyses, Arlington (2000)
19.
Nonaka, I., von Krogh, G.: Tacit Knowledge and Knowledge Conversion: Controversy and Advancement in Organizational Knowledge Creation Theory. Organization Science 20(3), 635–652 (2009)
20. Reynolds, M., Holwell, S., Beer, S.: Systems Approaches to Managing Change: A Practical Guide. In: Reynolds, M., Holwell, S. (eds.) Systems Approaches to Managing Change: A Practical Guide, pp. 1–23. Springer, London (2010)
21. Rhodes, R.: Understanding Governance: Policy Networks, Governance, Reflexivity and Accountability. Open University Press, Buckingham (1997)
22. Rittel, H., Webber, M.: Dilemmas in a general theory of planning. Policy Sciences 4, 155–169 (1973)
23. Salmon, P.M., Stanton, N.A., Walker, G.H., Jenkins, D.P., Mcmaster, R., Young, M.S.: What really is going on? Review of situation awareness models for individuals and teams. Theoretical Issues in Ergonomics Science pp. 37–41 (2008)
24. Sorensen, E.: Governance Networks as a Tool for Democratizing Inter-Governmental Policy Making (2010)
25. Storck, J., Lesser, E.L.: Communities of practice and organizational performance. IBM Systems Journal 40(4), 831–841 (2001)
26. Young, I.: Inclusion and Democracy. Oxford University Press, Oxford (2000)

Exploiting awareness to facilitate the orchestration of collaborative activities in physical spaces

Davinia Hernández-Leo, Mara Balestrini, Raul Nieves, Josep Blat
Universitat Pompeu Fabra, Roc Boronat 138, 08018 Barcelona, Spain
[davinia.hernandez, mara.balestrini, raul.nieves, josep.blat]@upf.edu

Abstract. Complex group dynamics in physical educational spaces, such as the classroom, can lead to significant learning benefits. Outstanding teachers apply these dynamics, but their adoption is not extensive. One of the reasons behind the lack of broad adoption is the inconvenience of implementing them, including the time and attention that teachers and students need to dedicate to the orchestration of the dynamic.
This workshop paper discusses a technology, the Signal Orchestration System (SOS), which facilitates the organization of group activities in physical spaces by exploiting awareness indications. Using the SOS, students wear a device that renders signals denoting orchestration aspects (e.g., color signals indicating group formation) in a way that the signals are collectively perceived. The paper states the problem and presents the proposed solution, discussing different designs for the wearable devices.

Keywords: group awareness, physical learning spaces, CSCL, orchestration

1 Problem statement and discussion of the proposed solution

Teachers plan and orchestrate activities in physical spaces, such as the classroom, at different social levels (individual, small groups, class) with the aim of achieving a set of desired learning outcomes [1]. Dynamic sequences of multiple group activities facilitate effective learning situations driven by knowledge-intensive social interactions (e.g., mutual explanation and regulation) [2]. However, the application of complex collaborative dynamics is not widespread. One of the factors that hinders their adoption is the inconvenience of orchestrating the dynamics. Teachers have to indicate group formation and role assignment for every activity, considering the use of multiple resources/tools and the evolution of the learning situation. This orchestration task is especially demanding when the number of students involved is high. Both teachers and students need to devote part of their attention to orchestration aspects. Orchestrating collaboration is time-consuming and typically generates noise and disorder that can lead to distraction and disorganization. We argue that augmenting physical educational spaces with awareness visualization mechanisms can facilitate the orchestration of collaborative dynamics, ultimately promoting their adoption.
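A minimal sketch of one such orchestration indication, assigning color signals so that students sharing a color can find their group members, might look as follows. The data model, palette and function name are hypothetical illustrations, not the actual SOS implementation.

```python
import itertools

# Hypothetical color palette; the actual signals and their meaning
# depend on the teacher's design of the collaborative dynamic.
COLORS = ["red", "green", "blue", "yellow"]

def assign_group_colors(students, group_size):
    """Assign each student a color; students sharing a color form one group."""
    palette = itertools.cycle(COLORS)
    signals = {}
    # Walk through the class roster in chunks of `group_size`,
    # giving each chunk the next color from the palette.
    for i in range(0, len(students), group_size):
        color = next(palette)
        for student in students[i:i + group_size]:
            signals[student] = color
    return signals
```

Such a mapping could then be broadcast to the students' wearable devices, so that finding one's group becomes a matter of matching colors rather than listening to spoken instructions.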
Related ideas have been proposed to support classroom activity supervision using interactive lamps [3]. The Signal Orchestration System (SOS) enables teachers to distribute signals denoting orchestration aspects [4]. These signals are rendered in physical devices that students can easily wear in a way that the signals can be collectively perceived. This facilitates awareness of the social dynamic and the activity flow. For instance, to 55 Exploiting awareness to facilitate the orchestration of collaborative activities in physical spaces indicate group formation, students’ devices show color signals. The students with the same color form a group. Blinking lights can indicate role or resource distribution, sound signals change of activity, etc. However, the actual meaning of each signal depends on the needs and creativity of the teacher who design the collaborative dynamic and its orchestration. The wearable devices achieve an ambient awareness effect that cannot be easily achieved with mobile devices. Three different low-cost designs have been implemented and used in several Jigsaw collaborative learning dynamics (Fig. 1). The use of the first two designs (a, b) was evaluated in two experiments framed in real scenarios [4]. The necklace was more visible, but its size and weight made it more uncomfortable. The fabric belt was lighter, thinner and aesthetically nicer, but it was less visible (too comfortable and similar to their clothes). (a) (b) (c) Fig. 1. Wearable signaling devices (a) necklace (b) fabric belt (c) arm bracelet Considering these observations, we propose an arm bracelet as an intermediate approach (Fig. 1, c). It has been designed so that it is more compact (adapted to the size of its hardware components) and can be fixed to a bracelet worn in the arm. Its position in the arm facilitates the visibility of the signals even when the participants are sitting down at their desks. Fig. 
1 (c) shows how students wearing the bracelets look for other students with the same color signals to form a group.

We are currently analyzing the data collected in an experiment that compares the use of the SOS arm bracelets with a control group using a traditional approach based on paper cards. Preliminary results indicate that the awareness facilitated by the SOS leads to more agile classroom orchestration, promoting a more satisfactory learning experience.

Acknowledgments

This work has been partially funded by the Spanish EEE (TIN2011-28308-C03-03) project.

References

1. Dillenbourg, P., Jermann, P.: Technology for classroom orchestration. In: Khine, M.S., Saleh, I. (eds.) New Science of Learning, pp. 525-552. Springer Science+Business Media, New York (2010)
2. Roschelle, J., Teasley, S.: The construction of shared knowledge in collaborative problem solving. In: O'Malley, C. (ed.) Computer Supported Collaborative Learning, pp. 69-97. Springer Verlag, Berlin (1995)
3. Alavi, H., Dillenbourg, P.: An ambient awareness tool for supporting supervised collaborative problem solving. IEEE Transactions on Learning Technologies (in press)
4. Hernández-Leo, D., Nieves, R., Arroyo, E., Rosales, A., Melero, J., Blat, J.: SOS: Orchestrating collaborative activities across digital and physical spaces using wearable signaling devices. Journal of Universal Computer Science (accepted)

Tool support for reflection in the workplace in the context of reflective learning cycles

Birgit R. Krogstie1 and Michael Prilla2

1 Norwegian University of Science and Technology, Sem Sælands vei 7-9, 7491 Trondheim, Norway, birgitkr@idi.ntnu.no
2 University of Bochum, Institute for Applied Work Science, Information and Technology Management, Bochum, Germany, michael.prilla@rub.de

Abstract.
This paper describes a model of Computer-Supported Reflective Learning (CSRL) as a conceptual framework to support the design, application and evaluation of tools supporting reflection as a learning mechanism at work. The CSRL model has been derived from theory and inspired by empirical work done in the MIRROR project. It contains the necessary steps of reflection, which form a reflection cycle and are linked to corresponding tools and additional support mechanisms, such as scaffolds, to enable computer-supported reflective learning. It is accompanied by a procedure for using it in the design and analysis of reflection tools in real cases. The model and the procedure to apply it have been evaluated in the MIRROR project. This paper reports on the results of this evaluation.

1 Introduction

Developing solutions to improve reflective learning in the workplace is a main objective of the MIRROR research project, an integrated research project funded under the FP7 of the European Commission. MIRROR seeks to provide tools to empower and motivate employees to learn from reflection on tacit work practices and personal experiences. MIRROR applications offer computer-supported reflective learning (CSRL) tools for individual, social, creative, game-based as well as organizational reflection and real-time learning. The project consortium includes five test bed organizations representing a variety of organizational characteristics and user needs, and the tools under development in the project cover a wide spectrum of technologies.

Apart from the MIRROR apps, the project produces conceptual tools to support the development of CSRL solutions. One of these is a reference framework for the development of MIRROR apps. The framework includes a model accounting for the role of technology in reflective learning processes – the MIRROR CSRL model – and a set of conceptual tools supporting app development and their use in the test beds.
This paper addresses the first version of the MIRROR CSRL model and the accompanying stepwise procedure for applying the model to a case of reflective learning in a workplace to aid analysis and design. The procedure was developed, evaluated and delivered as an integral part of version 1 of the model.

In the paper, we use a detailed example to demonstrate the potential of the approach, seeking to invite discussion in the TEL research community about the CSRL model and its use. While theoretically grounded, the focus of the paper is deliberately practical. To underpin our arguments about the qualities of the model, we present results from an evaluation and discuss further work in light of these results.

In what follows, Section 2 gives a theoretical background and Section 3 presents the CSRL model. Section 4 outlines the procedure for applying the model to a case. Section 5 presents an example of use of the procedure. Section 6 addresses the evaluation of the model and Section 7 concludes the paper, addressing further work.

2 Background: Computer supported reflective learning

Reflection is critical to workplace learning, enabling employees to make sense of complex and dynamic situations [1, 2]. Boud et al. [3] (p. 19) defined learning through reflection as “those intellectual and affective activities in which individuals engage to explore their experiences in order to lead to new understandings and appreciations.” In line with this definition, in the MIRROR project we consider reflective learning to be the conscious re-evaluation of experience for the purpose of guiding future behavior, acknowledging the need to attend to feelings and ideas as well as behavior associated with work experience.
In the workplace, work and reflection on work are intertwined [1, 2], keeping each other going and taking inputs from each other. Work creates experiences, and some experiences are reflected upon. Sometimes reflection takes place close to work, at other times with some distance. Sometimes reflective learning is based “just” on memory, sometimes on data as well.

Reflection on work experience leads to an improved understanding of the experience and allows for deriving implications, conclusions, or lessons learned. In this way reflection transforms experience into knowledge applicable to the challenges of daily work. Reflection and learning thus form a cycle (e.g. [4-7]). The outcome of reflection on work is applied in the work practice.

Apart from being an individual, cognitive process, reflection has a strong social dimension [8, 9]: it is often accomplished collaboratively by a team or working unit, which has a joint task to perform and therefore shares work-related experience.

It is possible to encourage reflection by providing appropriate support. In industrial settings there are reflection “tools” like project debriefings [10-12] demonstrating the value of reflection in work life. Most reflective learning at work, however, occurs without the support of technology [13]. Technology has a large potential to increase the efficiency and impact of reflective learning at work [14-19] and can be applied to informal, everyday learning in the workplace. The design space of possible solutions is vast and growing with the emergence of new technologies potentially applicable to work settings.

There are many examples of successfully modeling experience-based learning as a cycle [4-7].
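The cyclical relationship between work and reflection described above can be sketched in code. The sketch below is illustrative only — the class, method and step names are ours and do not reproduce the exact terminology of the CSRL model: work produces experiences, some experiences trigger a reflection session, and the session's outcomes are applied back in the work practice, closing the cycle.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names below are ours, not the CSRL model's exact
# terminology. Work creates experiences, some of which trigger reflection;
# reflection yields outcomes that are applied back in work practice.

@dataclass
class Experience:
    description: str
    triggers_reflection: bool = False

@dataclass
class ReflectiveLearningCycle:
    experiences: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)

    def do_work(self, description, triggers_reflection=False):
        """Work creates an experience; some experiences trigger reflection."""
        self.experiences.append(Experience(description, triggers_reflection))

    def reflect(self):
        """A reflection session: re-evaluate the triggering experiences and
        derive outcomes (implications, conclusions, lessons learned)."""
        for exp in self.experiences:
            if exp.triggers_reflection:
                self.outcomes.append(f"lesson learned from: {exp.description}")

    def apply_outcomes(self):
        """Bring the reflection outcomes back into the work practice,
        closing the cycle."""
        applied, self.outcomes = self.outcomes, []
        return applied

cycle = ReflectiveLearningCycle()
cycle.do_work("routine patient conversation")
cycle.do_work("difficult conversation with a relative", triggers_reflection=True)
cycle.reflect()
applied = cycle.apply_outcomes()
```

The two transitions — from work to reflection (the trigger) and from reflection back to work (applying the outcome) — are exactly the points where, as discussed below, a model of tool support has to be explicit.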
On the basis of work in MIRROR [20], the reflective learning cycle on work includes a reflection session (the time-limited activity of reflecting – short or long, informal or formal, planned or spontaneous, individual or collaborative, etc.). Furthermore, achieving transitions from work to reflection and back is essential, with triggers for reflection and useful outcomes of reflection being key issues.

A model outlining tool support for reflective learning in the workplace should outline how work and reflection are connected, support the description of reflective learning processes and scenarios in different real-life settings, e.g. workplaces, and thereby aid the recognition of differences and commonalities. It should also clarify the different roles technology can play in supporting reflection [20, 21].

3 The MIRROR model of Computer Supported Reflective Learning (CSRL)

To support the analysis and development of computer-supported reflective learning, the use of technology can be linked to steps in a reflective learning cycle. In the MIRROR CSRL model [21], steps of reflective learning form a cycle and are linked to categories of tool use. The learning cycle contains four main steps (see Fig. 1).

Fig. 1. The cycle view of the CSRL model

The diagram in Fig. 1 can be instantiated with a case of reflective learning in the workplace comprising several cycles of reflective learning (i.e. an ‘expansion outwards’ of the model). Each learning cycle can be ‘expanded inwards’ to show more detailed steps in the specific reflection cycle, as well as the associated use of tools to support the process. This is shown in Fig. 2, in which the rounded rectangles in the middle of the diagram show a detailing of the process steps in Fig. 1.
The columns of square boxes to the left and to the right are categories of use of reflection tools supporting the steps: white boxes indicate tool support for capturing data, dotted boxes for providing data, light gray boxes for scaffolding the process, and dark gray boxes for simulating the work process.

Fig. 2. The process steps view of the CSRL model with associated categories of tool use

The model in Fig. 2 is Version 1 of the CSRL model. For a more detailed explanation of the diagram, see [21]. (Please note that the tool categories in Fig. 2, based on the ongoing conceptual developments in MIRROR, have been slightly refined as compared to the ones in [21]. The differences are not essential with respect to the issues addressed in the present paper.)

4 A procedure for applying the CSRL model to a case to support analysis and design

The CSRL model can be used to describe existing processes or practices of reflective learning in an organization (e.g. before the introduction of new solutions), to describe the intended use of new solutions (e.g. outlining user requirements), and to describe the actual practices after these solutions have been introduced. The procedure for using the model for these purposes contains three main steps: outlining a story of reflective learning, modeling the reflective learning cycles of the story, and detailing each cycle with steps and associated tool use.

Step 1: Outline the story of reflective learning

First, the case of reflective learning needs to be explained in the context of the work processes in the organization, using the perspective of its actors. Collaborative work with scenarios helps elicit rich information from users and the organization, helping users and developers to reach a common understanding of the case.
For example, developing stories in which people successfully learn with the aid of reflection tools helps focus on tasks and goals as well as on learning outcomes and their application. To cover the full potential of the tool in the case organization, it is important that the story includes the relevant situations of reflection and tool usage as well as the connections between them (e.g. results of individual reflection feeding into processes of reflective learning on the level of the team or organization). Supporting artifacts are textual descriptions or other representations (e.g. storyboards) outlining the scenario of reflective learning.

Step 2: Outline the overall reflective learning process by identifying the learning cycles (how work and reflection are integrated) and how they are connected

A key to understanding and supporting reflective learning is to consider the transitions between work and reflection. This includes the triggers and circumstances that lead to reflection and the step of bringing insights from reflection back into work, e.g. ensuring that the outcomes of reflection are brought into use in the work process. Artifacts supporting this are diagrams instantiating the learning cycle view of the CSRL model, as in Fig. 2 and Fig. 5.

Step 3: For each reflective learning cycle, apply the more detailed process steps and consider what steps are relevant and how tools are used in each cycle

By considering tool use for separate cycles, different usage in different situations is described. This might also create new ideas about tool usage or design, e.g. if a tool supporting a reflection session currently does not offer scaffolding of a particular step of reflection, or if it does not capture data that are available and could be of potential use. It could also lead to considerations about similarities and differences in tool use between different reflection sessions from different cycles in the story, with implications, e.g.
for the tailoring of user interfaces for the different sessions. This step can be supported by artifacts such as diagrams instantiating the process steps view of the CSRL model (e.g. Fig. 6), with tool categories.

The proposed procedure for applying the CSRL model ends at a point where several artifacts (e.g. an outlined story of reflection and several diagrams) have been developed. Depending on the step of the development process, we propose that the artifacts can be useful in different ways: as a resource for design on a more detailed and formally specified level, as a benchmark for the evaluation of the modeled solutions, and/or as a basis for communication among developers and users, e.g. in the next round of development (if an iterative approach is used).

5 An example illustrating the instantiation of the CSRL model with a case

In this section we show how the CSRL model can be used to describe a real case of reflective learning in the workplace, following the steps outlined above. The workplace is one of the MIRROR test beds: a hospital. In the story, reflection is supported by the ‘Talk Reflection’ app, which helps physicians to reflect on difficult conversations (talks) with patients and relatives.

Fig. 3 shows a screenshot from the Talk Reflection app. The form contains fields for the ‘objective’ documenting of the talk: this includes a choice of topic, a description which it is mandatory for the physicians to provide (1), and a self-assessment of one's own feelings in the situation (2). The form also has a field for personal reflections (a personal note) (3). The physicians can share notes with colleagues and comment on each other’s notes.

Fig. 3.
Screenshot from the Talk Reflection app with the form for documenting a relative talk

Step 1: Outline the story of reflective learning

In this story of reflective learning, the perspective is that of the physician (Fred), who is a participant in all the reflection sessions mentioned in the story. Additionally, his colleagues are also reflecting, which provides important input to Fred’s reflection. To facilitate the later modeling steps, the story has been divided into three parts.

Part 1) An assistant physician (Fred) is working in the stroke unit. Every time a patient is hospitalized with a stroke, the relatives are very concerned about what happened and what might happen. One day Fred has to explain to an older man that his wife has suffered a bad stroke and that she might not recover because it took them too long to get to the hospital. He explains what has happened in her brain and that because of the stroke she might die. He also explains that they will need a decision from the husband on whether they should take life-extending measures or not. Suddenly the man gets very angry and shouts at Fred that it is his fault, that he has paid his health insurance for years, that he demands the best treatment for his wife, and that he thinks the hospital staff are not willing to do everything they can. Fred is stunned and does not know how to react. Fortunately, a nurse coming into the room is able to calm the old man down and explain to him that they are doing everything possible to save the life of his wife. During the day Fred keeps thinking about this episode and finally finds a moment to document it in the Talk Reflection app. He first documents the case objectively the way it is required for the patient’s case file, filling in the description (e.g. explaining that he was stunned and did not know how to react to the aggression) and using the self-assessments, e.g. to quantify his feelings in the situation.
He proceeds to add a personal note, reflecting on his experience and formulating the conclusion that he should perhaps have asked a nurse to participate in the conversation in the first place. He shares the documentation and his notes with other assistant physicians whom he trusts, to allow them to comment on it in the Talk Reflection app.

Part 2) The next time he logs into the app, several of his colleagues have commented on his documentation. Most have written that they have had similar experiences and that they know how difficult such situations can be. Others describe similar cases with aggressive relatives. For instance, one colleague had once been hit by the wife of a patient. Based on these comments, Fred recognizes his case as an example of a more general issue and decides to bring it up again in the bi-weekly reflection meeting for assistant physicians in the stroke unit.

Part 3) In the bi-weekly meeting the five physicians discuss Fred’s case. Fred starts by briefly explaining his experience, suggesting that it points to a general issue. His colleagues then talk about their experiences with similar cases. The discussion proceeds with constructive critique of the various approaches. During the discussion the physicians have the Talk Reflection app in front of them on their individual iPads and can all look at the information that has been shared there. Some of the physicians use the app to make quick notes about cases not yet documented, and comments on cases already documented. Closing the discussion in the meeting, the physicians reach the resolution that it would be best to have relative talks only when there is at least one other member of staff nearby, as with the nurse in Fred’s case. They decide to make this a change to their work routines.
Using the evaluation function in the Talk Reflection app, one physician writes down this reflection result. To document its rationale he makes a link to the relevant cases discussed in the meeting. He then shares the documented resolution with all participants of the meeting and the other physicians of the ward.

Fig. 4. A story of reflective learning with the Talk Reflection app

Step 2: Outline the overall reflective learning process by identifying the learning cycles (how work and reflection are integrated) and how they are connected

The diagram in Fig. 5 shows the learning cycles described in the story. We note that there are three cycles in which Fred the physician is involved (drawn with solid arrows in the figures below). The cycles correspond to the three parts of the story in Fig. 4. In the innermost cycle (Part 1), Fred reflects while documenting his experience (see the cycle shown highlighted in Fig. 5). In the next cycle (Part 2), Fred reflects on the comments provided by his colleagues. Finally, in the outer cycle (Part 3), he reflects with his colleagues in the physicians’ meeting. To complement the picture of reflective learning in the story, a cycle capturing the reflection session of Fred’s commenting colleagues has been included, drawn with dashed arrows and boxes.

External factors influencing the process are not shown in the diagram. For instance, in the diagram in Fig. 5 the step initiating the inner reflection cycle is called ‘Time to document’ – which refers not only to the fact that Fred actually has the time, but also to the fact that the organization has routines for documenting conversations with relatives, requiring that such documenting be done.

Step 3: For each reflective learning cycle, apply the more detailed process steps and consider what steps are relevant and how tools are used in each cycle
In what follows, we show how the CSRL model can be used to outline process steps and tool use in two of the reflective learning cycles in Fig. 5. A more complete analysis of the case would also include a detailing of the middle cycles, but we leave this out of the paper for reasons of space.

Fig. 5. The learning cycles in the Talk Reflection app story (the inner cycle in this case marked in boldface to illustrate the procedure of detailing the cycles)

Fig. 6. Instantiating the process steps and categories of tool use for the Talk Reflection case, inner learning cycle

We start with the cycle marked with boldface in Fig. 5, i.e. the inner cycle. This cycle describes individual use of the app for reflecting on single experiences, i.e. relative talks just documented (in the same app) as part of the work process.

Fig. 6 shows the process steps diagram instantiated with the inner learning cycle. The steps from the reference model (see Fig. 6) have been reformulated to describe more accurately what happens in the cycle. Furthermore, steps in the process view of the CSRL model (Fig. 2) that did not seem relevant to the story have been omitted.

The relevant tool categories have been instantiated with brief explanations of how the Talk Reflection app supports the process. From the perspective of the reflective learning cycle, the objective documentation of the relative talk, including behavioral and emotional aspects, can be seen as the capturing of data on work experiences. The app provides scaffolding for this data gathering. Reflection is triggered and framed, as the physician is encouraged to write a personal note (implicit in the provision of a personal note field in the documentation template), reflecting on the objectively documented experience.
The documentation further helps the physician in reconstructing and understanding the meaning of the experience. Reconstruction, articulation of meaning, and re-evaluation are closely intertwined in this case. The Talk Reflection app does not provide scaffolding for the re-evaluation of the experience, but supports the capturing and sharing of the reflection outcome.

To illustrate the potential of the model to shed light on different uses of tools to support reflective learning in different reflective learning cycles, we proceed to instantiate the process steps and tool categories with another cycle in the Talk Reflection story: the outer cycle, i.e. Part 3 of the story. Here, physicians reflect in their bi-weekly meeting, the outcome being a decision to implement a change in the work routines. The outer cycle is shown with bold lines in Fig. 7; the process model instantiating the cycle is shown in Fig. 8.

Fig. 7. The outer learning cycle in the Talk Reflection app story

Fig. 8. Instantiating the process steps/categories of tool use for the Talk Reflection case, outer learning cycle

6 Evaluation of the approach

The CSRL model and the approach of applying it to a particular case were evaluated in a workshop, which had the additional purpose of informing tool development. The evaluation took place with seven work groups of altogether 24 MIRROR participants. Each group focused on a different story of reflective learning with a particular app in a test bed organization. Every group included at least one developer and one representative of the test bed; group sizes ranged from 2 to 5. The development of the tools in question had already started before the evaluation workshop, and thus modeling was mostly about refining the understanding of the cases and re-designing solutions.
The groups were asked to apply the procedure outlined in Section 4, after an introduction in which an example was briefly presented. Step 1 was slightly shortcut to give more time for steps 2 and 3: a story about reflective learning with the app in the test bed had been written prior to the exercise, based on knowledge of the case obtained through previous collaboration with the test bed1. The groups spent approximately 1.5 hours on developing diagrams with learning cycles (step 2, as in Fig. 2 and Fig. 5) and their detailed steps (step 3, Fig. 6) – to make this easier, participants were not restricted to a certain formalism for the cycle diagrams, but could draw them freely. Then, 30 minutes were used to individually fill in an evaluation form; 22 forms were handed in. The questions in the form asked for opinions about the exercise as well as strengths and weaknesses of the model and the procedure for applying it to a case.

Among others, the evaluation form focused on three key questions. Fig. 9 summarizes the answers to these questions (Q7, Q6 and Q13 in the evaluation form):

• Did the participants perceive the procedure of applying the model to be useful?
• Did the exercise help refine the understanding of the case (descriptive power and usefulness for analysis)?
• Did the exercise lead to new design ideas (usefulness for design)?

It can be noted from Fig. 9 that most respondents were positive or at least neutral about the usefulness of the procedure to apply the model (Q7). In another question, most participants regarded the structuring of the procedure for analysis and design to be positive, appreciating the detailed steps and categories of tool use.
Regarding the use of a story of reflective learning as a starting point for the instantiation (step 1 in the procedure), the following comment captures the essence of several answers: “Using the story is somehow good AND problematic. [On the positive side] it helps to focus on usage scenarios [and] to link abstract categories and the story [and] to involve the external people; [on the negative side] it restricts the instantiation to what you can have in the story.” One group reported having combined two stories to get a more complete picture of the use of their reflection tool.

Concerning the descriptive power of the model with respect to the particular case, 17 respondents to Q6 (Fig. 9) answered that the exercise had added detail to the story of reflective learning. Regarding the usefulness of the model for design, Q13 (Fig. 9) was answered negatively by only one participant, and two answers were left blank. Thus, 19 of the 22 respondents confirmed that the exercise had given them insights or ideas about the design of the app in question. Besides these answers, the participants identified strengths and weaknesses of the model as a tool for describing CSRL cases and solutions, including what could or could not be described about the particular app by modeling. As a result, a long wish list of additional capabilities of the model was derived, providing useful input for the further development of the CSRL model.

The diagrams produced by the groups showed great diversity. The groups generally followed the steps of the procedure, but (as explicitly allowed) adapted the way of drawing the diagrams to their needs whenever there were aspects that they wished to include but that were difficult to represent with the model. These adaptations provided ideas for the further development of the model. For some of the groups, the focus of the exercise was solely on the cycle diagrams, and discussions about the cases seemed to revolve around these diagrams.
This happened mostly for cases in which the complexity was high, the processes of reflective learning included several roles and organizational levels, and, in one of the cases, several apps needed to be coupled. This indicates that the cycle diagrams provided a good basis for understanding a case of reflection (see above) and the use of technology within this case.

1 In two groups the story had not been written in advance, but could be outlined quickly during the exercise, as the participants already knew relevant scenarios for usage of the app.

Fig. 9. Diagrams summarizing answers to three questions from the evaluation form about the CSRL model and the procedure for applying it

The evaluation must be considered in light of some validity threats. First, it was conducted within the MIRROR project, with participants who (to a varying degree) had prior knowledge of the model. It is thus difficult to draw conclusions from the evaluation about the general use of the model. Also, as mentioned, the stories and the tools modeled were not new, but rather in a process of continued development. These conditions, on the other hand, allowed the creation of cycle and process step diagrams within a short timeframe (more limited than the one presumably needed and preferred in a typical development process), and as the apps were mostly used only within one specific test bed, the evaluation could be conducted with many groups. However, this also means that a comparison of results across participants and groups is difficult. At the same time, the differences between the cases ensured that a wide range of characteristics and use cases of reflection were described with the model, enabling an evaluation on a broad basis of real cases.
Ownership and commitment of the participants with regard to the specific tools made the work more ‘real’ and is likely to have led to increased motivation to participate actively, but having the user organization and developers working together in a group is representative of the intended development process. In addition, the time available for the modeling was less than it (probably) would have been in a real case. This was taken into account when considering the resulting diagrams (e.g. their level of detail or coherence). The outcomes of the evaluation, in terms of quality and quantity, indicate that the evaluation benefited from the advantages of these conditions.

7 Conclusion and further work

The evaluation of the CSRL model and the associated procedure for its application to a case provided valuable insights about the usefulness of the model and the procedure, confirming the potential of the model to aid analysis and design. We end the paper by discussing some challenges and future steps.

The focus on a story of reflective learning in which there is a user (persona) seemed to help focus on user needs. Systematic application of the cycle model helped to make the transitions between work and reflection explicit, including how reflection is triggered and how reflection outcomes are made applicable and applied. The modeling of tool usage with the process steps diagram supports a systematic walkthrough of what is supported by the tool and what might be supported by the tool.

Concerning the capturing of all relevant situations and aspects of tool use, it is critical that the story of reflective learning covers the relevant scenarios. The fact that one group during our model evaluation decided to combine two stories suggests that it may be necessary to have several stories covering the relevant app usage and the perspectives of different users.
For instance, different stories could focus on the needs and practices of different personas in the organization.

Results on the descriptive power of the model were promising. However, for the communication between developers (designers) and users, diagrams cannot substitute application prototypes (even paper prototypes), which let users try user interfaces and features. To exploit the unique advantages of prototypes, MIRROR uses rapid prototyping as a development approach. Using the CSRL model for analysis, in turn, has the advantage of placing the use of the apps into the context of the work processes of an organization, showing how use of an app in different settings forms part of the larger picture of reflective learning. Cycle and other diagrams provide a visually compact representation grounded in the theory of reflective learning, and make it possible to present a rather succinct picture of a CSRL process which is expressive enough to support discussion among developers and potentially useful for communication with users. To exploit the advantages of both approaches, future work will also be concerned with combining the use of the model and the use of prototypes.

There are a few shortcomings to the approach presented here. First, it would be useful to have a systematic way of representing external factors impacting on the reflection processes. Second, reflective learning is closely linked to knowledge development in an organization (e.g. individual cases developing to general insights; individual experience developing to team and organizational knowledge, and so on; see [22]), and the model so far lacks the means to represent the levels of this process systematically. The answers to these challenges are likely to lie in a combination of refinement and extension of the CSRL model and refinement of the conceptual tools for its application, e.g. the procedure for model instantiation discussed in this paper.
In the development of the second version of the model, refinement of the model and the procedure for its application will go hand in hand. We plan to apply the model to the same cases in an evaluation similar to the one described above after one year of using the apps. While the initial evaluation largely focused on intended tool use in the test bed organizations, this next evaluation may focus on the modeling of actual tool use, as the MIRROR apps in question will have been used in the test beds by that time. A comparison of the models of intended and actual tool use may lead to insights about how the tools fill the intended roles. In this evaluation, the application of the CSRL model will be used both for evaluation purposes and for feeding back into the (re)design of tools.

While use of the CSRL model is important within the MIRROR project to support shared conceptual understanding [23] and tool development, we also want it to be used beyond the scope and time of the project. In this respect it is necessary to expose the model to the development of CSRL solutions outside MIRROR: while we continue to evaluate it within MIRROR, we would like to encourage other researchers and practitioners to consider applying the first version of the CSRL model for purposes of analysis and design.

8 Acknowledgement

This work is partially funded by the project 'MIRROR - Reflective learning at work', funded under the FP7 of the European Commission (project number 257617). The authors thank Martin Degeling for input on the Talk Reflection case and Viktoria Pammer for ideas used in this paper.

9 References

1. Lave, J., The practice of learning, in Understanding Practice: Perspectives on Activity and Context, S. Chaiklin and J. Lave, Editors. 1993, Cambridge University Press: Cambridge. p. 20.
2. Schön, D., The Reflective Practitioner. 1983: Basic Books, Inc.
3. Boud, D., R. Keogh, and D.
Walker, Reflection: Turning Experience into Learning. 1985: RoutledgeFalmer.
4. Cress, U. and J. Kimmerle, A systemic and cognitive view on collaborative knowledge building in wikis. Computer-Supported Collaborative Learning, 2008. 3: p. 105-122.
5. Kolb, D.A. and R. Fry, Towards an applied theory of experiential learning, in Theories of Group Processes, C.L. Cooper, Editor. 1975, John Wiley: London. p. 33-58.
6. Korthagen, F. and A. Vasalos, Levels in reflection: Core reflection as a means to enhance professional growth. Teachers and Teaching: Theory and Practice, 2005. 11(1): p. 25.
7. Stahl, G., Building collaborative knowing, in What We Know About CSCL And Implementing It In Higher Education, J.-W. Strijbos, P.A. Kirschner, and R.L. Martens, Editors. 2002, Kluwer Academic Publishers: Boston. p. 53-85.
8. Høyrup, S., Reflection as a core process in organisational learning. Journal of Workplace Learning, 2004. 16(8): p. 13.
9. van Woerkom, M. and M. Croon, Operationalising critically reflective behaviour. Personnel Review, 2008. 37(3): p. 15.
10. Dingsøyr, T., Postmortem reviews: purpose and approaches in software engineering. Information and Software Technology, 2005. 47: p. 293-303.
11. Kerth, N., Project Retrospectives: A Handbook for Team Reviews. 2001: Dorset House Publishing Company.
12. Krogstie, B.R. and M. Divitini, Shared timeline and individual experience: Supporting retrospective reflection in student software engineering teams, in CSEE&T 2009. 2009, IEEE Computer Society: Hyderabad.
13. Schindler, M. and M.J. Eppler, Harvesting project knowledge: a review of project learning methods and success factors. International Journal of Project Management, 2003. 21: p. 10.
14. Kim, D. and S. Lee, Designing Collaborative Reflection Support Tools in e-project Based Learning Environment. Journal of Interactive Learning Research, 2002. 13(4): p. 375-392.
15. Krogstie, B.R.
and M. Divitini, Supporting Reflection in Software Development with Everyday Working Tools, in COOP. 2010, Springer: Aix-en-Provence, France.
16. Li, I., A.K. Dey, and J. Forlizzi, Understanding My Data, Myself: Supporting Self-Reflection with Ubicomp Technologies, in Ubicomp'11. 2011: Beijing, China.
17. Lin, X., et al., Designing Technology to Support Reflection. Educational Technology, Research and Development, 1999. 47(3): p. 43-62.
18. Siewiorek, N., et al., Reflection Tools in Modeling Activities, in ICLS 2010, ISLS: Chicago.
19. Xiao, L., et al., Promoting Reflective Thinking in Collaborative Learning Activities, in Eighth IEEE International Conference on Advanced Learning Technologies (ICALT). 2008, IEEE: Santander, Cantabria, Spain.
20. Pammer, V., et al., Reflective Learning at Work - A Position and Discussion Paper, in ARNets11 - Awareness and Reflection in Learning Networks. 2011: Palermo, Italy.
21. Krogstie, B., et al., Computer support for reflective learning in the workplace: A model, in International Conference on Advanced Learning Technologies (ICALT). 2012, ACM: Rome.
22. Prilla, M., V. Pammer and S. Balzert, The Push and Pull of Reflection in Workplace Learning: Designing to Support Transitions Between Individual, Collaborative and Organisational Learning, in EC-TEL 2012, Springer: Saarbruecken, Germany.
23. Krogstie, B.R., et al., Collaborative Modelling of Reflection to Inform the Development and Evaluation of Work-Based Learning Technologies, in i-KNOW 2012, ACM ICPS: Graz, Austria.

Empowering students to reflect on their activity with StepUp!: Two case studies with engineering students

Jose Luis Santos, Katrien Verbert, and Erik Duval
Dept. of Computer Science, KU Leuven, Celestijnenlaan 200A, B-3001 Leuven, Belgium
{JoseLuis.Santos,Katrien.Verbert,Erik.Duval}@cs.kuleuven.be

Abstract. This paper reports on our ongoing research around the use of learning analytics technology for awareness and self-reflection by teachers and learners.
We compare two case studies. Both rely on an open learning methodology where learners engage in authentic problems, in dialogue with the outside world. In this context, learners are encouraged to share the results of their work, opinions and experiences, and to enrich the learning experiences of their peers through comments that promote reflection on and awareness of their activity. In order to support this open learning process, we provided the students with StepUp!, a student activity visualization tool. In this paper, we focus on the evaluation of this tool by students, and on a comparison of the results of the two case studies. Results indicate that StepUp! is a useful tool that enriches student experiences by bringing transparency to the social interactions. The case studies also show how time spent on predefined high-level activities strongly influences the perceived usefulness of our tool.

Keywords: human computer interaction, technology enhanced learning, reflection, awareness

1 Introduction

This paper reports on a comparison of two recent experiments with learning analytics. In our view, learning analytics focuses on collecting traces that learners leave behind and using those traces to improve learning [1]. Educational Data Mining can process the traces algorithmically and point out patterns or compute indicators [2, 3]. Our interest is more in visualizing traces in order to make learners and teachers reflect on the activity and, consequently, draw conclusions. We focus on building dashboards that visualize the traces in ways that help learners or teachers to steer the learning process [4].

Our courses follow an open learning approach where engineering students work individually or in groups of three or four on realistic project assignments in an open way. Students use twitter (with course hash tags), wikis, blogs and other
web 2.0 tools such as Toggl1 and TiNYARM2 to report and communicate about their work with each other and the outside world in a community of practice kind of way [5, 6]. Students share their reports, problems and solutions, enabling peer students to learn from them and to contribute as well.

However, teachers, assistants and students themselves can get overwhelmed and feel lost in the abundance of tweets, blog posts, blog comments, wiki changes, etc. Moreover, most students are not used to such a community-based approach and have difficulties in understanding this process. Therefore, reflection on the activity of the community can help users to understand what is going on and what is expected of them. In this paper, we present two follow-up studies to our earlier work [7], where we documented the user-centered design of an earlier version of StepUp!; the new version we present here is geared towards an open learning approach.

In our courses, we encourage students to be responsible for their own learning activities, much in the same way as we expect them to be responsible for their professional activities later on. In order to support them in this process, our studies focus on how learning dashboards can promote reflection and self-awareness by students. To this end, we consider different ways to capture traces and identify which traces are relevant to visualize for the users. Finally, we analyze how visualizing these traces affects the perception and actions of the learner.

These experiments rely on the design, implementation, deployment and evaluation of dashboards with real users in ongoing courses. We evaluated our prototypes in two elaborate case studies: in the first case study, we introduced StepUp! to the students at the beginning of the course, visualizing blog and twitter activity and time reported on the different activities of the course using Toggl. They could access the tool but it was not mandatory.
After a period of time, we evaluated the tool with students by using a questionnaire and Google Analytics3 to track the actual use of the tool. In the second case study, StepUp! visualized student activities from blogs, twitter and TiNYARM, a tool to track read, skimmed and suggested papers in a social context [8]. Students used the tool at the end of the course, after which they completed an evaluation questionnaire. The idea behind evaluating the tool at the end of the course was to analyze how the normal use of the tool affected the perceived usefulness. As time tracking is so prominent in what we visualize, we also discuss the importance of tracking time on high-level definitions of activities and the potential differences between automatic and manual tracking of the data.

The remainder of this text is structured as follows: the next section presents our first case study, in a human-computer interaction course. Section 3 describes the second case study, in a master thesis student group. Results are discussed in Section 4. Section 5 presents conclusions and plans for future work.

1 http://toggl.com
2 http://atinyarm.appspot.com/
3 http://analytics.google.com

2 First case study

2.1 Data tracked

One of the main challenges with learning analytics is to collect data that reflect relevant learner and teacher activities [4]. Some activities are tracked automatically: this is obviously a more secure and scalable way to collect traces of learning activities. Much of our work in this area is inspired by "quantified self" applications [9], where users often carry sensors, either as apps on mobile devices, or as specific devices, such as for instance Fitbit4 or Nike Fuel5.
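To make the idea of such automatic trace collection concrete, the sketch below parses a blog RSS feed and stores (actor, action, timestamp) records in a small database table. This is a minimal, hypothetical illustration, not the actual StepUp! tracker: the sample feed, the field names and the SQLite schema are all assumptions.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical sample feed; real group-blog feeds carry more fields.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Group blog</title>
    <item>
      <title>Paper prototype results</title>
      <dc:creator>anneeverars</dc:creator>
      <pubDate>Mon, 17 Sep 2012 10:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Brainstorm notes</title>
      <dc:creator>ganji</dc:creator>
      <pubDate>Tue, 18 Sep 2012 09:30:00 +0000</pubDate>
    </item>
  </channel>
</rss>"""

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

def extract_traces(rss_text, action="blog_post"):
    """Pull (actor, action, timestamp) records out of an RSS feed."""
    root = ET.fromstring(rss_text)
    traces = []
    for item in root.iter("item"):
        actor = item.findtext(DC + "creator")
        stamp = item.findtext("pubDate")
        traces.append((actor, action, stamp))
    return traces

def store_traces(conn, traces):
    """Append trace records to an activity-trace table."""
    conn.execute("CREATE TABLE IF NOT EXISTS traces"
                 " (actor TEXT, action TEXT, stamp TEXT)")
    conn.executemany("INSERT INTO traces VALUES (?, ?, ?)", traces)

conn = sqlite3.connect(":memory:")
store_traces(conn, extract_traces(SAMPLE_RSS))
```

Run hourly against each group blog's feed (and analogously against a hashtag search via the twitter API), this yields exactly the kind of who-did-what-when table that a dashboard can aggregate per student and per day.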
We rely on software trackers that collect relevant traces from the Web in the form of digital student deliverables: the learners post reports on group blogs, comment on the blogs of other groups and tweet about activities with a course hash tag. Those activities are all tracked automatically: we basically process the RSS feeds of the blogs and the blog comments every hour and collect the relevant information (the identity of the person who posted the blog post or comment, and the timestamp) into a database with activity traces. Similarly, we use the twitter Application Programming Interface (API) to retrieve the identity and timestamp of every tweet with the hash tag of the course.

Moreover, we track learner activities that may or may not produce a digital outcome with a tool called Toggl: this is basically a time tracking application that can be configured with a specific set of activities. In our HCI course, we make a distinction between the activities reported on in this way, based on the different tasks that the students carry out in the course:

1. evaluation of Google Plus;
2. brainstorming;
3. scenario development;
4. design and implementation of paper prototype;
5. evaluation of paper prototype;
6. design and implementation of digital prototype;
7. evaluation of digital prototype;
8. mini-lectures;
9. reading and commenting on blogs by other groups;
10. blogging on own group blog.

The first six items above correspond to course topics: the students started with the evaluation of an existing tool (Google Plus6) and then went through one cycle of user-centered design of their own application, from brainstorming over scenario development to the design, implementation and evaluation of first a paper and then (a series of) digital prototype(s) [10].

4 http://www.fitbit.com/
5 http://www.nike.com/fuelband/
6 http://plus.google.com/
The last three items above correspond to more generic activities that happen throughout the course: mini-lectures during working sessions, and blogging activities, both on their own blog and on that of their peers. For all these activities, we track the start time, the end time and the time span in between, as well as the learner identity.

When students use Toggl, they can do so in semi-automatic mode or manually. Semi-automatic mode means that, when they start an activity, they can select it and click on a start button; when they finish the activity, they click on a stop button. Manually means that the students have to specify activity, time, and duration to Toggl. In this way, students can add activities that they forgot to report, or edit them manually. Of course, on the one hand, this kind of tracking is tedious and error prone - hence the manual option. On the other hand, requiring students to log time may make them more aware of their time investment and may trigger more conscious decisions about what to focus on or how much time to spend on a specific activity. The main course objective is to change the perspective of how they look at software applications, from a code-centric view to a more user-centric view. That is an additional reason why self-reflection is important in this context.

2.2 Description of the interface

Figure 1 illustrates how the data are made available in their complete detail in our StepUp! tool: this is a "Big Table" overview where each row corresponds to a student. The students are clustered in the groups that they belong to. For instance: rows 1-3 contain the details of the students 'anneeverars', 'ganji' and 'greetrobijns' (see marker 1 in Figure 1). These three students work together in a group called 'chigirlpower', the second column in the table (marker 2). The green cells in that second column indicate that these students made 8, 9 and 13 posts in their group blog respectively (marker 3).
Rows 4-6 contain the details of the second group, called 'chikulua12': they made 1, 4 and 18 comments on the blog of the first group (column 2) and 9, 6 and 9 posts in their own blog (column 3) respectively (marker 4). The rightmost columns (marker 5) in the table indicate the total number of posts, the total number of hours spent on the course (Toggl) and the total number of tweets. The two rightmost columns are sparklines [9] that provide a quick glance of the overall evolution of the activity for a particular student (marker 6). They can be activated to reveal more details of student activity (markers 7 and 8).

As is obvious from Figure 1, this is a somewhat complex tool. Originally, the idea was that this would mainly be useful for the teacher - who can indeed provide very personal feedback to the students, based on the in-depth data provided by the table. However, somewhat to our surprise, and as illustrated by Figure 2 and Figure 3, this overview is used by almost all students once per week, for an average of about 10 minutes. Nevertheless, in order to provide a more personalized and easy to understand view that students can consult more frequently, which is important for awareness support, we have developed a mobile application for these data (see Figure 4) that we released recently, as discussed in the future work section below.

Fig. 1. First case study - Big Table view
Fig. 2. Analytics of Big Table use (daily)
Fig. 3. Analytics of Big Table use (weekly)
Fig. 4. Profile view in mobile application

2.3 Evaluation

We carried out a rather detailed evaluation six weeks into the course, based on online surveys. In the evaluation, we used five instruments, in order to obtain a broad view of all the positive and negative issues that these could bring up: 1. open questions about student opinions of the course; 2.
questions related to their awareness of their own activities, those of their group and those of other groups; 3. opinions about the importance of the social media used in the course; 4. questions about how StepUp! supports awareness of their own activity, that of their group and that of other groups; 5. a System Usability Scale (SUS) evaluation focused on the tool [11].

Another goal of our evaluations is to gather new requirements to improve the course and the deployed tools. This task is complex because students are sometimes not aware of the goals of the course. Below, we summarize the main outcomes of this evaluation.

Demographics
In total, 27 students participated in the evaluation; they are between 20 and 23 years old and include 23 males and 4 females. All the participants are students of the Human Computer Interaction course.

Open Questions
For the open questions, the students were asked about positive and negative aspects of the course, and they were asked how they would improve the course. Overall, the use of the learning analytics seems to be well received, as illustrated by the following quotes: "I like the interactive courses. As professor Duval said himself, it allows him to adjust us faster. We (the students) keep on the right track. Otherwise, we might do a lot of worthless work and thus lose valuable time we could invest better in other ways in this course." or "The course is different from any courses I taken before as there is class participation, immediate feedback etc.". Neither the negative aspects mentioned, nor the suggestions to improve the course, related to the use of learning analytics.

Fig. 5.
Evaluation first case study - Awareness part

Awareness
We asked students questions on whether they think they are aware of how they, their group and the other students in class spend efforts and time in the course, and whether they consider this kind of information important. Overall, the students think that they are very aware of their own efforts, just a little bit less aware of the efforts of the other members in their group, and less aware of the efforts of members of other groups - Figure 5 (left box plot) provides more details.

StepUp! support
As illustrated by Figure 5 (right box plot), students evaluate the support by StepUp! for increased awareness rather positively: the students agree that the tool reinforces transparency and that it helps to understand how peers and other students invest efforts in the course. This is important because these data suggest that the tool does achieve its main goal.

SUS questionnaire
Overall, the SUS usability questionnaire rating of StepUp! is 77 points on a scale of 100. This score rates the dashboard as good [11]. Compared to our previous design, we have gained 5 points on this scale [7], which is encouraging.

3 Second case study

3.1 Tracked data

The second case study ran with 13 master students working on their master thesis. All of them work on HCI topics such as music visualization and augmented reality. In this case study, most students work individually on their thesis topics, except for two students who work together on one topic. As in the previous case study, they report their progress on blogs, and share opinions and communicate with their supervisors and each other on twitter. In addition, they use TiNYARM. The use of this tool is intended to increase the awareness of supervisors and students: they can suggest papers to each other, see what others have read, and read papers that are suggested to them.
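The paper does not describe TiNYARM's internals, but the kind of bookkeeping it implies (which user holds which paper in which state, plus per-user totals like the Big Table columns shown later) can be sketched as follows. All class, method and state names here are invented for illustration; this is not TiNYARM's actual data model or API.

```python
from collections import Counter

class PaperTracker:
    """Hypothetical TiNYARM-style bookkeeping: track which paper each
    user holds in which state (read / skimmed / suggested / to-read)."""

    STATES = {"read", "skimmed", "suggested", "to-read"}

    def __init__(self):
        self.status = {}  # (user, paper) -> state

    def set_state(self, user, paper, state):
        if state not in self.STATES:
            raise ValueError("unknown state: " + state)
        self.status[(user, paper)] = state

    def suggest(self, from_user, to_user, paper):
        # A suggestion appears on the receiver's list as 'suggested'.
        self.set_state(to_user, paper, "suggested")

    def summary(self, user):
        """Per-user counts of each state, the raw material for the
        read/skimmed/suggested/to-read dashboard columns."""
        return Counter(state for (u, _), state in self.status.items()
                       if u == user)

# Illustrative usage with student names from the case study and
# made-up paper identifiers.
tracker = PaperTracker()
tracker.set_state("annivdb", "fitts1954", "read")
tracker.set_state("annivdb", "card1983", "skimmed")
tracker.suggest("mendouksai", "annivdb", "norman1988")
```

Aggregating `summary()` per student over time is what turns this social reading activity into a visualizable awareness trace.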
In our previous experiment [9], we tracked the time spent using RescueTime, a completely automatic time tracking tool. In Section 2, students reported the time spent on activities using Toggl. In this case study, students do not report time spent. The goal behind this setup is to figure out how important the time spent traces are for our students.

3.2 Description of the interface

Figure 6 illustrates how the data are made available in their complete detail in our StepUp! tool. The students are ordered alphabetically and in the groups that they belong to, as is the case for 'annivdb' and 'mendouksai' (marker 1 in Figure 6). For instance: rows 1-2 contain the details of the students already mentioned before. These two students work together on a thesis topic (augmented reality). The green cells in the second column indicate that these students made 17 and 15 posts in their blog respectively (marker 2). Row 3 contains the details of another student who is working individually on his thesis: he made 2 comments on the blog of the group working on augmented reality (column 2) and 43 posts in his own blog (column 3) (marker 3). The rightmost columns in the table indicate the total number of tweets and of read, skimmed, suggested and to-read papers (marker 4). The rightmost column is a sparkline that provides a quick glance of the overall evolution of the twitter, blog and TiNYARM activity for a particular student. It can be activated to reveal more details of student activity (marker 5).

Fig. 6. Second case study - Big Table view

3.3 Evaluation

We carried out the same detailed evaluation as in the previous case study. However, in this case study, students had not accessed the tool before. The idea behind this evaluation setup was to analyze how the perceived usefulness of the tool is influenced by whether or not the tool had been used before.
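One of the instruments used in both evaluations is the System Usability Scale [11]. As background (this is Brooke's standard SUS scoring, not code from the paper): each of the ten Likert items (1-5) contributes 0-4 points - odd-numbered items score the response minus 1, even-numbered items 5 minus the response - and the sum is multiplied by 2.5 to land on a 0-100 scale.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten Likert
    responses (each in 1..5, in questionnaire order)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses in the range 1..5")
    # Index 0, 2, ... are the odd-numbered (positively worded) items;
    # index 1, 3, ... the even-numbered (negatively worded) items.
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(responses)]
    return sum(contributions) * 2.5

# A uniformly neutral questionnaire (all 3s) lands at the midpoint.
print(sus_score([3] * 10))  # 50.0
```

On this scale, the ratings reported below (77 and 84 for the two case studies) sit well above the midpoint.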
Demographics
In total, 12 students participated in the evaluation; they are between 21 and 25 years old and include 10 males and 2 females.

Open Questions
For the open questions, the students were asked about positive and negative aspects of the course, and they were asked how they would improve the course. Overall, the use of social networks seems to be well received, as illustrated by the following quotes: "The blogs are a good way to get an overview of what everyone is doing." or "Having a blog is also a good thing for myself, because now I have most of the information I processed in one place."

Awareness
We asked students questions on whether they think they are aware of how they and the other students in class spend efforts in the course, and whether they consider this kind of information important. Overall, the students think that they are very aware of their own efforts and less aware of the efforts of other members of the course - Figure 7 (left box plot) provides the details. These results are similar to the previous case study.

StepUp! support
As illustrated by Figure 7 (right box plot), students evaluate the support by StepUp! differently from the previous case study. They consider that StepUp! provides better transparency, but indicate that the tool is less useful for understanding how others spend their efforts. As we discuss in the next section, time seems to be a really useful indicator for understanding how others are behaving, this being the main difference from the previous case study.

Fig. 7. Evaluation second case study - Awareness part

One of the students remarked that he would have liked to realize earlier that his activity in commenting on blogs was low, and all the rest agreed that they should have been more active in the use of social networks.

SUS questionnaire
Overall, the SUS usability questionnaire rating of StepUp! is 84 points on a scale of 100.
This score rates the dashboard as almost excellent [11]. From the previous experiment, we have gained 5 points on this scale. The main difference from the previous case study is that we replaced Toggl data by data that is tracked by TiNYARM. We could say that the complexity of the visualization decreases by removing the Toggl data: in the previous case study, we visualized two units, time (Toggl) and number of actions (twitter and blog), whereas in the second case study we focus on number of actions only (twitter, blog and TiNYARM). In the second case study, the number of users also decreases, hence the size of the table is smaller - which may also affect the usability results.

Although the usability results are encouraging, the results of this case study indicate that StepUp! is less useful for understanding the efforts of peer students. As Toggl data was not included in the visualizations of this case study, this may have affected the perceived usefulness. These results indicate that further evaluation studies are required to assess the impact of the visualized data on supporting awareness.

4 Discussion and open issues

The field of learning analytics has known explosive growth and interest recently. Siemens et al. [12] present an overview of ongoing research in this area. Some of that recent work focuses more on Educational Data Mining, where the user traces power recommendation algorithms [2, 3]. When learning analytics research applies visualizations, it is typically less focused on dashboards, and less systematic evaluations of the usability and usefulness of the tools are conducted.

In this paper, we have presented two case studies. The first study focuses on visualizing social network activity and, complementarily, time reporting on predefined activities in a course that follows an open learning approach. The second case study focuses exclusively on the social network activity.

Time is a commonly used indicator for planning.
Based on the European Qualification Framework of higher education, degrees and courses have been assigned a number of credits called European Credit Transfer System (ECTS) credits. Each of these credits carries an estimate of time: one credit is approximately 30 hours. Therefore, time spent seems to be a good indicator to take into account for reflection and to check whether the time spent by the student in the course is properly distributed. Time is also used in empirical studies [13]. In addition, our results support this idea: students seem to understand better how others spend their efforts when time spent is visualized.

However, time tracking is not an easy task. Manual tracking systems and applications such as Trac [14], Toggl (described in this paper) and twitter [15] are used in learning experiments for this purpose. These systems rely on the user to report time: they require such explicit action as well as the implicit process of reflection, but they also enable users to game the system by overestimating the time spent on the course. On the other hand, the deployment of automatic trackers such as RescueTime [7] and the logging systems of learning management systems [15] relieves the user of such manual reporting tasks. These trackers are able to categorize the tools used by the activity they are intended for, although these categories usually correspond to less abstract activities. Moreover, they are not able to track time on tasks done offline, such as reading a book or having a meeting. Nevertheless, time tracking has influenced the results of the evaluations: in the second case study, students reported worse understanding of how others spend their efforts.

From the evaluations and discussion above it is clear that many open research issues remain. We briefly discuss some of them below.

1. What are relevant learner actions? We track tweets and blog posts and ask students to track their efforts on specific course topics and activities.
However, we track quantitative data that tell us little or nothing about the quality of what students do. Obviously, these data provide in some sense information about necessary conditions: if the students spend no time on particular topics, then they will probably not learn a lot about them either. However, they may spend a lot of time on topics and not learn a lot. Or they may be quite efficient and learn a lot with little investment of time. It is clear that we need to be quite careful with the interpretation of these data.

2. How can we capture learner actions? We rely on software trackers for laptop or desktop interactions, and on social media for learner interactions (through twitter hash tags and blog posts and comments). We could further augment the scope of the data through physical sensors for mobile devices. However, capturing all relevant actions in an open environment in a scalable way is challenging.

3. How can we evaluate the usability, usefulness and learning impact of dashboards? Whereas usability is relatively easy to evaluate (and we have done many such evaluations of our tools), usefulness, for instance in the form of learning impact, is much harder to evaluate, as this requires longer-term and larger-scale evaluations.

4. How can we enable goal setting and connect it with the visualizations, so as to close the feedback loop and enable learners and teachers to react to what they observe and then track the effect of their reactions? We are experimenting with playful gamification approaches, which present their own challenges [16], for instance around trivialization and control.

5. There are obvious issues around privacy and control - yet, as public attitudes and technical affordances evolve [17], it is unclear how we can strike a good balance in this area.

5 Conclusions and future work

Our main goal with StepUp!
is to provide students with a useful tool and to empower them to become better students. From our point of view, they should work in an open way, sharing their knowledge with the world and having some impact on others' opinions. StepUp! supports our open learning approach by providing more transparency in the social interaction. It provides students with an opportunity to reflect on their activity, to look at this quantitative data and to see how others are performing within the community. Time tracking seems to be a useful indicator for students to understand how other students spend their efforts and to increase awareness of the course activity. Furthermore, the usefulness of a tool is not only based on conclusions drawn from visualizations: how we collect the traces also influences it. To this end, manual and automatic tracking require more research. Design is also a factor that influences the use of our application. To this end, we are currently experimenting with other approaches. For instance, we have deployed a mobile web application (see Figure 5) that provides a quick overview and indicators of student activity. We expect this to reduce cognitive effort and make these tools more attractive to use. In conclusion, we believe that a sustained research effort on learning analytics dashboards, with a systematic evaluation of both usability and usefulness, can help to make sure that the current research hype around learning analytics leads to real progress. As we already mentioned in section 2, we propose to deploy new versions of StepUp! on different devices to research how devices can influence the reflection process from a Human-Computer Interaction perspective, for instance by evaluating the profile view (Figure 4) for mobile devices. Furthermore, as explained in section 4, we are mainly interested in figuring out the relevant
traces for the students, in involving sensors to track external data and in enabling goal setting.

6 Acknowledgements

This work is supported by the STELLAR Network of Excellence (grant agreement no. 231913). Katrien Verbert is a Postdoctoral Fellow of the Research Foundation - Flanders (FWO). The work of Jose Luis Santos has received funding from the EC Seventh Framework Programme (FP7/2007-2013) under grant agreement no 231396 (ROLE).

References

1. Duval, E.: Attention please! Learning analytics for visualization and recommendation. In: Proceedings of LAK11: 1st International Conference on Learning Analytics and Knowledge, ACM (2011) 9-17
2. Pechenizkiy, M., Calders, T., Conati, C., Ventura, S., Romero, C., Stamper, J., eds.: Proceedings of EDM11: 4th International Conference on Educational Data Mining. (2011)
3. Verbert, K., Manouselis, N., Drachsler, H., Duval, E.: Dataset-driven research to support learning and knowledge analytics. Educational Technology and Society (2012) 1-21
4. Duval, E., Klerkx, J., Verbert, K., Nagel, T., Govaerts, S., Parra Chico, G.A., Santos, J.L., Vandeputte, B.: Learning dashboards and learnscapes. In: Educational Interfaces, Software, and Technology. (May 2012) 1-5
5. Fischer, G.: Understanding, fostering, and supporting cultures of participation. interactions 18(3) (May 2011) 42-53
6. Wenger, E.: Communities of Practice: Learning, Meaning, and Identity (Learning in Doing: Social, Cognitive and Computational Perspectives). 1st edn. Cambridge University Press (September 1999)
7. Santos, J.L., Govaerts, S., Verbert, K., Duval, E.: Goal-oriented visualizations of activity tracking: a case study with engineering students. In: LAK12: International Conference on Learning Analytics and Knowledge, Vancouver, Canada, 29 April - 2 May 2012, ACM (May 2012) Accepted.
8. Parra, G., Klerkx, J., Duval, E.: TinyARM: Awareness of relevant research papers through your community of practice.
In: Proceedings of the ACM 2013 Conference on Computer Supported Cooperative Work. (2013) Under review.
9. Tufte, E.R.: Beautiful Evidence. Graphics Press (2006)
10. Rogers, Y., Sharp, H., Preece, J.: Interaction Design: Beyond Human-Computer Interaction. John Wiley and Sons Ltd (2002)
11. Bangor, A., Kortum, P.T., Miller, J.T.: An empirical evaluation of the System Usability Scale. Int. J. Hum. Comput. Interaction (2008) 574-594
12. Siemens, G., Gasevic, D., Haythornthwaite, C., Dawson, S., Shum, S., Ferguson, R., Duval, E., Verbert, K., Baker, R.: Open learning analytics: an integrated and modularized platform: Proposal to design, implement and evaluate an open platform to integrate heterogeneous learning analytics techniques. Society for Learning Analytics Research (2011)
13. Keith, T.Z.: Time spent on homework and high school grades: A large-sample path analysis. Journal of Educational Psychology 74(2) (1982) 248-253
14. Upton, K., Kay, J.: Narcissus: Group and individual models to support small group work. In: Proceedings of the 17th International Conference on User Modeling, Adaptation, and Personalization: formerly UM and AH. UMAP '09, Berlin, Heidelberg, Springer-Verlag (2009) 54-65
15. Govaerts, S., Verbert, K., Duval, E., Pardo, A.: The student activity meter for awareness and self-reflection. In: CHI EA '12: Proceedings of the 2012 ACM Annual Conference Extended Abstracts on Human Factors in Computing Systems, ACM (May 2012) 869-884
16. Deterding, S., Sicart, M., Nacke, L., O'Hara, K., Dixon, D.: Gamification: using game-design elements in non-gaming contexts. In: Proceedings of the 2011 Annual Conference Extended Abstracts on Human Factors in Computing Systems. CHI EA '11, New York, NY, USA, ACM (2011) 2425-2428
17. Jarvis, J.: Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live.
Simon & Schuster (2011)

Fostering reflective practice with mobile technologies

Bernardo Tabuenca, Dominique Verpoorten, Stefaan Ternier, Wim Westera, and Marcus Specht

Open University of The Netherlands, PO Box 2960, 6401 DL Heerlen, The Netherlands. {bernardo.tabuenca; dominique.verpoorten; stefaan.ternier; wim.westera; marcus.specht}@ou.nl

Abstract. During 2 school days and 2 days off, 37 college pupils were offered a daily reflection and reporting exercise about how (intensity and channels) they learnt during the day. This pilot experiment had 2 purposes: a) to assess the extent to which the mobile phone can be used as an instrument to develop awareness about learning and b) to explore how young people attend to their identity as (lifelong) learners when they are prompted to reflect on this theme. Results show that students were willing to answer questions about learning on their own mobile appliances and outside school hours. The study also provides indications that becoming aware of and reflecting about their identity as (professional) learners is not a common and/or understood practice for the participants. These findings, which question the common life of young people from a learning perspective, are discussed in the light of the call to breed mindful, responsible and committed learners.

Keywords. Reflection; awareness; mobile technologies; lifelong learning

1 Introduction

By the end of college, average European pupils have spent about 13,000 hours on the school benches (OECD, 2011). There is no doubt about the quantity of academic content that they have acquired as students. Less sure and less explored is how they have developed an identity as learners. Yet the acquisition of such an identity, and of the associated reflective transversal skills, grows in importance in a “lifelong learning society” (European Commission, 2006), a context precisely wherein learning attitudes and behaviours become central assets of individuals and organizations.
Research on the akin notions of “learning to learn” (Claxton, 2006), “meta-learning” (Jackson, 2004) or “meta-cognitive development” (Aviram, 2008) has put various levels of emphasis on the social and pedagogical relevance of promoting thinking about thinking. Most often, however, this call for more thoughtful learning has centered on the mechanics and methods of learning, usually intended to train the self-as-a-performer (Azevedo, 2005; Csapó, 1999). Recently, emerging research strands like the narrative approach to learning (Watkins, 2006) or student voice (Lodge, 2005) have proposed to also question the educational needs of the self-as-a-learner. If learning becomes a critical part of life, it is expected that those who practice it can conceptualize all these hours of tuition as a specific activity that they are able to qualify, describe, distinguish from others, and practice. Developing this kind of awareness goes along with what could be called a “student professional development”. Its provision implies making room for issues like the meaning of daily life at school (the student’s “common life” as defined by Lasch (1997)), the personal commitment to knowledge, or students’ conceptions of the relationship between elements of the environment and learning (Elen & Lowyck, 1998). This holistic approach suggests that a way to sharpen reflective habits about learning is to problematise the daily exposure to learning activities. It recommends that students do not simply think of their interactions with learning opportunities as a process of “performing” them, but also pay attention to the personal internalization of these experiences (Le Cornu, 2009), in an effort to steadily see their own intellectual growth as a product of intentions and choices rather than of externally-imposed or incidental entities.
The current study tests an instructional setting designed to stimulate students to make what they live at school a deliberate object of attention (Watkins, 2001), through the use of reflection amplifiers instantiated on smartphones.

1.1 Reflection amplifiers

Training the self-as-a-learner implies attending to learning processes with increased time, attention and resources. There is therefore a challenge in finding ways to provide pupils with opportunities to mentally evoke what they have lived throughout the day with regard to learning, so that this experience can be turned into a deliberate object of attention and reflection. One possible way is offered by Verpoorten, Westera and Specht (2011) in their work on “reflection amplifiers” (RAs). This expression refers to compact and well-considered prompting approaches that offer learners structured opportunities to examine and evaluate their own learning. Whereas the promotion of reflection is often associated with post-practice methods of experience recapture (Boud, Keogh, & Walker, 1985) through portfolios or learning diaries, RAs are presented as structured and repeated introspective episodes, offered in the course of action and meant to make learning visible (Hattie, 2008) and to nurture internal feedback (Butler & Winne, 1995). Such instructional practice does not simply aim at engaging learners at the level of presenting information for understanding and use, but also directs them at meta-levels of learning. The concise reflection they call for further characterizes RAs. As supports for condensed reflective processes, RAs operate through miniature Web applications providing a single engagement point with a defined type of reflection, here the daily SMS about the learning day. So far, RAs have been tested in regular formal online learning. Furthermore, the “learning to think” approach enacted by RAs has concerned academic reflective skills like summarizing or self-assessing.
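The notion of an RA as “a single engagement point with a defined type of reflection” delivered on a fixed schedule can be made concrete with a small sketch. The names and structure below are invented for illustration and are not the study’s actual tooling; only the two daily questions and the 8 p.m. delivery time come from this paper.

```python
from dataclasses import dataclass

# Hypothetical sketch of a reflection-amplifier prompt: one structured
# question, a constrained answer format, and a fixed daily delivery time.
@dataclass(frozen=True)
class ReflectionPrompt:
    question: str        # the single engagement point offered to the learner
    answer_format: str   # e.g. "rating-1-5" or "multiple-choice"
    delivery_hour: int   # hour of day (24h clock) at which the prompt is sent

# The two daily questions used in this study, both sent at 8 p.m.
DAILY_PROMPTS = [
    ReflectionPrompt("How intense was your learning day? Rate it from 1 to 5.",
                     "rating-1-5", 20),
    ReflectionPrompt("What were your main learning channels today?",
                     "multiple-choice", 20),
]
```

The point of the sketch is that an RA is deliberately minimal: a defined question and format, repeated daily, rather than an open-ended diary.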
This study transposes RAs to mobile (meta-)learning, an after-school setting and analytical scrutiny of one’s learning day.

1.2 Mobile technologies

This pilot study builds upon three core features of mobile technologies, and of smartphones in particular:
• Smartphones represent the only technology that students have permanently, inside and outside the classroom. In this way, smartphones appear as possible mediations between scholarly and after-school contexts. These appliances therefore recommended themselves in a study aiming at developing awareness of learning (Marton & Booth, 1997), both formal and informal.
• They are likely to promote a more personalized approach to learning because they represent a direct channel to the learner, and one that is open at all times. Not only are the reflection prompts received on personal devices, but the targeted reflection bears on the deepening of the personal relationship of the smartphone owner to knowledge and self-growth (Ranson, Boothby, Mazmanian, & Alvanzo, 2007).
• They increase the chance of learning in unconventional contexts (waiting times, transportation, etc.), with the virtual promise of turning this perceived “lost time” into perceived “productive time”. While it is impossible to know beforehand where and when the participants in this study will use their smartphones for meta-learning, it is nevertheless likely that this reflection break will offer an opportunity for learning from reflection in a non-traditional context.

1.3 Research questions

In this exploratory study, students were asked to amplify their reflection on the learning affordances offered to them throughout the day. Three main research questions have guided this pilot:
1. Will students react actively to invitations to reflect on personal learning, sent on their own device and outside school hours (participation)?
2.
What insight does this sampling of experience bring regarding how learning takes place in students’ common life today (channels of learning)?
3. What effects (or lack thereof) of these structured episodes of introspective reflection can be pinpointed on dimensions of the learning (familiarity, appreciation, perceived learning, account of the learning experience)?

2 Method

2.1 Outline of the experiment

Context and assignment (daily reflection exercise).
The study took place during an “Experiment day” which offered students the opportunity to discover the work of the Learning Media Laboratory (the authors’ workplace) through participation in empirical experiments. At the end of the day, a presentation provided an overview of mobile technologies for learning. Afterwards, the corresponding author introduced the participants to the exercise to be done over the next 4 days. The experiment was described to students as a reflection exercise in which they were encouraged to amplify their awareness of their daily activity as learners. The famous speech given by Steve Jobs (whose recent death had received much attention from the media) at the Stanford commencement ceremony1 was used to stress the importance of taking a step back and consciously attending to one’s own life and personal identity, here as a learner. The assignment was written as follows: How many years have you invested in studying and learning in your life? Maybe it is time to reflect in your mirror for some days and ask yourself: “If today were the last day of my life, would I want to do what I am about to do today?” I invite you to live Steve Jobs’ experience during the next 4 days, so you can be aware of your learning and decide if you need to change anything. In our case, the mirror will be your mobile phone: you will receive a daily SMS asking you about your personal learning day. It could be anything you learned at school or during leisure time.
The experiment required using both an SMS messaging system2 that would alert students about the reflection moment of the day, and a student response system3 in which they should answer the questions they were going to be asked. An in-situ demo was performed so students could resolve doubts about the interaction with these tools. The students went back to school with a paper wrapping up the goal, the assignment and the practical process information about the study.

Sample.
The study enrolled 37 students (mean age = 17 years, 37% female, 63% male) from two colleges (Connect College, Echt, The Netherlands and European School, Mol, Belgium). An iTunes voucher of 15 euros rewarded their participation in the experiment. The voucher was delivered to students who completed at least both the pre-questionnaire and the post-questionnaire.

Timing.
The daily reflection exercise was performed during 4 consecutive days (Thursday, Friday, Saturday, Sunday) after the presentation of the experiment. This setup was designed to distribute the reflection exercises evenly over 2 days at school and 2 days out of school. It made it possible to encompass awareness of and reflection on both formal and informal learning, and to provide contrast in the descriptions of the learning experience.

1 Steve Jobs at Stanford University, 2005. http://www.youtube.com/watch?v=xoUfvIb-9U4
2 Text Magic. SMS broadcast system. http://www.textmagic.com/
3 Socrative. Student personal response system. http://www.socrative.com/

The virtual classroom was opened every day 30 minutes before sending the SMS (Fig. 2.a) in order to have the “Student paced quiz” ready when students logged in (Fig. 1.a). An SMS was sent to students every day at 8 p.m., alerting them that the student response system was ready to receive answers with their reflections.
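The daily reporting window, opening with the 8 p.m. SMS and closing at 7 a.m. the next morning, can be sketched as a simple check. This is an illustrative reconstruction under assumed names, not the study’s actual tooling:

```python
from datetime import datetime, time

SMS_TIME = time(20, 0)    # daily SMS alert at 8 p.m.
CLOSE_TIME = time(7, 0)   # activity closed at 7 a.m. the next day

def window_open(now: datetime) -> bool:
    """Return True if an answer submitted at `now` falls inside the
    overnight reporting window (8 p.m. up to, but excluding, 7 a.m.)."""
    t = now.time()
    # The window wraps around midnight, so it is the union of two ranges.
    return t >= SMS_TIME or t < CLOSE_TIME

# e.g. an answer at 22:30 is inside the window; one at noon is not.
```

Because the window wraps around midnight, the two clock-time comparisons are joined with `or` rather than `and`, which is the usual pitfall in such checks.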
Students who had a smartphone with an Internet connection could follow the link and perform the reflection exercise within the platform at that moment. Those who did not have an Internet connection on their mobile devices could do it later, until 7 a.m. of the next day, when the activity was closed. The platform lets the teacher monitor how many students are performing the activity at any moment (Fig. 1.b).

Fig. 1. Personal response system: a. Tutor starting the daily reflection exercise in the classroom; b. Tutor monitoring the daily reflection exercise.

Tooling.
In order to prompt every student to perform the reflection exercise, regardless of the mobile device they were using, it was decided to use SMS notifications. In a first design of the experiment, a missed-calls response system4 was evaluated for use as the reflection virtual environment. Although it supports multiple-choice questions and is free of cost, it was discarded since it does not support long text answers. The student personal response system that was selected includes a series of educational exercises (multiple-choice questions, short and long answers) and games via smartphones, tablets, laptops and personal computers. It is necessary to be connected to the Internet to perform the reflection exercise.

4 Votapedia. A missed-calls response system. http://www.urvoting.com/

2.2 Measure instruments

Pre-questionnaire.
The pre-questionnaire gathered students’ perceptions of the intensity of their learning in the previous week and the channels they used for learning. Additionally, they were asked to provide an account of their learning in the previous week.

Daily questionnaire.
The daily questionnaire, received daily on individual smartphones, was the reflection amplifier of the study. It comprised one question about the perceived intensity of the learning day (Fig.
2.c) and one question about the main channel of learning used during the day (Fig. 2.b).

Fig. 2. Student reflective practice: a. Daily SMS received by students; b. “What were your main learning channels today?”; c. “How intense was your learning day? Rate it from 1 to 5.”

Post-questionnaire.
The post-questionnaire, left active during one week, had 2 versions. The one sent to the students who performed the reflective exercise at least once presented the very same questions as the pre-questionnaire, plus some questions intended to collect students’ evaluative data regarding the daily reflection exercise. The other version was sent to students who dropped out, that is, students who did not complete any of the 4 daily reflection exercises. It raised the same three questions as the pre-questionnaire, plus one asking them the reason why they did not participate.

3 Results

The processing of closed questions was performed with the Statistical Package for the Social Sciences (SPSS), version 20. The analysis of the questions requiring coding of the answers was done with the “Multiple Episode Protocol Analysis” tool (Erkens, 2005).

3.1 Acceptance

Research question 1: “To what extent will students react actively to invitations to reflect on personal learning sent on their own device and outside school hours (participation)?”
The decrease in participation was quite visible from the first to the 4th iteration of the daily questionnaire (Fig. 3), but was not as severe as the dropout rate from the pre-questionnaire to the mere entrance into the exercise. The 29 recorded post-questionnaires comprised both the participative (56% [n=16]) and the drop-out versions (44% [n=13]).

Fig. 3. Evolution of students’ participation during the experiment

The main invoked reasons for dropping out (n=13) were “I did not receive any SMS” (46%) and “I had no internet connection at that moment” (38%).
No respondent selected lack of interest, boredom or the intrusive character of the experiment as a justification for not participating. The SMS tool confirmed the weight of technical failures: an average of 15% of the SMSs were not delivered, a large majority of these caused by a wrong phone number given by the student right from the start, but some also caused by malfunctions in the broadcasting (especially on day 3, when a restart of the whole activity was necessary). Some losses also happened (mainly 6 on day 2). Additionally, the monitoring tool displayed how many students were connected to the platform filling out the questionnaire at any moment. From these observations, it can be concluded that the majority of the students completed the exercise at the moment they received the SMS.

3.2 Today’s learning

Research question 2: “What insight does this sampling of experience bring regarding how learning takes place in students’ common life today (channels of learning)?”
Table 1 wraps up the answers given by students in the pre-questionnaire and in the daily reflection exercises. School and Internet were the most important sources of learning.

                    School   Internet   Conversations   Leisure   Other
Pre-quest. (n=37)     65%       27%           3%           0%       5%
Day 1 (n=19)          26%       53%          11%           5%       5%
Day 2 (n=17)          73%        9%           9%           9%       0%
Day 3 (n=13)           0%       31%           7%          31%      31%
Day 4 (n=11)           0%       46%           9%           9%      36%

Table 1. Main channel of learning

3.3 Reflection

Research question 3: “What effects of the structured episodes of introspective reflection can be pinpointed?”

Familiarity with reflective practice.
Looking backward on one’s life as a learner is not a deep-rooted habit in students, if the answer to the question “before the start of this experiment, can you remember the last time you thought about your learning day?” is taken as an indicator: 81% of the participants (n=16) answered “No”.

Appreciation of reflective practice.
When asked whether they liked the reflection ritual implemented through their smartphone, 69% (n=16) answered positively. Four categories emerged from the justifications of students valuing the experience:
• Gains in meaning (18%). E.g. participant #18: “It helps you realise that your day has much value. It is eventually about my life”.
• Gains in self-assessment (29%). E.g. participant #5: “You look critically at what you have learnt and how you might improve. Evaluating yourself adds to the learning experience itself”.
• Gains in consciousness without further details (24%). E.g. participant #7: “My interest steadily grew because it made me more conscious”.
• Other answers (29%). E.g. participant #9: “Very interesting and well done”.
Only a few students gave reasons for their dislike of the experiment: “no learning comes from the reflection” (participant #6), “the reflection is quickly forgotten” (participant #20), “my reflection on learning takes place in the moment of learning and not afterwards” (participant #21), “I reflect on other things” (participant #10), “I’ve often asked myself before if I learnt at school and often came to this conclusion: nothing” (participant #2).

Perceived learning.
Perceived learning was rated on a 3-point Likert scale: “I learnt less than usual”, “I learnt as usual” and “I learnt more than usual”. A higher relative frequency of the answer “I learnt more than usual” was found for the group of students who participated in the reflection exercise and filled in the post-questionnaire (N = 19) than for the group of students who did not show up for the exercise but took the post-questionnaire (N = 10): 31% versus 7% respectively.
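A between-group comparison of ordinal ratings like this one is what a Mann-Whitney U test addresses. As a minimal sketch of how the U statistic is formed, here is the direct-count formulation on invented 3-point ratings (not the study’s data, and without the significance computation the paper reports):

```python
def mann_whitney_u(x, y):
    """Direct-count Mann-Whitney U statistic for sample x versus y:
    each pair where x wins counts 1, each tied pair counts 0.5."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Invented 3-point ratings (1 = learnt less, 2 = as usual, 3 = more than usual)
participants = [3, 2, 3, 2, 3, 3, 2, 1]
non_participants = [2, 1, 2, 2, 1, 2]

u = mann_whitney_u(participants, non_participants)
# By construction, U(x, y) + U(y, x) == len(x) * len(y)
```

In practice one would use a statistics package (e.g. SciPy’s `mannwhitneyu`) to obtain the p-value as well; the sketch only shows where the U value comes from.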
However, a Mann-Whitney test granted no significance to this observation: U = 79, p = .12, r = .03.

                                                                         Mean intensity   SD    N
Perceived learning for the week before the experiment                          1.8        .6    37
Perceived learning reported in the daily reflection exercise (all days)        1.7        .8    56
Intensity rating for the week of the experiment (non-participants)             1.8        .5    13
Intensity rating for the week of the experiment (participants)                 2.2        .6    16

Table 2. Perceived learning

Description of the learning experience.
When asked to describe their learning experience during the week, participants in the daily reflective exercise produced longer accounts: 112 characters on average versus 88 for the non-participants. However, a t-test showed that these differences were not significant, t(26) = 1.12, p = .26, d = 0.29. The same conclusion was drawn from a chi-square test bearing upon the level of complexity of the accounts, assessed with a three-level coding rubric.

4 Discussion and further research work

This section gives an interpretation of the results and locates them in a broader educational context. The discussion and the suggestions for future research follow the order of the 3 guiding research questions of this study.

4.1 Use of private phones to raise awareness about learning

It is possible to use smartphones to stimulate meta-learning about common life as a learner. A proportion of pupils accepted and were able to use their personal smartphone for “serious” messages coming from the researcher outside school hours. While it may seem obvious, this pre-condition cannot be taken for granted. Hardy et al. (2008) show that even when undergraduates do have a good level of IT competence and confidence, they tend to be conservative in their approaches to university study, maintaining a clear separation between technologies for learning and for social networking.
Margaryan and Littlejohn (2009) lean on their findings of low levels of use of, and familiarity with, collaborative knowledge creation tools, virtual worlds, personal Web publishing and other emergent social technologies, to cast doubt on the ability or the wish of students to use complex digital tools in their learning practice. On the other hand, Jones, Edwards and Reid (2007) report that, despite being unaccustomed to using their mobile phones for academic study, students willingly accepted SMS reminders (focused on time management and not on learning consolidation) from their tutor via a bulk texting service.

4.2 Fragmentation of the learning sources

Despite the mounting body of literature stressing the emergence of a “Net Generation”, “Homo Zappiens” or “digital natives”, and despite the growing interest in informal learning, which in its extreme form goes as far as predicting the disappearance of physical institutions like schools (Miller, Shapiro, & Hilding-Hamann, 2008) under the pressure of the fragmentation of the traditional education landscape into thousands of personal learning environments, this study suggests that learners still perceive school as a major vector of learning. Indeed, its monopoly over learning processes seems to be challenged by the emergence of a rich ecosystem outside school walls, heralded by the Internet (see Table 1). Of particular concern for future research would be to ascertain how school and other vectors of education contribute to youth’s intellectual growth (Facer, 2011). In such an investigation, the student’s voice is obviously critical. And to express it, young people will have to learn to think as learners in order to provide valuable accounts of what they are living as learners in multiple contexts. This need to be able to reflect on common life as learners takes us back to what motivated this study: defining methods and tools designed to make learning an object of attention and reflection.
4.3 Acceptance and effects of reflective practice

Three findings emerge from this study regarding reflective practice in students’ common life: a) Students have no anchored habit of seeing themselves as learners and of developing a “professional” awareness (see section “Familiarity with reflective practice”) about their daily activity/job at school (Ertmer & Newby, 1996; Sternberg, 1998) and about the learning opportunities after school; b) Providing time to perform reflective activities on this topic is appreciated by about half of the sample (see section “Appreciation”) for reasons relating to sense-making and professional development as a student; c) The stop-and-think beacons offered here are considered useless or superfluous by a good number of students, even though they were designed to be brief (for similar attitudes of rejection of reflection, see Johnson & Sherlock (2009) and Watkins (2001), p. 9). Further research is needed to disentangle the profiles of the people ready or not to devote time to self-awareness development (Baeten, Kyndt, Struyven, & Dochy, 2010), and the consequences thereof. In order to get a grip on what young people live day after day as learners, finding concrete ways to make learning visible and to externalize perceptions of it is also a challenge for research. Theoretical and empirical work must also concurrently be conducted regarding the relationship between self-awareness and learning, and the kind of new knowledge conveyed by episodes of introspection intended to help students sharpen awareness of themselves as learners.

4.4 Limitations of the study

The sample in this study shrank for technical reasons but also for reasons probably tied to the importance granted to reflection (the high drop-out right from the start of the experiment).
These reasons should be investigated in their own right, and subsequent studies should be carried out with bigger samples. This study also prompted students only four times. More investigation is needed into the tension of intruding into the pupils’ out-of-school time: it has already been shown that many university students do not like their academic studies to intrude into personal time or their social networking activities. The SocialLearn5 project at the Open University (UK) uses social networking for learning and has been well received by its students to date (however, OU students are often not “typical” undergraduates, which might change the perspective on this work). The invitation to reflect did not come from the students’ regular teachers but from researchers unknown to the participants. A better integration of the reflection amplifiers in the school context, as well as attempts to take the frequency of the prompting as an independent variable, would cast more light on the possible interplay between action and thought. A last limitation must be mentioned: the data was processed only through between-subjects comparisons. A within-subjects analysis was impossible due to the inability of the Socrative system to track who answered.

5 Conclusion

In this study, a reflection amplifier, modeled as an evaluation questionnaire about daily learning, was relayed to the students through personal smartphones with the purpose of stimulating the opening up of, and the reflection upon, learning activities, contexts and channels. These structured educational encounters between opportunities to learn and opportunities to make them visible and conscious in the mental realm of the learners aimed at encouraging students not merely to “learn” but also to put various dimensions of this experience into sharp focus.

5 SocialLearn. Learning Through Social Connection. http://www.open.ac.uk/blogs/sociallearn/
It should be further investigated whether the actualization of true learning lies at the confluence of this combination of experience (action) and thought (reflection). 6 References Aviram, R. (2008). Navigating through the Storm: Education in Postmodern Democratic Society. Rotterdam: Sense Publishers. Azevedo, R. (2005). Computer Environments as Metacognitive Tools for Enhancing Learning. Educational Psychologist, 40(4), 193-197. Baeten, M., Kyndt, E., Struyven, K., & Dochy, F. (2010). Using student-centred learning environments to stimulate deep approaches to learning: Factors encouraging or discouraging their effectiveness. Educational Research Review, 5(3), 243-260. Boud, D., Keogh, R., & Walker, D. (1985). Reflection: Turning Experience into Learning. London: Kogan Page. Butler, D. L., & Winne, P. H. (1995). Feedback and Self-Regulated Learning: A Theoretical Synthesis. Review of Educational Research, 65(3), 245-281. Claxton, G. (2006). Expanding the Capacity to Learn: A new end for education? University of Bristol. Keynote speech, British Educational Research Association Annual Conference, University of Warwick, 6-9 September 2005. Csapó, B. (1999). Improving thinking through the content of teaching. In H. Hamers, J. van Luit & B. Csapó (Eds.), Teaching and learning thinking skills (pp. 37-62). Lisse: Swets and Zeitlinger. Elen, J., & Lowyck, J. (1998). Students' views on the efficiency of instruction: An exploratory survey of the instructional metacognitive knowledge of university freshmen. Higher Education, 36(2), 231-252. Erkens, G. (2005). Multiple episode protocol analysis (MEPA). Version 4.10. The Netherlands: Utrecht University. Ertmer, P., & Newby, T. (1996). The expert learner: strategic, self-regulated, and reflective. Instructional Science, 24, 1-24. European Commission. (2006). Proposal for a recommendation of the European Parliament and of the Council on key competences for lifelong learning. COM(2005)548 final. Brussels.
Facer, K. (2011). Learning Futures: Education, technology and social change. London: Routledge. Hardy, J., Haywood, D., Bates, S., Paterson, J., Rhind, S., Macleod, H., & Haywood, J. (2008). Expectations and Reality: Exploring the use of learning technologies across the disciplines. Paper presented at the Sixth International Conference on Networked Learning, Halkidiki, Greece. Hattie, J. (2008). Visible Learning: A Synthesis of over 800 Meta-Analyses Relating to Achievement. London: Routledge. Jackson, N. (2004). Developing the Concept of Metalearning. Innovations in Education and Teaching International, 41(4), 391-403. Johnson, M., & Sherlock, D. (2009). Learner reflexivity, technology and making our way through the world. International Journal of Continuing Engineering Education and Life-Long Learning, 19, 352-365. Jones, G., Edwards, G., & Reid, A. (2007). Supporting and Enhancing Undergraduate Learning with m-learning tools: an exploration and analysis of the potential of Mobile Phones and SMS. URL http://www.networkedlearningconference.org.uk/past/nlc2008/abstracts/PDFs/Jones_162-170.pdf. Lasch, C. (1997). Women and the Common Life: Love, Marriage, and Feminism. New York, USA: Norton. Le Cornu, A. (2009). Meaning, Internalization, and Externalization. Adult Education Quarterly, 59(4), 279-297. Lodge, C. (2005). From hearing voices to engaging in dialogue: problematising student participation in school improvement. Journal of Educational Change, 6(2), 125-146. Margaryan, A., & Littlejohn, A. (2009). Are digital natives a myth or reality?: Students' use of technologies for learning. URL http://www.academy.gcal.ac.uk/anoush/documents/DigitalNativesMythOrReality-MargaryanAndLittlejohn-draft-111208.pdf. Marton, F., & Booth, S. (1997). Learning and awareness. Mahwah, N.J., USA: L. Erlbaum Associates. Miller, R., Shapiro, H., & Hilding-Hamann, K. E. (2008).
School's Over: Learning Spaces in Europe in 2020: An Imagining Exercise on the Future of Learning. Joint Research Centre, Institute for Prospective Technological Studies, European Commission. OECD (2011). Education at a Glance: OECD Indicators. Paris, France: OECD Publishing. Ranson, S. L., Boothby, J., Mazmanian, P. E., & Alvanzo, A. (2007). Use of personal digital assistants (PDAs) in reflection on learning and practice. Journal of Continuing Education in the Health Professions, 27(4), 227-233. Sternberg, R. J. (1998). Metacognition, abilities, and developing expertise: What makes an expert student? Instructional Science, 26(1), 127-140. Verpoorten, D., Westera, W., & Specht, M. (2011). Reflection Amplifiers in Online Courses: A Classification Framework. Journal of Interactive Learning Research, 22(2), 167-190. Watkins, C. (2001). Learning about Learning Enhances Performance. London: Institute of Education, University of London. Watkins, C. (2006). Explorations in metalearning from a narrative stance. Paper presented at the Second bi-annual conference of the European Association for Research on Learning and Instruction - Special Interest Group 16: Metacognition, Cambridge, UK.

Comparing Automatically Detected Reflective Texts with Human Judgements

Thomas Daniel Ullmann*, Fridolin Wild, and Peter Scott
Knowledge Media Institute, The Open University
Walton Hall, MK7 6AA Milton Keynes, United Kingdom
{t.ullmann,f.wild,peter.scott}@open.ac.uk
http://kmi.open.ac.uk

Abstract. This paper reports the descriptive results of an experiment comparing automatically detected reflective and not-reflective texts against human judgements. Based on theories of reflective writing assessment and their operationalisation, five elements of reflection were defined. For each element of reflection, a set of indicators was developed, which automatically annotates texts with respect to reflection, based on a parameterisation with authoritative texts.
Using a large blog corpus, 149 texts were retrieved, which were annotated as either reflective or not-reflective. An online survey was then used to gather human judgements of these texts. These two data sets were used to compare the quality of the reflection detection algorithm with human judgements. The analysis indicates the expected difference between reflective and not-reflective texts.

Keywords: reflection, detection, thinking skills analytics

1 Introduction

The topic of reflection has a long-standing tradition in educational science as well as in technology-enhanced learning. Reflection is seen as a key competency: one of the competencies that are important for society and that help meet important demands for all individuals, not only for specialists. Reflection is at the "heart of key competencies" for a successful life and a well-functioning society [25]. The focus of this research is on reflective writings. A reflective writing is one of many ways to manifest the cognitive act of reflection. Common forms are diaries, journals, or blogs, which serve a person as a vehicle to capture reflections. Although reflection has been present in the modern educational discourse since at least 1910 [11], methods for the assessment of reflective writings are a relatively recent development. They are not in their infancy, but neither are they fully established. Wong et al. [37] state that there is a lack of empirical research on methods of how to assess reflection, and that the discussion is driven more by theorising concepts of reflection and its use. Plack et al. [27, p. 199] more recently state "(...) yet little is written about how to assess reflection in journals". Classical tools to identify evidence of reflection are questionnaires (e.g. [1, 3]) and manual content analysis of reflective writings (for an overview see Dyment and O'Connell [12]).
* Corresponding author
These methods are time-consuming and expensive. Due to their nature, the evaluations of reflective writings and the feedback are usually available long after the act of writing, as the text first has to be processed by an expert. In addition, due to the personal nature of reflection, some people prefer not to share their writings, although feedback would benefit their reflective writing skills. The automated detection of reflection is a step towards mitigating these problems, and it also provides a new perspective for research on reflection evaluation methods. As a first step towards this goal, text was annotated, and rules were defined based on the annotation. These rules mapped five elements of reflection. The reflection detector was then parameterised based on authoritative texts. This baseline parameterisation was used to distinguish texts that fulfilled the rule criteria, afterwards referred to as reflective texts, from texts that did not satisfy these criteria, referred to as not-reflective. A larger blog corpus was automatically analysed, and the annotated texts were rated by human judges. This paper reports the results of the comparison between the automated detection of reflection and the human ratings. 2 Situating the Research in the Research Landscape The automated detection of reflection is part of the broader field of learning analytics, especially social learning content analysis [13]. Two prominent related approaches for automatically identifying cognitive processes have emerged in the past. The first approach draws on the associative connection between cue words and acts of cognition, explicitly using feature words associated with psychological states. Pennebaker and Francis [26], for example, developed the Linguistic Inquiry and Word Count tool, with a bank of over 60 controlled vocabularies for the detection of emotion and cognitive processes, to research the link between key words and their impact on physical health and academic performance.
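The cue-word idea behind this first family of approaches can be illustrated with a small sketch: count how often words from small controlled vocabularies occur in a text. The category names and mini-lexicons below are invented for illustration and are not the actual LIWC vocabularies.

```python
# Minimal sketch of LIWC-style cue-word counting. The lexicons here are
# illustrative stand-ins; a real tool uses much larger controlled vocabularies.
import re
from collections import Counter

LEXICONS = {
    "insight": {"realise", "realize", "understand", "learn", "know"},
    "causation": {"because", "hence", "therefore", "thus"},
    "emotion": {"happy", "sad", "afraid", "angry", "surprised"},
}

def cue_word_counts(text: str) -> dict:
    """Return per-category counts of cue words found in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for token in tokens:
        for category, words in LEXICONS.items():
            if token in words:
                counts[category] += 1
    return dict(counts)

print(cue_word_counts("I was sad, but therefore I learn and understand more."))
# → {'emotion': 1, 'causation': 1, 'insight': 2}
```

Such counts are cheap to compute, which is what makes cue-word approaches attractive as a first signal, at the cost of ignoring sentence structure.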
Bruno et al. [6] describe an approach for analysing journals using a mental vocabulary. This semi-automatic approach focuses on the detection of cognitive, emotive, and volitive words, enabling them to highlight changes in the use of these mental words over a course term. Chang and Chou [7] use a phrase detection system to study reflection in learners' portfolios. The system serves as a pre-processor of contents, emphasising specific parts-of-speech (in their case: stative verbs in Mandarin), which later helped experts to assign the automatically annotated words to four categories associated with reflection, labelled as emotion, memory, cognition, and evaluation. The second type of approach relies on probabilistic models and machine learning algorithms. McKlin [21] describes an approach using artificial neural networks to categorise discussion posts regarding levels of cognitive presence. According to Garrison et al. [14, p. 11], the concept of cognitive presence reflects "(...) higher-order knowledge acquisition and application and is most associated with the literature and research related to critical thinking". Cognitive presence consists of four categories: triggering events, exploration, integration, and resolution. The cognitive presence model was also used in the ACAT system [8], in which a Bayesian classifier was used to distinguish content according to the four categories of the model. Rosé et al. [28] describe the use of a set of classification algorithms (Naïve Bayes, Support Vector Machines, Decision Trees) to automatically annotate sentences from discussion forums related to, amongst others, epistemic activity, argumentation, or social regulation. 3 Research Question The wider goal of this research is to evaluate the boundaries of the automated detection of reflection.
This includes the question of the extent to which it is possible to algorithmically codify a reflection detection that validly and reliably detects and measures elements and depth of reflection in texts, and how its results compare to human judgements. This is an on-going research process. Within this paper the focus lies on the following questions: 1. How does the automated detection of reflection relate to human judgements of reflection? 2. What are reasonable weights to parameterise the reflection detector? Regarding the first question, the goal is to compare automatically detected reflective texts, and texts that do not satisfy the criteria of a reflective text, with human judgements. It is expected that the two categories will differ. The second question refers to the weights of the reflection detection for each element of reflection. Based on a set of reflective texts, weights will be determined. It is expected that, using these weights, the reflection detector will find reflective texts that are also marked as reflective by human judges. 4 Elements of Reflection Up to now, no agreed model of reflection exists. This might be due to the variety of contexts in which reflection research is embedded (e.g. the medical area, psychology, vocational education). Certain elements of reflection are thus more important in a given context than in others, contributing to this variety. It seems, however, that there are certain recurring elements of reflection, which build the foundation of the model used in this paper. The elements presented here are based on the major streams of the theoretical discussion on reflection. The elements of reflection used in this paper are the following:
1. Description of an experience: This element of reflection sets the stage. It is a description of what was happening. Boud et al. [4, p. 26] describe it as returning to the experience by recapturing its most important parts. The writer recalls and details the salient moments of the event. The description can cover external events as the source of reflection, but also the inner situation of the person, for example their thoughts or emotions. Many themes can be the reason or trigger for the writer to engage in reflective writing. Some common themes are the following.
– Conflict: A description of an experienced conflict (either a conflict of the person with him/herself or with other persons or situations). The conflict can be presented as a disorienting dilemma, which is either solvable or on-going.
– Self-awareness: Recognising cognitive or emotional factors as a driving force of one's beliefs, and that these beliefs shape one's actions.
– Emotions: Feelings are frequently cited as a starting point of reflection. As with the other themes, emotions might be part of a reflection, but they are not necessarily part of every one [24, p. 88]. Boud et al. [4, p. 26] emphasise using helpful feelings, and removing or containing obstructive ones, as a goal of reflection. Reflection can be seen as a reaction to a personal concern about an event. Dewey [10, p. 9] states that the starting point of a reflection can be perceived perplexity, difficulty, hesitation or doubt, but also something surprising, new, or never experienced before.
2. Personal experience: As reflection is about one's own experiences, one might expect reflective writings to be self-related and to tell a personal experience. Although it seems convincing that reflective writing should be about one's own experiences, a certain debate still exists. Moon [24, p. 89] argues that reflective writing does not necessarily need to be written in the first person.
However, in the case of a deep reflection, the writer often expresses self-awareness of individual behaviour using the first-person perspective. Hatton and Smith [15] describe it as an inner dialogue or monologue that forms part of the dialogic reflection of their reflection model. Boyd and Fales [5] call it personal or internal examination, and Wald et al. [36] emphasise the existence of the writer's own voice expressed in the writing, indicating that the person is fully present.
3. Critical analysis: Mezirow [22] states that the critical questioning of the content, process, and premises of experiences in order to correct assumptions or beliefs might lead to new interpretations and new behaviour. Dewey [10, pp. 118, 199-209] speaks of the importance of testing hypotheses by overt or imaginative action. It is this critical analysis that helps the writer to step back from the experience in order to mentally elaborate on or critique their own assumptions, values, beliefs, and biases. This process of mulling over, or mental elaboration, can contain an analysis, synthesis, or evaluation of the experience, testing or validation of ideas, argumentation and reasoning, hypothesising, recognising inconsistencies, finding reasons or justifications for one's own behaviour or that of others, and linking (association) and integrating of ideas.
4. Taking perspectives into account: The frame of reference can be formed in dialogue with others, by comparing reactions with other experiences, but also by referring to general principles, a theory, or a moral or philosophical position [33]. A change of perspective can shed new insights and helps to reinterpret experience [22].
5. Outcome of the reflective writing: According to Wald et al. [36], a reflection can have two outcomes: either the writer arrives at a new understanding (transformative learning) or at confirmatory learning (meaning structures are confirmed).
Both touch the dimension of reflection-for-action [17]. The outcome of a reflection is especially important in an educational context. It sums up what was learned, concludes, and sketches future plans, but might also comprise a sense of breakthrough, a new insight and understanding. While these elements are presented separately, there is still overlap between them. For example, the description of an experience can already be critical and contain multiple perspectives. Wong et al. [37] subsume validation, appropriation and outcome of reflection under perspective change, while Wald et al. [36] put meaning making and critical analysis into one category. These five elements of reflection build the foundation of the theoretical framework. For each element a set of indicators was developed. Each indicator is mapped back to the elements of reflection using a set of rules. These rules define the relation, or mapping, between the indicators and the element of reflection. 5 Reflection Detection Architecture With the help of several analysis engines that wrap linguistic processing pipelines for each classifier, elements of reflection can be annotated. The analysis component is then used to aggregate overviews informing about the level of reflection identified. For an overview of the architecture, see Ullmann [35]. 5.1 Description of the Annotators A set of annotators has been developed. Each annotation has its own type and can have one or more features. An annotation can span from single characters, to words, to sentences, or even the whole text. For this paper, the following annotators were used.
– NLP annotator: The NLP annotator makes use of the Stanford NLP parser [9, 18, 34]. It is used to annotate part-of-speech, sentences, lemmas, linguistic dependencies, and co-references.
– The premise and conclusion annotators use a handpicked selection of keywords indicating a premise (e.g. assuming that, because, deduced from) or a conclusion (e.g. as a result, therefore, thus).
– The self-reference annotator is based on keywords referring to the first person singular (I, me, mine, etc.), while the "pronoun other" annotator contains keywords referring to others (he, they, others, someone, etc.).
– The reflective verb annotator is a refined version of Ullmann [35], making use of reflective verbs (e.g. rethink, reason, mull over).
– The learning outcome annotator is based on Moon [23, pp. 68-69] (lemmas: define, name, outline, etc.), while the Bloom [2] taxonomy annotator contains keywords for the categories "remember", "understand", "apply", "analyse", "evaluate", and "create".
– The future tense annotator is built from a selected list of keywords indicating future tense (will, won't, ought, etc.).
– The achievement, causation, certainty, discrepancy, and insight annotators are based on the LIWC tool [26], but refined and based on lemmas.
– The surprise annotator contains a refined set of nouns, verbs, and adjectives from the SemEval task1 [31], which in turn are based on WordNet-Affect [32].
5.2 Description of the Analysis Component While the analysis of the annotators can already help to gain insights into the reflectivity of a text, the aggregation of annotators adds an additional layer of meaning. Besides UIMA as a framework to orchestrate the annotators, the Drools framework - especially its rule engine - was leveraged to infer knowledge from the annotations. This has several benefits: the ability to infer new facts, to chain facts from low-level facts to high-level constructs, to update facts, and to reject facts. The rules are expressed as IF - THEN statements (for example, if A is true then B).
As a simplified example (see Listing 1.1), I show a rule with three conditions to infer whether a sentence shows evidence of the personal use of the reflective verb vocabulary (the rule is described in natural language and not in the notation of Drools). This is one of the six rules of the indicator critical analysis.

Listing 1.1. Rule example
FOR ALL sentences of the document:
  IF the sentence contains a nominal subject
  AND IF it is a self-referential pronoun
  AND IF the governor of this sentence is contained in the vocabulary reflective verbs
  THEN add fact "Sentence is of type personal use of reflective vocabulary"

For each element of reflection, a set of rules can be used to describe the mapping between the annotations and the element of reflection. The high-level rules of each element are then combined into one or more rules, which indicate reflection or grades of reflection. The micro level of analysis is the set of facts formed by the annotations, the meso level is the set of rules for each element, and the macro level is the set of rules indicating the high-level construct (in this case reflection).
1 http://www.cse.unt.edu/~rada/affectivetext/
6 Method The discussion of the method follows two strands. First, we outline the method used to distinguish texts regarding their reflective quality using the reflection detector. This includes the mapping of indicators to the elements of reflection and the parameterisation of the macro rule to detect reflection. The result of the automatic classification labels each text as either "reflective" or "not-reflective". The second strand describes the method used to gather the human judgements, using an online questionnaire. 6.1 Assignment of Indicators to Elements of Reflection This experiment uses 16 rules, each indicating a facet of an element of reflection. For each element of reflection, a set of indicators was designed.
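The natural-language rule in Listing 1.1 might be sketched in code roughly as follows, assuming a pre-parsed sentence representation. The attribute names (`nsubj`, `governor_lemma`) and the vocabularies are illustrative stand-ins for the output of a dependency parser such as the Stanford parser, not the paper's actual implementation.

```python
# Illustrative re-implementation of the Listing 1.1 rule. A sentence becomes
# a "personal use of reflective vocabulary" fact if its nominal subject is a
# self-referential pronoun and its governing verb is a reflective verb.
from dataclasses import dataclass

SELF_PRONOUNS = {"i", "me", "my", "mine", "myself"}       # assumed vocabulary
REFLECTIVE_VERBS = {"rethink", "reason", "reflect", "mull", "ponder"}

@dataclass
class Sentence:
    nsubj: str            # lemma of the nominal subject (from the parser)
    governor_lemma: str   # lemma of the governing verb

def personal_reflective(sentence: Sentence) -> bool:
    """Apply the three conditions of the rule to one sentence."""
    return (sentence.nsubj.lower() in SELF_PRONOUNS
            and sentence.governor_lemma.lower() in REFLECTIVE_VERBS)

# FOR ALL sentences of the document: keep those that satisfy the rule as facts.
facts = [s for s in [Sentence("I", "rethink"),
                     Sentence("They", "rethink"),
                     Sentence("I", "walk")]
         if personal_reflective(s)]
# only Sentence("I", "rethink") becomes a fact
```

In a production rule engine such as Drools, each such fact would then feed the chained meso- and macro-level rules rather than being collected in a list.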
The development of each indicator was an iterative process. Based on the experience of the first author with reflective texts, several versions of each indicator were developed, and the most promising ones were kept. Each indicator was tested with sample texts, including reflective texts, not-reflective ones, and self-generated test cases. The goal of this approach was to generate sound indicators, which could then be tested against empirical data. Altogether, 28 rules form the meso level. Several of these rules are chained together, leaving 16 rules at the end of the chain. These 16 rules were assigned to the five elements of reflection derived from theory (see Table 1).

Table 1: Mapping of elements of reflection to indicators (by a self-related pronoun we mean a first person singular pronoun, while a pronoun referring to others is termed a pronoun other).
– Description of an experience: Past tense sentence with a self-related pronoun as subject. Present tense sentence with a self-related pronoun as subject. Sentence with a surprise keyword and a self-related pronoun as subject.
– Personal experience: All indicators that are based on self-related pronouns. Question sentence in which the subject is a self-related pronoun.
– Critical analysis: Sentences with premise, conclusion, and causation keywords. Sentences with a certainty or discrepancy keyword and a self-related pronoun as subject. Sentences that have a self-related pronoun as subject and a reflective verb as governor.
– Taking other perspectives into account: Sentences that have a pronoun other as subject and a self-related pronoun as object. Sentences that have a self-related pronoun as subject and a pronoun other as object.
– Outcome: Sentences that have a self-related pronoun as subject and keywords from the Bloom [2] or Moon [23, pp. 68-69] taxonomy of learning outcomes. Sentences that have a self-related pronoun as subject and a keyword expressing insights. Future tense sentences with a self-related pronoun as subject.

According to this mapping, sentences that are personal and written in the past or present tense, or that contain surprise, belong to the element "description of experience". The element "personal experience" is implicitly covered by all sentences that are self-related; additionally, self-related questions are covered. Sentences with premise, conclusion, causation, certainty, discrepancy, or reflective keywords are subsumed in the element "critical analysis". "Taking perspectives into account" uses two rules, while the "outcome" dimension is based on the Moon [23, pp. 68-69] and Bloom [2] taxonomies of learning outcomes, as well as insight keywords and sentences that refer to future events. 6.2 Parameterising the Reflection Detection Architecture One of the immediate questions is which weight should be given to each indicator to identify a reflective text; in this context: how many occurrences of each indicator count as evidence of an element of reflection? To parameterise the reflection detection analytics component, 10 texts from the reflection literature marked as prototypical reflective writings were used. This reference corpus contains 10 texts taken from the instructional material of Moon [24], the examples in the paper by Korthagen and Vasalos [19], and the supplemental material of Wald et al. [36]. The texts were automatically annotated and analysed. For each element of reflection, the individual indicators were aggregated and the arithmetic mean calculated. The results are broken down in the following table (see Table 2).
Table 2: Parameters for the elements of reflection (mean indicator counts over the reference corpus).
– Description of an experience: 5.23
– Self-related questioning (several other indicators implicitly contain the element "personal experience"): 0.80
– Critical analysis: 3.55
– Taking other perspectives into account: 0.45
– Outcome: 4.13

These figures are used as parameters in the analytics component of the reflection detection engine. According to this, a text is reflective if all of the following conditions are met:
– The indicators of the "description of experiences" element fire more than four times.
– There is at least one self-related question.
– The indicators of the "critical analysis" element fire more than three times.
– At least one indicator of "taking perspectives into account" fires.
– The indicators of the "outcome" element fire more than three times.
Texts detected with these parameters belong to the group "reflective", while texts that satisfy none of the conditions (every indicator fires zero times) belong to the group "not-reflective".

6.3 The Questionnaire The aim of the design of the online questionnaire was two-fold: on the one hand, the formulation of the questions had to be suitable for an audience of laypersons with respect to reflection research terminology; on the other hand, participants had to be able to leave the survey at any time. The questionnaire consists of the following building blocks. Each page contained five blog posts. After each blog post, seven questions were displayed, which refer to the reflective quality of the blog post. Each item had a short description to clarify the task. A six-level Likert scale was used, ranging from strongly agree to strongly disagree. All seven items were required.
1. The text contains a description of what was happening. Description: Does the text re-capture an important experience of the writer? This could be a description of a situation, event, inner thoughts, emotions, conflict, surprise, beliefs, etc.
2. The text shows evidence of a personal experience. Description: The text is written with an inner voice. It contains passages that are self-related, describing an inner examination, or even an inner monologue/dialogue, etc.
3. The text shows evidence of a critical analysis. Description: Does the text contain an examination of what was happening? This might be an evaluation, linking or integration of ideas, argumentation, reasoning, finding justifications or inconsistencies, etc.
4. The text shows evidence of taking other perspectives into account. Description: This includes recognising alternative explanations or viewpoints, a comparison with other experiences, and references to general principles, theories, or moral or philosophical positions.
5. The text contains an outcome. Description: The text contains a description of what was learned, what is next, conclusions, future plans, decisions to take, etc. It might even contain a sense of breakthrough, new insights or understanding.
6. The text describes what happened, what now, and what next. Description: Does the text contain evidence of all three questions: What happened? What now? What next?
7. The text is reflective. Description: A reflective text shows evidence of critical analysis of situations, experiences, and beliefs in order to achieve deeper meaning and understanding.
The first five items of the questionnaire reflect the elements of reflection outlined above. The description of item seven follows the definition of reflection based on Mann et al. [20]. Item six refers to the time-dependent dimensions of reflection [17, 30]: reflection-on-action, reflection-in-action and reflection-for-action.
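The parameterised macro rule of Section 6.2 can be sketched as a small thresholded classifier. The function and dictionary keys are illustrative names, and the treatment of texts that fall between the two groups (left unlabelled here) is an assumption consistent with only the two extreme groups being used in the experiment.

```python
# Sketch of the macro-level decision: a text is "reflective" only if every
# element's indicators fire often enough, and "not-reflective" only if no
# indicator fires at all. "More than four times" means a count of at least 5.
THRESHOLDS = {            # minimum number of firings per element
    "description": 5,     # description of experiences: more than four times
    "self_question": 1,   # at least one self-related question
    "critical": 4,        # critical analysis: more than three times
    "perspective": 1,     # taking perspectives into account: at least one
    "outcome": 4,         # outcome: more than three times
}

def classify(firings):
    """Classify a text from its per-element indicator counts."""
    if all(firings.get(k, 0) >= v for k, v in THRESHOLDS.items()):
        return "reflective"
    if all(firings.get(k, 0) == 0 for k in THRESHOLDS):
        return "not-reflective"
    return None  # in-between texts are not assigned to either group

print(classify({"description": 6, "self_question": 1, "critical": 4,
                "perspective": 2, "outcome": 5}))  # → reflective
print(classify({}))                                # → not-reflective
```

Framing the decision this way makes explicit that the 149 texts of the experiment are the two extremes of the corpus, not a partition of all 5,176 posts.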
6.4 Text Corpus The text corpus is based on the freely available blog authorship corpus [29]: ”The Blog Authorship Corpus consists of the collected posts of 19,320 bloggers gathered from blogger.com in August 2004. The corpus incorporates a total of 681,288 posts and over 140 million words - or approximately 35 posts and 7250 words per person” [29]. The blog authorship corpus was used as a vehicle to examine texts according to their reflectivity2 . From the whole blog authorship corpus the first 150 blog files were taken and automatically analysed. A file contains all individual blog posts of one blog. Short blog posts (less than 10 sentences) and blog posts in another language than English were removed. The rational was that a reflective writing that fulfills the above outlined elements is usually a longer text. In total 5176 blog posts were annotated. In total 4.842.295 annotations were made, which resulted into 178.504 inferences. The reflection detector classified the texts, and after the removal of texts with more than three unsuitable words (all remaining bad words were replaced by a placeholder), 149 texts were detected (95 reflective and 54 not-reflective ones). 6.5 Survey Sample The data of the survey was collected during July 2012. The set was complete in the last week of July. The questionnaire did not collect personal data. The online survey showed the blog posts together with the questions in randomised order. Each page contained five blog posts. The aim of the survey was to receive at least three complete ratings on all questions per blog posts. A small incentive was granted to each participant of the survey. In total 464 judgements were made. In a test trial of the first author, the average time to rate each page was about six minutes, which is in line with the average duration of the participants (371 sec.). The initial analysis however revealed that several participants only spent seconds per page. 
To assure that at least a minimum time was spent on the task, the data were filtered and judgements which took less than 300 seconds were eliminated. This reduced the number of judgements to 202 (74 for the not-reflective texts and 128 for the reflective texts).

2 A prepared reflective text corpus, which could have been used as a gold standard, is not available.

7 Results

The initial results of the experiment are summarised in Table 3. It shows, for each of the two conditions, the mean, the standard deviation, and the sample size. The values of the items range from 1 (strongly agree) to 6 (strongly disagree). The hypothesis is that the reflection category should show stronger agreement (smaller numbers) than the not-reflective category. Comparing the mean values at face value, this tendency can be confirmed. Especially the elements "personal experience" and "reflective" show a larger difference between the means. On average, more people agreed that the texts of the automatically categorised group "reflection" contain evidence of personal experience and reflection than those of the "not-reflective" group.

element       reflective            not-reflective
              N    Mean   SD        N    Mean   SD
situation     128  2.10   1.33      74   3.62   1.73
personal      128  2.11   1.43      74   3.84   1.54
critical      128  2.92   1.40      74   4.12   1.60
perspective   128  3.25   1.46      74   4.53   1.55
outcome       128  3.34   1.62      74   4.30   1.64
whatnext      128  2.71   1.43      74   4.03   1.64
reflective    128  2.51   1.48      74   4.09   1.63

Table 3. Descriptive results.

The data of this analysis is based on the average time anticipated to fulfil the task. This has the benefit of retaining most of the judgements for the descriptive analysis. The next section examines whether the differences between reflective and not-reflective texts still hold if stricter requirements are placed on the dataset.

The data was gathered with Amazon's Mechanical Turk.
This has the major advantage that the experiment is not influenced by the researcher and that the coders are independent of each other. However, it comes with some costs, which make a thorough analysis of the data necessary. An inspection of the data reveals that the time spent on each page varies. Many coders spent only a few seconds on each page, which indicates that they filled in the questionnaire more or less randomly. This led to filtering out judgements on which less than 120 seconds were spent. Besides the filtering of results based on time, it was also checked whether a person filled out the two pages spending exactly the same time on both. Although this could happen by chance, these persons were dismissed. This pattern can arise if, for example, a script was written which randomly fills in the answers, waits for a certain duration, and then fills in the next page in the exact same time. This suspicion of data manipulation was reinforced by the observed behaviour that some people needed only seconds to fill out a page of the questionnaire, which could mean the answers were given automatically or selected at random, and by additional reports on the quality of such judgements3. Based on this analysis, three people were dismissed. After the removal of these judgements, the whole dataset was re-evaluated to make sure that at least two people rated each item. The initial goal was to have at least three ratings per item. However, the deletion of the judgements reduced the set to a degree that, for the experiment, two ratings per item had to suffice. To compensate for the smaller number of coders, the standard deviation was taken into account: if the standard deviation was bigger than 1.5, the whole rating was discarded. This ensures that only items which were consistently rated by at least two coders remain in the dataset.
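The successive filters just described (minimum time per page, dismissal of suspect coders, consistency of the remaining ratings) can be sketched roughly as follows. The record layout, function name, and the use of the population standard deviation are our own assumptions, not the authors' implementation:

```python
from statistics import pstdev

def filter_ratings(pages, min_seconds=120, max_sd=1.5, min_raters=2):
    """Sketch of the judgement-filtering steps described in the text.

    `pages` maps each coder to a list of (seconds, {item: rating}) page
    records; ratings use the 1 (strongly agree) .. 6 (strongly disagree)
    scale. Returns the surviving ratings per questionnaire item.
    """
    kept = []
    for coder, records in pages.items():
        times = {seconds for seconds, _ in records}
        # Dismiss coders who spent exactly the same time on every page
        # (a possible sign of scripted answers).
        if len(records) > 1 and len(times) == 1:
            continue
        # Drop individual pages rated in less than the minimum time.
        kept.extend(r for r in records if r[0] >= min_seconds)

    # Collect all remaining ratings per questionnaire item.
    ratings = {}
    for _, answers in kept:
        for item, value in answers.items():
            ratings.setdefault(item, []).append(value)

    # Keep only items rated by at least two coders whose ratings are
    # consistent (standard deviation within the threshold).
    return {
        item: values
        for item, values in ratings.items()
        if len(values) >= min_raters and pstdev(values) <= max_sd
    }
```

Whether the paper used the sample or the population standard deviation is not stated; the threshold logic is the same either way.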
With this removal, some of the texts no longer had ratings on all seven items; these were removed as well. The resulting descriptive statistics can be seen in Table 4.

element       reflective           not-reflective
              N   Mean   SD        N   Mean   SD
situation     18  1.87   0.66      10  3.27   0.97
personal      18  1.65   0.79      10  3.57   1.35
critical      18  2.66   0.88      10  3.52   1.34
perspective   18  3.19   1.12      10  4.37   1.13
outcome       18  2.71   0.96      10  3.42   1.44
whatnext      18  2.27   0.79      10  3.32   1.20
reflective    18  2.11   0.10      10  3.52   1.44

Table 4. Difference between reflective and not-reflective texts.

The descriptive statistics of this refined analysis are in line with the results above: if a text is reflective, the human coders agree more strongly with the questions asked than they do for less reflective texts.

8 Discussion

The results indicate that on average the two types of text differ not only within the reflection detection system, but also in the perception of human judges. The anticipated stronger agreement for the reflective category is reflected in the mean values compared to the not-reflective category. While these initial results of the analysis are already encouraging, further confirmatory testing is necessary.

3 http://www.behind-the-enemy-lines.com/2010/12/mechanical-turk-now-with-4092-spam.html

The parameterisation of the reflective texts is crucial, as these values set the baseline for the reflection detection. While 10 texts already give insights into the weight of each indicator, a larger corpus of reflective texts would be helpful for fine-tuning the weights. The inherent problem is that, as yet, no larger corpus of high-quality reflective texts suitable for natural language processing exists. The approach described here is a first step towards such a reflective text corpus. The assignment of indicators to the elements of reflection is in essence an additive model.
This is already seen as a good starting point, as differences are detectable even with this simple rule. However, future research will consider more complex rules, which represent the essence of reflective texts more accurately, by taking a wider body of reflective texts into account for parameterisation.

9 Outlook

Reflection is an important part of several theories and has many facets. This faceted character makes reflection a fascinating area of research: each element of reflection bears its own research problems, and how to aggregate indicators into a meaningful whole is yet to be researched. First steps have been made, and some of them were sketched in this paper. Currently, the focus of this research is the development and evaluation of the analytics component of the reflection detection architecture. As a next step, the data gained from this experiment will be further analysed with the goal of refining the parameters of the reflection detector. One application scenario especially useful in an educational setting is to combine the detection with a feedback component. The described reflection detection architecture, with its knowledge-based analysis component, can be extended with an explanation component, which can be used to feed back why the system considers a text reflective, together with text samples as evidence.

References

[1] Aukes, L.C., Geertsma, J., Cohen-Schotanus, J., Zwierstra, R.P., Slaets, J.P.: The development of a scale to measure personal reflection in medical practice and education. Medical Teacher 29, 177–182 (Jan 2007), http://informahealthcare.com/doi/abs/10.1080/01421590701299272
[2] Bloom, B.S.: Taxonomy of educational objectives. Longmans, Green (1954)
[3] Bogo, M., Regehr, C., Katz, E., Logie, C., Mylopoulos, M.: Developing a tool for assessing students' reflections on their practice.
Social Work Education 30, 186–194 (Mar 2011), http://tandfprod.literatumonline.com/doi/abs/10.1080/02615479.2011.540392
[4] Boud, D., Keogh, R., Walker, D.: Reflection: Turning Experience into Learning. Routledge (Apr 1985)
[5] Boyd, E.M., Fales, A.W.: Reflective learning. Journal of Humanistic Psychology 23(2), 99–117 (1983), http://jhp.sagepub.com/content/23/2/99.abstract
[6] Bruno, A., Galuppo, L., Gilardi, S.: Evaluating the reflexive practices in a learning experience. European Journal of Psychology of Education 26, 527–543 (May 2011), http://www.springerlink.com/index/10.1007/s10212-011-0061-x
[7] Chang, C., Chou, P.: Effects of reflection category and reflection quality on learning outcomes during web-based portfolio assessment process: A case study of high school students in computer application courses. TOJET 10(3) (2011)
[8] Corich, S., Kinshuk, L.M.: Measuring critical thinking within discussion forums using a computerised content analysis tool. In: Proceedings of Networked Learning (2006)
[9] De Marneffe, M.C., MacCartney, B., Manning, C.D.: Generating typed dependency parses from phrase structure parses. In: Proceedings of LREC, vol. 6, pp. 449–454 (2006), http://nlp.stanford.edu/manning/papers/LREC_2.pdf
[10] Dewey, J.: How we think: A restatement of the relation of reflective thinking to the educative process. D.C. Heath, Boston (1933)
[11] Dewey, J.: How we think. Courier Dover Publications (1997 republication of the work originally published in 1910 by D. C. Heath & Co.)
[12] Dyment, J.E., O'Connell, T.S.: Assessing the quality of reflection in student journals: a review of the research. Teaching in Higher Education 16, 81–97 (Feb 2011), http://www.tandfonline.com/doi/abs/10.1080/13562517.2010.507308
[13] Ferguson, R., Shum, S.B.: Social learning analytics: five approaches.
In: Proceedings of the 2nd International Conference on Learning Analytics and Knowledge, pp. 23–33. LAK '12, ACM, New York, NY, USA (2012), http://doi.acm.org/10.1145/2330601.2330616
[14] Garrison, D.R., Anderson, T., Archer, W.: Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education 15(1), 7–24 (2001)
[15] Hatton, N., Smith, D.: Reflection in teacher education: Towards definition and implementation. Teaching and Teacher Education 11(1), 33–49 (Jan 1995), http://www.sciencedirect.com/science/article/pii/0742051X9400012U
[16] Kember, D., McKay, J., Sinclair, K., Wong, F.K.Y.: A four-category scheme for coding and assessing the level of reflection in written work. Assessment & Evaluation in Higher Education 33, 369–379 (Aug 2008), http://www.tandfonline.com/doi/full/10.1080/02602930701293355
[17] Killion, J.P., Todnem, G.R.: A process for personal theory building. Educational Leadership 48(6), 14–16 (1991), http://www.eric.ed.gov/ERICWebPortal/detail?accno=EJ422847
[18] Klein, D., Manning, C.D.: Accurate unlexicalized parsing. In: Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pp. 423–430 (2003)
[19] Korthagen, F., Vasalos, A.: Levels in reflection: core reflection as a means to enhance professional growth. Teachers and Teaching: Theory and Practice 11, 47–71 (Feb 2005), http://www.tandfonline.com/doi/abs/10.1080/1354060042000337093
[20] Mann, K., Gordon, J., MacLeod, A.: Reflection and reflective practice in health professions education: a systematic review. Advances in Health Sciences Education 14, 595–621 (Nov 2007), http://www.springerlink.com/content/a226806k3n5115n5/
[21] McKlin, T.: Analyzing cognitive presence in online courses using an artificial neural network. Middle-Secondary Education and Instructional Technology Dissertations, p.
1 (2004)
[22] Mezirow, J.: On critical reflection. Adult Education Quarterly 48(3), 185–198 (May 1998)
[23] Moon, J.A.: The Module & Programme Development Handbook: A Practical Guide to Linking Levels, Learning Outcomes & Assessment. Kogan Page (Mar 2002)
[24] Moon, J.A.: A handbook of reflective and experiential learning. Routledge (Jun 2004)
[25] OECD: The Definition and Selection of Key Competencies (DeSeCo): Executive Summary. OECD (2005), http://www.oecd.org/dataoecd/47/61/35070367.pdf
[26] Pennebaker, J.W., Francis, M.E.: Cognitive, emotional, and language processes in disclosure. Cognition & Emotion 10(6), 601–626 (Nov 1996), http://www.tandfonline.com/doi/abs/10.1080/026999396380079
[27] Plack, M., Driscoll, M., Blissett, S., McKenna, R., Plack, T.: A method for assessing reflective journal writing. Journal of Allied Health 34(4), 199–208 (2005)
[28] Rosé, C., Wang, Y.C., Cui, Y., Arguello, J., Stegmann, K., Weinberger, A., Fischer, F.: Analyzing collaborative learning processes automatically: Exploiting the advances of computational linguistics in computer-supported collaborative learning. International Journal of Computer-Supported Collaborative Learning 3(3), 237–271 (Jan 2008), http://www.springerlink.com/content/j55358wu71846331/
[29] Schler, J., Koppel, M., Argamon, S., Pennebaker, J.: Effects of age and gender on blogging. In: Proceedings of the AAAI Spring Symposia on Computational Approaches to Analyzing Weblogs, pp. 27–29 (2006), https://www.aaai.org/Papers/Symposia/Spring/2006/SS-06-03/SS06-03-039.pdf
[30] Schön, D.: Educating the reflective practitioner. Jossey-Bass, San Francisco (1987)
[31] Strapparava, C., Mihalcea, R.: SemEval-2007 task 14: Affective text. Proc. of SemEval 7 (2007), http://acl.ldc.upenn.edu/W/W07/W07-2013.pdf
[32] Strapparava, C., Valitutti, A.: WordNet-Affect: an affective extension of WordNet. In: Proceedings of LREC, vol. 4, pp.
1083–1086 (2004), http://hnk.ffzg.hr/bibl/lrec2004/pdf/369.pdf
[33] Surbeck, E., Han, E.P., Moyer, J.E.: Assessing reflective responses in journals. Educational Leadership 48(6), 25–27 (1991), http://www.eric.ed.gov/ERICWebPortal/detail?accno=EJ422850
[34] Toutanova, K., Klein, D., Manning, C.D., Singer, Y.: Feature-rich part-of-speech tagging with a cyclic dependency network. In: Proceedings of HLT-NAACL, pp. 252–259 (2003)
[35] Ullmann, T.D.: An architecture for the automated detection of textual indicators of reflection. In: Reinhardt, W., Ullmann, T.D., Scott, P., Pammer, V., Conlan, O., Berlanga, A. (eds.) Proceedings of the 1st European Workshop on Awareness and Reflection in Learning Networks, pp. 138–151. Palermo, Italy (2011), http://ceur-ws.org/Vol-790/
[36] Wald, H.S., Borkan, J.M., Taylor, J.S., Anthony, D., Reis, S.P.: Fostering and evaluating reflective capacity in medical education: Developing the REFLECT rubric for assessing reflective writing. Academic Medicine 87(1), 41–50 (Jan 2012), http://journals.lww.com/academicmedicine/Abstract/2012/01000/Fostering_and_Evaluating_Reflective_Capacity_in.15.aspx
[37] Wong, F.K., Kember, D., Chung, L.Y.F., Yan, L.: Assessing the level of student reflection from reflective journals. Journal of Advanced Nursing 22(1), 48–57 (Jul 1995), http://onlinelibrary.wiley.com/doi/10.1046/j.1365-2648.1995.22010048.x/abstract

The Functions of Sharing Experiences, Observations and Insights for Reflective Learning at Work

Viktoria Pammer1, Michael Prilla2 and Monica Divitini3
1 Know-Center, Inffeldgasse 21A, Graz, Austria
2 Information and Technology Management, University of Bochum, Germany
3 Department of Computer and Information Science, NTNU, Trondheim, Norway

Abstract. In this paper, we are concerned with knowledge workers who want or need to improve their work performance and choose to do so by reflective learning.
These knowledge workers think back to their own work experiences, critically re-evaluate them, and distil lessons learned relevant to their own work practice. In this work we highlight the functions of sharing one's own work experiences, observations and insights for reflective learning at work. Based on analysing existing Apps that support reflective learning in an organizational context, we identify the following functions of sharing for reflective learning: 1) shared data as a baseline to (re-)evaluate one's own work; 2) shared data as a guideline for future behaviour at work; 3) sharing as a necessary prerequisite for collaborative or organizational learning; 4) sharing to integrate multiple perspectives. Additionally, we show how knowledge of these functions of sharing can inform the design of Apps for reflective learning in an organizational context.

1 Introduction

Reflective learning is a method of self-directed learning that suits work-integrated learning well, because it does not require a teacher, coach or mentor. It is thus not surprising that reflective learning has long been a part of formal education (e.g., of nurses, teachers, and in athletes' training), and is expected in many professions as "part of the job". More recently, efforts are being made to support reflective learning at work with information and computer technology. For instance, the SenseCam has been investigated as a support for school and university teachers [3], visualisations of group activities within software development environments have been studied as a support for student software development projects [4, 5], and ubiquitous computing technologies have been used to support reflection on a broad range of activities in the physical world [7]. Technological support for reflective learning often includes the possibility to share "objects" such as the experiences that shall be reflected on, reflection outcomes in different stages of maturity (observations, ideas, solutions, etc.)
or any artefacts relating to such experiences and outcomes (photos, audio or video recordings, notes, physical objects, etc.). In this work, we investigate the question:

Which different functions does sharing have for reflective learning in organizations?

Note that when we talk about sharing "data" below, we mean digital expressions of work-related experiences as well as of reflection outcomes (ideas, observations, insights, etc.).

2 Sharing and Reflective Learning in Selected Apps

We illustrate our analysis with three Apps (CroMAR, Talk Reflection, Task Detection) that support reflective learning in an organizational context. The functionalities and usage of the first two Apps, CroMAR and Talk Reflection, will be used in Sect. 3 to illustrate the different functions of sharing. The Task Detection App will be used in Sect. 4 to illustrate how knowledge about the different functions of sharing can be used to shape the (re-)design of reflection Apps.

2.1 CroMAR App: Reflecting on Critical Events

CroMAR1 is a mobile augmented reality App that supports viewing and navigating geo-tagged information (e.g., data from sensors, social media, radio broadcasts, video feeds) while the user is in the specific location where work took place. CroMAR provides access to information from different sources on top of the video feed from the device camera. Though the system has functionalities that might be relevant for reflecting on any working experience with a strong physical dimension, it has been specifically developed for reflection on emergency work. CroMAR supports emergency workers in after-event debriefing and reflection by providing multiple points of view on an event. Using CroMAR, it is possible to navigate information by means of time, space and keywords.
In this way we can expect the reflection process to be grounded in a context that helps to make sense of the information and to reflect on alternative paths of action. An extended description of the system is available in [8].

The CroMAR App requires sharing in the sense that its purpose is to make information collected by multiple people during a collective event available for reflection. In addition, sharing during a reflection session is supported by a video-conferencing functionality and by the functionality to send items captured within the CroMAR App via email. Finally, reflection outcomes can be captured and shared via a note-taking functionality. Sharing thus serves the purpose of collecting multiple viewpoints and of enabling collaborative reflection via the possibility to discuss items within the App, which in the end may lead to organizational learning (e.g., new best practices for handling emergencies).

1 A description and screenshot are also available online: http://www.mirror-project.eu/showroom-a-publications/mirror-apps-status/84-cromar

2.2 Talk Reflection App: Reflecting on Conversations in Healthcare

Conversations among medical staff, and with patients and/or their relatives, are typically challenging for medical and care staff, as they often involve conveying bad news (e.g., in the case of a stroke, or of the deteriorating health condition of elderly people in care homes), and the patients and relatives are often in difficult emotional (and, in the case of patients, cognitive) states during these conversations. On the other hand, good communication is necessary, as medical and care staff need information on patients from relatives, and because the quality of communication is a comparatively "cheap" (if not easy) way of raising the perceived quality of care.
The Talk Reflection App2 provides the possibility to document patient and relative talks, as legally required, and to add personal and private impressions. The first part (the legally required documentation of conversations) is public and shareable, and can be commented on by colleagues. In the second part (personal impressions of the conversation), medical and care staff are asked to self-assess a conversation (in other words: asked to reflect upon it), and are given the possibility to mark specific conversations for later discussion with colleagues or a supervisor. The App relates the self-assessments of physicians to the assessments of others to enable comparison. It also supports the exchange of documentation of conversations for the purpose of preparing collaborative reflection sessions, and commenting on shared documentations. The App also provides the possibility to explicitly document and share insights from reflection, and to link multiple conversations to such (shared) insights in order to make the insights more understandable [10]. Sharing within the Talk Reflection App thus serves the purpose of comparing one's own experiences of conversations with those of colleagues, of benefiting from others' experiences and insights, and of enabling collaborative reflection.

2.3 Task Detection App: Reflecting on Time Management

The Task Detection App (TD App3) captures work activities on a PC. Specifically, it captures window focus (and focus switching). For each window in focus, it also determines the window title and, if applicable, the path to the window resource (e.g., for websites and documents, but not for emails and Skype messages). In addition, users can record the times they spend on task activities such as "writing a project tender for customer XX" (the list of activities grows through usage as users add more and more of their own activities).
Finally, the TD App also supports note-taking, which serves the purpose of collecting one's own observations and insights in relation to the work experiences. The activities captured in the TD App are thus a mixture of automatically captured activities (focus switching) and manually captured task activities. The collected information is displayed i) on a timeline (which for most users illustrates a high fragmentation of work), and ii) as statistics in the form of pie charts.

2 A description and screenshot are also available online: http://www.mirror-project.eu/showroom-a-publications/mirror-apps-status/90-prepapp
3 A description and screenshot are also available online: http://www.mirror-project.eu/showroom-a-publications/mirror-apps-status/93-taskdetection

The App thus provides an "AS IS" analysis of how a user spends his/her time at work, and supports reflective learning regarding time management and self-organization.

The Task Detection App currently does not have sharing functionalities, but sharing could play several roles for reflective learning if integrated into the TD App, as will be shown below in Sect. 4. Most significantly, people could profit from seeing how others manage their time and deal with time-management-related challenges. Additionally, systematic time management problems (and solutions!) may be identified by lifting the challenge of time management from an individual level up to a collaborative and organizational one, e.g., if the organisational culture is such that meetings regularly take longer than expected or start late.
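To make the aggregation step concrete, focus-switch events of the kind the TD App records could be rolled up into per-task time totals roughly as follows. The event format and function are our own sketch under stated assumptions, not the actual TD App implementation:

```python
def time_per_task(events):
    """Aggregate focus-switch events into seconds spent per task.

    `events` is a chronological list of (timestamp_seconds, task_label)
    pairs; each entry marks the moment focus switched to that task.
    The final entry only closes the observation window.
    """
    totals = {}
    # Each interval runs from one switch to the next; attribute it to
    # the task that had focus during that interval.
    for (start, task), (end, _) in zip(events, events[1:]):
        totals[task] = totals.get(task, 0) + (end - start)
    return totals
```

Totals of this kind would underlie the timeline and pie-chart views mentioned above.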
3 The Functions of Sharing for Reflective Learning at Work

Our understanding of the functions of sharing for reflective learning in an organizational context has evolved from literature research, user studies [12], requirements engineering [6], and an analysis of the CroMAR and Talk Reflection Apps as well as of four further Apps described in [9, p. 38ff].

3.1 Shared Data as Baseline for Re-Evaluation

Learning by observing others and reflecting on similarities and differences in work performance, behaviour, etc. is a valuable learning method at work [1]. This principle underlies the sharing functions "shared data as baseline for re-evaluation" (this subsection) and "shared data as guideline for future behaviour" (next subsection, Sect. 3.2).

We have observed in several user trials that users desire support in interpreting their work activities (e.g., were they exceptional, OK, to be improved?). Seeing how colleagues or experts perform their work activities (e.g., organise their time, carry out conversations with patients) is one powerful way of giving individual employees a baseline against which to make sense of data about their own work behaviour (e.g., is it normal that I switch tasks that often?). Additionally, of course, best practices, compliance guidelines, etc. can also serve as a baseline for data interpretation; in a sense, these are highly "compressed" and standardized accounts of how others do their work. In this function, shared data helps the learner to evaluate his or her own experiences and performance.

In the Talk Reflection App, one's own assessment of a conversation can be compared to the assessments of others by exposing one's own experience to the comments of others and specifically asking for this kind of input. Shared data could also be used more explicitly as a baseline for comparison. In [2], for instance, users can compare their own emotional reaction to a situation with the reactions of colleagues to the same situation within a mood tracking application.
In that context, the comparison feature has been shown to be highly appreciated by users.

3.2 Shared Data as Guideline for Future Behaviour

Shared data also influences learners with respect to future behaviour: how to act and react in the future. Observing how others have dealt with specific challenges in the past, or taking up ideas, advice, etc. from others, gives the individual knowledge worker a broad range of possibilities for future behaviour. In addition, these possibilities have sometimes already been "evaluated" by others, for instance when given in the form of advice.

In the Talk Reflection App, for instance, sharing is available on request for a specific conversation (a single physician shares an experience with others and invites comments). Resolutions derived through the ensuing collaborative reflection are available in the spirit of lessons learned from experience, or as advice for App users.

3.3 Sharing is Necessary for Collaborative and Organizational Learning

From existing literature and empirical work as described in [11], it becomes clear that individual observations and reflections are an important starting point for iterative reflection sessions in organizations, which can ultimately lead to organizational learning. Iterative reflection sessions are often necessary in an organizational context, as not everyone has all the knowledge necessary to resolve a problem, or the power to implement or disseminate a solution identified during reflection. On the other hand, management levels have the power, but may not have the detailed operative knowledge to identify problems in working processes, or efficient solutions. In this function, shared data serves as input to collaborative and organizational learning processes.
In the CroMAR App, this function of sharing is obvious: event management is distributed and collaborative work, so reflecting on an event and its handling by emergency forces in a meaningful way has to take place on a collaborative and organizational level.

3.4 Sharing to Integrate Multiple Perspectives

Finally, in some cases it is necessary to recognize the highly distributed nature of work and the impossibility for an individual to collect enough information to make sense of her experience while taking different perspectives into account. For example, in the case of emergency work, the perspective on an event that each worker gets is deeply influenced, e.g., by the specific location one is working in and the role one is playing. During our studies we identified this as challenging, because the worker is reflecting on a necessarily partial vision of the event, while comparing different perspectives and identifying conflicting or complementary information might serve as a trigger for reflection. Experiences and observations from multiple actors should therefore be combined to help a worker shed light on different aspects of the experience, reaching a more complete perspective on the object of reflection (in the case of emergency work, a specific emergency event) than any single actor can achieve.

To this purpose, CroMAR provides users with information collected by multiple actors, either automatically through sensors, or proactively, e.g., by capturing tweets from the population.

4 Using Sharing Functions to Inform App Design

Finally, we illustrate how knowledge about the existence of the different functions of sharing can inform the (re-)design of reflection Apps. The Task Detection App currently provides no sharing functionality.
However, sharing anonymised data about the time management patterns of colleagues could provide a baseline for evaluating one's own patterns (Sect. 3.1), answering questions like "Is it normal that I switch tasks frequently?". Additionally, learners could share tips and tricks for dealing with time management challenges (Sect. 3.2), and thus support others in changing their time management. Finally, some time management aspects are cultural and thus bound to the organization, like the widespread belief that email can be used as a synchronous communication medium, i.e. that emails get answered quickly. A solution to this cannot be implemented at an individual level; an organization-wide decision is needed. We can expect that data about the actual disruption this causes (e.g., from reflection on corresponding experiences) can inform such a decision. Thus, the challenge of time management would be lifted from an individual to an organizational level (Sect. 3.3). Likewise, groups can benefit much in reflection, e.g., on their communication behaviour, if individuals share data and perspectives describing their usage of communication tools (Sect. 3.4).

5 Discussion and Outlook

The different functions of sharing in the context of reflective learning within organizations highlight that being able to obtain various data (experiences, observations, insights, ideas, etc.) from multiple actors is critical both for individual learners and for their social context (teams and organization). At the same time, capturing the relevant perspectives might be challenging. For example, people with a critical role might not provide input because they are too busy. To address this challenge, it is necessary to introduce adequate scaffolding mechanisms and to provide easy modalities of input, including automatically provided complementary data.
In addition, it has become clear in first user trials that users are very interested in identifying the source of each input, and in comparing themselves to others. This brings along challenging issues connected to visualization, ownership, and privacy. Finally, the four different functions bring out the fact that the person who shares rarely benefits directly from sharing, and that depending on the exact sharing functionality and its usage in an application context, different actors benefit from sharing (colleagues as individuals, colleagues as a team, up to the whole organization). We can only hypothesize at this point that this is interesting input when considering the motivation of (and how to motivate) knowledge workers to actually share data. This work is preliminary in the sense that the functions of sharing identified are based on the analysis of a very limited number of applications. Clearly, our first results need to be put in relation with the large body of research that exists on sharing and learning, and with other existing technological support for reflective learning in an organizational context. However, this analysis of different functions for sharing is already valuable to inform the design of Apps that support reflective learning in an organizational context. Using the four functions above, existing technologies can be systematically analysed and extended with respect to which of the functions sharing needs to fulfil in a given App in a given application context.

Acknowledgements

The project “MIRROR - Reflective learning at work” is funded under the FP7 of the European Commission (project number 257617).
The Know-Center is funded within the Austrian COMET Program - Competence Centers for Excellent Technologies - under the auspices of the Austrian Federal Ministry of Transport, Innovation and Technology, the Austrian Federal Ministry of Economy, Family and Youth and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.

References

1. Bandura, A.: Social cognitive theory of human development. In: Husen, T., Postlethwaite, T. N. (eds.): International Encyclopedia of Education. Pergamon Press (1996) 5513-5518
2. Fessl, A., Rivera-Pelayo, V., Pammer, V., Braun, S.: Mood Tracking in Virtual Meetings. To be published: 7th European Conference on Technology-Enhanced Learning (EC-TEL) (2012)
3. Fleck, R., Fitzpatrick, G.: Teachers' and tutors' social reflection around SenseCam images. International Journal of Human-Computer Studies 67 (2009) 1024-1036
4. Kay, J., Koprinska, I., Yacef, K.: Educational Data Mining to Support Group Work in Software Development Projects. Taylor and Francis (2010) 173-186
5. Krogstie, B.: Using Project Wiki History to Reflect on the Project Process. In: Proceedings of the 42nd Hawaii International Conference on System Sciences (HICSS-42), 5-8 January 2009
6. Krogstie, B. (ed.): MIRROR Scenarios and Requirements. Deliverable D1.3, MIRROR IP (2011)
7. Li, I., Dey, A. K., Forlizzi, J.: Understanding my data, myself: supporting self-reflection with ubicomp technologies. In: Proceedings of the 13th International Conference on Ubiquitous Computing. ACM (2011) 405-414
8. Mora, S., Boron, A., Divitini, M.: CroMAR: Mobile Augmented Reality for Supporting Reflection on Crowd Management. International Journal of Mobile Human Computer Interaction 4(2) (2012) 88-101. doi:10.4018/jmhci.2012040107
9. Pammer, V., Fessl, A. (eds.): Individual Reflection Apps Version 1. Deliverable D4.2, MIRROR IP (2012)
10.
Prilla, M., Degeling, M., Herrmann, T.: Collaborative Reflection at Work: Supporting Informal Learning at a Healthcare Workplace. In: Proceedings of the ACM International Conference on Supporting Group Work (GROUP 2012) (2012)
11. Prilla, M., Pammer, V., Balzert, S.: The Push and Pull of Reflection in Workplace Learning: Designing to Support Transitions Between Individual, Collaborative and Organisational Learning. To be published: 7th European Conference on Technology-Enhanced Learning (EC-TEL) (2012)
12. Wessel, D., Knipfer, K. (eds.): Report on User Studies. Deliverable D1.2, MIRROR IP (2011)

Detecting and Reflecting Learning Activities in Personal Learning Environments

Alexander Nussbaumer1, Maren Scheffel2, Katja Niemann2, Milos Kravcik3, and Dietrich Albert1,4

1 Knowledge Management Institute, Graz University of Technology, Austria {alexander.nussbaumer,dietrich.albert}@tugraz.at
2 Fraunhofer Institute for Applied Information Technology {maren.scheffel,katja.niemann}@fit.fraunhofer.de
3 Lehrstuhl Informatik 5, RWTH Aachen University kravcik@dbis.rwth-aachen.de
4 Department of Psychology, University of Graz, Austria dietrich.albert@uni-graz.at

Abstract. This paper presents an approach for supporting awareness and reflection of learners about their cognitive and meta-cognitive learning activities. In addition to capturing and visualising observable data about the learning behaviour, this approach intends to make the learner aware of their non-observable learning activities. A technical approach and a partial implementation are described that show how observable data can be used to support reflection on and awareness of non-observable learning activities. The basis for the technical solution is the extraction of key actions from log data of the interaction of users with resources.
Furthermore, a taxonomy of learning activities derived from self-regulated learning theory is used to match its elements with actually performed actions.

Keywords: learning analytics, learning activities, self-regulated learning, personal learning environments, widget, ontology

1 Introduction

In recent years it has become popular to create small applications for specific purposes with limited functionality. A second trend in the technology-enhanced learning area is the emergence of systems and technologies that allow learning environments to be created by mashing up such small applications (e.g. iGoogle5). The European research project ROLE6 aims to progress beyond the state of the art in supporting the creation of user-centric, responsive and open learning environments. Learners should be empowered to create and use their own personal learning environments (PLE) consisting of different types of learning resources.

5 http://www.google.com/ig
6 http://www.role-project.eu

Strategies have been developed for supporting the creation of such PLEs, which are in fact bundles of widgets. Ideally, such widget bundles should include widgets that support the performance of several cognitive and meta-cognitive learning activities, in order to be usable for self-regulated learning. Besides widgets for domain-specific activities, there is also a need for widgets for meta-cognitive activities, such as goal setting, self-evaluation, or help seeking (see [1]). To support the usage of widget bundles, learning analytics approaches have been implemented. The learners' interactions with widgets and resources are stored and graphically displayed. In this way, support for reflection and awareness about one's own behaviour is provided. Existing work in the field of learning analytics typically focuses on collecting and visualising directly observable data of learner behaviour.
For example, the approach presented in [2] describes how student data is collected and how this data is correlated with achievement in terms of learning progress. Another example, presented in [3], describes how typical activities of students using Learning Management Systems (LMS) are captured and used for predictions. In contrast to these approaches, this paper tries to identify ways in which meta-cognitive and non-observable cognitive behaviour can be captured and used for feedback to the learner. Hence, this paper proposes an approach to make learners aware of their own cognitive and meta-cognitive processes that cannot be directly observed.

This paper presents an approach to support awareness and reflection of the non-observable cognitive and meta-cognitive learning activities. Section 2 describes the underlying pedagogical approach (learning ontology and self-regulated learning) and the technical basis (extraction of key actions from captured usage data). Section 3 takes these underlying concepts into account and presents a new approach to support awareness and reflection, which includes a pedagogical and a technical perspective.

2 Related Work and Baseline

2.1 Contextualised Attention Metadata and Visualisation

Previous work has been done in the context of collecting log data in a structured way and visualising these data. Contextualised Attention Metadata (CAM) captures the interactions of users with resources and tools. Each time a user performs an activity with a resource (e.g. a document) in the context of a tool, a dataset structured according to CAM is created and stored. In this way the behaviour of users can be tracked [4]. A tool that exploits CAM information to make users aware of their own learning behaviour is CAMera [5]. CAMera provides simple metrics, statistics and visualizations of the activities of the learner. It also visualizes a social network based on email communication.
CAMera is not restricted to PLEs, but can also use CAM data created by desktop applications. The objective of CAMera is to stimulate self-monitoring by the user. The Student Activity Meter (SAM) and the CAM Dashboard are two further applications that demonstrate how CAM data can be used to support reflection of the learner [6, 7]. SAM applies visualization techniques to enable the understanding and discovery of patterns in monitoring data. Depending on the level of detail in the data, different metrics are provided, like basic time spent and resource use or forum view and post actions. The overall goal of SAM is to assist both teachers and learners with reflection and awareness of what and how learners are doing. This can be especially useful for self-regulated learning, where learners are in control of their own learning. The CAM Dashboard aims to enable students to reflect on their own activity and compare it with that of their peers.

2.2 Key Action Extraction

In [8] an approach is presented for extracting key actions from CAM data. The extraction of key actions is done by analysing CAM data with techniques used in the research field of computational linguistics. Using methodologies from text analysis, the aim is to find patterns within the recorded activities. It is assumed that key actions can semantically represent the sessions of learners they are taken from. In order to find repeated string patterns, the collected CAM data are analysed with the so-called n-gram approach. The following example illustrates the technique in a simplified way:

A B C A B D B C A B A A C D

The letters represent the actions of users in a session. The merging of n-grams is possible if the frequency of the new key action reaches a set threshold. Let us assume the threshold in this example was set to 2. As no monograms are below that threshold, all of them are used for further calculations.
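The merging procedure can be sketched in a few lines of Python. This is a simplified illustration of the idea only, not the implementation from [8]; the function name and the reading of the threshold as "at least twice" are our assumptions.

```python
from collections import Counter

def extract_key_actions(seq, threshold=2):
    """Extract key actions by iterative n-gram merging (simplified sketch).

    n-grams of increasing length are counted; those occurring fewer than
    `threshold` times are discarded. Finally, every surviving n-gram that
    is not contained in a longer surviving one is reported as a key action.
    """
    survivors = []  # all n-grams (as tuples) that meet the threshold
    n = 1
    while True:
        counts = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
        kept = [gram for gram, c in counts.items() if c >= threshold]
        if not kept:
            break
        survivors.extend(kept)
        n += 1

    def contained(small, big):
        return any(big[i:i + len(small)] == small
                   for i in range(len(big) - len(small) + 1))

    # keep only maximal n-grams, i.e. those not part of a longer survivor
    return [g for g in survivors
            if not any(len(h) > len(g) and contained(g, h) for h in survivors)]

session = list("ABCABDBCABAACD")
print(["".join(g) for g in extract_key_actions(session)])
# → ['D', 'BCAB']: the monogram D and the tetragram BCAB
```

Run on the example sequence above, the sketch reproduces the outcome discussed in the text.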
The bigrams AA, AC, BD, BA, CD and DB occur only once. Hence, they are discarded from further calculations and can consequently neither be a key action nor part of one. This example ends with two key actions: the tetragram BCAB, which occurs twice, and the monogram D. The detailed approach can be found in [8].

2.3 Self-regulated Learning and Learning Ontology

A model for Self-regulated Learning (SRL) in the context of PLEs has been proposed in [9]. This approach is based on a modified version of the cyclic model for SRL proposed by Zimmerman [10]. It states that SRL consists of four cognitive and meta-cognitive phases (or aspects) that should happen during the self-regulated learning process: planning the learning process, searching for resources, actually learning, and reflecting on the learning process. In addition to these phases, and in order to operationalise them, a taxonomy of learning strategies and learning techniques (in short, SRL entities) has been defined and assigned to the learning phases. Following the ideas presented in [11], learning strategies and techniques are defined on the cognitive and meta-cognitive level and are related to the cyclic phases in order to define explicit activities related to the SRL learning process.

Learning strategies and techniques have also been assigned to widgets, stating that these techniques are supported by the respective widgets. The basic assumption for creating good PLEs is that the assembly of widgets into a widget bundle should follow a pedagogical approach. Assembling widgets into a PLE then follows guidelines about which underlying constructs should be contained and how they should be assembled [12]. The general goal is that a bundle consists of widgets for different cognitive and meta-cognitive activities, so that a learner has at least one widget available for the most important learning activities.
Examples of meta-cognitive learning activities are goal setting, searching for resources, or time management. Examples of cognitive activities are brainstorming, mind mapping, or note taking. While this approach helps to create suitable bundles for SRL, it does not show learners how to use such bundles. The approach presented in this paper addresses this gap.

3 Detection and Reflection of Learning Activities

The goal of this paper is not only to monitor and visualise the observable actions, but also to monitor the cognitive and meta-cognitive activities that are not directly measurable. To this end the measurable actions are mapped to cognitive and meta-cognitive learning activities. To be precise, the key actions extracted from the CAM data analysis (see Section 2.2) are mapped to elements of the learning ontology (see Section 2.3). The mapping is partially done by the learner herself, but it is also supported by an algorithm that takes previous manual matchings into account.

3.1 Technical Approach

The overall approach from a technical perspective is depicted in Figure 1. The learning environment where CAM data is captured is a ROLE space with a set of widgets. Each widget logs CAM data according to the actions of the learner. In particular, this includes the actions that a learner performs on the widgets or the documents represented by the widgets. The CAM data are stored in the CAM service, which is basically a database for CAM events that receives these events over a REST interface. The analysis component accesses these CAM events in order to detect key actions. This is done in the same way as described in Section 2.2 and [8]. The learning ontology consists of cognitive and meta-cognitive entities describing typical learning activities. It is modelled in RDF format and stored within a service that exposes this ontology over a REST interface (using SPARQL queries).
This allows for retrieving lists of learning activities from this service. The core component of this approach is the matching component, where key actions are mapped to learning activities. It consists of a user interface and a back-end service. In the user interface the learner can manually assign learning activities to extracted key actions. Based on previous assignments, learning activities can be recommended for each of the key actions of the user. So the learner does not have to do the whole assignment work, but can choose from a few possibilities or simply approve the recommended assignment. The back-end service provides the key actions for each user and also offers the recommendations. These recommendations are based on previous assignments that are stored in an assignment database.

Fig. 1. The conceptual approach.

3.2 Pedagogical Approach

The pedagogical perspective of the presented approach focuses on the reflection and awareness aspects of the learning process. In contrast to existing approaches, where learners are made aware of their observable actions, this approach intends to make learners aware of their non-observable cognitive and meta-cognitive activities. Based on a literature review, a taxonomy describing typical learning activities has been created. In order to match observable and non-observable activities, the learner is presented with the key actions of their own learning behaviour. The learner should then assign which cognitive or meta-cognitive activity is represented by the respective key actions. This assignment task should stimulate the learner to think about their cognitive and meta-cognitive learning activities. In addition, the learner gets suggestions for learning activities that are candidates for the observed performance.
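One way such suggestions could be derived from previously committed assignments is a simple frequency-based recommender. The following is a hypothetical minimal sketch; the class, method and key-action names are our assumptions for illustration, not the actual ROLE components.

```python
from collections import Counter, defaultdict

class AssignmentRecommender:
    """Hypothetical back-end sketch for the matching component.

    It stores which learning activity a learner assigned to a key action
    and recommends, for a given key action, the activities most often
    assigned to it before.
    """

    def __init__(self):
        # key action -> Counter of learning activities assigned to it
        self._assignments = defaultdict(Counter)

    def commit(self, key_action, learning_activity):
        """Record that a learner assigned `learning_activity` to `key_action`."""
        self._assignments[key_action][learning_activity] += 1

    def recommend(self, key_action, top=3):
        """Return the activities most frequently assigned to this key action."""
        return [activity for activity, _ in
                self._assignments[key_action].most_common(top)]

recommender = AssignmentRecommender()
recommender.commit("open-search-widget", "searching for resources")
recommender.commit("open-search-widget", "searching for resources")
recommender.commit("open-search-widget", "note taking")
print(recommender.recommend("open-search-widget"))
# → ['searching for resources', 'note taking']
```

In this sketch every committed assignment simply increases a counter, so recommendations improve as more learners confirm or correct them.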
This mixture of active assignment and support through suggested assignments makes up the pedagogical approach.

3.3 Implementation

Several components of this approach have already been implemented in the context of previous work. A widget container where widgets can be added to a widget bundle has been developed in the ROLE project. The CAM service is used to collect CAM data from the widgets and make them accessible to other components. The key action detection algorithm has already been implemented and is described in [8]. A learning ontology and a service to make it accessible have been developed in the context of a mashup recommender for supporting the creation of widget bundles.

New development needed for this approach is the component that matches observed key actions with learning activities from the ontology. This component will consist of a widget as front-end for the user and a Web service as back-end for the widget. The back-end provides the learner with recommendations for assignments of key actions to learning activities. The learner commits assignments, which are stored in a database and used for further recommendations. The recommendation algorithm takes all committed assignments into account.

4 Conclusion and Outlook

This paper presented an approach for supporting awareness and reflection of learners about their cognitive and meta-cognitive learning activities. In contrast to typical learning analytics solutions, this approach focuses on non-observable learning activities that learners should be made aware of and that should be stimulated. Observable tracking data are analysed and key actions are extracted. By assigning learning activities to these key actions, learners should become aware of their cognitive and meta-cognitive learning activities. A technical approach that supports this pedagogical approach is presented.
While some components of the technical approach are already available, others are under development. Next steps include the development of the assignment and recommendation component. This component integrates the existing components and provides the user interface for the learner. Further work also includes the evaluation of the first prototype regarding its usefulness.

Acknowledgements

The work reported has been partially supported by the ROLE project, as part of the Seventh Framework Programme of the European Commission, grant agreement no. 231396.

References

1. Dabbagh, N., Kitsantas, A.: Supporting Self-Regulation in Student-Centered Web-Based Learning Environments. International Journal on e-Learning 3(1) (2004) 40–47
2. Romero-Zaldivar, V.A., Pardo, A., Burgos, D., Kloos, C.D.: Monitoring student progress using virtual appliances: A case study. Computers and Education 58(4) (2012) 1058–1067
3. Macfadyen, L.P., Dawson, S.: Mining LMS data to develop an early warning system for educators: A proof of concept. Computers and Education 54(2) (2010) 588–599
4. Schmitz, H.C., Kirschenmann, U., Niemann, K., Wolpers, M.: Contextualized Attention Metadata. In: Roda, C. (ed.): Human Attention in Digital Environments. Cambridge University Press (2011) 186–209
5. Schmitz, H.C., Scheffel, M., Friedrich, M., Jahn, M., Niemann, K., Wolpers, M.: CAMera for PLE. In: Cress, U., Dimitrova, V., Specht, M. (eds.): Learning in the Synergy of Multiple Disciplines. Volume 5794 of Lecture Notes in Computer Science. Springer Berlin/Heidelberg (2009) 507–520
6. Govaerts, S., Verbert, K., Klerkx, J., Duval, E.: Visualizing activities for self-reflection and awareness. In: Luo, X., Spaniol, M., Wang, L., Li, Q., Nejdl, W., Zhang, W. (eds.): Advances in Web-Based Learning – ICWL 2010. Volume 6483 of Lecture Notes in Computer Science. Springer Berlin/Heidelberg (2010) 91–100
7.
Govaerts, S., Verbert, K., Duval, E., Pardo, A.: The student activity meter for awareness and self-reflection. In: Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM (May 2012). Accepted.
8. Scheffel, M., Niemann, K., Pardo, A., Leony, D., Friedrich, M., Schmidt, K., Wolpers, M., Kloos, C.: Usage pattern recognition in student activities. In: Kloos, C., Gillet, D., Crespo García, R., Wild, F., Wolpers, M. (eds.): Towards Ubiquitous Learning. Volume 6964 of Lecture Notes in Computer Science. Springer Berlin/Heidelberg (2011) 341–355
9. Fruhmann, K., Nussbaumer, A., Albert, D.: A Psycho-Pedagogical Framework for Self-Regulated Learning in a Responsive Open Learning Environment. In: Hambach, S., Martens, A., Tavangarian, D., Urban, B. (eds.): Proceedings of the International Conference eLearning Baltics Science (eLBa Science 2010), Fraunhofer (2010)
10. Zimmerman, B.J.: Becoming a Self-Regulated Learner: An Overview. Theory Into Practice 41(2) (2002) 64–70
11. Mandl, H., Friedrich, H.: Handbuch Lernstrategien. Hogrefe, Göttingen (2006)
12. Berthold, M., Lachmann, P., Nussbaumer, A., Pachtchenko, S., Kiefel, A., Albert, D.: Psycho-pedagogical mash-up design for personalising the learning environment. In: Ardissono, L., Kuflik, T. (eds.): Advances in User Modeling. Volume 7138 of Lecture Notes in Computer Science. Springer Berlin/Heidelberg (2012) 161–175

Improving Social Practice: Enhancing Learning Experiences with Support for Collaborative Reflection

Martin Degeling1, Michael Prilla1

1 Ruhr-University of Bochum, Institute for Applied Work Science, Information and Technology Management, Universitätstr. 150, 44801 Bochum, Germany {martin.degeling, michael.prilla}@ruhr-uni-bochum.de

Abstract. In this paper we describe collaborative reflection as a core way of informal learning at the workplace. From three case studies we derived reflection on social practice as a good example of learning at the workplace.
The way employees talk to third parties like patients or customers was observed to be a major topic in discussions within teams, as it triggers the sharing of experiences about cases and fosters the building of a mutual understanding of common problems. We identified articulation to be a core part of this kind of reflection and derived requirements, which were then implemented in a tool to support reflection on this topic in a healthcare setting; we tested our application to reflect on talks with relatives of patients.

Keywords: collaborative reflection, learning at work, articulation, social skills

1 Introduction

Besides technology support for collaborative learning and the extension of knowledge, there are many skills that cannot be taught like, e.g., physics, but have to be learned through experiences made during everyday work. Although there is an overlap between formal learning and learning by experience [5], e.g. when professionals compare knowledge from vocational training to their experience, there are many cases in which informal learning is the only way to create new insights on work practice. This is especially true for skills and capabilities which are crucial for performing well in a job and delivering a suitable quality of work, yet are not taught well in education for this job. Typical examples of such skills are learning strategies needed to continuously stay on top of the current knowledge needed for a job, and social skills such as the ability to communicate and collaborate positively and successfully with colleagues, superiors, clients and other groups playing a role in daily business. For such skills, informal learning and learning from experience is indispensable, as, for example, social practice cannot be taught but is the result of a continuous process of comparing one's own behavior to that of others. This paper reports on a core way of informal learning at work, namely (collaborative) reflection.
Reflection is a learning mechanism that transcends the teaching of facts or the combination of different perspectives to create new knowledge. It rather suggests that re-thinking work practice in the face of current knowledge can support and improve future practice. However, although reflection has been recognized as a frequent and essential part of informal learning, there are hardly any insights into processes of collaborative reflection and their support by tools. This paper describes research aiming at closing the resulting gap. This work will be described in the remainder of this paper by the example of supporting the improvement of social practices at work.

The paper is organized as follows. First we describe a model of individual reflection and informal learning, and then broaden the view to collaborative reflection and research done in that area so far. In section 4 we then draw on three case studies in different organizations1. Due to the lack of insights into collaborative reflection, and in order to create an understanding of the processes associated with it, the studies were conducted in an exploratory manner, including interviews with the groups described above and work observations. As an outcome, the studies shed light on collaborative reflection of social practice in particular (section 5) and on process characteristics of collaborative reflection in general.

2 Collaborative Reflection and Informal Learning at the Workplace

Besides situations of formal learning in dedicated sessions where knowledge is presented by teachers or facilitators, learning at work is often rather informal [5]. It happens when we gain new views on our daily routines, either by self-reflecting on how we do things or in discussions with others with whom we compare ourselves or who have different perspectives.
Learning then takes place when conclusions are drawn by comparing experiences with one's own knowledge or the experiences of others. This is what we refer to as reflection.

Figure 1 Reflection model by [1]

1 This work is part of the MIRROR project funded by the European Commission in FP 7. The MIRROR project aims at supporting reflection in various settings, stages and levels. More information can be found at http://www.mirror-project.eu/.

Following [1], reflection can be defined as going back to past experiences, re-evaluating them against the background of current ideas or feelings, and concluding with new perspectives and changes in behavior. According to [1], experiences are behavior, ideas and feelings towards these (see Figure 1). Reflection means implicitly or explicitly remembering those experiences when a work task re-occurs and returning to how it was done the last time, e.g. by recognizing process steps that were burdensome the last time but seem easier this time. Reflection is then triggered by recognizing the differences and re-evaluating, e.g., what caused them. What distinguishes reflection from rumination is that reflection leads to outcomes in the form of new perspectives or changes in behavior that, e.g., prevent situations in which a task re-occurs in an unwanted way. It needs to be stressed that the reflection process described is not linear. Instead, there can be multiple iterations between remembering past experiences and their evaluation, which can lead to a deeper understanding of the experiences. Reflection is therefore closely related to problem-based learning (cf. [13]), which does not require a link to past emotions and experiences. In addition, reflection is not solely triggered by problems but can also result from positive experiences.
The vast majority of research on reflection concerns individual reflection, and most models have a strong individual focus [9]. Collaborative reflection, on the other hand, can be described as “people engage in finding common meanings in making sense of the collective work they do” [8]. In contrast to individual reflection, reflection done in groups has a strong need for the articulation of experiences; therefore research has to focus more on coordination and communication, where sharing and mutual experiences are the core elements [4]. Learning by collaborative reflection may then occur when an individual links her knowledge to the experience of others [2] or when a group combines different viewpoints stemming from its members' experience and reflects on them collaboratively [8]. As characteristics of collaborative reflection, [15] identified “critical opinion sharing” in discussions, “challenging groupthink” as opposed to sticking to norms, “asking for feedback” on own actions, and “experimenting with alternatives”. Those criteria also match situations in which groups collaboratively rethink situations of social practice and interaction with third parties like customers, since those situations re-occur in general but each episode is different.

3 Related Work: Tools for Informal Learning and Reflection

Since reflection is based on going back to past experiences, tools to support collaborative reflection and informal learning have been researched for quite some time to overcome the limitations of fading memories and uncertain remembering. Various approaches have been tested for their supportiveness. One way is to use additional hardware and sensors that automatically collect data which can afterwards be used to support reflection processes. For example, a SenseCam – a wearable camera that takes photos automatically – was used in [7] and [6].
The latter study used it with teachers in training and their supervisors to support reflection on lessons. The participants found the images of the camera to be valuable for grounding discussions and supporting them with empirical data. This made discussions with those who were not part of the lesson easier, as it provided additional context information. Nevertheless, the bad quality of the camera images and the lack of additional channels like audio made an extensive explanation by the person wearing the camera mandatory. Other approaches require participants to manually collect information; e.g., in [11, 14] articulations like diaries and portfolios proved their applicability and support for individual and team reflection. Personal notes were used to discuss the progress of a project after it was finished. A third group of authors uses data that is generated during regular work tasks. In [10] the authors described how data from light-weight collaboration tools for software development can support collaborative reflection on a project after it has ended. They used the project management tool trac, which focuses on support for ongoing projects, for a workshop in which students retrospectively reflected on the trajectory of their work. Here the empirical data was found helpful to review details of the project and discuss events in detail. All tools developed show the usefulness of collaborative reflection to learn about past experience. In particular, they point to the advantages of additional data for fostering collaborative reflection (cf. [9]) and supporting the memorizing of situations. Nevertheless, most of the tools focus on support for formal learning or separate trainings of professionals and require additional articulation work. Our studies focus more on informal learning, and we will propose a tool that integrates data collection into daily work to keep the additional work as small as possible.
4 The nature of collaborative reflective learning: An Analysis

To deepen our knowledge on reflection, and especially collaborative reflection, we organized case studies at three different sites from healthcare and business professions. For a deeper analysis of modes and types of collaborative reflection and tool support cf. [3]. In this section we will focus on collaborative reflection as a learning mechanism, derive requirements for tool support and review the case studies from these perspectives.

4.1 Methodology

We conducted three case studies to deepen our understanding of collaborative reflection. The first case is a German hospital. The second case is a residential care home in Great Britain specialized in offering support for elderly people suffering from dementia. The third case is a medium-sized IT consulting company based in Germany. Our study and analysis is based on observations and interviews in these cases. We conducted two-day observations of two different people at the hospital and at the consulting company. Part of the observation was shadowing participants during their workday and participating in meetings. At the care home, observation was limited to meetings due to concerns about residents' privacy. In addition, we interviewed three to five participants at each of the case study sites. Although this paper focuses on the initial two cases, which are both from healthcare, we also describe the third case to broaden the empirical base our insights stem from.

4.2 Case Studies

In the first case, a German hospital, our observation and interviews took place at the stroke unit, which is specialized in the treatment of emergency patients who recently suffered a stroke. As the right timing after a stroke is of critical importance, everything is organized around the process of emergency admissions and immediate diagnostics.
The stroke unit operates with three to five physicians, depending on the shift, caring for up to 16 patients. They are supported by four to six nurses; in addition, therapists join the team for initial work on recovery. All professions working on the stroke unit are highly trained and specialized in strokes and other neurological problems. Some of the assistant physicians work on the ward for several months as part of their two-year training to become a neurology specialist; others have already passed that exam but still participate in additional trainings on new methods in treatment or diagnostics. Employees of the nursing staff have to complete a special training, too, before they are allowed to take responsibility for patients without supervision. The group of therapists consists of specialists in the therapy of various disabilities that result from strokes, such as aphasia or paralysis. Besides formal training, e.g. to learn special skills in treating stroke patients, which is offered by the human resources department of the hospital, there are additional, more informal learning mechanisms within the ward to improve individual work as well as group collaboration. For example, the three professions meet at least once a month in a ward meeting to discuss issues affecting the whole unit and general work processes. Besides that, several smaller meetings such as daily physician meetings, ward rounds, chief physician rounds or therapist meetings take place at regular intervals. Moreover, staff working in the same shift meet from time to time in hallways or during breaks and discuss cases or problems occurring during work. In these situations, members of staff reflect on aspects such as their cooperation, the organization of the ward and the treatment of patients. The second case concerns British care homes for people suffering from dementia.
Here, care is organized not around emergencies but around daily work routines and sustainable work with residents of the homes to support independent living as long as possible. At a typical care home, five to seven caregivers work with 40 to 50 residents. As the caregivers have no higher education and receive just a two-week training, one registered nurse per shift is responsible for medical treatments. What differentiates senior caregivers from junior caregivers is the experience and time spent in the job. This experience is crucial, as caring for people with dementia is emotionally demanding: residents may behave unexpectedly and e.g. shout at staff (situations like this are called "challenging behavior" in care homes). Exchanging insights and reflecting on such cases is already recognized as an important learning mechanism: caregivers organize what was called in one home "reflective meetings", during which they talk about experiences with residents that were difficult to cope with. In interviews, especially junior caregivers reported that getting feedback and exchanging experiences with more experienced colleagues is a fruitful way to get better at their job. Other occasions for getting together and collaboratively discussing include shift handovers, in which the nurses and caregivers from overlapping shifts discuss the status of each resident, e.g. whether they showed unusual behavior, and try to find new ways of handling residents with problems or challenging behavior. The third case is an IT consulting company in Germany, which focuses on the provision and adaptation of customer relationship management tools for manufacturing companies. In that company, our target group consisted of employees from the sales department, who are responsible for customer acquisition and for handling the handover from sales to other (development) departments.
Learning in the sales department is mostly self-directed and based on experiences from projects and client encounters. Employees irregularly receive short trainings, e.g. about new software features, mostly delivered on the web, but according to them, the main part of learning to improve professional skills is based in practice and self-evaluation as well as evaluation by others. This is also mirrored in regular meetings of the sales department, in which current client activities are described and the participants discuss critical issues in these activities based on their experiences.

4.3 Analysis: Reflection of social practice as an indispensable task

Besides differences stemming from the variation in professions, we observed similarities in all cases. While all organizations offer formal training for their employees, we observed hardly any (official) support for informal collaborative learning based on reflection: in all cases, employees used meetings, breaks or short talks in the hallway to discuss cases, residents or customers with colleagues, to ask for their assistance or to offer insights from their experiences to others. This was especially the case for topics relating to social interactions with those third parties that could be grouped as "service consumers" (patients, residents and clients in the three cases described). For example, at the hospital we observed that especially for young physicians talking to relatives was a critical task: they often have to explain difficult medical cases to relatives without a background in medicine, and these talks often include conveying bad news, such as brain injuries patients may never recover from. These interactions are only partly covered in the formal education of physicians. Therefore, getting negative feedback from relatives or finding themselves in unpredicted situations often causes physicians to talk about their experiences with others.
At the care home, we found that caregivers very often discuss challenging behavior of residents (e.g. behaving aggressively for no apparent reason). Discussions took place in breaks and in meetings with other caregivers. In one meeting, a junior caregiver reported a problem with a woman who asked several times per day when she would be allowed to leave the care home. The caregiver had problems telling her that this is not possible and reported how this affected him emotionally. Senior caregivers in the meeting then reported from their own experiences what could have caused this behavior and explained how they had dealt with similar situations before. This helped the young caregiver to understand how to deal with such situations and showed him that these problems are not unique to him. In the meeting, the participants then also agreed on ways to handle the requests of the respective elderly woman that were to be used by all caregivers dealing with her and similar cases in the future. Reflection topics around social interaction with third parties were also present at the consulting company. We observed that consultants often discuss the habits and behavior of their contact persons at a customer, as well as how they performed in recent presentations at certain customers. They reported that these situations happen often and that they discuss issues with colleagues, e.g. if they had been at a customer's site together. They see the experience of colleagues on how they acted as valuable feedback for improving their abilities and welcome constructive criticism. These examples show that collaborative reflection on social practice is an important and common topic across the various professions we investigated. In all cases we observed people thinking and talking about the way they interact with customers or patients.
They discussed with colleagues, especially more experienced ones, and compared their approaches in order to improve their skills.

4.4 The process of collaborative reflection and the role of articulation

Besides the identification of topics for reflection, we developed a reference cycle for collaborative reflection, which is shown in Figure 2. The cycle is intended to help derive requirements and support the implementation of computer support for collaborative reflection (see [12] for details on the cycle).

Figure 2: Model of Collaborative Reflection (cf. [12]).

The cycle shown in Figure 2 can be illustrated with an example of reflecting on social practice from the cases presented above. In what follows, we use the reflection on conversations with relatives as explained in case 1 for this purpose. It should be noted that the cycle is not necessarily linear; the order of steps may vary. For example, individual reflection may happen during documentation, e.g. when a physician thinks about a conversation while documenting it, and there might be multiple loops of collaborative reflection in several groups before outcomes can be documented. The cycle starts with the activity of documentation and data capturing, which in the case of conversations is important to support the individuals participating in the talk in remembering the situations and their emotions during them in order to come back to them later. This sets the stage for later reflection and also enables individuals to sustainably share experiences from talks with others (as part of their practice of talking about them) and discuss them together when there is time for it. Individual documentation of conversations is helpful for individual reflection and enables physicians to reflect on talks some time afterwards, e.g. after they have completed their shift on a stressful day.
Similar to offline reflection helpers like diaries, a tool needs to support individuals in going back to past experiences of talks, remembering situations in more detail and articulating insights stemming from reflecting on them. As observed in the hospital, there is a need to share experiences from conversations and make them available for sessions of collaborative reflection. Tools for this need to enable users to share documented talks and to discuss talks that were shared with them. This is especially helpful in work situations where time constraints would otherwise be impeding, such as during a physician's workday. Moreover, in meetings of physicians, the group can come back to shared documentation and the results of asynchronous discussion and start a face-to-face reflection session. For reflection on conversations to lead to improvement, there is a need to support sustaining outcomes. The lack of means for this is a major shortcoming in daily reflection practice, as it prevents the benefits of reflection from becoming visible to others and from being implemented. The cycle shows that documented outcomes may then serve as input for further reflection, e.g. when a physician changes the way she conducts conversations and gathers experience with these changes. As visible in Figure 2, articulation is a central activity for collaborative reflection. This can be seen in the example: to start the cycle of reflection, physicians need to document (articulate) the content of talks. Then, they need to articulate their thoughts and perceptions on a conversation as part of individual reflection, as these are otherwise not visible to others. Moreover, for collaborative reflection, they have to articulate their perspectives and thoughts on talk documentations shared with them. To close the cycle, there is a need to express insights taken from collaborative reflection in order to make them sustainable and available for implementation.
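The stages of the cycle and its non-linear transitions can be sketched in code. The stage names and transition rules below are our illustrative reading of the model in [12], not the authors' actual implementation:

```python
from enum import Enum, auto

class Stage(Enum):
    """Stages of the collaborative reflection cycle (illustrative, cf. [12])."""
    DOCUMENT = auto()          # capture the experience, e.g. a conversation
    REFLECT_ALONE = auto()     # individual reflection on the documentation
    SHARE = auto()             # make the documentation available to others
    REFLECT_TOGETHER = auto()  # collaborative (a)synchronous reflection
    SUSTAIN = auto()           # document outcomes, feeding later cycles

# The cycle is not necessarily linear: reflection may happen while
# documenting, and groups may loop before outcomes are written down.
TRANSITIONS = {
    Stage.DOCUMENT: {Stage.REFLECT_ALONE, Stage.SHARE},
    Stage.REFLECT_ALONE: {Stage.DOCUMENT, Stage.SHARE},
    Stage.SHARE: {Stage.REFLECT_TOGETHER},
    Stage.REFLECT_TOGETHER: {Stage.REFLECT_TOGETHER, Stage.SUSTAIN},
    Stage.SUSTAIN: {Stage.DOCUMENT},  # outcomes seed further reflection
}

def can_follow(current: Stage, nxt: Stage) -> bool:
    """Check whether `nxt` is a valid next stage after `current`."""
    return nxt in TRANSITIONS[current]
```

The loop from SUSTAIN back to DOCUMENT captures the observation that documented outcomes serve as input for further reflection.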
Therefore, articulation support has to be considered a decisive factor in implementing collaborative reflection support.

4.5 Requirements for collaborative reflection support

Besides the importance of articulation derived in the previous section, it is obvious that there is a need for human articulation in the reflection of social tasks: these tasks cannot be described (only) by formal criteria, and social interactions cannot (only) be learned in formal training. Instead, they are the subject of informal learning processes, which rely on communication and learning from peers – without articulation, learning is only possible from observation, and experiences remain with the individual. Therefore, we regard articulation to be of central importance for the reflection of social interactions as described in this paper. From the above case studies, we can derive corresponding requirements for articulation support in tools for reflective learning. As a prerequisite for these requirements, we assume that articulation needs to transcend verbal communication in order to become available to a larger audience and to allow reflection participants to refer to details of articulated experiences. However, noting down experiences is often problematic due to time pressure and other tasks to be done. For future tool development this implies that:

Articulations have to be easy and unobtrusive to make: users should be able to document experiences 'on the fly', e.g. in a very simple interface that is easy to use or by voice input. Articulation tasks should not cause much additional effort or demand a lot of attention. For example, the articulation of emotions during conversations with relatives should be as easy as possible, as these articulations are not necessary for work and would thus possibly not be made by medical staff.
Articulation tasks have to be integrated into work tasks: tools for articulation in reflection should be easily accessible throughout work and be closely related to regular work tasks to lower the burden of using additional tools. In the case of documenting conversations, it should therefore be avoided to cause additional work by requiring physicians to document conversations both in the patient's folder and in an additional reflection tool.

Articulation of experiences has to be accepted as a valuable task: since articulation always causes some effort, tools need to show users that the outcomes of articulation and collaborative reflection are helpful – not only to the individual who did the articulation task but also to others participating in reflection sessions. For the reflection on conversations, tools need to show users that documenting experiences sooner or later leads to improvements in their conversations.

People need to be aware of articulated experiences: for documented experiences to become usable in collaborative reflection, digitally sharing them must result in recipients noticing their availability. This opens up possibilities for collaboration and mutual commenting. Taking the example of the hospital above, it would not be sufficient to add a paper to the patient's case folder to document talks, because this folder is only accessible in the patient's room.

Articulations should be contextualized: as many articulations may be created over time, and as reflection participants look for experiences and insights suiting their respective case or problem, there is a need to contextualize articulations, e.g. by referring to specific cases or actors that took part in experiences. In the example of reflecting on conversations with relatives, contextualization could be done by grouping conversations concerning the same medical condition or with relatives of the same patient.
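The last two requirements, awareness of shared experiences and contextualization, can be made concrete with a minimal sketch. All names and structures here are hypothetical illustrations, not taken from an existing tool:

```python
from dataclasses import dataclass, field

@dataclass
class Articulation:
    """A documented experience, tagged with context for later retrieval."""
    author: str
    text: str
    context: dict = field(default_factory=dict)   # e.g. {"condition": "stroke"}
    shared_with: set = field(default_factory=set)

def share(articulation, recipients, notify):
    """Share an articulation and actively notify recipients, so that
    availability does not go unnoticed (awareness requirement)."""
    articulation.shared_with |= set(recipients)
    for user in recipients:
        notify(user, f"New shared experience from {articulation.author}")

def related(articulations, **context):
    """Retrieve articulations matching a context, e.g. all documented talks
    concerning the same medical condition (contextualization requirement)."""
    return [a for a in articulations
            if all(a.context.get(k) == v for k, v in context.items())]
```

The point of the sketch is that sharing is paired with an active notification rather than passive availability, and that context tags make articulations findable for similar cases later.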
The requirements above show how articulation, as a key mechanism in collaborative reflection support tools, can provide support that can be handled and integrated into daily work easily. In what follows, we describe a sample implementation of these requirements.

5 Implementing articulation support for collaborative reflection

Using the example of reflecting on conversations with relatives in healthcare, below we present a tool built to support articulation and other reflection activities. In addition, we reflect on our experiences with implementing the requirements described above.

5.1 The Talk Reflection App – Documenting and Reflecting on Relative Talks

In close partnership with the hospital described as one case, we designed and tested a tool that implements the collaborative reflection model described above and fulfills the requirements described in section 4.5. The aim of the tool shown in Figure 3 and Figure 4 is to support individual and especially collaborative reflection on conversations physicians have with relatives of patients at the stroke unit.

Figure 3: Individual and collaborative reflection spaces: each documentation can be viewed, shared and discussed. Assessments are displayed in spider graphs for a quick overview.

The basic idea is that physicians working on the ward document conversations they had and open them up for discussion with other physicians. It is already mandatory for all physicians to document conversations they had in the patient's folder by hand, and sometimes also separately on a computer, to inform physicians in later shifts which therapy was agreed on or which measures to take in case of emergency. To simplify the documentation process, the application we developed is designed for mobile devices like smartphones and tablets. The documentations are shown on the right side of the screenshot.
On the left, there are lists of documentations created by the user herself (1a), by other users who shared their documentations (1b), and documented outcomes of collaborative reflection (1c). The sharing of documents and a list of users who have access to the currently visible document are shown at (2). The only additional effort physicians have to make is short self-assessments, answering questions about how they felt during the conversation and how they think the conversation partner felt during the talk. These self-assessments are visible only to the person documenting and are afterwards visualized (3) to enable simple comparisons between documented conversations and to support remembrance. Lastly, at (4) there is the space for comments and notes. Here, annotations and comments by other users are displayed, which can be used to report on similar experiences or to discuss what went well or wrong in the case documented above.

Figure 4: Outcomes of collaborative reflection sessions can be saved and related to cases.

To support the sustainment of reflection outcomes, we developed a page with an overview of the list of documentations (Figure 4). Here, users who did individual reflection or participated in a synchronous or asynchronous reflection session can select one or more cases that they reflected on (3) and document explicit outcomes, e.g. changes in procedures or good practices. Outcomes are divided into a short descriptive title (2) and a more detailed description of the outcome that highlights the commonalities of the cases selected (1). Afterwards, these documented outcomes are shared among the users of the app.

5.2 Implementing articulation requirements: Insights from design

We conducted two workshops with physicians of the hospital. They were planned and carried out as part of a formative evaluation to prepare a broad roll-out on the hospital ward.
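The spaces and visibility rules described above can be illustrated with a small hypothetical data model; class and field names are ours and do not reflect the app's actual schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class TalkDocumentation:
    """A documented conversation with its private self-assessment (3),
    sharing list (2) and comment space (4) -- an illustrative sketch."""
    author: str
    text: str
    self_assessment: Dict[str, int] = field(default_factory=dict)
    shared_with: Set[str] = field(default_factory=set)
    comments: List[Tuple[str, str]] = field(default_factory=list)  # (user, note)

    def assessment_visible_to(self, user: str) -> bool:
        # Self-assessments are visible only to the documenting person,
        # even when the documentation itself has been shared.
        return user == self.author

@dataclass
class Outcome:
    """A reflection outcome with title (2), description (1) and the one or
    more cases it relates to (3), as on the outcomes page (Figure 4)."""
    title: str
    description: str
    cases: List[TalkDocumentation] = field(default_factory=list)
```

The key design point the sketch makes explicit is the asymmetry: documentations and comments are shareable, while self-assessments stay private to their author.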
The first workshop, with three physicians, focused on the utility and applicability of the app. In the second workshop, another four physicians tested and evaluated a second prototype to test-drive the roll-out on the ward. Referring to the requirements described in section 4.5, we received valuable feedback. In general, users agreed that the application is easy to use, and they had fun making documentations with the simple, mobile interface. Nevertheless, they had several suggestions for usability improvements, such as larger input fields for personal comments, and ideas for more intuitive naming of certain categories. They also discussed at length problems with the auto-correction of medical terms by the mobile OS and issues with syncing the content of the app with the server resulting from the poor Wi-Fi connection. The fact that all these issues came up during the discussion shows the importance of this requirement and the need to improve user interfaces and input methods to make them less obtrusive. During our workshops we also discussed better ways to integrate the app into daily work. As shown in Figure 3, we had already implemented a button to export documentations by e-mail, which allowed physicians to copy and paste documentations into the hospital information system (HIS), but due to the connection issues this did not work out very well. Unfortunately, a smoother integration with automatic synchronization, which would be most comfortable, is not possible due to constraints imposed by the IT department and the high development costs for programming interfaces to the proprietary HIS. Therefore, participants proposed to give up the benefits of the mobile device and to use the app on desktop PCs as well, from which they can easily import and export information. This decreases the possibilities to document cases outside the physicians' office, but participants also reported that they used this option less often than expected upfront. We also stated that the articulation of experiences has to be accepted as a valuable task.
During the workshops we observed participants referring heavily to what they had written when explaining the cases again, using the documentations as additional information to supplement blurry memories. We also received repeated feedback that the app and the discussions themselves resulted in a higher awareness of the topic of conversations with patients and relatives. At users' request, we also added a checkbox saying "I want to talk about this later" to raise awareness of certain cases that participants regarded as unusual or especially important. There were also ideas for additional organizational support, such as introducing a bi-weekly meeting in which assistant physicians could talk about their documentations face to face in addition to sharing them digitally. The first feature we integrated to support the contextualization of articulations was the self-assessment form. These short questions were regarded as helpful for quick assessments, and during the workshops we agreed on questions that would better fit the circumstances, such as "How likely is it that I will think about this at home?". In line with the model, participants asked for the ability to document cases in more detail, e.g. to be able to select from a list of topics like "therapy", "diagnostics" or "information". They argued that this would help to find similar cases more easily. While the workshops were conducted as part of a formative approach, they showed that the application and the underlying process and requirements are applicable to supporting collaborative reflection on social practice at the healthcare workplace. The participants had numerous ideas and scenarios for how the app could be improved to better fit their workplace settings, and they already used it in the workshops to document, share and discuss cases of conversations they had had and wanted to reflect on.
6 Conclusion and further work

In this paper we described the importance of collaborative reflection for learning at work. We focused on reflection as a mechanism for informal learning within groups sharing their experiences. This is especially relevant for learning about topics like social practice, which cannot be learned from articulated knowledge alone but results from a continuous process of comparing one's own behavior to that of others. From our case studies in healthcare and consulting businesses, we identified conversations with customers and patients as a recurring topic in collaborative reflection. As an example, we took reflection at a hospital on conversations with relatives and developed two prototypes that were tested with groups of physicians for their applicability to support reflective learning on this topic. The requirements that were elicited during the case studies proved to be supportive of the tool's use. We designed the tool to integrate into daily work, as articulation is already part of it. That notes are digitally shareable and less dependent on the paper-based patient folder was very much appreciated. In addition, the availability of the app raised awareness of the topic itself and fostered discussions not only in workshops but also off the record, e.g. in breaks or spontaneous meetings. Nevertheless, there are improvements to be made in the ways physicians can use the app: due to technical restrictions and missing wireless connections, it was too difficult to use, since physicians had to go to a special room to synchronize data. In addition, further work has to be done to simplify the technical integration between official documentation and the Talk Reflection App in order to reduce the double work that sometimes occurred during the tests. As the tests brought promising results and positive feedback, we will adapt the process and apps to other domains.

7 References

[1] Boud, D. 1985. Reflection: Turning experience into learning. Kogan Page.
[2] Daudelin, M.W. 1996. Learning from experience through reflection. Organizational Dynamics. 24, 3 (1996), 36–48.
[3] Degeling, M. and Prilla, M. 2011. Modes of collaborative reflection. Workshop “Augmenting the Learning Experience with Collaborative Reflection” at EC-TEL 2011.
[4] Dyke, M. 2006. The role of the ‘Other’ in reflection, knowledge formation and action in a late modernity. International Journal of Lifelong Education. 25, 2 (2006), 105–123.
[5] Eraut, M. 2004. Informal learning in the workplace. Studies in Continuing Education. 26, 2 (2004), 247–273.
[6] Fleck, R. and Fitzpatrick, G. 2006. Supporting collaborative reflection with passive image capture. Supplementary Proceedings of COOP’06 (2006), 41–48.
[7] Fleck, R. and Fitzpatrick, G. 2009. Teachers’ and tutors’ social reflection around SenseCam images. International Journal of Human-Computer Studies. 67, 12 (Dec. 2009), 1024–1036.
[8] Hoyrup, S. 2004. Reflection as a core process in organisational learning. Journal of Workplace Learning. 16, 8 (2004), 442–454.
[9] Knipfer, K. et al. 2011. Computer Support for Collaborative Reflection on Captured Teamwork Data. Proceedings of the 9th International Conference on Computer Supported Collaborative Learning (2011), 938–939.
[10] Krogstie, B.R. and Divitini, M. 2010. Supporting Reflection in Software Development with Everyday Working Tools. (Aix-en-Provence, 2010).
[11] Loo, R. and Thorpe, K. 2002. Using reflective learning journals to improve individual and team performance. Team Performance Management. 8, 5/6 (Jan. 2002), 134–139.
[12] Prilla, M. et al. 2012. Collaborative Reflection at Work: Supporting Informal Learning at a Healthcare Workplace. Proceedings of the ACM International Conference on Supporting Group Work (GROUP 2012) (2012).
[13] Schön, D.A. 1983. The reflective practitioner. Basic Books, New York.
[14] Scott, S.G. 2010. Enhancing Reflection Skills Through Learning Portfolios: An Empirical Test. Journal of Management Education. 34, 3 (Jun. 2010), 430–457.
[15] van Woerkom, M. and Croon, M. 2008. Operationalising critically reflective work behaviour. Personnel Review. 37, 3 (2008), 317–331.