Associations Between Students’ Approaches to Learning and Learning Analytics Visualizations

Marek Hatala, Sanam Shirazi Beheshitha
School of Interactive Arts and Technology, Simon Fraser University, Surrey, Canada
mhatala, sshirazi@sfu.ca

Dragan Gašević
Schools of Education and Informatics, University of Edinburgh, Edinburgh, UK
dragan.gasevic@ed.ac.uk

ABSTRACT
We investigated the connection between students’ approaches to learning and the different types of information presented in learning analytics visualizations. Students’ approaches to learning are a construct studied in educational psychology; they are context dependent and can be either surface or deep. In a field experiment, we discovered a significant interaction effect between learning analytics visualizations and students’ approaches to learning on the quality of messages posted by students. The associations were both positive and negative, depending on the combination of information presented in the visualizations and students’ approach to learning. The paper contributes to the body of research knowledge that aims to explain how aptitude constructs from educational psychology interact with learning analytics visualizations.

Categories and Subject Descriptors
K.3.1 [Computers and Education]: Distance Learning

General Terms
Human Factors, Measurement.

Keywords
Learning Analytics, Individual Differences, Students’ Approaches to Learning, Visualizations, Dashboards, Online Discussions

1. INTRODUCTION
One of the envisioned uses of learning analytics tools is to support students’ learning, particularly in higher education [16]. This work is positioned in the context of visualizations and dashboards that present learning analytics information to students, with the intent to offer opportunities for awareness, reflection, sense-making and impact on students’ learning [25]. The work on Open Learner Models [6], which predates that on LA visualizations, aimed at engaging learners with the information collected by the system with the purpose of providing personalized learning support. Similar to LA visualizations, one direction of independent OLMs added the dimension of supporting student reflection and metacognition in general [5]. Both strands of research share the same purpose: to influence an individual learner’s decision making, leading to better learning outcomes.

Research in educational psychology shows that individuals differ in their readiness to profit from a particular treatment in a particular context [24]. This indicates a possibly varying effect of a treatment for individual students. The work presented here focuses on individual differences between learners and aims to determine whether these individual differences relate to the varying impact of information presented through visualizations on different aspects of the individual student’s learning process and outcome. In our research we specifically focus on theoretical constructs of aptitudes that can shed light on the observed differences between individuals in a learning context (e.g., motivational constructs, epistemic beliefs, approaches to learning, and attitudes) [26].

1.1 Individual differences and learning technology research
Research on the role that individual differences play in the context of learning systems is scarce. Martinez-Miron et al. [18], using an early conceptualization of achievement goals theory, modulated how help was offered to 9-11 year olds using a specifically-designed learning environment. No significant correlation was discovered between students’ goal orientations and their use of cognitive or motivational strategies. The authors pointed out a methodological problem with the questionnaire they used, i.e. a binary categorization of learners into orientations despite their grouping around the neutral point.

Du Boulay et al. [4] provided a comprehensive proposal for ‘Systems that Care’ – a framework for intelligent educational systems that considers constructs such as motivation, metacognition and affect. Based on how these constructs are detected, reasoned about and deployed, this work provides an ontology of such systems, with several examples of earlier works that demonstrate the proposed categories. However, although these early systems incorporate some aptitude constructs into their design, they do not explicitly examine the extent to which students’ aptitudes affect their learning outcomes.

In our prior work [22], we have shown that two clusters of students can be identified based on their self-reported approaches to learning in the context of independent research projects, and that the analysis of trace data shows how the two clusters use different learning strategies. In [23], we examined the motivational construct of Achievement Goal Orientations [10]. The findings of that work show that the quality of posts in the discussion forums was significantly associated with the different types of information presented in LA visualizations (see Section 2.2) when controlled for students’ achievement goal orientations.

In this work, we examine another aptitude construct that describes students’ preferred approaches to learning within a particular teaching context. The Students’ Approaches to Learning [2] instrument measures individual differences along two dimensions: motives and strategies. A surface approach to learning is characterized by fear of failure and is dominated by a narrow target and rote learning, whereas deep approaches have an orientation towards comprehending and sense making with intrinsic motivation [3]. Baeten et al. [1] provide a systematic review of research studying how to encourage a deep study approach in student-centered learning environments and identified over forty factors that influence students’ approaches to learning.
The identified factors, such as students’ activity, nature of assessment, and self-direction in learning, are at a higher granularity than those examined in our research, i.e. the type of information visualized to learners.

1.2 This study
We conducted a field experiment to examine the effects of different types of information presented through learning analytics visualizations on students’ learning behavior while controlling for their individual approaches to learning. We designed three learning analytics visualizations, each showing information about a particular aspect of students’ participation in online discussions in a university-level blended course. The visualizations were selected so as to potentially speak to different students’ motivations and influence their behavior in the discussion activity. We were explicitly not concerned with designing the visualizations as tools for future continuous use, but rather as experimental means to examine whether the studied associations exist and to what extent they influence the learning activity.

Asynchronous online discussions are commonly exploited to support collaborative learning [17] and can be seen as an environment in which students can interact to build both collective and individual understanding through conversation with their peers [15]. Critically, the level and quality of students’ participation is largely influenced by students’ agency [27], regardless of the extent to which the other learning activities in the course use the learning environment.

Figure 1: The design of the Class Average visualization

Figure 2: The design of the Top Contributors visualization

Copyright © 2016 for the individual papers by the papers' authors. Copying permitted only for private and academic purposes. This volume is published and copyrighted by its editors. LAL 2016 workshop at LAK '16, April 26, 2016, Edinburgh, Scotland.
Additionally, learning analytics in the form of reports and visualizations have been suggested to support participation and productive engagement in online discussions for the population of students as a whole [28]. Our results confirm that, when controlling for students’ approaches to learning, different visualizations presented to students are significantly associated with different quality characteristics of posted messages.

2. METHOD
2.1 Study Design and Research Questions
We executed our study as a field experiment in an authentic blended course setting. Students participated in an online group discussion activity on a topic related to the course content. Each student was randomly assigned to an experimental condition, i.e. they had access to one of three visualizations presenting a particular type of information about their performance in the group discussion activity. Students’ approaches to study were measured through a self-reported instrument.

We defined our research questions as follows:

RQ1: Is there an association between visualization type and the quantity of students’ posts when controlled for their self-reported approaches to learning?

RQ2: Is there an association between visualization type and the quality of students’ posts when controlled for their self-reported approaches to learning?

2.2 Learning Analytics Visualizations
The choice of learning analytics visualizations was guided by the main goal of our prior study [23], in which we expected that the effect of the visualizations would vary with students’ achievement goal orientations. The three visualizations selected aimed to potentially align with different types of motivations underlying students’ goals. The achievement goals students have are relatively stable over time [21], as opposed to students’ approaches to learning, which are context dependent [3]. Hence, we considered students’ goals to be the primary driver for visualization selection in our study. Below are high-level descriptions of the three visualizations; for the rationale for their selection readers are referred to [23]. One aspect worth repeating here is that each visualization 1) presented one particular metric measuring performance, rather than multiple metrics as is common in more complex dashboards, and 2) provided a different standard for students to gauge their performance.

The Class Average visualization has been the most widely used approach when offering learning analytics dashboards and visualizations [7]. It allows students to compare their posting performance with the average number of messages posted by the rest of the class (Figure 1). Students compare their number of postings with that of their fellow students, which may not measure up to the expected number of postings established by an instructor. It has been shown that the effect of the class average visualization on students’ participation and learning was not always positive [7, 28].

The Top Contributors visualization shows the count of messages posted by the student in comparison to the top contributors in the class. Top contributors are the top 5 individuals in the class who have posted the highest number of messages (Figure 2). The standard here is set by the best students. This visualization also adds the dimension of increased personal recognition in the class by showing students’ names and profile pictures.

The Quality visualization focuses on the content of posted messages, as opposed to counts of messages posted. It represents how many of the key concepts the student has covered within his/her posted messages and how well he/she has integrated those with logically related ideas. The key concepts for each discussion topic were previously identified by the course instructor. The visualization (Figure 3) showed the quality for each key concept as a color-coded square. However, the instructor did not identify which concepts are more important or what the visualization should ‘look like’ for ideal discussion participation. Rather, students see color intensity as a measure of the quality of their messages. One comparison they do have is with the average quality of each concept computed across all posted messages in the class. The color was determined by computing Latent Semantic Analysis (LSA), a natural language processing technique for measuring the coherence of text¹, at the sentence level [11].

¹ Coherence has been described as “the unifying element of good writing” and hence it can be used as a way to measure the quality of text. (http://www.elc.polyu.edu.hk/elsc/material/Writing/coherenc.htm)

Figure 3: The design of the Quality visualization

2.3 Online Group Discussion Activity
LA visualizations were embedded into a mandatory discussion activity inside the Canvas LMS, worth 5% of students’ final grade. Discussions across the four courses included in the study were designed using the same guidelines, which we prepared following the collaborative learning literature [19, 30]. The students were in groups of 4-11; the discussions were open for 7-14 days. Each group posted in their own discussion space without the ability to see postings of students outside their group. All students within the same course were given the same open-ended questions and were instructed to explore different aspects of the question and come to a group resolution supported by material taught in the course as well as by their individual research. The marking rubric explicitly stated expectations for quality, collaboration, tone, and quantity of messages per student. LA visualizations were accessible via a link at the top of the discussion page; clicking the link opened a new tab with the visualization for the specific student. A snapshot of the discussion space setup can be viewed at http://at.sfu.ca/gCXQNW (permalink).

2.4 Participants
Participants were students recruited from four courses at the second and third levels in a multidisciplinary Design, Media Arts and Technology program at a Canadian post-secondary institution. All students in the four courses included in the study were randomly assigned to one of the three visualizations. As a result, students in the same discussion group could be assigned to different visualizations. Both participating and non-participating students engaged in the same discussion activity, and both groups had access to the visualizations. The only difference between participants and non-participants was that those who opted to participate in this study were asked to fill in several questionnaires, including the students’ approaches to learning questionnaire (see Section 2.5). The participants were predominantly 18-24 years old (93%), both male (66%) and female (34%), with moderate to expert familiarity with online discussions (80%) and the Canvas LMS (90%), and with moderate to expert technical skills (95%).

2.5 Data Collection and Measurement
We retrieved the log data of students’ discussion activity from the LMS, including the texts of posted messages and the discussion group composition. We integrated this data with recorded visualization views. Finally, we computed counts of posted messages by each student per discussion and counts of visualization views. All the data were time-stamped.

The R-SPQ-2F (Revised Two-Factor Study Process Questionnaire) instrument was used to investigate students’ approaches to learning [3]. The instrument consists of 20 items that measure two scales (surface and deep approach), which in turn are subdivided into four subscales (deep-motive, deep-strategy, surface-motive, surface-strategy). The responses were recorded on a Likert-type scale from 1 (never or only rarely true of me) to 5 (always or almost always true of me). The total score on the 5 items corresponding to a subscale was used as the overall measure on that SPQ subscale.

2.6 Data Analysis
2.6.1 Coh-Metrix Analyses
To evaluate the effectiveness of discussions and the quality of argumentation we used Coh-Metrix, a computational linguistics facility that measures text characteristics at different levels, such as text coherence, linguistic complexity, characteristics of words, and readability [14]. These components explained over 50% of the variability among over 37,250 texts:

• Narrativity: the degree to which the text is a narrative and conveys a story. On the opposite end of the spectrum are expository texts.
• Deep Cohesion: the degree to which the ideas in the text are cohesively connected at a mental and conceptual level.
• Referential Cohesion: reflects the degree to which explicit words and ideas in the text overlap with each other.
• Syntactic Simplicity: reflects the degree to which sentences have a lower number of words and use simpler and more familiar structures, rather than dense sentences and a high frequency of embedded phrases.
• Word Concreteness: the degree to which the text includes words that are concrete and induce mental images, in contrast to abstract words.

We computed values for each component above for all student messages that mentioned at least one of the key concepts identified by an instructor. The rationale is based on the work presented in [18], which gauged that these messages have traces of a higher level of knowledge construction. For each student we averaged the values of each component over the student’s retained messages and used the averages as component values in our further analysis.

2.6.2 Statistical Analysis
We used hierarchical linear mixed models as a suitable method [20] to reflect the nested structure of our data, i.e. students being embedded in discussion groups that were part of the discussion topics. To measure the effect of visualizations in our analysis we only included those students who had seen the visualizations at least twice.

For RQ1, the student’s count of posts was the dependent variable, with SPQ scores as covariates. For RQ2, we identified 5 dependent variables: Narrativity, Deep Cohesion, Referential Cohesion, Syntactic Simplicity, and Word Concreteness. The independent variable in all models for both RQ1 and RQ2 was the visualization type assigned to the student (i.e., Class Average, Top Contributors, or Quality), and the covariates were the scores on the four SPQ subscales: deep-motive, deep-strategy, surface-motive, and surface-strategy.

We constructed a different linear mixed model for each dependent variable. To select the best fitting model for each dependent variable we 1) constructed a null model with student within a course as the only random effect², 2) built a fixed model with the random effects introduced in the null model and the interaction between visualization type and the four SPQ scale scores as the fixed effects, and 3) compared the null random-effects-only model and the fixed-effects model using both the Akaike Information Criterion (AIC) and the likelihood ratio test to decide the best fitting model [12]. Primarily, the model with the lower AIC was suggested to have a better fit; we used the likelihood ratio test to confirm the AIC result. We also calculated an estimate of effect size (R2) for each model, which reveals the variance explained by the model [29].

Table 2. Inferential statistics for model fit assessment (RQ2)

Narrativity       χ2          df    R2     AIC
Null Model                          0.46   238.33
Fixed Model       36.607***   14    0.81   229.72

Deep Cohesion     χ2          df    R2     AIC
Null Model                          0.34   233.54
Fixed Model       30.456**    14    0.32   231.08
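The null-versus-fixed model comparison described in Section 2.6.2 (a random intercept, visualization × SPQ-subscale interactions as fixed effects, and selection by AIC confirmed with a likelihood-ratio test) can be sketched as follows. This is an illustrative sketch in Python's statsmodels on fabricated data; the paper does not tie the analysis to a specific software package, and the column names, the single random effect, and the simulated effect sizes are assumptions for illustration only.

```python
# Illustrative sketch (assumed tooling, not the authors' implementation) of the
# model-selection procedure: a random-intercept "null" model vs. a "fixed" model
# adding visualization x SPQ-subscale interactions, compared via AIC and a
# likelihood-ratio test. All data below are fabricated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "group": rng.integers(0, 12, n),          # discussion group (random intercept)
    "viz": rng.choice(["class_avg", "top_contrib", "quality"], n),
    "deep_strategy": rng.normal(0.0, 1.0, n), # scaled SPQ subscale score
})
# Fabricated outcome with a viz x deep_strategy interaction built in
df["narrativity"] = (0.3 * df["deep_strategy"]
                     - 0.8 * df["deep_strategy"] * (df["viz"] == "top_contrib")
                     + rng.normal(0.0, 1.0, n))

# Null model: random intercept for discussion group only
null = smf.mixedlm("narrativity ~ 1", df, groups=df["group"]).fit(reml=False)
# Fixed model: same random effect plus visualization x subscale interactions
fixed = smf.mixedlm("narrativity ~ viz * deep_strategy", df,
                    groups=df["group"]).fit(reml=False)

# AIC (lower is better), computed from the ML log-likelihoods
aic = lambda m: 2 * len(m.params) - 2 * m.llf
# Likelihood-ratio test between the nested models
lr = max(0.0, 2 * (fixed.llf - null.llf))
ddf = len(fixed.params) - len(null.params)
p = stats.chi2.sf(lr, ddf)
print(f"AIC null={aic(null):.1f} fixed={aic(fixed):.1f}  LR={lr:.2f}, p={p:.4f}")
```

With real data, the same comparison would be run once per dependent variable, retaining the fixed model only when both the AIC and the likelihood-ratio test favor it.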
χ2 values show the differences between the model in the current row and the model in the previous row.
Significance codes: *** p<0.001, ** p<0.01, * p<0.05

3. RESULTS
Because students’ use of the learning analytics visualizations was voluntary, only a subset of students in the courses opted to view them. In our analysis, we considered only those students who viewed the visualization more than once, which indicated that they returned to the visualization with a purpose to view it, rather than just out of curiosity. Table 1 shows the number of students included in the analyses for RQ1 and RQ2 and how many times they viewed the visualization.

Table 1: Count of visualization views for students who used visualizations

Visualization      N    Median (25%, 75%)
Class Average      38   7.00 (4.00, 9.00)
Top Contributors   22   6.50 (3.25, 15.50)
Quality            38   5.00 (3.00, 10.00)

3.1 RQ1
According to the AIC and the likelihood ratio test, the fixed model that included the interaction between learning analytics visualization and the SPQ scales did not yield a better fit than the null model. Hence, we have not discovered any association between the student’s number of posts and the visualization type when controlling for the student’s approach to learning.

3.2 RQ2
For two of the five Coh-Metrix principal components we used to measure the quality of the messages, namely Narrativity and Deep Cohesion, the fixed-effects models that included the interaction between learning analytics visualization and the four SPQ scales resulted in better overall goodness-of-fit measures (AIC, likelihood ratio test, and R2) than the null models (Table 2). In these two cases we proceeded with further analyses.

3.2.1 Narrativity
Table 3 shows the fixed effects model for narrativity. Further examination of the linear mixed model for narrativity revealed significant interaction effects between learning analytics visualization and deep-strategy (F(2,71.40)=7.68, p<0.001) and between learning analytics visualization and surface-motive (F(2,67.13)=4.03, p=0.022).

Table 3: Analysis of the fixed effects for Narrativity

Variable                          β        SE      95% CI (Lower, Upper)
Intercept (Class Average)        -0.222    0.228   (-0.677, 0.233)
Viz (Top Contributors)            0.203    0.217   (-0.231, 0.637)
Viz (Quality)***                  0.688    0.191   (0.304, 1.072)
Deep Motive .                    -0.308    0.181   (-0.670, 0.053)
Deep Strategy                     0.275    0.202   (-0.128, 0.678)
Surface Motive                   -0.201    0.218   (-0.636, 0.234)
Surface Strategy                 -0.027    0.184   (-0.395, 0.342)
Viz(TopContr.)*Deep Motive        0.511    0.344   (-0.177, 1.198)
Viz(TopContr.)*Deep Strategy***  -1.380    0.357   (-2.093, -0.667)
Viz(TopContr.)*Surf.Motive*       1.053    0.412   (0.230, 1.875)
Viz(TopContr.)*Surf.Strategy     -0.534    0.400   (-1.335, 0.266)
Viz(Quality)*Deep Motive          0.325    0.286   (-0.246, 0.897)
Viz(Quality)*Deep Strategy       -0.226    0.343   (-0.911, 0.460)
Viz(Quality)*Surf.Motive         -0.109    0.354   (-0.817, 0.599)
Viz(Quality)*Surf.Strategy .      0.264    0.303   (-0.342, 0.870)

Significance codes: *** p<0.001, ** p<0.01, * p<0.05, . p<0.1 (marginal). All variables are scaled.

Further investigation of the interaction effect between learning analytics visualizations and deep-strategy showed a significant difference in the change of narrativity scores with changing scores on the SPQ Deep Strategy scale between 1) the users of the Top Contributors visualization and the users of the Quality visualization (z=2.83, p=0.013), and 2) the users of the Class Average visualization and the users of the Top Contributors visualization (z=-3.87, p<0.001). The association between the deep-strategy and narrativity scores was positive for the Class Average visualization, followed by a small positive association for the users of the Quality visualization, while a strong negative association was found for the users of the Top Contributors visualization (see Table 5 in the discussion section).

The analysis of the interaction effect between learning analytics visualizations and surface-motive shows a significant difference in the change of narrativity scores with changing scores on the SPQ Surface Motive scale between: 1) the users of the Top Contributors visualization and the users of the Quality visualization (z=-2.62, p=0.023), and 2) the users of the Class Average visualization and the users of the Top Contributors visualization (z=2.56, p=0.028). The association between the surface-motive and narrativity scores was negative for the Class Average and Quality visualizations, while a strong positive association was found for the users of the Top Contributors visualization (Table 5).

3.2.2 Deep Cohesion
Table 4 shows the fixed effects model for deep cohesion. Significant interaction effects between learning analytics visualization and three SPQ scales were discovered for deep cohesion: 1) deep-strategy (F(2,84.97)=6.37, p=0.0026), 2) surface-motive (F(2,84.18)=6.23, p=0.003), and 3) surface-strategy (F(2,3.81)=7.95, p<0.001). In turn, we investigated each scale in detail.

Table 4: Analysis of the fixed effects model for Deep Cohesion

Variable                          β        SE      95% CI (Lower, Upper)
Intercept (Class Average)        -0.024    0.147   (-0.318, 0.270)
Viz (Top Contributors)           -0.091    0.228   (-0.547, 0.364)
Viz (Quality)                     0.112    0.202   (-0.292, 0.515)
Deep Motive *                    -0.384    0.178   (-0.740, -0.028)
Deep Strategy .                   0.370    0.200   (-0.029, 0.768)
Surface Motive .                 -0.375    0.216   (-0.808, 0.057)
Surface Strategy                  0.150    0.183   (-0.216, 0.517)
Viz(TopContr.)*Deep Motive*       0.770    0.341   (0.088, 1.451)
Viz(TopContr.)*Deep Strategy***  -1.278    0.358   (-1.996, -0.561)
Viz(TopContr.)*Surf.Motive***     1.387    0.412   (0.563, 2.211)
Viz(TopContr.)*Surf.Strategy**   -1.128    0.401   (-1.929, -0.327)
Viz(Quality)*Deep Motive          0.479    0.294   (-0.108, 1.066)
Viz(Quality)*Deep Strategy       -0.342    0.323   (-0.988, 0.304)
Viz(Quality)*Surf.Motive          0.051    0.332   (-0.612, 0.714)
Viz(Quality)*Surf.Strategy .      0.565    0.295   (-0.026, 1.156)

Significance codes: *** p<0.001, ** p<0.01, * p<0.05, . p<0.1 (marginal). All variables are scaled.

² We also considered discussion groups and activity counts as additional levels in the nested structure of the random effects. None yielded a better model.

First, investigation of the interaction effect between learning analytics visualizations and deep-strategy shows a significant difference in the change of deep cohesion scores with changing scores on the SPQ Deep Strategy scale between 1) the users of the Top Contributors visualization and the users of the Quality visualization (z=2.40, p=0.043), and 2) the users of the Class Average visualization and the users of the Top Contributors visualization (z=-3.56, p=0.001). The association between the deep-strategy and deep cohesion scores was positive for the Class Average visualization, followed by a small positive association for the users of the Quality visualization, while a strong negative association was found for the users of the Top Contributors visualization (see Table 5 in the discussion section).

Second, the analysis of the interaction effect between learning analytics visualizations and surface-motive shows a significant difference in the change of deep cohesion scores with changing scores on the SPQ Surface Motive scale between: 1) the users of the Top Contributors visualization and the users of the Quality visualization (z=-3.10, p=0.005), and 2) the users of the Class Average visualization and the users of the Top Contributors visualization (z=3.37, p=0.002). The association between the surface-motive and deep cohesion scores was negative for the Class Average and Quality visualizations, while a strong positive association was found for the users of the Top Contributors visualization (Table 5).

Third, investigation of the interaction effect between learning analytics visualizations and surface-strategy shows a significant difference in the change of deep cohesion scores with changing scores on the SPQ Surface Strategy scale between 1) the users of the Top Contributors visualization and the users of the Quality visualization (z=2.40, p=0.043), and 2) the users of the Class Average visualization and the users of the Top Contributors visualization (z=-3.56, p=0.001). The association between the surface-strategy and deep cohesion scores was strongly positive for the Quality visualization, followed by a positive association for the users of the Class Average visualization, while a strong negative association was found for the users of the Top Contributors visualization (Table 5).

Table 5: Summary of Interactions between Learning Analytics Visualizations and SPQ Scales on Quality of Posts

SPQ Scale          Visualization      Dependent Variable   Assoc. Coeff.
Deep Strategy      Class Average      Narrativity           0.27
                                      Deep Cohesion         0.37
                   Top Contributors   Narrativity          -1.11
                                      Deep Cohesion        -0.91
                   Quality            Narrativity           0.05
                                      Deep Cohesion         0.03
Surface Motive     Class Average      Narrativity          -0.20
                                      Deep Cohesion        -0.38
                   Top Contributors   Narrativity           0.85
                                      Deep Cohesion         1.01
                   Quality            Narrativity          -0.31
                                      Deep Cohesion        -0.32
Surface Strategy   Class Average      Deep Cohesion         0.15
                   Top Contributors   Deep Cohesion        -0.98
                   Quality            Deep Cohesion         0.72

Association coefficients are for scaled variables.

4. DISCUSSION AND CONCLUSIONS
The overall goal of this study was to investigate the association between the posting behavior of students with different approaches to learning when presented with different types of information via learning analytics visualizations.

4.1 Interpretation of the results
While our prior work [23] illustrates significant associations between the number of posts and students’ other-approach goal orientation for the Quality and Top Contributors visualizations, no such association was discovered with students’ approaches to learning. Students with a high tendency towards an other-approach goal orientation aimed to compare themselves with others. The surface and deep approach subscales analyzed in this study focus on how students approach their learning and the criteria established by the instructor. In our case, the marking criteria explicitly specified the minimum number of posts. It appears that no visualization provided enough incentive to modulate the number of posts for either the students with surface approaches (i.e. doing the minimum number of posts to meet the criteria) or those with deep approaches (i.e. focusing on the discussed concepts).

Our results showed that, after controlling for students’ approaches to learning, some learning analytics visualizations had positive and some had negative effects on the quality of students’ posts, observed through two discourse features, i.e. Narrativity and Deep Cohesion. Table 5 shows the summary of significant associations for each approach to learning. The values shown in Table 5 are coefficients of change of the discourse feature, expressed in standard deviations per one standard deviation change in the student’s score on the respective scale.

Narrativity is a highly robust discourse component [14]. In general, one finds higher narrativity values in texts that convey a story, use familiar words, and exhibit higher prior knowledge and oral language. In their analysis of K-12 textbooks, Graesser et al. observed that narrativity z-scores decreased by over one standard deviation from grade level 2 to grade level 11 [14]. This decline was consistent across the language used in arts, science and social studies. The opposite of texts with a story are informational texts, usually on unfamiliar topics and in printed form. In our case the students discussed an unfamiliar topic for which they had to study new material. From this perspective, interpreting our findings is challenging, as we are dealing with a new-topic situation delivered in a discussion forum, which resembles the oral form more than the printed one.

It helps to look at narrativity relative to deep cohesion. As found in [14], “informational texts tend to have higher cohesion between sentences, as compared to narratives; cohesion is apparently one way to compensate for the greater difficulty of unfamiliar subject matter”. Deep cohesion measures causal and intentional connections between sentences. In the study by Graesser et al., a very small increasing trend was observed with increasing grades, and at grade 11+ a very small difference between the language used in arts, science and social science [14].

Dowell et al. [8], in their group chat study with undergraduate students, have shown that increasing deep cohesion and increasing syntactic complexity were strong predictors of individual students’ learning performance. When evaluating the metrics across all messages within the group, the deep cohesion of all messages in the group was predictive of the group performance. These findings align well with underlying cognitive science theories, which emphasize that deep cohesion should be given a higher weight because of its importance for knowledge construction [9].

We observed that for the two subscales which showed significant associations with visualization types, i.e. deep-strategy and surface-motive, the change in students’ approaches-to-learning subscale values had the same association direction as the change in narrativity and deep cohesion for each of the visualizations. Given that the discussion topics were new and students’ posts were expected to be expository, we expected to observe that an increase in coherence would be associated with a decrease in narrativity. We observed a similar direction of change in our study when exploring students’ goal orientations [23]. This finding is somewhat contradictory to the previous observations, both by Graesser et al. [14] and Dowell et al. [8], where deep cohesion compensated for reduced narrativity. We speculate that the context within which the text was produced, i.e. the discussion activity itself, placed a strong demand on communicating ideas in a form directed at group members, as in oral conversation, i.e. texts that can be easily absorbed and replied to by the group members.

The second notable observation concerns the rate of change in narrativity and deep cohesion: it is nearly identical or very close. As can be seen in Table 5, this observation is repeated six times. We do not have an explanation for this observation, and it would be interesting to see 1) whether this relationship holds in other contexts, and 2) if it does, what the characteristics of the context in which the text is produced are.

Referring back to Biggs [2], p.11, deep-strategy is a meaningful approach, characterized by reading widely and inter-relating with previous knowledge. Our results show that as students’ tendency towards the deep-strategy approach increases, we observe a positive association with deep cohesion of 0.37 for the users of the Class Average visualization, a negligible positive association of 0.03 for the Quality visualization, and a strong negative association of -0.91 for Top Contributors. The Quality visualization follows the same pattern in terms of the association direction. Exploring the questionnaire that determines deep-strategy [3] may provide a clue as to why Top Contributors can be detrimental to students’ performance: the visualization provides no information that can reinforce the students’ approach, such as encouragement to do more work on a topic, to spend extra time obtaining more information, or to look through the most suggested readings. Rather, the visualization draws students’ attention to the highest number of postings in the class, detracting from meaning and focusing on high volume and personal recognition. The Class Average visualization does not support the deep approach directly; rather, it may be providing a more meaningful norm for the quantity of messages and leaving students to concentrate on what is important for their own learning. These suppositions should be tested via more qualitative approaches, such as student think-aloud protocols.

Interestingly, the association of the Quality visualization, which aimed to focus student attention on the key concepts to be covered in the discussion, resulted in a low deep cohesion association with deep-strategy. This may be because the visualization did not add any new information for deep-strategy learners, since they already study broadly and do not need such direction. Neither are such students interested in a comparison with how others are doing in the class.

The surface-strategy approach is reproductive, characterized by students limiting targets to bare essentials and aiming to reproduce material by pursuing rote learning [2], p.11. The high association with deep cohesion for the users of the Quality visualization follows the definition of the surface-strategy approach: students pursuing this approach would benefit from an explicit list of key concepts to discuss by pragmatically directing their attention to those concepts. The Top Contributors visualization, highly negatively associated with surface-strategy (-0.98), diverts student attention away from one of the main tenets of the approach: a minimum essential contribution. From this same perspective, the Class Average visualization provides information that gives students a reasonable norm to relate to and that does not fundamentally interfere with their approach.

The surface-motive approach is defined as instrumental; students’ main purpose of learning is to meet requirements minimally by balancing between working too hard and failing [2]. The interpretation of the observed results is rather difficult. Although one would expect the Class Average visualization to align with this strategy rather well, the association for Deep Cohesion is negative (-0.38). In contrast, there is a highly positive association with the Top Contributors visualization (1.01).
One possible explanation With respect to deep cohesion, our results showed that using a may lie in the original Biggs research, which showed one of three certain visualization showed a positive association between stu- factors that loaded on the surface-motive approach was pragma- dents’ approaches to learning and deep cohesion, while a negative tism (the other two were academic neuroticism and test anxiety). association is observed for a different visualization. The pattern Students showing a high level of pragmatism are grade oriented with respect to the direction and value of the association is ob- and they see university as a means to some other end [2]. The Top served across the three subscales in Table 5. The associations for Contributors visualization, by recognizing the top contributors by both strategy subscales, i.e. deep-strategy and surface-strategy, are name, may appeal to students pursuing the surface-motive strate- nearly a mirror for Class Average and Top Contributor visualiza- gy as it can potentially elevate them in the eyes of their peers. tions, when compared with the surface-motive approach. The Exploring connections between students’ approaches to learning and students’ motivations, in the context of the learning analytics visualizations, may help to understand these discovered associa- approaches arose from the students’ exposure to the visualiza- tions better. Finally, the Quality visualization was negatively as- tions. sociated with Deep Cohesion (-0.32). The Quality visualization in The strengths of the associations, especially with the deep cohe- our study showed 16 to 25 key concepts per discussion topic. 
The sion component that is a key component for constructing meaning relatively large number of key concepts could have caused confu- from the discourse, makes the students’ approaches to learning sion for students who aimed to do as little work as possible, and one of the candidates for measuring individual differences with aimed only at passing acquaintance with topics [3]. The academic the goal of selectively offering visualizations to students with neuroticism factor, defined as “overwhelmed and confused by certain characteristics. However, before we reach that point, fur- demands of the course work” [2], p.17 that loaded on this ap- ther research is required. proach would further confirm this interpretation. First, we need to reconcile the fact that ideally all the students It is interesting to note that deep-motive approach showed no as- would engage with the course as deep learners. Students adopt sociation with any of the components regardless of the visualiza- surface approaches because the course design allows it [3]. Hence, tion. Deep-motive is intrinsically driven and aims to actualize the it is encouraging to see that there are visualizations, i.e. Top Con- interest and competence in a particular academic subject. Hence, tributors for surface-motive and Class Average and Quality for since the approaches to learning are context dependent, it may be surface-strategy, that showed moderate to strong positive associa- that the visualizations did not affect students’ intrinsically driven tion for deep cohesion. It would be interesting to observe if expo- interests in the subjects sufficiently. sure to these visualizations indeed changes students’ approaches 4.2 Limitations and Future Research to learning, as suggested by Biggs [3] above, or is relatively hard We are aware of several limitations of our study. Two main limi- to change, as indicated for example by Gijbels et al. [13]. 
tations related to the way the visualizations were developed and Second, we need to be aware that we also found negative associa- deployed include i) the limited types of information presented, tions between some approaches to learning and visualizations. and ii) the need for students to access the visualizations by active- These are worrisome for learners with undesirable surface ap- ly clicking the link. From the theoretical construct point of view, proaches but even more so for learners with the deep-strategy we looked at the students’ approaches to learning in isolation from approach when viewing the Top Contributors visualization. Clear- other ways of measuring individual differences. Even if this re- ly, before we can confidently deploy learning analytics for learn- search complements our prior study that explored motivational ers, a better understanding is needed of how the interplay of stu- construct of achievement goal orientations [23], further analysis dents’ approaches, context, and the information being presented to that considers several constructs and their interrelation is needed. students is affecting learning outcomes. Although our data were collected from six discussion activities in Acknowledgement. This research was supported by the Social four courses, they still originate from the same university pro- Sciences and Humanities Research Council of Canada. The au- gram; a validation in a different setting is needed. Finally, this thors would like to thank reviewers for constructive comments work focused on learning analytics for discussions. Investigating that helped to improve this paper. the association between individual characteristics and different ways of visualizing other learning activities is needed to general- ize our findings. 5. REFERENCES Another possible limitation is that students in blended-learning [1] Baeten, M. et al. 2010. 
Using student-centred learning courses do interact in person and they may have also discussed the environments to stimulate deep approaches to learning: topic outside of the technology. Although this needs to be Factors encouraging or discouraging their effectiveness. acknowledged, we do not see it as likely because i) the groups Educational Research Review. 5, 3, 243–260. were randomly generated, hence avoiding established friend cir- [2] Biggs, J. 1987. Student Approaches to Learning and cles to form discussion groups, ii) all courses had a major group Studying. Australian Council for Educational Research. project that is known to consume much out-of-class time and the [3] Biggs, J. et al. 2001. The revised two-factor Study Process grouping is different, and iii) relatively short time of 7-14 days Questionnaire: R-SPQ-2F. The British journal of educational and the number of expected posts per discussion do not work well psychology. 71, Pt 1, 133–49. with logistics when students meet on campus face to face. [4] Du Boulay, B. et al. 2010. Towards systems that care: A The students’ approaches to learning instrument can measure Conceptual Framework based on motivation, metacognition several things, depending on how it is deployed [3]: 1) students’ and affect. International Journal of Artificial Intelligence in preferred approaches to learning in a particular context, 2) when Education. 20, 197–229. applied before and after an intervention, the instrument can meas- [5] Bull, S. and Kay, J. 2013. Open Learner Models as Drivers ure its effectiveness in bringing students towards deep approach- for Metacognitive Processes. International Handbook of es, and 3) the ratio of deep and surface approaches, when meas- Metacognition and Learning Technologies. R. Azevedo and ured for the whole class, can be used to compare pedagogical V. Aleven, eds. 349–365. characteristics of different courses. Our study measured students’ [6] Bull, S. and Kay, J. 2007. 
Student Models that Invite the preferred approaches to learning, as established in the context of a Learner In: The SMILI Open Learner Modelling Framework. particular course. The discussion activity followed immediately International Journal of Artificial Intelligence in Education. after we gathered the self-reported data, hence there was a rather 7, 2, 89–120. limited influence of other activities that may have caused the [7] Corrin, L. and de Barba, P. 2014. Exploring students’ change of the students’ approaches, as the second possible use interpretation of feedback delivered through learning might have suggested. From this perspective, we can assume that analytics dashboards. Proceedings of the ascilite 2014 the discovered associations between the quality of the posted mes- conference sages and the visualization types when controlled for learning [8] Dowell, N.M. et al. 2014. What Works: Creating Adaptive and Intelligent Systems for Collaborative Learning Suppor. Proceedings of the 12th International Conference on [20] Schielzeth, H. and Nakagawa, S. 2013. Nested by design: Intelligent Tutoring Systems - ITS 2014, 124–133. model fitting and interpretation in a mixed model era. [9] Dowell, N.M.M. et al. 2015. Language and Discourse Methods in Ecology and Evolution. 4, 1, 14–24. Analysis with Coh-Metrix : Applications from Educational [21] Senko, C. et al. 2011. Achievement Goal Theory at the Material to Learning Environments at Scale. Journal of Crossroads: Old Controversies, Current Challenges, and New Learning Analytics. (in press) Directions. Educational Psychologist. 46, 1, 26–47. [10] Elliot, A.J. et al. 2005. A conceptual history of the [22] Shirazi Beheshitha, S. et al. 2015. A process mining achievement goal construct. Handbook of competence and approach to linking the study of aptitude and event facets of motivation. 16, 52–72. self-regulated learning. Proceedings of the Fifth [11] Foltz, P.W. et al. 1998. 
The measurement of textual International Conference on Learning Analytics And coherence with latent semantic analysis. Discourse Knowledge - LAK ’15, 265–269. processes. 25, 2-3, 285–307. [23] Shirazi Beheshitha, S. et al. 2016. The Role of Achievement [12] Friedman, J. et al. 2001. The elements of statistical learning. Goal Orientations When Studying Effect of Learning Springer series in statistics Springer, Berlin. Analytics Visualizations. Proceedings of the Sixth [13] Gijbels, D. et al. 2008. Constructivist learning environments International Conference on Learning Analytics And and the (im)possibility to change students’ perceptions of Knowledge - LAK ’16, in press. assessment demands and approaches to learning. [24] Snow, R.E. 1991. Aptitude-treatment interaction as a Instructional Science. 36, 5-6, 431–443. framework for research on individual differences in [14] Graesser, a. C. et al. 2011. Coh-Metrix: Providing Multilevel psychotherapy. Journal of Consulting and Clinical Analyses of Text Characteristics. Educational Researcher. Psychology. 59, 2, 205. 40, 5, 223–234. [25] Verbert, K. et al. 2013. Learning analytics dashboard [15] Kanuka, H. and Anderson, T. 2007. Online social applications. American Behavioral Scientist. 57, 10, 1500– interchange, discord, and knowledge construction. 1509. International Journal of E-Learning & Distance Education. [26] Winne, P.H. 2010. Improving Measurements of Self- 13, 1, 57–74. Regulated Learning. Educational Psychologist. 45, 4, 267– [16] Kruse, A. and Pongsajapan, R. 2012. Student-centered 276. learning analytics. CNDLS Thought Papers, 1–9. [27] Winne, P.H. and Hadwin, A.F. 1998. Studying as self- [17] Luppicini, R. 2007. Review of computer mediated regulated learning. Metacognition in educational theory and communication research for education. Instructional science. practice. 93, 27–30. 35, 2, 141–185. [28] Wise, A. et al. 2014. Learning analytics for online [18] Martinez-Mirón, E. et al. 2005. 
The role of learning goals in discussions: Embedded and extracted approaches. Journal of the design of ILEs: Some issues to consider. Proceeding of Learning Analytics. 1, 2, 48–71. the 2005 conference on Artificial Intelligence in Education: [29] Xu, R. 2003. Measuring explained variation in linear mixed Supporting Learning through Intelligent and Socially effects models. Statistics in medicine. 22, 22, 3527–3541. Informed Technology, 427–434. [30] Yuan, J. and Kim, C. 2014. Guidelines for facilitating the [19] Rovai, A.P. 2007. Facilitating online discussions effectively. development of learning communities in online courses. The Internet and Higher Education. 10, 1, 77–88. Journal of Computer Assisted Learning. 30, 3, 220–232.