LMS Course Design As Learning Analytics Variable

John Fritz
Univ. of Maryland, Baltimore County
1000 Hilltop Circle
Baltimore, MD 21250
410.455.6596
fritz@umbc.edu

ABSTRACT
In this paper, I describe a plausible approach to operationalizing existing definitions of learning management system (LMS) course design from the research literature, to better understand instructor impact on student engagement and academic performance. I share statistical findings using such an approach in academic year 2013-14; discuss related issues and opportunities around faculty development; and describe next steps, including identifying and reverse engineering effective course redesign practices, which may be one of the most scalable forms of analytics-based interventions an institution can pursue.

Categories and Subject Descriptors
K.3.1 [Computer Uses in Education]: Collaborative learning; Computer-assisted instruction (CAI); Computer-managed instruction (CMI); Distance learning.

General Terms
Algorithms, Design, Human Factors, Measurement, Standardization, Theory

Keywords
LMS Course Design, Instructor Pedagogy, Learning Analytics, Methodology

1. INTRODUCTION OF PROBLEM
Given widespread use of the learning management system (LMS) in higher education, it is not surprising that this form of instructional technology has frequently been the object of learning analytics studies [1, 2, 3, 4, 5]. While methods and results have been mixed in terms of predicting student success, let alone leading to actual, effective and scalable interventions, there is one potential LMS analytics variable that has received comparatively little attention: the role of course design.

Part of the problem is how to operationalize something as theoretical, subjective or varied as instructor pedagogy. Indeed, Macfadyen and Dawson [6] attributed variations in "pedagogical intention" as a reason why the LMS could never serve as a "one size fits all" dashboard to predict student success across an institution. Similarly, Barber and Sharkey [7] eliminated theoretical student engagement factors such as self-discipline, motivation, locus of control and self-efficacy because they were "not available" (i.e., quantifiable) in the LMS data set, which was their primary object of analysis. Basically, how does one quantify course design, which seems qualitatively different from usage log data like logins?

Despite these operational challenges, some of the most frequently cited LMS analytics studies referenced above actually provide a surprisingly uniform characterization of course design that can be roughly broken down into three broad, but distinct categories:

1. User & Content Management (e.g., enrollment, notes, syllabi, handouts, presentations)
2. Interactive Tools (e.g., forums, chats, blogs, wikis, announcements)
3. Assessment (e.g., practice quizzes, exams, electronic assignments, grade center use)

(Dawson et al. [2] proposed a fourth type of LMS use called "administration" that roughly equates to course logistics of enrollment, file management, etc. For convenience, I have combined this into the "user & content management" category.)

If we are willing to accept LMS course design as an aspect of instructor pedagogy – and accept student LMS activity as a proxy for attention, if not engagement – then it may be possible to use one to inform the other. Specifically, patterns of student LMS behavior around tools or functions could retroactively shine light on implemented course design choices that align with the broad, research-based LMS course design types described above.

For example, if students in one course appear to use the online discussion board more than students in another course, could one reasonably assume that the instructors of the two courses varied at least in their conceptual value and effective use of this interactive tool? Perhaps this is evident in how instructors differ in their weighting or reward for the discussion board's use in the course's grading scheme, or model and facilitate its use, or simply enable it as a tool in the LMS course's configuration. Admittedly, the challenge is determining how much variance in student LMS course usage is statistically significant or attributable to and indicative of instructor course design. For assessment purposes, though, these three broad LMS course design types (content, interaction and assessment) provide at least a theoretical way to operationalize variability in faculty LMS course design and usage.

While there may be a default institutional LMS course configuration most instructors blindly accept, in trying to explain why one tool or function is used by students more in one course vs. another, it seems odd that we should not be able to consider the pedagogical design choices of the instructor as an environmental factor that may impact student awareness, activity and engagement. True, this may also reflect an instructor's capability or capacity to effectively express his or her pedagogy in the LMS, but to simply ignore the possible impact of course design on student engagement seems unnecessary and disingenuous if we want to use learning analytics to predict and hopefully intervene with struggling students. If students who perform well use the LMS more, do we not want to know what tools, functions and pedagogical practices may facilitate this dynamic?

2. SOLUTION & METHOD
Despite the striking similarity in how several LMS-based analytics studies have categorized LMS course design practices (if not pedagogical intent), what is needed is a plausible, systematic approach to operationalize these common definitions.

2.1 Weighted Item Count by Design Type
Conveniently, Blackboard used these same research-based definitions of course design for its Analytics for Learn (A4L) product. Specifically, A4L's "course design summary" is a statistical comparison of a Bb course's relative, weighted item count compared to all courses in a department and the institution, based on the three major item types found in the LMS analytics literature. Essentially, all items in any Bb course, such as documents or files, discussions or chats, and assignments or quizzes, are grouped into 1) content, 2) interactive tools or 3) assessments. Then, A4L's course design summary uses a simple algorithm to categorize all courses into department and institutional statistical quartiles through the following process (sketched in code below):

1. Sum all course items by primary Item Type (e.g., Content, Tools, Assessments).
2. Multiply the group total using a weighting factor (wf): Content (wf = 1), Interaction (wf = 2) and Assessments (wf = 2).
3. Statistically compare each course to all other courses in the department and all other courses across the entire institution.
4. Tag each course with a quartile designation for both the department and institution dimensions.

(When Blackboard developers were prototyping A4L, I urged them to consider giving "assessments" (e.g., quizzes, surveys, assignments, etc.) a higher weighting of 3, because assessments are more complex for faculty to develop and potentially more impactful on student activity, if not learning. Bb decided not to do this, but does allow A4L's "1-2-2" default weighting to be "customer configurable." We are still evaluating Bb's default weighting, which may be more conservative than my own, but either approach seems reasonable.)
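Since A4L's implementation is proprietary, the following is a minimal Python sketch of the four steps above, assuming a hypothetical item inventory; the data, the quartile helper and all names (WEIGHTS, weighted_count, etc.) are mine, not Blackboard's.

    import numpy as np
    import pandas as pd

    # Default A4L weighting factors: Content = 1, Tools = 2, Assessments = 2
    WEIGHTS = {"content": 1, "tools": 2, "assessments": 2}

    # Hypothetical item inventory: one row per course, item counts by design type
    courses = pd.DataFrame([
        {"course": "ENGL100", "dept": "ENGL", "content": 40, "tools": 2,  "assessments": 1},
        {"course": "ENGL210", "dept": "ENGL", "content": 25, "tools": 0,  "assessments": 3},
        {"course": "BIOL141", "dept": "BIOL", "content": 55, "tools": 12, "assessments": 20},
        {"course": "BIOL142", "dept": "BIOL", "content": 30, "tools": 5,  "assessments": 8},
    ])

    # Steps 1-2: sum items by design type and apply each weighting factor
    courses["weighted_count"] = sum(w * courses[t] for t, w in WEIGHTS.items())

    # Steps 3-4: compare each course to a comparison group (department or
    # institution) and tag it with a quartile (1 = bottom 25%, 4 = top 25%)
    def quartile(scores: pd.Series) -> pd.Series:
        pct = scores.rank(pct=True)              # percentile rank in the group
        return np.ceil(pct * 4).clip(1, 4).astype(int)

    courses["inst_quartile"] = quartile(courses["weighted_count"])
    courses["dept_quartile"] = courses.groupby("dept")["weighted_count"].transform(quartile)

Note that ties and very small departments would need more careful binning than this percentile-rank shortcut, but the shape of the process is the same.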
Again, the "course design summary" is already provided in A4L and is really just a way of categorizing how a course is constructed, compared to all courses in the department and across the institution, not necessarily if and how it is actually used by students. To understand and relate student activity to course design, we need to calculate a similar summary of student activity from existing A4L measures.

2.2 Student Activity Summary
Bb Analytics for Learn (A4L) contains several student activity measures, including the following:

• Course accesses after initially logging into the LMS;
• Interactions with any part of the course itself, equivalent to "hits" or "clicks";
• Minutes using a particular course (duration tracking ends after 5 minutes of inactivity);
• Submission of assignments, if the instructor uses assignments;
• Discussion forum postings, if the instructor uses discussions.

However, for calculating the companion student activity summary to correlate with A4L's course design summary, I have only used the first three measures (accesses, interactions and minutes), because ALL courses generate this kind of student activity, regardless of design type. Not all instructors use electronic assignments or discussion forums, but short of simply dropping a course, all students generate at least some activity that can be measured as logins, clicks or hits, and duration.

To calculate the student summary, we must first convert each raw activity measure to a standardized Z-score, which shows how many standard deviations, and in which direction, a particular raw score is from the mean of that measure in a normal distribution of cases. Because the scale of each activity varies greatly during a semester (e.g., accesses or logins could be under one hundred, interactions or hits could be in the hundreds, and duration or minutes could be in the thousands), converting these variables to Z-scores allows us to compare and summarize them across measures more efficiently. It also allows us to identify and remove outliers, defined for this purpose as scores greater than three (3) standard deviations from the mean. The formula for converting raw scores to Z-scores is as follows:

    Z = (X − M) / SD

where X is the value of the independent variable, M is the class mean for X, and SD is the class standard deviation of X.

Accordingly, the steps to analyze and summarize student activity in all courses (sketched in code below) include the following:

1. Convert accesses, interactions and duration student Bb activity measures to Z-scores.
2. Average the combined student activity scores into a summary measure.
3. Assess the internal consistency of items using a Cronbach alpha test of reliability for each approach (e.g., comparing converted Z-scores).
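Here is a minimal sketch of these three steps, using hypothetical data for one course; the outlier rule and the alpha formula follow the text above, but the variable names and values are illustrative.

    import pandas as pd

    # Hypothetical raw activity for five students in one course; note the
    # very different scales of the three measures
    activity = pd.DataFrame({
        "accesses":     [12, 45, 88, 30, 51],
        "interactions": [150, 700, 1900, 420, 800],
        "minutes":      [95, 1200, 4100, 640, 1500],
    })

    # Step 1: convert each raw measure to a Z-score: Z = (X - M) / SD
    z = (activity - activity.mean()) / activity.std()

    # Remove outliers: any case more than 3 standard deviations from the mean
    z = z[(z.abs() <= 3).all(axis=1)]

    # Step 2: average the converted scores into a single activity summary
    activity_summary = z.mean(axis=1)

    # Step 3: check internal consistency with Cronbach's alpha:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of item totals)
    k = z.shape[1]
    alpha = (k / (k - 1)) * (1 - z.var(ddof=1).sum() / z.sum(axis=1).var(ddof=1))
    print(activity_summary, alpha)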
In addition to the student LMS activity and course design measures described above, I used a "threshold" approach to academic performance. Specifically, I used a "C or better" final grade in a course and a "2.0 or better" term grade point average (GPA) as dependent variables.

3. FINDINGS
3.1 Data
The participants for my study were all first-time, full-time, degree-seeking, undergraduate freshmen or transfer students starting their enrollment in Fall 2013. According to the UMBC Office of Institutional Research and Decision Support (IRADS), this included 2,696 distinct students (1,650 freshmen and 1,046 transfers), or 24.48% of all 11,012 degree-seeking undergraduates (see http://oir.umbc.edu/files/2013/11/CDS_2013-2014-.pdf). The demographic distribution was as follows:

                Fresh.%   Trans.%   Total%
    Gender
      Male         57        48        54
      Female       43        52        46
      Subtotal    100       100       100
    Race
      Asian        24        11        19
      Black        11        20        15
      Hispanic      5        10         7
      White        45        41        43
      Other         9         9         9
      Unknown       6        10         8
      Subtotal    100       100       100

Table 1: Study Sample, 2013-14 FT Freshmen & Transfers. (The "Other" category is my combination of relatively small numbers for "International," "Native American," "Pacific Islander," and "Two or More" UMBC Census Data categories.)

3.2 Grades by Student LMS Activity
Generally, students who performed well academically in courses, and in a given term overall, showed higher, statistically significant (p < .001) use of Bb compared to peers who did not perform as well. Specifically, using logistic regression to control for other factors such as gender, race, age, Pell eligibility, academic preparation and admit type, students were 1.5 to 2 times more likely to earn a C or better in Fall 2013 and Spring 2014, respectively. Similarly, students were 2.4 to 2.8 times more likely to earn a 2.0 term GPA in Fall 2013 and Spring 2014, respectively.
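The text does not tie the analysis to a particular statistics package, so the following is a hedged sketch of this kind of logistic regression using Python's statsmodels on synthetic data; all column names, values and the resulting coefficients are illustrative stand-ins for my actual variables.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for the real student-course records; every name
    # and value here is illustrative, not actual UMBC data
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "activity_summary": rng.normal(size=n),      # Z-scored Bb activity
        "gender": rng.choice(["M", "F"], n),
        "age": rng.integers(17, 25, n),
        "pell_eligible": rng.choice([0, 1], n),
    })
    # Tie the outcome loosely to activity so the model has signal to find
    logit_p = 0.5 + 0.6 * df["activity_summary"]
    df["c_or_better"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    # Logistic regression of C-or-better on Bb activity plus controls
    model = smf.logit(
        "c_or_better ~ activity_summary + C(gender) + age + C(pell_eligible)",
        data=df,
    ).fit(disp=0)

    # Exp(B): the odds ratio for a one-SD increase in Bb activity,
    # holding the controls fixed
    print(np.exp(model.params["activity_summary"]))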
3.3 Student LMS Activity by Course Design
Generally, students were much more active in Bb courses that used a wider array of Bb functionality. Specifically, using linear regression, both the institutional course design quartile and instructor use of the grade center were statistically significant (p < .001) in terms of freshmen and transfer LMS activity in both semesters. As indicated by the R2 change, course design and grade center use contributed more than 20% to the overall models, whose adjusted R2 of .265 and .239 explained 26.5% and 23.9% of the variance in student Bb usage for freshmen and transfers, respectively, in Fall 2013. A similar pattern emerged in Spring 2014, with course design and grade center use contributing more than 22% to the overall models' adjusted R2 of .333 and .278, which explained 33.3% and 27.8% of freshmen and transfer student use of Bb, respectively.

3.4 Student Grades by Course Design
Generally, there was a statistically significant (p < .001) relationship between student academic outcomes and the interaction of course design and student activity in the LMS. However, there was a marked difference in the Exp(B), or odds ratio, for both groups of students across both terms, depending on whether I used institutional course design quartiles (ICDQ) or course grade center use as the covariate interaction effect with student Bb activity. For example, the ICDQ * Bb activity interaction effect never produced an odds ratio higher than 1.009, which translates into little more than 1 times the likelihood of earning a C or better final grade (essentially, a 50/50 chance).

By contrast, the odds ratio for the grade center use * Bb activity interaction effect was no less than 1.571 (for transfers in Spring 2014) and reached a high of 2.455 (for freshmen in Spring 2014). This means that selected subsets of my sample of students were 1.6 to 2.5 times more likely to earn a C or better, after controlling for other demographic and academic variables.

Using the same approach for 2.0 or better term GPA, the odds ratio for freshmen under the grade center * Bb activity interaction effect model was 2.610 and 3.504 for Fall 2013 and Spring 2014, respectively. This means freshmen were 2.6 to 3.5 times more likely to earn a 2.0 term GPA in their Bb courses that used the grade center. By contrast, the institutional course design quartile (ICDQ) * Bb interaction effect model remained essentially the same as in the C or better findings described above.

[Figure 1: Outcomes by Inst. Course Design Quartile]

[Figure 2: Outcomes by Grade Center Use]
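Extending the synthetic sketch from Section 3.2, the interaction-effect models in this section can be expressed by multiplying the activity summary by a course design covariate; uses_grade_center is an illustrative 0/1 flag of my own naming, not an actual A4L field.

    # Continues the synthetic df, rng and n from the previous sketch.
    # Add an illustrative 0/1 flag for whether the course used the grade center
    df["uses_grade_center"] = rng.choice([0, 1], n)

    # In the formula, "*" fits both main effects and their interaction;
    # Exp(B) of the interaction term is the odds ratio reported in the text
    # (e.g., 2.455 for freshmen in Spring 2014 in my actual data)
    gc_model = smf.logit(
        "c_or_better ~ activity_summary * uses_grade_center"
        " + C(gender) + age + C(pell_eligible)",
        data=df,
    ).fit(disp=0)

    print(np.exp(gc_model.params["activity_summary:uses_grade_center"]))

Swapping uses_grade_center for an ICDQ variable would yield the companion ICDQ * Bb activity model described above.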
4. DISCUSSION
While the correlation between LMS course design and student outcomes is compelling, I cannot confirm or reject a hypothesis that it is a causal relationship. I would want to study these relationships over a longer time, across the entire student population, and even replicate them at other schools. However, is it necessary to establish causality to leverage, let alone prove, a prediction? Desirable: yes. Necessary: I'm not so sure.

I tend to view LMS use – by faculty and students – as a real-time proxy for their respective attention, engagement and effort in the larger context of teaching and learning. As such, we've developed a simple "Check My Activity" (CMA) feedback tool for students, allowing them to compare their own LMS activity with peers who earn the same, higher or lower grade for any assignment – provided the instructor uses the grade center [3]. After controlling for other factors (e.g., gender, race, academic prep, Pell eligibility, etc.), freshmen using the CMA were 1.7 times more likely to earn a C or higher final grade (p < .001), but transfers were barely 1 times more likely and the findings were not statistically significant (based on my recently defended dissertation, available at http://umbc.box.com/johnfritzdissertation). We also show students how active the LMS course is overall compared to other courses in the discipline, and recently extended this same view to faculty themselves. This way, everyone can decide how to gauge or interpret the importance of their own – or even an entire course's – LMS activity in the context of that exhibited by others.
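The CMA is a web tool, but the comparison it surfaces is simple to express. Below is a minimal sketch of that peer comparison for a single assignment, assuming a hypothetical table of grades and activity counts; all names and values are mine, not the CMA's actual schema.

    import pandas as pd

    # Hypothetical per-student records for one assignment in one course:
    # the grade earned and the Bb activity (hits) each student generated
    grades = pd.DataFrame({
        "student":  ["s1", "s2", "s3", "s4", "s5", "s6"],
        "grade":    ["A", "A", "B", "B", "C", "C"],
        "activity": [310, 280, 190, 240, 120, 90],
    })

    # Average Bb activity of peers at each grade level for this assignment
    peer_activity = grades.groupby("grade")["activity"].mean()

    # A student who earned a B with 190 hits can now compare themselves with
    # peers who earned the same, a higher, or a lower grade
    print(peer_activity)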
Additionally, Blackboard has developed a compelling predictive risk model based on this combination of student activity and course design to derive a student "engagement" indicator, which is reflected in UMBC's actual full-time freshmen and transfer retention status from Fall 2013 (see Figures 3 and 4 below; larger images and a screencast demo are available at https://umbc.box.com/fritzpclashortpaperimages).

[Figure 3: Freshmen Retention by Bb Learn Risk Profile, FA13]

[Figure 4: Transfer Retention by Bb Learn Risk Profile, FA13]

Notice how less successful but more engaged students (#3) are retained the next year at higher rates than more successful but less engaged peers (#2), particularly transfers (Figure 4). Moving forward, I can see the Bb integrated model becoming a valuable tool in studying the long-term impact of an LMS on student retention, persistence and graduation. If so, it might also reinforce the value of using the LMS as a real-time indicator of student engagement, not just the passive, one-way delivery of content for which it has typically been used.

4.1 Course Design as Scalable Intervention
If course design has a relationship with student academic performance, then faculty development could be a necessary first step toward a more scalable form of institutional intervention with at-risk students. In fact, in describing self-directed learning, Ambrose et al. [8] suggest that "students must learn to assess the demands of a task, evaluate their own knowledge and skills, plan their approach, monitor their progress, and adjust their strategies as needed" (p. 191). However, instructors also need to be pedagogically ready and secure in their own roles as teachers to desire this kind of empowerment for their students, let alone seek it out by design.

For example, Robertson [9] proposed what is now considered a classic model for how faculty beliefs about teaching influence their evolving pedagogical practice, including the following stages:

• Egocentrism – focusing mainly on their role as teachers;
• Aliocentrism – focusing mainly on the role of learners; and
• Systemocentrism – focusing on the shared role of teachers and learners in a community.

If this evolution of thought and practice occurs at all among teachers, Robertson identifies telltale signs of the transformation. First, as faculty move from one stage to the next, they bring the benefits and biases of the previous stage. Second, they typically change their beliefs and practices only when confronted by the limitations of a current stage, which is brought about by teaching failures. Finally, the desire for certainty, stability and confidence either keeps faculty frozen in a current, status quo framework or drives their progression to the next one in an effort to avoid a potentially paralyzing neutral zone: "a familiar teaching routine that they have deemed inappropriate and with nothing to replace it" (p. 279).

Just as Robertson showed how faculty beliefs about teaching influence their practice, Steel [10] showed how teaching beliefs influence faculty perceptions of what various instructional technologies will allow them to do. For example, using detailed case studies about faculty use of online discussions in an LMS, Steel illustrates the creative tensions between how faculty conceptualize teaching and how they perceive the affordances of web-based technologies like an LMS: "The velocity of change in the affordances offered by learning technologies presents a significant challenge as does the minimal incentives available to university teachers to use technologies effectively in their teaching practices" (p. 417).

Whether faculty like it or not, when they teach online or use online tools as supplements in their traditional classrooms, they also become webmasters. As such, they need to understand the potential affordances and limitations of web technologies as they attempt to express and implement their pedagogy in course designs. Steel argues that this "reconciliation process" between pedagogical beliefs and rapidly changing technology affordances "needs to be incorporated more fully into informal teacher development approaches as well as formal programs" (p. 417).

To me, faculty who are in Robertson's "neutral zone" between "teaching failures" and "nothing to replace [them]" may be ripe for a course design intervention based on learning analytics, but only if they are aware of peers who they believe have a more effective approach. This is why and how learning analytics may be able to identify, support, promote and evaluate effective practices and practitioners: to serve as a standard by which faculty not only measure themselves, but also point to a way forward, ideally by helping students take responsibility for learning. Yes, technology may help, but per Robertson's and Steel's research, it may not do so unless faculty first believe that it can, enough so as to try, or to look for peers who have done so. Just as students taking responsibility for their learning is the only scalable form of learning, so too must faculty take responsibility for "teaching failures." This includes being open to other pedagogical examples and working hard to master and implement them, which requires a willingness to explore, practice, refine and self-assess.

5. NEXT STEPS
In recent posts, e-Literate bloggers Michael Feldstein and Phil Hill lament the ubiquitous but essentially boring LMS [11], and even equate it to the minivan of education technology: long-lasting utility, but not much zip or cachet [12]. But if we are willing to go beyond a conventional view of the LMS as a content repository or one-way (egocentric?) delivery of knowledge from instructor to student, we might just find that variations in student behavior can shine light on effective course design practices.

Toward this end, we are beginning to look at the LMS as a way to identify effective course design practices and practitioners. While a given semester is underway, we monitor positive outlier courses that appear to generate inordinately high student LMS usage (see the sketch below). When the semester is over, we correlate final grades and follow up with instructors whose students may also be performing higher than peers within a department or the institution. To be sure, we conduct these qualitative interviews without necessarily relying on student LMS usage. But taken together, high student LMS usage and grade distribution analysis add a real-time indicator of student engagement and academic performance that is no longer limited to the end-of-semester post-mortem.
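As a rough illustration only (not our production criteria), positive outlier courses could be flagged mid-semester by reusing the Z-scored activity summary from Section 2.2 at the course level; the data, column names and the +1.5 SD threshold below are all hypothetical.

    import pandas as pd

    # Assumed input: one row per course with the mean student activity
    # summary (Section 2.2) computed so far this semester
    course_activity = pd.DataFrame({
        "course": ["ENGL100", "BIOL141", "HIST201", "CHEM101"],
        "mean_activity_z": [-0.4, 2.3, 0.1, 1.8],
    })

    # Flag positive outliers: courses whose students are far more active
    # than the institutional norm (the 1.5 SD cutoff is illustrative)
    outliers = course_activity[course_activity["mean_activity_z"] > 1.5]
    print(outliers["course"].tolist())   # candidates for follow-up interviews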
Finally, as instructional technology support staff, it is not our job to shine light on instructors or course designs that could be better. We've learned that instructors learn best from each other, but we can help by using the technology and methodology of learning analytics to identify and reverse engineer effective course design practices we wish all faculty knew about and would emulate. In this way, course redesign could be the most scalable form of analytics-based intervention any institution could pursue.

6. REFERENCES
[1] Campbell, J. (2007). Utilizing student data within the course management system to determine undergraduate student academic success: An exploratory study. Retrieved from http://proquest.umi.com/pqdweb?did=1417816411&Fmt=7&clientId=11430&RQT=309&VName=PQD

[2] Dawson, S., McWilliam, E., & Tan, J. P. L. (2008). Teaching smarter: How mining ICT data can inform and improve learning and teaching practice. Proceedings ascilite Melbourne 2008. Retrieved from http://ascilite.org.au/conferences/melbourne08/procs/dawson.pdf

[3] Fritz, J. (2013). Using analytics at UMBC: Encouraging student responsibility and identifying effective course designs (Research Bulletin) (p. 11). Louisville, CO: Educause Center for Applied Research. Retrieved from http://www.educause.edu/library/resources/using-analytics-umbc-encouraging-student-responsibility-and-identifying-effective-course-designs

[4] Macfadyen, L. P., & Dawson, S. (2012). Numbers are not enough: Why e-learning analytics failed to inform an institutional strategic plan. Journal of Educational Technology & Society, 15(3), 149–163. Retrieved from http://www.ifets.info/journals/15_3/11.pdf

[5] Whitmer, J. (2012). Logging on to improve achievement: Evaluating the relationship between use of the learning management system, student characteristics, and academic achievement in a hybrid large enrollment undergraduate course. University of California, Davis. Retrieved from http://johnwhitmer.net/dissertation-study/

[6] Macfadyen, L. P., & Dawson, S. (2010). Mining LMS data to develop an "early warning system" for educators: A proof of concept. Computers & Education, 54(2), 588–599. http://doi.org/10.1016/j.compedu.2009.09.008

[7] Barber, R., & Sharkey, M. (2012). Course correction: Using analytics to predict course success. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (pp. 259–262). New York, NY, USA: ACM. http://doi.org/10.1145/2330601.2330664

[8] Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. John Wiley and Sons.

[9] Robertson, D. L. (1999). Professors' perspectives on their teaching: A new construct and developmental model. Innovative Higher Education, 23(4), 271–294. http://doi.org/10.1023/A:1022982907040

[10] Steel, C. (2009). Reconciling university teacher beliefs to create learning designs for LMS environments. Australasian Journal of Educational Technology, 25(3), 399–420. Retrieved from http://www.ascilite.org.au/ajet/ajet25/steel.html

[11] Feldstein, M. (2014, November 10). Dammit, the LMS. e-Literate. Retrieved from http://mfeldstein.com/dammit-lms/

[12] Hill, P. (2015, May 7). LMS is the minivan of education (and other thoughts from #LILI15). e-Literate. Retrieved from http://mfeldstein.com/lms-is-the-minivan-of-education-and-other-thoughts-from-lili15/