                 Graph Analysis of Student Model Networks

                          Julio Guerra                                             Yun Huang
                 School of Information Sciences                            Intelligent Systems Program
                    University of Pittsburgh                                  University of Pittsburgh
                     Pittsburgh, PA, USA                                        Pittsburgh, PA, USA
                        jdg60@pitt.edu                                          yuh43@pitt.edu

                         Roya Hosseini                                         Peter Brusilovsky
                   Intelligent Systems Program                           School of Information Sciences
                      University of Pittsburgh                              University of Pittsburgh
                        Pittsburgh, PA, USA                                  Pittsburgh, PA, USA
                        roh38@pitt.edu                                          peterb@pitt.edu


ABSTRACT
This paper explores the feasibility of a graph-based approach to modeling student knowledge in the domain of programming. The key idea of this approach is that programming concepts are truly learned not in isolation, but rather in combination with other concepts. Following this idea, we represent a student model as a graph where links are gradually added as the student's ability to work with connected pairs of concepts in the same context is confirmed. We also hypothesize that, with this graph-based approach, a number of traditional graph metrics could measure student knowledge better than more traditional scalar models of student knowledge. To collect early evidence in favor of this idea, we used data from several classroom studies to correlate graph metrics with various performance and motivation metrics.

1. INTRODUCTION
Student modeling is widely used in adaptive educational systems and tutoring systems to keep track of student knowledge, detect misconceptions, provide targeted support, and give feedback to the student [2]. The most typical overlay student model dynamically represents the inferred knowledge level of the student for each knowledge element (KE) (also called knowledge component or KC) defined in a domain model. These knowledge levels are computed as the student answers questions or solves problems that are mapped to the domain KEs. Student models are frequently built over networked domain models where KEs are connected by prerequisite, is-a, and other ontological relationships that are used to propagate the knowledge levels and produce a more accurate representation of the knowledge of the learner. Since these connections belong to domain models, they stay the same for all students and at all times. In this work we explore the idea that it might be beneficial for a student model to include connections between domain KEs that represent some aspects of individual student knowledge rather than domain knowledge. This idea is motivated by the recognition that mastery in many domains is reached as the student practices connecting different KEs, i.e., each KE is practiced in conjunction with other KEs. To address this, we build a model represented as a network of KEs that get progressively connected as the student successfully works with problems and assessment items containing multiple KEs. As the student succeeds in more diverse items mapped to different KEs, her model becomes better connected.

To explore the value of this graph-based representation of student knowledge, we compute different graph metrics (e.g., density, diameter) for each student and analyze them in relation to student performance metrics and attitudinal orientations drawn from a motivational theory. This analysis was performed using data collected from three cohorts of a Java programming course using the same system and the same content materials. In the remaining part of the paper, we describe related work, introduce and illustrate the suggested approach, describe graph and performance metrics, and report the results of the correlation analysis.

2. RELATED WORK
Graph representation of student activity is not new. The 2014 edition of the Graph-Based Educational Data Mining Workshop¹ contains two broad types of related work: the analysis of networked interactions among students, for example work on social capital [14] and social networking in MOOCs [3, 12]; and analyses of learning paths over graph representations of student traces while performing activities in the system [1, 5]. Our work fits in the second type since we model traces of each student interacting with the system. However, our approach is different as it attempts to combine an underlying conceptual model with the traces of the student learning.

¹ http://ceur-ws.org/Vol-1183/gedm2014_proceedings.pdf
A considerable amount of work has focused on graph representations of domain models that serve as a basis for overlay student models. The majority of this work focused on constructing prerequisite relationships between domain knowledge components (concepts, skills) [6, 13]. In this case, links established between a pair of concepts represent a prerequisite-outcome relationship. Another considerable stream of work explored the use of formal ontologies with relationships such as is-a and part-of for connecting domain knowledge components [7]. Ontological representation, in turn, relates to another stream of work that applies graph techniques to structural knowledge representation, for example by analyzing the network properties of ontologies [9].

The research on graph-based domain models also leads to a stream of work on using Bayesian networks to model the relationships between domain concepts for knowledge propagation in the process of student modeling [15, 4]. Yet, in both cases mentioned above, links between knowledge components were not part of the individual student model, but rather part of the domain model or of the student modeling process, and thus remain the same for all students. In contrast, the approach suggested in this paper adds links between knowledge components to individual student models to express combinations of knowledge components that the given student explored in a problem solving or assessment process. This approach is motivated by our belief that in the programming domain, student knowledge is more effectively modeled by capturing student progress when students need to apply multiple concepts at the same time.
3. THE APPROACH
The idea behind our approach is that knowledge is likely to be stronger for concepts which are practiced together with a larger variety of other concepts. We hypothesize, for example, that a student who solves exercises in which the concept for-loop is used with the post-increment operator and the post-decrement operator will have a better understanding of for-loop than another student who practices (even the same number of times) the for-loop concept in a narrower context, i.e., only with the post-increment operator. Following this idea, for each student we build a graph-based student model as a network of concepts where an edge is created when the student succeeds in an exercise containing both concepts to be connected. The domain model defining the concept space and the mapping between concepts and programming exercises is explained in the next section. The weight of an edge in the graph is computed as the student's overall success rate on the exercises which contain the pair of concepts. Pairs of concepts that do not co-occur in exercises the student has succeeded in are not connected in her graph. In this representation, highly connected nodes are concepts successfully practiced with many different other concepts. We also compute a measure of weight for each node by averaging the weights of the edges connected to the node. This measure of the success rate on concepts favors exercises that connect more concepts, because each exercise containing n concepts produces or affects n(n − 1)/2 edges. For example, a success on an exercise involving 10 concepts contributes to 45 edges, but a successful attempt at an exercise connecting 5 concepts contributes to only 10 edges. We hypothesize that in a graph built following this approach, metrics like average degree, density, average path length, and average node weight can be good indicators of student knowledge compared to the amount of activities done or to overall measures of assessment like the success rate on exercises. We further explore these graph metrics in relation to motivational factors drawn from a learning motivation theory.

3.1 Domain Model and Content
Our content corpus is composed of a set of 112 interactive parameterized exercises (i.e., questions or problems) in the domain of Java programming from our system QuizJet [11]. Parameterized exercises are generated from a template by substituting a parameter variable with a randomly generated value. As a result, each exercise can be attempted multiple times. To answer an exercise, the student has to mentally execute a fragment of Java code to determine the value of a specific variable or the content printed on the console. When the student answers, the system evaluates the correctness, reports to the student whether the answer was correct or wrong, shows the correct response, and invites the student to "try again". As a result, students may still try the same exercises even after several correct attempts. An example of a parameterized Java exercise can be seen in Figure 1.

Figure 1: Exercise jwhile1

In order to find the concepts inside each of the exercises, we used a parser [10] that extracts concepts from the exercise's template code, analyzes its abstract syntax tree (AST), and maps the nodes of the AST (the extracted concepts) to the nodes of a Java ontology². This ontology is a hierarchy of programming concepts in the Java domain, and the parser uses only the concepts in the leaf nodes of the hierarchy.

In total, there are 138 concepts extracted and mapped to QuizJet exercises. Examples of concepts are: Int Data Type, Less Expression, Return Statement, For Statement, Subtract Expression, Constant, Constant Initialization Statement, If Statement, Array Data Type, Constructor Definition, etc. We excluded 8 concepts which appear in all exercise templates (for example, "Class Definition" or "Public Class Specifier" appear in the first line of every exercise). Each concept appears in one or more Java exercises. Each of the 112 exercises maps to 2 to 47 Java concepts. For example, the exercise "jwhile1", shown in Figure 1, is mapped to 5 concepts: Int Data Type, Simple Assignment Expression, Less Expression, While Statement, Post Increment Expression.

² http://www.sis.pitt.edu/~paws/ont/java.owl
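To make the construction procedure concrete, the sketch below builds one student's concept graph following the rules above: connect every pair of concepts that co-occur in an exercise the student has answered correctly at least once, and weight each edge by the student's success rate over the exercises containing that pair. The input layout (a per-student attempt log and an exercise-to-concepts mapping) and all identifiers are illustrative assumptions, not the implementation used in the study.

```python
# Illustrative sketch (not the authors' code): build one student's concept graph.
# Assumed inputs:
#   exercise_concepts: dict mapping exercise id -> set of concept names
#   attempts: list of (exercise_id, is_correct) records for a single student
from collections import defaultdict
from itertools import combinations
import networkx as nx

def build_student_graph(exercise_concepts, attempts):
    totals = defaultdict(int)    # attempts per exercise
    correct = defaultdict(int)   # correct attempts per exercise
    for ex_id, is_correct in attempts:
        totals[ex_id] += 1
        correct[ex_id] += int(is_correct)

    pair_total = defaultdict(int)
    pair_correct = defaultdict(int)
    for ex_id, n in totals.items():
        # Every exercise with k mapped concepts touches k*(k-1)/2 concept pairs.
        for pair in combinations(sorted(exercise_concepts.get(ex_id, ())), 2):
            pair_total[pair] += n
            pair_correct[pair] += correct[ex_id]

    graph = nx.Graph()
    for (a, b), n in pair_total.items():
        if pair_correct[(a, b)] > 0:  # connected only after at least one success
            # Edge weight = success rate over exercises containing the pair.
            graph.add_edge(a, b, weight=pair_correct[(a, b)] / n)
    return graph
```

The node weight used in the next section can then be derived by averaging the weights of a node's incident edges.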
3.2 Graph Metrics
To characterize the student knowledge graph, we computed the standard graph metrics listed below (a sketch of how they can be computed follows the list).

• Graph Density (density): the ratio of the number of edges to the number of possible edges.
• Graph Diameter (diameter): the length of the longest shortest path over all pairs of nodes.
• Average Path Length (avg.path.len): the average length of the shortest paths between all pairs of nodes.
• Average Degree (avg.degree): the average degree over all nodes of the undirected graph.
• Average Node Weight (avg.node.weight): the weight of a node is the average of the weights of its edges; we then average the weights of all nodes in the graph.
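A minimal sketch of these metrics for a weighted undirected networkx graph like the one built above; the function and its fallback for disconnected graphs (restricting diameter and average path length to the largest connected component) are illustrative assumptions, not the study's exact computation.

```python
# Illustrative sketch: graph metrics for a weighted undirected networkx graph g
# (edges carry a 'weight' attribute, as in build_student_graph above).
import networkx as nx

def graph_metrics(g):
    # Diameter and average path length are only defined for connected graphs;
    # here we fall back to the largest connected component (an assumption).
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    node_weights = [
        sum(d["weight"] for _, _, d in g.edges(v, data=True)) / g.degree(v)
        for v in g.nodes if g.degree(v) > 0
    ]
    return {
        "density": nx.density(g),
        "diameter": nx.diameter(giant),
        "avg.path.len": nx.average_shortest_path_length(giant),
        "avg.degree": sum(dict(g.degree()).values()) / g.number_of_nodes(),
        # Node weight = average weight of the node's edges, averaged over nodes.
        "avg.node.weight": sum(node_weights) / len(node_weights),
    }
```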

3.3 Measures of Activity
To measure student activity so that it can be correlated with the graph metrics, we collected and calculated the following success measures (a computation sketch follows the list):

• Correct Attempts to Exercises (correct.attempts): the total number of correct attempts to exercises, including repeated attempts at the same exercise.
• Distinct Correct Exercises (dist.correct.attempts): the number of distinct exercises attempted successfully.
• Overall Success Rate (success.rate): the number of correct attempts to exercises divided by the total number of attempts.
• Average Success Rate on Concepts (avg.concept.succ.rate): we compute the success rate of each concept as the average success rate of the exercises containing the concept, and then average this over all concepts in the domain model.
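The sketch below computes these four measures from the same per-student attempt log assumed earlier; as before, the data layout, names, and the convention of skipping concepts with no attempted exercises are illustrative rather than the study's actual code.

```python
# Illustrative sketch: activity measures from a per-student attempt log.
# Assumed inputs: attempts = [(exercise_id, is_correct), ...] for one student,
# exercise_concepts = {exercise_id: set of concepts}, concepts = all domain concepts.
from collections import defaultdict

def activity_measures(attempts, exercise_concepts, concepts):
    totals, correct = defaultdict(int), defaultdict(int)
    for ex_id, ok in attempts:
        totals[ex_id] += 1
        correct[ex_id] += int(ok)

    # Success rate per exercise, then averaged over the exercises containing each concept.
    ex_rate = {ex: correct[ex] / totals[ex] for ex in totals}
    concept_rates = []
    for c in concepts:
        rates = [ex_rate[ex] for ex in ex_rate if c in exercise_concepts[ex]]
        if rates:  # concepts with no attempted exercise are skipped (one possible convention)
            concept_rates.append(sum(rates) / len(rates))

    return {
        "correct.attempts": sum(correct.values()),
        "dist.correct.attempts": sum(1 for ex in correct if correct[ex] > 0),
        "success.rate": sum(correct.values()) / sum(totals.values()),
        "avg.concept.succ.rate": sum(concept_rates) / len(concept_rates),
    }
```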
3.4 Motivational Factors
We use the revised Achievement-Goal Orientation questionnaire [8], which contains 12 questions on a 7-point Likert scale. There are 3 questions for each of the 4 factors of the Achievement-Goal Orientation framework: Mastery-Approach, Mastery-Avoidance, Performance-Approach and Performance-Avoidance. Mastery-Approach goal orientation relates to intrinsic motivation: "I want to learn this because it is interesting for me", "I want to master this subject". Mastery-Avoidance relates to the attitude of avoiding failure or avoiding learning less than the minimum. Performance-Approach goal orientation stresses the idea of performing well and relates to social comparison: "I want to perform well in this subject", "I want to be better than others here". Performance-Avoidance oriented students strive to avoid getting lower grades or performing worse than other students. The goal orientation of a student helps to explain the behavior that the student exhibits when facing difficulty, but does not label the final achievement of the student. For example, if a student is Mastery-Approach oriented, it does not necessarily mean that the student has reached mastery of the skill or knowledge. In our case, we believe the achievement-goal orientation of the student can convey the tendency to pursue (or avoid) more diverse (and more difficult) exercises, which cover a more heterogeneous space of concepts and thus contribute to better connected graphs.

4. EXPERIMENTS AND RESULTS

4.1 Dataset
We collected student data over three terms of a Java Programming course using the system: Fall 2013, Spring 2014, and Fall 2014. Since system usage was not mandatory, we wanted to exclude students who just tried the system while likely using other activities (not captured by the system) to practice Java programming. For this we looked at the distribution of distinct exercises attempted and excluded all students below the 1st quartile (14.5 distinct exercises attempted). This left 83 students for our analysis. In total, these students made 8,915 attempts to exercises. On average, students attempted about 55 (standard deviation SD=22) distinct exercises while making an average of 107 (SD=92) exercise attempts. On average, students covered about 63 concepts with SD=25 (i.e., succeeded in at least one exercise containing the concept), and covered about 773 concept pairs with SD=772 (i.e., succeeded in at least one exercise covering the concept pair). The average success rate (#correct attempts / #total attempts) across students is about 69% (SD=11%).

4.2 Graph Metrics and Learning
We compare graph metrics (Avg. Degree, Graph Density, Graph Diameter, Avg. Path Length and Avg. Node Weight) and measures of activity (Correct Attempts to Exercises, Distinct Correct Exercises, Overall Success Rate and Avg. Success Rate on Concepts) by computing the Kendall's τB correlation of each of these metrics with the students' grade in the programming course. Results are displayed in Table 1.

Table 1: Correlation between activity measures and grade and between graph metrics and grade. * significance at 0.1, ** significance at 0.05.

Measure of Activity               Corr. Coeff.   Sig. (p)
Correct Attempts to Exercises         .046         .553
Distinct Correct Exercises            .114         .147
Overall Success Rate                  .298         .000**
Avg. Success Rate on Concepts         .188         .016**

Graph Metric                      Corr. Coeff.   Sig. (p)
Average Degree                        .150         .055*
Graph Density                         .102         .190
Graph Diameter                        .147         .081
Average Path Length                   .152         .052*
Average Node Weight                   .201         .010**
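Correlations like those in Table 1 can be obtained with standard statistics libraries. The fragment below is a sketch of the procedure, assuming the per-student metrics and course grades sit in a pandas DataFrame whose column names are illustrative; it uses scipy's Kendall tau-b implementation, not the study's own scripts.

```python
# Illustrative sketch: Kendall tau-b correlation of each per-student metric
# with the course grade. Assumes a pandas DataFrame with one row per student.
import pandas as pd
from scipy.stats import kendalltau

METRICS = ["correct.attempts", "dist.correct.attempts", "success.rate",
           "avg.concept.succ.rate", "avg.degree", "density", "diameter",
           "avg.path.len", "avg.node.weight"]

def correlate_with_grade(df: pd.DataFrame, grade_col: str = "grade") -> pd.DataFrame:
    rows = []
    for m in METRICS:
        tau, p = kendalltau(df[m], df[grade_col])  # tau-b handles ties by default
        rows.append({"metric": m, "tau_b": round(tau, 3), "p": round(p, 3)})
    return pd.DataFrame(rows)
```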
Surprisingly, the plain Overall Success Rate (which considers neither the disaggregation into concepts nor any graph information) is better correlated with course grade than any other measure: students who succeed more frequently get, in general, better grades. Interestingly, both the Average Success Rate on Concepts and the Average Node Weight are significantly correlated with grade. The latter measure uses the graph information and presents a slightly better correlation than the former, which does not.

Among the other graph metrics, Average Degree and Average Path Length are marginally correlated with course grade (p-values less than 0.1). Although this is weak evidence, we believe we are on the right track. A higher Average Degree means a better connected graph, which follows our idea that highly connected nodes signal more knowledge. Average Path Length is more difficult to interpret. A higher Average Path Length means a less connected graph (which contradicts our assumption), but it can also indicate students reaching "rarer" concepts which appear in few, more difficult exercises and generally have longer shortest paths. We think that further exploration of metrics over sub-graphs (e.g., a graph for a specific topic), and further refinement of the approach to building edges (e.g., connecting only concepts that co-occur close to each other in the exercise), could help to clarify these results.

Figure 2 shows the graphs of two students who have a similar number of distinct exercises solved correctly but present different graph metrics and motivational profiles; their metrics are listed in Table 2. Student B has more edges, a lower diameter, higher density, and higher degree, and solved fewer questions more times. Student A presents a less connected graph although he/she solved more distinct questions (66 compared to 61 for Student B). Student B has a lower Mastery-Avoidance orientation score and lower Performance orientation scores than Student A, which could explain why Student B's work results in a better connected graph. Analyses of motivational factors are described in the following section.

Figure 2: Graph representation of two students.

Table 2: Graph metrics, measures of activity and motivational scores of 2 students.

                              Student A    Student B
Graph Density                   0.077        0.094
Graph Diameter                  2.85         2.00
Avg. Path Length                1.77         1.78
Avg. Degree                     8.64        10.55
Avg. Node Weight                0.49         0.51
Correct Attempts                  71           83
Dist. Correct Exercises           66           61
Overall Succ. Rate              0.82         0.76
Avg. Succ. Rate on Concepts     0.50         0.53
Mastery-Approach                0.83         0.78
Mastery-Avoidance               0.83         0.56
Performance-Approach            1.0          0.17
Performance-Avoidance           1.0          0.0
Grade (%)                       100           97

4.3 Metrics and Motivation
We now explore the relationship between motivational factors and the students' graphs. The idea is to see to which extent the motivational profile of the student explains the graph's shape. Step-wise regression models were used where the dependent variables are the graph metrics and the independent variables are the motivational factors. We found a significant model for the diameter of the graph (R² = 0.161, F = 6.523, p = 0.006) with the factors Mastery-Avoidance (B = 0.952, p = 0.001) and Mastery-Approach (B = −0.938, p = 0.006). Note the negative coefficient for Mastery-Approach and the positive coefficient for Mastery-Avoidance. As the Achievement-Goal Orientation framework suggests, Mastery-Approach oriented students are motivated to learn more, tend to explore more content, and do not give up easily when facing difficulties; Mastery-Avoidance students, on the other hand, do not cope well with difficulties and tend to give up. A possible explanation of the results is that, on the one hand, students with higher Mastery-Approach orientation are more likely to solve difficult questions which connect more, and more distant, concepts, which decreases the graph diameter; on the other hand, Mastery-Avoidance students avoid difficult exercises containing many concepts, thus making fewer connections and producing graphs with higher diameters. Correlations between graph metrics and motivational factors confirmed the relation between Mastery-Avoidance and Graph Diameter (Kendall's τB = 0.197, p = 0.030). Although these results are encouraging, they are not conclusive. For example, Mastery-Approach students might just do more work, not necessarily targeting difficult questions. More analysis is needed to explore these issues in depth.
                                                                  in the parser tree.) Another approach to keep more reliable
5. DISCUSSIONS AND CONCLUSIONS
In this paper we proposed a novel approach to representing the student model as a dynamic graph of concepts that become connected when the student succeeds in an assessment item containing the pair of concepts to be connected. The idea behind this approach is to strengthen the model for those concepts that are applied in more different contexts, i.e., in assessment items containing other, different concepts. We applied this approach to data on assessment items answered by real students and analyzed the graph properties, comparing them to several performance measures, such as course grade, as well as to motivational factors. The results showed that this idea is potentially a good indicator of student knowledge, but further refinement of the approach is needed. We used several measures of the built graphs as descriptors of student knowledge level, and we found that a metric aggregating the success rates of the edges to the level of concepts (nodes) is highly correlated with course grade, although it does not beat the plain overall success rate of the student on assessment items.

In future work, we plan to repeat our analysis using more reliable approaches to constructing the knowledge graph. One idea is to use the rich information provided by the parser (the mapping between exercises and concepts) to ensure that each new link connects concepts that interact considerably in the program code. This could be done by controlling the proximity of the concepts in the question code (e.g., only considering co-occurrence when concepts are close to each other in the parse tree). Another approach to keeping more reliable edges is to consider only a subset of important concepts for each problem, using feature selection techniques. We also plan to perform analyses of sub-graphs targeting specific "zones" of knowledge, for example, a partial graph with only the concepts that belong to a specific topic, or the concepts that are prerequisites of a specific concept. Another interesting idea relates to the recommendation of content: guide the student to questions that will connect the isolated parts of the knowledge graph or minimize the average path length of the graph. Along the same lines, the analysis of the graph's shortest paths and overall connectivity can help in designing assessment items that better connect distant concepts.
6. REFERENCES
[1] N. Belacel, G. Durand, and F. Laplante. A binary integer programming model for global optimization of learning path discovery. In Workshop on Graph-Based Educational Data Mining.
[2] P. Brusilovsky and E. Millán. User models for adaptive hypermedia and adaptive educational systems. In P. Brusilovsky, A. Kobsa, and W. Nejdl, editors, The Adaptive Web, volume 4321 of Lecture Notes in Computer Science, chapter 1, pages 3–53. Springer Berlin Heidelberg, Berlin, Heidelberg, 2007.
[3] V. Cateté, D. Hicks, C. Lynch, and T. Barnes. Snag'em: Graph data mining for a social networking game. In Workshop on Graph-Based Educational Data Mining, volume 6, page 10.
[4] C. Conati, A. Gertner, and K. VanLehn. Using Bayesian networks to manage uncertainty in student modeling. User Modeling and User-Adapted Interaction, 12(4):371–417, 2002.
[5] R. Dekel and Y. Gal. On-line plan recognition in exploratory learning environments. In Workshop on Graph-Based Educational Data Mining.
[6] M. C. Desmarais and M. Gagnon. Bayesian student models based on item to item knowledge structures. Springer, 2006.
[7] P. Dolog, N. Henze, W. Nejdl, and M. Sintek. The Personal Reader: Personalizing and enriching learning resources using semantic web technologies. In P. De Bra and W. Nejdl, editors, Adaptive Hypermedia and Adaptive Web-Based Systems, volume 3137 of Lecture Notes in Computer Science, pages 85–94. Springer Berlin Heidelberg, 2004.
[8] A. J. Elliot and K. Murayama. On the measurement of achievement goals: Critique, illustration, and application. Journal of Educational Psychology, 100(3):613, 2008.
[9] B. Hoser, A. Hotho, R. Jäschke, C. Schmitz, and G. Stumme. Semantic network analysis of ontologies. Springer, 2006.
[10] R. Hosseini and P. Brusilovsky. JavaParser: A fine-grain concept indexing tool for Java exercises. In The First Workshop on AI-supported Education for Computer Science (AIEDCS 2013), pages 60–63, 2013.
[11] I.-H. Hsiao, S. Sosnovsky, and P. Brusilovsky. Guiding students to the right questions: adaptive navigation support in an e-learning system for Java programming. Journal of Computer Assisted Learning, 26(4):270–283, 2010.
[12] S. Jiang, S. M. Fitzhugh, and M. Warschauer. Social positioning and performance in MOOCs. In Workshop on Graph-Based Educational Data Mining, page 14.
[13] T. Käser, S. Klingler, A. G. Schwing, and M. Gross. Beyond knowledge tracing: Modeling skill topologies with Bayesian networks. In Intelligent Tutoring Systems, pages 188–198. Springer, 2014.
[14] V. Kovanovic, S. Joksimovic, D. Gasevic, and M. Hatala. What is the source of social capital? The association between social network position and social presence in communities of inquiry. 2014.
[15] E. Millán, T. Loboda, and J. L. Pérez-de-la-Cruz. Bayesian networks for student model engineering. Computers & Education, 55(4):1663–1683, 2010.