<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Proceedings of the 1st International Workshop on eLearning Approaches for the Linked Data Age (Linked Learning 2011)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Stefan Dietze</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mathieu d'Aquin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dragan Gasevic</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Miguel-Angel Sicilia</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>The Open University</institution>
          , UK;
          <institution>Athabasca University</institution>
          , Canada;
          <institution>University of Alcalá</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2011</year>
      </pub-date>
      <volume>2</volume>
      <fpage>2</fpage>
      <lpage>3</lpage>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Acknowledgements</title>
      <p>The workshop would not have been possible without the contributions of many people and
institutions. We are very thankful to the organisers of the ESWC 2011 conference for
providing us with the opportunity to organise the workshop, for their excellent
collaboration, and for looking after many important logistical issues. We are also very
grateful to the members of the program committee for their commitment to reviewing the
papers and ensuring the good quality of the workshop program. We also thank the authors
for their invaluable contributions to the workshop through writing, revising and presenting their
papers. Of course, great appreciation for her time and expertise goes to our keynote
speaker, Vania Dimitrova. We also want to express our sincere gratitude to the publishers
of CEUR for publishing the Linked Learning 2011 workshop proceedings, to the
European Commission (EC) and the EC-funded research project mEducator for
sponsoring the best paper award, and to the EasyChair developers for supporting the
submission and review process.</p>
      <p>May 2011,
Stefan Dietze, Mathieu d'Aquin, Dragan Gasevic, Miguel-Angel Sicilia</p>
    </sec>
    <sec id="sec-2">
      <title>Program Committee</title>
      <sec id="sec-2-1">
        <p>Lora Aroyo, Free University of Amsterdam, The Netherlands
Soeren Auer, University of Leipzig, Germany
Panagiotis Bamidis, Aristotle University of Thessaloniki, Greece
Charalampos Bratsas, Aristotle University of Thessaloniki, Greece
Dan Brickley, W3C &amp; Free University of Amsterdam, The Netherlands
Vania Dimitrova, University of Leeds, UK
John Domingue, The Open University, UK &amp; Semantic Technologies Institute
International, Austria.</p>
        <p>Nikolas Dovrolis, Democritus University of Thrace, Greece
Marek Hatala, Simon Fraser University, Canada
Jelena Jovanovic, University of Belgrade, Serbia
Eleni Kaldoudi, Democritus University of Thrace, Greece
Tomi Kauppinen, University of Münster, Germany
Carsten Keßler, University of Münster, Germany
Effie Lai-Chong Law, Leicester University, UK &amp; ETH, Zurich, Switzerland
Nikos Manouselis, Greek Research and Technology Network, Greece
Dave Millard, University of Southampton, UK
Evangelia Mitsopoulou, St George's University London, UK
Wolfgang Nejdl, L3S Research Center, Germany
Mikael Nilsson, Royal Institute of Technology, Sweden
Carlos Pedrinaci, The Open University, UK
Davide Taibi, Institute for Educational Technologies, Italian National Research
Council, Italy.</p>
        <p>Vlad Tanasescu, University of Edinburgh, UK
Fridolin Wild, The Open University, UK
Martin Wolpers, Fraunhofer FIT.ICON, Germany</p>
        <p>Hong Qing Yu, The Open University, UK</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Reviewers</title>
      <p>Dhaval Thakker, University of Leeds, UK</p>
      <sec id="sec-3-1">
        <sec id="sec-3-1-2">
          <title>The OU Linked Open Data: Production and Consumption</title>
          <p>Fouad Zablith, Miriam Fernandez and Matthew Rowe
Knowledge Media Institute (KMi), The Open University
Walton Hall, Milton Keynes, MK7 6AA, United Kingdom</p>
          <p>{f.zablith, m.fernandez, m.c.rowe}@open.ac.uk
Abstract. The aim of this paper is to introduce the current efforts
toward the release and exploitation of The Open University's (OU) Linked
Open Data (LOD). We introduce the work that has been done within
the LUCERO project in order to select, extract and structure subsets
of information contained within the OU data sources and migrate and
expose this information as part of the LOD cloud. To show the potential
of such exposure we also introduce three different prototypes that exploit
this new educational resource: (1) the OU expert search system, a tool
focused on finding the best experts for a certain topic within the OU
staff; (2) the Buddy Study system, a tool that relies on Facebook
information to identify common interests among friends and recommend
potential courses within the OU that `buddies' can study together, and; (3)
Linked OpenLearn, an application that enables exploring linked courses,
Podcasts and tags to OpenLearn units. Its aim is to enhance the
browsing experience for students, by detecting relevant educational resources
on the fly while reading an OpenLearn unit.</p>
          <p>
            1 Introduction
The explosion of the Linked Open Data (LOD) movement in the last few years
has produced a large number of interconnected datasets containing information
about a large variety of topics, including geography, music and research
publications among others. [
            <xref ref-type="bibr" rid="ref13 ref15 ref2">2</xref>
            ]
          </p>
          <p>
            The movement is receiving worldwide support from public and private sectors
like the UK1 and US2 governments, international media outlets, such as the
BBC [
            <xref ref-type="bibr" rid="ref18 ref5">5</xref>
            ] or the New York Times [
            <xref ref-type="bibr" rid="ref1 ref14">1</xref>
            ], and companies with a social base like
Facebook.3 Such organisations are supporting the movement either by releasing
1 http://data.gov.uk
2 http://www.data.gov/semantic/index
3 http://developers.facebook.com/docs/opengraph
large datasets of information or by generating applications that exploit it to
connect data across different locations.
          </p>
          <p>Despite its relevance and the support received in the last few years, very few
pieces of work have either released or exploited LOD in the context of education.
One of these few examples is the DBLP Bibliography Server Berlin,4 which
provides bibliographic information about scientific papers. However, education is
one of the main sectors where the application of LOD technologies
could have a high impact.</p>
          <p>When performing learning and investigation tasks, students and academics
have to go through the tedious and laborious task of browsing different
information resources, analysing them, extracting their key concepts and mentally
linking data across resources to generate their own conceptual schema about the
topic. Educational resources are generally duplicated and dispersed among
different systems and databases, and the key concepts within these resources, as well
as their inter- and intra-connections, are not explicitly shown to users. We believe
that the application of LOD technologies within and across educational
institutions can explicitly generate the necessary structure and connections among
educational resources, providing better support to users in their learning and
investigation tasks.</p>
          <p>In this context, the paper presents the work that has been done within The
Open University (OU) towards the release and exploitation of several educational
and institutional resources as part of the LOD cloud. First, we introduce the
work that has been done within the LUCERO project to select, extract and
structure subsets of OU information as LOD. Second, we present the potential
of this data exposure and interlinking by presenting three different prototypes:
(1) the OU expert search system, a tool focused on finding the best experts for a
certain topic within the OU staff; (2) the Buddy Study system, a tool focused on
exploiting Facebook information to identify common interests among friends and
recommend potential courses within the OU that `buddies' can study together,
and; (3) Linked OpenLearn, an application that enables exploring linked courses,
Podcasts and tags to OpenLearn units.</p>
          <p>The rest of the paper is organised as follows: Section 2 presents the state of the
art in the areas of LOD within the education context. Section 3 presents the work
that has been done within the LUCERO project to expose OU data as part of
the LOD cloud. Sections 4, 5 and 6 present example prototype applications that
consume the OU's LOD for Expert Search, Buddy Study and Linked OpenLearn
respectively. Section 7 describes the conclusions that we have drawn from this
work, and Section 8 presents our plans for future work.</p>
          <p>
            2 Related Work
While LOD is being embraced in various sectors as mentioned in the previous
section, we are currently witnessing a substantial increase in universities adopting
4 http://www4.wiwiss.fu-berlin.de/dblp/
the Linked Data initiative. For example, the University of Sheffield's
Department of Computer Science5 provides a Linked Data service describing research
groups, staff and publications, all semantically linked together [
            <xref ref-type="bibr" rid="ref19 ref6">6</xref>
            ]. Similarly the
University of Southampton has recently announced the release of their LOD
portal (http://data.southampton.ac.uk), where more data will become available in
the near future. Furthermore, the University of Manchester's library catalogue
records can now be accessed in RDF format6. In addition, other universities are
currently working on transforming and linking their data: University of
Bristol,7 Edinburgh (e.g., the university's buildings information is now generated
in LOD8), and Oxford9. Furthermore the University of Muenster announced
a funded project, LODUM, the aim of which is to release the university's
research information as Linked Data. This includes information related to people,
projects, publications, prizes and patents.10
          </p>
          <p>With the increase of the adoption of LOD publishing standards, the exchange
of data will become much easier, not only within one university, but also across
LOD-ready ones. This enables, for example, the comparison of specific
qualifications offered by different universities in terms of courses required, pricing and
availability.</p>
          <p>3 The Open University Linked Open Data
The Open University is the first UK University to expose and publish its
organizational information in LOD.11 This is accomplished as part of the LUCERO
project (Linking University Content for Education and Research Online)12, where
the data extraction, transformation and maintenance are performed. This
enables having multiple hybrid datasets accessible in an open way through the
online access point: http://data.open.ac.uk.</p>
          <p>The main purpose of releasing all this data as part of the LOD cloud is that
members of the public, students, researchers and organisations will be able to
easily search, extract and, more importantly, reuse the OU's information and
data.</p>
          <p>3.1 Creating the OU LOD
Detailed information about the process of LOD generation within the OU is
available at the LUCERO project website.12 We briefly discuss in this section
5 http://data.dcs.shef.ac.uk
6 http://prism.talis.com/manchester-ac
7 https://mmb.ilrt.bris.ac.uk/display/ldw2011/University+of+Bristol+data
8
http://ldfocus.blogs.edina.ac.uk/2011/03/03/university-buildings-as-linked-datawith-scraperwiki
9 http://data.ox.ac.uk
10 http://www.lodum.de
11 http://www3.open.ac.uk/media/fullstory.aspx?id=20073
12 http://lucero-project.info
the steps involved in the creation of Linked Data. To achieve that, the main
requirement is to have a set of tools that generate RDF data from existing data
sources, load such RDF into a triple store, and make it accessible through a web
access point.</p>
          <p>Given the fact that the OU's data repositories are scattered across many
departments, using different platforms, and subject to constant update, a
well-defined workflow needs to be put in place. The initial workflow is depicted in
Figure 1, and is designed to be efficient in terms of time, flexibility and
reusability. The workflow is component-based, and the dataset characteristics played
a major role in the implementation and setup of the components. For
example, when the data sources are available in XML format, the XML updater will
handle the process of identifying new XML entities and pass them to the RDF
extractor, where the RDF data is generated, ready to be added to (or
removed from) the triple store. Finally the data is exposed to the web, and can be
queried through a SPARQL endpoint.13</p>
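          <p>To make the workflow concrete, the sketch below walks one hypothetical XML course record through the extract-and-load steps. The element names, course codes and URI pattern are invented for illustration; the real LUCERO components operate over the OU's actual schemas and a proper triple store.</p>

```python
import xml.etree.ElementTree as ET

# A hypothetical course record, standing in for one OU XML data source;
# element names and codes are invented for this example.
SAMPLE_XML = """
<courses>
  <course code="TU100"><title>My digital life</title></course>
  <course code="M269"><title>Algorithms, data structures and computability</title></course>
</courses>
"""

def extract_triples(xml_text):
    """The RDF-extractor step: turn XML entities into RDF-style triples."""
    triples = []
    for course in ET.fromstring(xml_text).findall("course"):
        uri = "http://data.open.ac.uk/course/" + course.get("code").lower()
        triples.append((uri, "rdf:type", "aiiso:Module"))
        triples.append((uri, "dcterms:title", course.findtext("title")))
    return triples

# The triple-store step: here just an in-memory set a SPARQL endpoint would serve.
store = set(extract_triples(SAMPLE_XML))
```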
          <p>
            The scheduler component takes care of initiating the extraction/update
process at specific time intervals. This update process is responsible for checking
what was added, modified, or removed from the dataset, and accordingly
applies the appropriate action to the triple store. Having such a process in place
is important in the OU scenario, where the data sources are continuously
changing. Another point worth mentioning is the linking process, which links entities
coming from different OU datasets (e.g., courses mentioned in Podcast data and
library records), in addition to linking external entities (e.g., course offerings in
a GeoNames-defined location14). To interlink OU entities,
independently of the dataset from which the extraction is done, we rely on an Entity Name
System, which generates a unique URI (e.g., based on a course code)
for each specified entity (an idea inspired by the Okkam project15).
Such unique URIs enable seamless integration and extraction of linked entities
within common objects that exist in the triple store and beyond, one of the core
Linked Data requirements [
            <xref ref-type="bibr" rid="ref16 ref3">3</xref>
            ].
          </p>
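          <p>The role of the entity-naming step can be illustrated in a few lines. The URI pattern below is an assumption for the example, not the exact scheme used by data.open.ac.uk; the point is that every dataset obtains the same URI for the same course code.</p>

```python
def mint_uri(entity_type, key):
    """Mint one canonical URI per entity, whichever dataset mentions it.
    The URI pattern is illustrative, not the exact data.open.ac.uk scheme."""
    return "http://data.open.ac.uk/%s/%s" % (entity_type, key.strip().lower())

# The same course code seen in Podcast data and in a library record
# resolves to a single node, so the two datasets link up automatically.
podcast_ref = mint_uri("course", "AA100")
library_ref = mint_uri("course", " aa100 ")
```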
          <p>3.2 The Data
Data about the OU courses, Podcasts and academic publications is already
available to be queried and explored, and the team is now working to bring
together educational and research content from the university's campus
information, OpenLearn (already available for testing purposes) and library
material. More concretely, data.open.ac.uk offers a simple browsing mechanism, and
a SPARQL endpoint to access the following data:
13 http://data.open.ac.uk/query
14 http://www.geonames.org
15 http://www.okkam.org</p>
          <p>
            Fig. 1. The LUCERO Workflow
- The Open Research Online (ORO) system16, which contains information
about academic publications of OU research. For that, the Bibliographic
Ontology (bibo)17 is mainly used to model the data.
- The OU Podcasts,18 which contain Podcast material related to courses and
research interests. A variety of ontologies are used to model this data,
including the W3C Media Ontology,19 in addition to a specialised SKOS20
representation of the iTunesU topic categories.
- A subset of the courses from the Study at the OU website,21 which provides
course information and registration details for students. We model this data
by relying on the Courseware,22 AIISO23 and GoodRelations ontologies [
            <xref ref-type="bibr" rid="ref17 ref4">4</xref>
            ],
in addition to extensions that reflect OU-specific information (e.g., course
assessment types).
          </p>
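          <p>As an illustration of how a client might consume this data, the snippet below builds a GET request for the SPARQL endpoint mentioned above. The query itself is a sketch: the class and property names follow the ontologies listed, but the live dataset's exact terms may differ.</p>

```python
from urllib.parse import urlencode

# An illustrative query: list course titles. The AIISO class and dcterms
# property are assumptions based on the ontologies named in the text.
QUERY = """PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?course ?title WHERE {
  ?course a <http://purl.org/vocab/aiiso/schema#Module> ;
          dcterms:title ?title .
} LIMIT 10"""

def endpoint_request(query, endpoint="http://data.open.ac.uk/query"):
    """Build the GET URL a client would send to the SPARQL endpoint."""
    return endpoint + "?" + urlencode({"query": query})

url = endpoint_request(QUERY)
```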
          <p>Furthermore, there are other sources of data that are currently being
processed. This includes, for example, the OU list of provided publications, the
16 http://oro.open.ac.uk
17 http://bibliontology.com/specification
18 http://podcast.open.ac.uk
19 http://www.w3.org/TR/mediaont-10
20 http://www.w3.org/2004/02/skos
21 http://www3.open.ac.uk/study
22 http://courseware.rkbexplorer.com/ontologies/courseware
23 http://vocab.org/aiiso/schema
library catalogue, and public information about locations on the OU campus
(e.g., buildings) and university staff.</p>
          <p>4 The OU Expert Search
Expert search can be defined as the task of identifying people who have relevant
expertise in a topic of interest. This task is key for every enterprise, but especially
for universities, where interdisciplinary collaboration among research areas is
considered a high success factor. Typical user scenarios in which expert search is
needed within the university context include: a) finding colleagues from whom
to learn, or with whom to discuss ideas about a particular subject; b) assembling
a consortium with the necessary range of skills for a project proposal, and; c)
finding the most suitable reviewers to establish a program committee.</p>
          <p>
            As discussed by Yimam-Seid and Kobsa [
            <xref ref-type="bibr" rid="ref20 ref7">7</xref>
            ], developing and manually
updating an expert system database is time consuming and hard to maintain.
However, valuable information can be identified from documents generated within
an organisation [
            <xref ref-type="bibr" rid="ref21 ref8">8</xref>
            ]. Automating expert finding from such documents provides
an efficient and sustainable approach to expertise discovery.
          </p>
          <p>OU researchers, students and lecturers constantly produce a plethora of
documents, including for example conference articles, journal papers, theses, books,
reports and project proposals. As part of the LUCERO project, these
documents have been pre-processed and made accessible as LOD. The purpose of
this application is therefore to exploit such information so that OU students
and researchers can find the most appropriate experts starting from a topic of
interest.24</p>
          <p>4.1 Consumed Data
This application is based on two main sources of information: (a) LOD from the
Open Research Online system, and (b) additional information extracted from
the OU staff directory. The first information source is exploited in order to
extract the most suitable experts on a certain topic. The second information
source complements the previously recommended set of experts by providing their
corresponding contact information within the OU. Note that sometimes, ex-OU
members and external collaborators of OU researchers may appear in the ranking
of recommended experts. However, for those individuals, no contact information
is provided, indicating that those experts are not part of the OU staff.</p>
          <p>As previously mentioned, the information provided by Open Research
Online contains data that describe publications originating from OU researchers.
In particular, among the properties provided for each publication, this system
exploits the following ones: a) the title, b) the abstract, c) the date, d) the
authors and, e) the type of publication, i.e., conference paper, book, thesis, journal
paper, etc.
24 The OU Expert Search is accessible to OU staff at:
http://kmiweb15.open.ac.uk:8080/ExpertSearchClient</p>
          <p>To exploit this information the system performs two main steps. Firstly, when
the system receives the user's query, i.e., the area of expertise for which a set of
experts needs to be found (e.g., "semantic search"), the system uses the title and
abstract of the publications to find the top-n documents related to that area of
expertise. At the moment n has been empirically set to 10.</p>
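          <p>The first step can be sketched as follows. This is not Lucene itself but a toy scorer that mimics the idea: match query terms against title and abstract, weight the title higher, and keep the top-n documents. The sample publications are invented for the example.</p>

```python
from collections import Counter

def score(query, doc):
    """A crude stand-in for Lucene's relevance score: count query-term hits,
    weighting title matches above abstract matches."""
    terms = query.lower().split()
    title = Counter(doc["title"].lower().split())
    abstract = Counter(doc["abstract"].lower().split())
    return sum(2 * title[t] + abstract[t] for t in terms)

def top_n(query, docs, n=10):
    """Return the n best-matching publications for the user's query."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    return [d for d in ranked if score(query, d) > 0][:n]

publications = [
    {"title": "semantic search evaluation", "abstract": "five years of semantic search"},
    {"title": "ontology matching", "abstract": "alignment techniques"},
]
```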
          <p>Secondly, once the top-n documents have been selected, the authors of these
documents are extracted and ranked according to five different criteria: (a)
original score of their publications, (b) number of publications, (c) type of
publications, (d) date of the publications and, (e) other authors of the publication.</p>
          <p>The initial score of the publications is obtained by matching the user's
keyword query against the title and the abstract of the OU publications.
Publications that provide a better match within their title and abstract against the
keywords of the query are ranked higher. This matching is performed and computed
using the Lucene25 text search engine. Regarding the number of publications,
authors with a higher number of publications (among the top-n previously
retrieved) are ranked higher. Regarding the type of publication, theses are ranked
first, then books, then journal papers, and finally conference articles. The
rationale behind this is that an author writing a thesis or a book holds a higher level
of expertise than an author who has only written conference papers. Regarding
the date of the publication, we consider the `freshness' of the publications and
the continuity of an author's publications within the same area. More recent
publications are ranked higher than older ones, and authors publishing in consecutive
years about a certain topic are also ranked higher than authors that have
sporadic publications about the topic. Regarding other authors, experts sharing a
publication with fewer colleagues are ranked higher. The rationale behind this
is that the total knowledge of a publication should be divided among the
expertise brought into it, i.e., the number of authors. Additionally we also consider
the order of authors in the publication. Main authors are considered to have a
higher level of expertise and are therefore ranked higher.</p>
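          <p>One way the five criteria could be folded into a single score per author is sketched below. The weights and the additive form are assumptions for illustration; the paper does not publish its exact ranking formula.</p>

```python
TYPE_WEIGHT = {"thesis": 4, "book": 3, "journal": 2, "conference": 1}

def author_scores(publications):
    """Fold the five criteria into one additive score per author.
    The weights are illustrative, not the system's actual formula."""
    scores = {}
    for pub in publications:
        n_authors = len(pub["authors"])
        for position, author in enumerate(pub["authors"]):
            s = pub["match_score"]              # (a) initial Lucene-style match score
            s += TYPE_WEIGHT[pub["type"]]       # (c) thesis > book > journal > conference
            s += pub["year"] - 2000             # (d) fresher publications count more
            s /= n_authors                      # (e) credit is split among co-authors
            s /= position + 1                   # main (first) authors rank higher
            scores[author] = scores.get(author, 0.0) + s  # (b) sums over publications
    return scores

pubs = [
    {"authors": ["A", "B"], "match_score": 2.0, "type": "journal", "year": 2010},
    {"authors": ["A"], "match_score": 1.0, "type": "conference", "year": 2008},
]
```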
          <p>To perform the first step (i.e., retrieving the top-n documents related to
the user's query) we could have used the SPARQL endpoint and, at run-time,
searched for those keywords within the title and abstract properties of the
publications. However, to speed the search process up, and to enhance the
query-document matching process, we have decided to pre-process and index the title
and abstract information of the publications using the popular Lucene search
engine. In this way, the fuzzy and spell-checking query processing and
ranking capabilities of the Lucene search engine are exploited to optimise the initial
document search process.</p>
          <p>To perform the second step, once the top-n documents have been selected,
the rest of the properties of the document (authors, type, and date) are obtained
at run-time using the SPARQL endpoint.</p>
          <p>Finally, once the set of authors has been ranked, we look for them in the OU
staff directory (using the information about their first name and last name). If the
author is included in the directory, the system provides related information about
25 http://lucene.apache.org/java/docs/index.html
the job title, department within the OU, e-mail address and phone number.
By exploiting the OU staff directory we are able to identify which experts are
members of the OU and which of them are external collaborators, or former members
no longer working for the institution.</p>
          <p>Without the structure and conceptual information provided by the OU LOD,
the implementation of the previously described ranking criteria, as well as the
interlinking of data with the OU staff directory, would have required a huge
data pre-processing effort. The OU LOD provides the information with a
fine-grained structure that facilitates the design of ranking criteria based on multiple
concepts, as well as the interlinking of information with other repositories.</p>
          <p>4.2 System Implementation
The system is based on a lightweight client-server architecture. The back end
(or server side) is implemented as a Java Servlet, and accesses the OU LOD
information by means of HTTP requests to the SPARQL endpoint. Some of
the properties provided by the LOD information (more particularly the title
and the abstract of the publications) are periodically indexed using Lucene to
speed up and enhance the search process by means of the exploitation of its
fuzzy and spell-checking query processing and ranking capabilities. The rest of
the properties (authors, date, and type of publications) are accessed at run time,
once the top-n publications have been selected.</p>
          <p>The front end is a thin client implemented as a web application using only
HTML, CSS and Javascript (jQuery).26 The client doesn't handle any processing
of the data; it only takes care of the visualisation of the search results and the
search input. It communicates with the back end by means of an HTTP request
that passes the user's query as a parameter and retrieves the ranking of authors
and their corresponding associated information by means of a JSON object.</p>
          <p>4.3 Example and Screenshots
In this section, we provide an example of how to use the OU expert search
system. As shown in Figure 2, the system receives as input the keyword query
"semantic search", the topic for which the user aims to find an expert. As
a result, the system provides a list of authors ("Enrico Motta", "Vanessa Lopez",
etc.), who are considered to be the top OU experts in the topic. For each expert,
if available, the system provides the contact details (department, e-mail, phone
extension) and the top publications about the topic. For each publication, the
system shows its title, the type of document, and its date. If the user hovers the
cursor over the title of the publication, the summary is also visualised
(see the example in Figure 2 for the publication "Reflections of five years of
evaluating semantic search systems"). In addition the title of the publication
also constitutes a link to its information in the open.ac.uk domain.
26 http://www.jquery.com</p>
          <p>Fig. 2. The OU Expert Search system</p>
          <p>5 Buddy Study
The Open University is a well-established institution in the United Kingdom,
offering distance-learning courses covering a plethora of subject areas. A key factor
in enabling learning and understanding of course materials is support for
students, provided in the form of an on-hand tutor for each studied module, where
interactions with the tutor are facilitated via the Web and/or email exchanges.
An alternative method of support could be provided through peers, in a similar
manner to a classroom environment, where working together and explanations
of problems from disparate viewpoints enhances understanding.</p>
          <p>Based on this thesis, Buddy Study27 combines the popular social networking
platform Facebook with the OU Linked Data service, the goal being to suggest
learning partners (so-called `Study Buddies') from a person's social network
on the site, together with possible courses that could be pursued together.
5.1
Buddy Study combines information extracted from Facebook with Linked Data
offered by The Open University, where the former contains `wall posts' (messages
posted publicly on a person's profile page) and comments on such wall
posts, while the latter contains structured, machine-readable information
describing courses offered by The Open University.
27 http://www.matthew-rowe.com/BuddyStudy</p>
          <p>Combining the two information sources, in the form of a `mashup', is
performed using the following approach. First the user logs into the application
(using Facebook Connect) and grants access to their information. The
application then extracts the most recent n wall posts and the comments on those
posts (n can be varied, thereby affecting the later recommendations). Given the
extracted content, cleaning is then performed by removing all the stop words,
thus reducing the wall posts and comments to their basic terms.</p>
          <p>A bag of words model is compiled for each person in the user's social network
as follows: for each wall post or comment posted by a given person, all the terms
are placed in the bag, maintaining duplicates and therefore frequencies. This
model maintains information about the association between a user and his/her social
network members in the form of shared terms. A bag of words model is then
compiled for each OU course in a similar manner: first we query the SPARQL
endpoint of the OU's Linked Data asking for the title and description of each
course. For the returned information, stop words are removed and the title and
description (containing the remaining terms) are then used to build the bag
of words model for the course.</p>
          <p>The goal of Buddy Study is to recommend study partners to support course
learning. Therefore we compare the bag of words model of each person with
the bag of words model of each course, recording the frequency and terms that
overlap. The user's social network members are then ranked based on the number
of overlapping terms, the intuition being that the greater the number of common
terms with courses, the greater the likelihood of a course being correlated with
the user. Varying n will therefore affect this ranking, given that the inclusion
of a greater number of posts will increase the number of possible study partners,
while smaller values of n will yield social network members with whom the user
has interacted more recently. Variance of this parameter is provided in the application.</p>
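          <p>The bag-of-words construction and the overlap count described above can be sketched as follows; the stop-word list and the sample posts and course description are invented for the example.</p>

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "and", "to", "i", "is", "of", "with", "more", "this"}  # tiny illustrative list

def bag_of_words(texts):
    """Reduce a person's posts (or a course title plus description) to term counts."""
    bag = Counter()
    for text in texts:
        bag.update(w for w in re.findall(r"[a-z']+", text.lower())
                   if w not in STOP_WORDS)
    return bag

def overlap(person_bag, course_bag):
    """Shared term occurrences between a friend's posts and a course description."""
    return sum((person_bag & course_bag).values())

friend = bag_of_words(["Playing with the Facebook API today", "More API info"])
course = bag_of_words(["Web technologies", "This course covers the Facebook API and web info"])
```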
          <p>The application is not finished yet; we still need to recommend possible
courses that could be studied with each possible study buddy. This is performed
in a similar fashion, by comparing the bag of words model of the social network
member with the model of each course, counting the frequencies of overlapping
terms for each course, and then ranking accordingly. Due to space restrictions,
and to avoid information overload, we only show the top-10 courses. For each
social network user, and for each course that is suggested, Buddy Study displays
the common terms, thereby providing the reasons for the course suggestion.</p>
          <p>If for a moment we assume a scenario where Linked Data is not provided by
the OU, then the function of Buddy Study could, in theory, continue by
consuming information provided in an alternative form. However, this application
forms the prototype upon which future work (explained in greater detail
within the conclusions of this paper) is to be based. Such advancements will
utilise concepts for study partner recommendation rather than merely terms;
the reasoning behind this extension is to alleviate the noisy form that terms
take. By leveraging concepts from collections of terms, recommendations would
be generated that are more accurate and better suited to the user in question.
Without Linked Data, this is not possible.
The application is live and available online at the previously cited URL. It is built
using PHP, and uses the Facebook PHP Software Development Kit (SDK)28.
Authentication is provided via Facebook Connect,29 enabling access to Facebook
information via the Graph API. The ARC2 framework30 is implemented to query
the remote SPARQL endpoint containing The Open University's Linked Data,
and parse the returned information accordingly.</p>
          <p>Example and Screenshots
To ground the use of Buddy Study, Figure 3 shows an example screenshot from
the application when recommending study partners for Matthew Rowe (one
of the authors of this paper). At this rank position in the results, the possible
study mate is shown together with the courses that could be studied together.
The courses are hyperlinked to their resource within the OU Linked Open Data
service, and the terms that correlate with each course are shown in the brackets
that follow it. In this instance the top-ranked course is identified by the common
terms 'API' and 'Info'.
The Open University offers a set of free learning material through the OpenLearn
website.31 Such material covers various topics ranging from Arts32 to Sciences
and Engineering.33 In addition, the OU has other learning resources
published in the form of Podcasts, along with courses offered at specific presentations
during the year. While all these resources are accessible online, connections are
28 https://github.com/facebook/php-sdk
29 http://developers.facebook.com/docs/authentication
30 http://arc.semsol.org
31 http://openlearn.open.ac.uk
32 OpenLearn unit example in Arts: http://data.open.ac.uk/page/openlearn/a216 1
33 A list of units and topics is available at: http://openlearn.open.ac.uk/course
not always explicitly available, making it hard for students to easily exploit all
the available resources. For example, while there exist links between specific
Podcasts and related courses, such links do not exist between OpenLearn units
and Podcasts. This leaves it to the user to infer and find the appropriate
material relevant to the topic of interest.</p>
          <p>Linked OpenLearn34 is an application that enables exploring courses,
Podcasts and tags linked to OpenLearn units. It aims to facilitate the browsing
experience for students, who can identify relevant material on the spot without
leaving the OpenLearn page. With this in place, students are able, for example,
to easily find a linked Podcast and play it directly, without having to go through
the Podcast website.
6.1 Consumed Data</p>
          <p>
Linked OpenLearn relies on The Open University's Linked Data to achieve what
was previously considered very costly to do. Within large organizations, it is very
common to have systems developed by different departments, creating a set of
disconnected data silos. This was the case for Podcasts and OpenLearn units at
the OU. While courses were initially linked to both Podcasts and OpenLearn in
their original repositories, it was hard in practice to generate the links between
Podcasts and OpenLearn material. However, with the deployment of Linked
Data, such links are made possible through the use of coherent and common
URIs for the represented entities.</p>
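The silo-bridging role of common URIs can be illustrated with a small sketch (Python for illustration; the URIs and data shapes are made up):

```python
def link_via_course(podcasts, units):
    # Join two "silos" on the course URIs they both reference.
    # Each input maps an item to the set of course URIs it links to;
    # shared URIs yield podcast-unit links neither silo stores directly.
    links = []
    for podcast, p_courses in podcasts.items():
        for unit, u_courses in units.items():
            shared = p_courses & u_courses
            if shared:
                links.append((podcast, unit, sorted(shared)))
    return links
```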
          <p>To achieve our goal of generating relevant learning material, we make use
of the courses, Podcasts, and OpenLearn datasets in data.open.ac.uk. As a first
step, while the user is browsing an OpenLearn unit, the system identifies the
unique reference number of the unit from the URL. This unique
number is then used in the query passed to the OU Linked Data SPARQL endpoint
(http://data.open.ac.uk/query) to generate the list of related courses, including
their titles and links to the Study at the OU pages.</p>
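This first step can be sketched as a query builder (Python for illustration; the `ex:` property names are hypothetical placeholders, not the real data.open.ac.uk vocabulary):

```python
def related_courses_query(unit_id):
    # Build the first-step SPARQL query: given an OpenLearn unit's
    # reference number (taken from the page URL), ask for related
    # courses and their titles.
    return f"""
SELECT ?course ?title WHERE {{
  ?unit ex:unitReference "{unit_id}" .    # hypothetical property
  ?course ex:relatesToUnit ?unit .        # hypothetical property
  ?course ex:title ?title .
}}"""
```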
          <p>In the second step, another query is sent to retrieve the list of Podcasts related
to the courses fetched above. At this level we get the Podcasts' titles, as well
as their corresponding downloadable media material (e.g., video or audio files),
which enables users to play the content directly within the application. Finally,
the list of related tags is fetched, along with an embedded query that generates
the set of related OpenLearn units, displayed in a separate window. At this level
the user has the option to explore a new unit, and the corresponding related
entities will be updated accordingly. The application is still a prototype, and
there is surely room to extract further data. For example, once the library
catalogue is made available, a much richer interface can be explored by students,
with related books, recordings, computer files, etc.
34 http://fouad.zablith.org/apps/openlearnlinkeddata
We implemented the Linked OpenLearn application in PHP, and used the ARC2
library to query the OU Linked Data endpoint. To visualise the data on top of
the web page, we relied on the jQuery User Interface library,35 and used its
dialog windows for displaying the parsed SPARQL results. The application is
operational at present, and is launched through a JavaScript bookmarklet, which
detects the OpenLearn unit that the user is currently browsing and opens it in
a new iFrame, along with the linked entities visualised in the jQuery boxes.
6.3 Example and Screenshot</p>
          <p>
To install the application, the user has to drag the application's bookmarklet36
to the browser's toolbar. Then, whenever viewing an OpenLearn unit, the user
clicks on the bookmarklet to have the related entities displayed on top of the unit
page. Figure 4 illustrates an arts-related OpenLearn unit, with the connected
entities displayed on the right, and a running Podcast selected from the "Linked
Podcasts" window. The user has the option to click on the related course to
go directly to the course described on the Study at the OU webpage, or to click
on linked tags to see the list of other related OpenLearn units, which can be
browsed within the same window.</p>
          <p>Fig. 4. Linked OpenLearn Screenshot
35 http://www.jqueryui.com
36 The bookmarklet is available at: http://fouad.zablith.org/apps/openlearnlinkeddata,
and has been tested in Firefox, Safari and Google Chrome</p>
          <p>Conclusions
In this section we report on our experiences when generating and exploiting LOD
within the context of an educational institution. Regarding our experience of
transforming information distributed in several OU repositories and exposing it
as LOD, the complexity of the process depended mainly on the datasets, in terms
of type, structure and cleanliness. Before any data transformation could
be done, we first had to decide on the vocabulary to use. This is where the
type of data to model plays a major role. With the goal of reusing already
existing ontologies as much as possible, it was challenging to find adequate ones
for all our data. While some vocabularies are already available, for example to
represent courses, it required more effort to model OU-specific terminologies
(e.g., at the qualifications level). To assure maximum interoperability, we chose
to use multiple terminologies (when available) to represent the same entities.
For example, courses are represented as modules from the AIISO ontology, and
at the same time as courses from the Courseware ontology. Other factors that
affected the transformation of the data are the structure and cleanliness of the
data sources. During the transformation process, we faced many cases where
duplication, and information not abiding by the imposed data structure, hampered
the transformation stage. However, this highlighted the need to generate data
following well-defined patterns and standards, in order to obtain easily processable
data to add to the LOD.</p>
          <p>Regarding our experiences exploiting the data, we have identified three main
advantages of relying on the LOD platform within the context of education.
Firstly, the exposure of all these materials as free Web resources has opened
opportunities for the development of novel and interesting applications like the three
presented in this paper. The second main advantage is the structure provided by
the data. This is apparent in the OU Expert Search system, where the different
properties of articles are exploited to generate different ranking criteria, which,
when combined, provide much stronger support when finding the appropriate
expertise. Finally, the links generated across the different educational resources
have provided a new dimension to the way users can access, browse and use the
provided educational resources. A clear example of this is the exploitation of
LOD technology within the OpenLearn system, where OpenLearn units are now
linked to courses and Podcasts, allowing students to easily find, in a single site,
all the information they are looking for.</p>
          <p>We believe that universities need to evolve the way they expose knowledge,
share content and engage with learners. We see LOD as an exciting opportunity
that can be exploited within the education community, especially by interlinking
people and educational resources within and across institutions. This
interlinking of information will facilitate the learning and investigation process of
students and research staff, enhancing the global productivity and satisfaction of
the academic community. We hope that, in the near future, more researchers
and developers will embrace the LOD approach, creating new applications and
learning from previous experiences to expose more and more educational data
in a way that is directly linkable and reusable.
The application of Linked Data within the OU has opened multiple research
paths. Regarding the production of Linked Data, in addition to transforming
the library records to LOD, the LUCERO team is currently working on
connecting the OU's Reading Experience Database (RED)37 to the Web of Data.
This database aims to provide access to information about reading experiences
around the world. It helps track the readership of books issued in new editions
for new audiences in different countries. Its publication as LOD is an
interesting example of how the integration of Linked Data technology can
open new investigation paths in different research areas, in this case the humanities.</p>
          <p>Regarding the consumption of LOD, we envision, on the one hand,
enhancing the three previously mentioned applications and, on the other hand,
generating new applications as soon as more information is available and
interconnected. As an example of the former, for the Buddy Study application we plan to
extend the current approach of identifying common terms between social
network members and courses to instead utilise common concepts. At present, the
use of online messages results in the inclusion of abbreviated and slang terms,
so that recommendations are generated from noise. By instead using
concepts, we believe that the suggested courses would be more accurate and
suitable for studying. As an example of the latter, we aim to generate a search
application over the RED database, able to display search results on an
interactive map and link them not just to relevant records within the RED database,
but also to relevant objects in the LOD cloud.</p>
          <p>37 http://www.open.ac.uk/Arts/reading
Q3 - Who succeeded { Charles VII the Victorious } as ruler of France?
SELECT DISTINCT ?kingHR ?successorHR
WHERE {
?x &lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#type&gt; &lt;http://dbpedia.org/class/yago/KingsOfFrance&gt; .
?x &lt;http://dbpedia.org/property/name&gt; ?kingHR .
?x &lt;http://dbpedia.org/ontology/successor&gt; ?z .
?z &lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#type&gt; &lt;http://dbpedia.org/class/yago/KingsOfFrance&gt; .
?z &lt;http://dbpedia.org/property/name&gt; ?successorHR
}
LIMIT 30</p>
          <p>Q3 uses the YAGO ontology to ensure that the resource retrieved is indeed a king
of France. Out of 30 results, one was incorrect (The Three Musketeers). The query
generated duplicates because of the multiple labels associated with each king. The same
king was named, for instance, Louis IX, Saint Louis, and Saint Louis IX. Whereas
deduplication is a straightforward process in this case, the risk of inconsistent naming
patterns among options of the same item is more difficult to tackle. An item was
indeed generated with the following three options: Charles VII the Victorious, Charles 09
Of France, Louis VII. They all use a different naming pattern, with or without the
king's nickname and with a different numbering pattern.</p>
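The straightforward deduplication mentioned here amounts to grouping result rows on the resource URI rather than on the label (a Python sketch; the sample URIs are made up):

```python
def deduplicate(bindings):
    # Collapse duplicate SPARQL rows caused by multiple labels per
    # resource: group on the URI and keep the first label seen.
    seen = {}
    for uri, label in bindings:
        seen.setdefault(uri, label)
    return seen
```

Choosing a canonical label among inconsistent naming patterns is the harder problem the text points to; this sketch only removes exact URI duplicates.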
          <p>Q4 - What is the capital of { Argentina }? With feedback
SELECT ?countryHR ?capitalHR ?pictureCollection
WHERE {
?country &lt;http://dbpedia.org/property/commonName&gt; ?countryHR .
?country &lt;http://dbpedia.org/property/capital&gt; ?capitalHR .
?country &lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#type&gt;
&lt;http://dbpedia.org/class/yago/EuropeanCountries&gt; .
?country &lt;http://dbpedia.org/property/hasPhotoCollection&gt; ?pictureCollection
}
LIMIT 30</p>
          <p>The above question is a variation of Q1. It adds a picture collection from a distinct
dataset in the response feedback. It uses the YAGO ontology to exclude countries
outside Europe and resources which are not countries. A feedback section is added.
When candidates answer the item, they then receive feedback if the platform
allows it. In the feedback, additional information or formative resources can be
suggested. Q4 uses the linkage of the DBpedia dataset with the Flickr wrapper
dataset. However, the Flickr wrapper data source was unavailable when we performed
the experiment.</p>
          <p>Q5 - Which category does { Asthma } belong to?
SELECT DISTINCT ?diseaseName ?category
WHERE {
?x &lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#type&gt; &lt;http://dbpedia.org/ontology/Disease&gt; .
?x &lt;http://dbpedia.org/property/meshname&gt; ?diseaseName .
?x &lt;http://purl.org/dc/terms/subject&gt; ?y .
?y &lt;http://www.w3.org/2004/02/skos/core#prefLabel&gt; ?category
}
LIMIT 30</p>
          <p>Q5 aims to retrieve diseases and their categories. It uses SKOS and Dublin Core
properties. The Infobox dataset is only used to find labels. Labels from the MeSH
vocabulary are even available. Nevertheless, the SKOS concepts are not related to a
specific SKOS scheme. The categories retrieved range from Skeletal disorders to
childhood. For instance, the correct answer to the question on Obesity is childhood.
4.2 The publication of items on the TAO platform
The TAO platform3 is an open source semantic platform for the creation and delivery
of assessment tests and items. It has been used in multiple assessment contexts,
including large scale assessment in the PIAAC and PISA surveys of the OECD,
diagnostic assessment and formative assessment.</p>
          <p>We imported the QTI items generated for the different item models into the platform,
in order to validate the overall Linked Data based item creation pipeline. Figure 4
presents an item generated from Q1 (Figure 3), imported into the TAO platform.</p>
          <p>Figure 4 - Item preview on the TAO platform
3 http://www.tao.lu</p>
          <p>The pipeline was therefore tested with SPARQL queries
which use various ontologies and which collect various types of variables. This raised
two types of issues for which future work should find relevant solutions: the quality
of the data, and the relevance of particular statements for the creation of an assessment
item.
5.1 Data quality challenges</p>
          <p>In our experiment, the chance that an item will have a defective prompt or a
defective correct answer depends on the number of defective variables used for the
item creation. Q1 uses the most challenging dataset in terms of data quality: 7 out of
30 questions had a defective prompt or a defective correct answer (23.33%).</p>
          <p>The chance that an item will have defective distractors is represented by the
following formula, where D is the total number of distractors, d(V) is the number of
defective variables and V is the total number of variables:</p>
          <p>
            We used 2 distractors. Among the items generated from Q1, 10 items had a
defective distractor (33.33%). Overall, 16 out of 30 items had neither a defective
prompt, nor a defective correct answer, nor a defective distractor (53.33%).
As a comparison, the proportion of items generated from unstructured content (text)
that are deemed usable without editing was measured at between 3.5% and 5% by Mitkov et al.
[18] and between 12% and 21% by Karamanis et al. [
            <xref ref-type="bibr" rid="ref20 ref7">7</xref>
            ]. The difficulty of generating
items from structured sources should be lower. Although a manual selection is
necessary in any case, the mechanisms we have implemented can be improved.
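The reported rates follow directly from the counts (a quick arithmetic check in Python):

```python
def defect_rate(defective, total):
    # Percentage of defective items, rounded to two decimal places.
    return round(100 * defective / total, 2)

# Counts reported for the 30 items generated from Q1:
prompt_or_answer = defect_rate(7, 30)   # defective prompt or correct answer
distractor = defect_rate(10, 30)        # at least one defective distractor
fully_clean = defect_rate(16, 30)       # no defect at all
```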
The ontology
Q1 used properties from the Infobox dataset, which has no proper underlying
ontology. Q1 can therefore be improved by using the ontologies provided by DBpedia, as
demonstrated by Q2, for which no distractor issue was identified. We present Q1 and
Q2 to illustrate this improvement, but it should be noted that there is not always a
direct equivalent to the properties extracted from the Infobox dataset.
Q5 could be improved either if the dataset were linked to a more structured
knowledge organization system (KOS), or through an algorithm which would verify
the nature of the literals provided as a result of the SPARQL query.
          </p>
          <p>The labels
The choice of the label for each concept to be represented in an item is a challenge
when concepts are represented by multiple labels (Q4). The selection of labels and
their consistency can be ensured by defining representation patterns or by using
datasets with consistent labeling practices.</p>
          <p>Inaccurate statements
Most statements provided for the experiment are not inaccurate in their original
context but they sometimes use properties which are not sufficiently precise for the
usage envisioned (e.g., administrative capital). In other cases, the context of validity
of the statement is missing (e.g., Leopoldville used to be the capital of a country
called Congo). The choice of DBpedia as a starting point can increase this risk in
comparison to domain specific data sources provided by scientific institutions for
instance. Nevertheless, the Semantic Web raises similar quality challenges as the ones
encountered in heterogeneous and distributed data sources [19]. Web 2.0 approaches,
as well as the automatic reprocessing of data can help improve the usability of the
Semantic Web statements. This requires setting up a traceability mechanism between
the RDF paths used for the generation of items and the items generated.
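Such a traceability mechanism could be as simple as storing, per generated item, the RDF paths it was built from (an illustrative Python sketch; the identifiers are made up):

```python
# Minimal traceability store: each generated item records the RDF
# statements (paths) it was built from, so defective source data can
# later be traced back to the items it affected.
provenance = {}

def record_item(item_id, rdf_paths):
    provenance[item_id] = list(rdf_paths)

def items_using(rdf_path):
    # Reverse lookup: which items does a given statement affect?
    return sorted(i for i, paths in provenance.items() if rdf_path in paths)
```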
Data linkage
Data linkage clearly raises an issue, because the reliability of the mechanism
depends on different data sources. Q3 provided 6 problematic URIs out of 30 (i.e., 20%). Q4
generated items for which no URI from the linked data set was resolvable since the
whole Flickr wrapper data source was unavailable. This clearly makes the generated
items unusable. The creation of infrastructure components such as the SPARQL
Endpoint status for CKAN4 registered data sets5 can help provide solutions to this
quality issue over the longer run.</p>
          <p>Missing inferences
Finally, the SPARQL endpoint does not provide access to inferred triples. Our
pipeline does not tackle transitive closures on the data consumer side (e.g., through
repeated queries), as illustrated with Q3. Further consideration should be given to the
provision of data including inferred statements. Alternatively, full datasets could be
imported. Inferences could then be performed in order to support the item generation
process.</p>
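Handling missing inferences on the consumer side, e.g. a transitive closure over the successor relation of Q3, amounts to repeated lookups (a Python sketch; a dict stands in for repeated SPARQL queries, and the sample data is illustrative):

```python
def transitive_successors(successor, start):
    # Follow direct "successor" links repeatedly to collect all
    # transitive successors, guarding against cycles in the data.
    chain, current = [], start
    while current in successor:
        current = successor[current]
        if current in chain:
            break
        chain.append(current)
    return chain
```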
          <p>Different strategies can therefore be implemented to cope with the data quality issues we
encountered. Data publishers can improve the usability of the data, for instance with
the implementation of an upper ontology in DBpedia. However, other data quality
issues require data consumers to improve their data collection strategy, for instance to
collect as much information as possible on the context of validity of the data,
whenever it is available.
5.2 Data selection
The experiment also showed that Linked Data statements need to be selected. The
suitability of an assessment item for a test delivered to a candidate or a group of
candidates is measured in particular through information such as the item difficulty.
4 http://www.ckan.net
5 http://labs.mondeca.com/sparqlEndpointsStatus/index.html
The difficulty can be assessed through a thorough calibration process in which the
item is given to beta candidates for extracting psychometric indicators. In low stake
assessment, however, the evaluation of the difficulty is often manual (candidate or
teacher evaluation) or implicit (the performance of previous candidates who took the
same item). In the item generation models we have used, each item has a different
construct (i.e., it assesses a different knowledge). In this case, the psychometric
variables are more difficult to predict [20]. A particular model is necessary to assess
the difficulty of items generated from Semantic Web sources. For instance, it is likely
that for a European audience, the capital of the Cook Islands will raise a higher rate of
failure than the capital of Belgium. There is no information in the datasets which can
support the idea of a higher or lower difficulty. Moreover, the difficulty of the item
also depends on the distractors, which in this experiment were generated on a random
basis from a set of equivalent instances. As the generation of items from structured
Web data sources becomes more elaborate, it will therefore be necessary to
design a model for predicting the difficulty of generated items.
6 Conclusion and future work
The present experimentation shows the process for generating assessment items
and/or assessment variables from Linked Data. The performance of the system in
comparison with other approaches shows its potential as a strategy for assessment
item generation. It is expected that data linkage can provide relevant content, for
instance to propose formative resources to candidates who failed an item, or to
illustrate a concept with a picture published as part of a distinct dataset.
The experimentation also shows the quality issues related to generating items from
a resource such as DBpedia. It should be noted that the measurements were made
with a question which raises particular quality issues; as the other questions show,
these can easily be improved upon. Nevertheless, the Linked Data Cloud also contains
datasets published by scientific institutions, which may raise fewer data
accuracy concerns. In addition, the usage model we are proposing is centered on low
stake assessment, for which we believe that the time saved makes it worthwhile
to clean some of the data, while the overall process remains valuable.</p>
          <p>
            Nevertheless, additional work is necessary both on the data and on the assessment
items. The items created demonstrate the complexity of generating item variables even for
simple assessment items. We aim to investigate the creation of more complex items,
and the relevance of formative resources which can be included in the item as
feedback. Moreover, the Semantic Web can provide knowledge models from which
items could be generated. Our work is focused on semi-automatic item generation,
where users create item models while the system generates the variables.
Nevertheless, the generation of items from a knowledge model as in [
            <xref ref-type="bibr" rid="ref11 ref24">11</xref>
            ] requires
that more complex knowledge is encoded in the data (e.g., what happens to water
when the temperature decreases). The type and nature of data published as Linked
Data therefore need to be further analyzed in order to support the development of
such models for the fully automated generation of items based on knowledge models.
          </p>
          <p>We will focus our future work on the creation of an authoring interface for item
models with the use of data sources from the Semantic Web, on the assessment of
item quality, on the creation of different types of assessment items from Linked Data
sources, on the traceability of items created, including the path on the Semantic Web
datasets which were used to generate the item, and on the improvement of data
selection from semantic datasets.</p>
          <p>Acknowledgments. This work was carried out in the scope of the iCase project on
computer-based assessment. It has benefited from the TAO semantic platform for
eassessment (https://www.tao.lu/) which is jointly developed by the Tudor Research
Centre and the University of Luxembourg, with the support of the Fonds National de
la Recherche in Luxembourg, the DIPF (Bildungsforschung und
Bildungsinformation), the Bundesministerium für Bildung und Forschung, the
Luxemburgish ministry of higher education and research, as well as OECD.
Presented at the 2009 Ninth IEEE International Conference on Advanced Learning
Technologies (ICALT), Riga, Latvia. (2009)
13. Xu, Y., Seneff, S. Speech-Based Interactive Games for Language Learning: Reading,
Translation, and Question-Answering. Computational Linguistics and Chinese
Language Processing Vol. 14, No. 2, pp. 133-160. (2009)
14. Lai, H., Alves, C., &amp; Gierl, M. J. Using automatic item generation to address item
demands for CAT. In Proceedings of the 2009 GMAC Conference on Computerized
Adaptive Testing. (2009)
15. Gierl, M.J., Zhou, J., Alves, C. Developing a Taxonomy of Item Model Types to
Promote Assessment Engineering. Journal of Technology, Learning, and Assessment,
7(2). (2008)
16. Sarre, S., Foulonneau, M. Reusability in e-assessment: Towards a multifaceted
approach for managing metadata of e-assessment resources. Fifth International
Conference on Internet and Web Applications and Services. (2010)
17. Suchanek, F. M., Kasneci, G., &amp; Weikum, G. Yago: a core of semantic knowledge. In
Proceedings of the 16th international conference on World Wide Web (pp. 697–706).
(2007)
18. Mitkov, R., An Ha, L., &amp; Karamanis, N. A computer-aided environment for
generating multiple-choice test items. Natural Language Engineering, 12(02), 177–
194. (2006)
19. Foulonneau, M., Cole, T. W. Strategies for reprocessing aggregated
metadata. European Conference on Digital Libraries. Lecture Notes in Computer
Science 3652, 290-301. (2005)
20. Bejar, I. I., Lawless, R. R., Morley, M. E., Wagner, M. E., Bennett, R. E., &amp;
Revuelta, J. A feasibility study of on-the-fly item generation in adaptive testing.</p>
          <p>Educational Testing Service. (2002)
A Mobile and Adaptive Language Learning
Environment based on Linked Data</p>
          <p>Davy Van Deursen1, Igor Jacques2,
Stefan De Wannemacker2, Steven Torrelle1, Wim Van Lancker1,
Maribel Montero Perez2, Erik Mannens1, and Rik Van de Walle1
Abstract. The possibilities within e-learning environments have increased
dramatically over the last couple of years. They are more and more deployed
on the Web, allow various types of tasks and fine-grained feedback, and
they can make use of audiovisual material. On the other hand, we are
confronted with an increasing heterogeneity in terms of end-user
devices (smartphones, tablet PCs, etc.) that are able to render advanced
Web-based applications and consume multimedia content. Therefore, the
major contribution of this paper is an adaptive, Web-based e-learning
environment that is able to provide rich, personalized e-learning
experiences to a wide range of devices. We discuss the global architecture
and data models, as well as how the integration with media delivery can
be realized. Further, we give a detailed description of a reasoner, which
is responsible for the adaptive selection of learning items, based on the
usage environment and the user profile.</p>
          <p>Keywords: Adaptive, Language Learning, Mobile, Web-based
1
In recent years, the use of e-learning environments has increased spectacularly,
not only in formal educational settings, but also in working and private
environments. At the same time, the possibilities within these e-learning environments
have increased dramatically: learning environments have, for instance, become easier
and more pleasant to use, they allow various types of tasks and fine-grained
feedback, and they can make use of audiovisual material. Moreover, while e-learning
environments were traditionally offered as applications on stand-alone
computers, nowadays they are more and more being rendered over the Internet. It is
clear that these evolutions are related to technological evolutions and the wide
availability of fast multimedia computers and Internet access.</p>
          <p>In addition to the fact that e-learning environments are more and more deployed
over the Web, we are confronted with an increasing heterogeneity in terms of
end-user devices that are able to connect to the Web and consume multimedia
content. Therefore, personal devices such as tablet PCs and smartphones could
be used as learning devices, next to traditional desktop and laptop devices. Also,
the role of personalization within e-learning environments has become more and
more important. Personalization can be applied both at the learning level (i.e.,
adjusting learning sessions according to the learner's capabilities) and at the
environmental level (i.e., adjusting the rendering of the learning environment according
to the characteristics of the usage environment).</p>
          <p>The challenges described above are exactly the ones currently
tackled in the IBBT MAPLE project (Mobile, Adaptive &amp; Personalized Learning
Experience3), which aims to make adaptive mobile e-learning possible.
Therefore, in this paper, we present a Web-enabled e-learning environment that is
able to offer personalized learning sessions on any device, primarily focused on
language learning and making optimal use of digital multimedia. In order to realize
such an environment, we need the following key components:
- a common, machine-understandable data model that is independent of usage
environments and is able to express both learning content and metadata
about the learning content;
- a logging framework that allows capturing the behaviour and performance
of the learner at a detailed level;
- a reasoner that is able to select learning items based on the learner's
capabilities and behaviour;
- a media delivery platform taking into account usage environment
characteristics and restrictions.</p>
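To make the first component concrete, a device-independent learning-item record might look like the following (a Python sketch; the field names are illustrative assumptions, not the MAPLE data model):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LearningItem:
    # Minimal machine-understandable learning item, independent of
    # the usage environment; all names are illustrative.
    uri: str
    language: str
    difficulty: int                   # e.g. 1 (easy) to 5 (hard)
    media_uri: Optional[str] = None   # optional audiovisual attachment
    metadata: dict = field(default_factory=dict)

item = LearningItem(
    uri="http://example.org/items/42",
    language="fr",
    difficulty=2,
    metadata={"skill": "listening"},
)
```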
          <p>In the remainder of this paper, we provide an overview of the architecture
of our adaptive e-learning platform. Further, we discuss the above described key
components in more detail. Finally, we discuss related work, draw a number of
conclusions, and discuss some future work.
2 MAPLE platform</p>
          <p>In order to offer a highly adaptive e-learning platform that can also deal with
(mobile) multimedia delivery, we designed the architecture depicted in
Fig. 1. Two major parts can be distinguished: the e-learning platform and the
media delivery platform. The e-learning platform relies on two RDF stores, i.e.,
a store for learning exercises and a store for learner profiles. The learning items
store is filled through the learning item ingest service. More details regarding
the creation of learning items and the data model according to which they are
modeled are provided in Section 3. Further, the learner profile store is built up
based on the learners' actions and preferences (see Section 3.5). The reasoner
is responsible for selecting the most adequate exercise, based on the learner's
profile and environment and the available learning items. Detailed information
regarding the reasoner is provided in Section 4.
3 http://www.ibbt.be/en/projects/overview-projects/p/detail/maple-2
[Fig. 1: MAPLE architecture, with the media ingest service, learning item ingest service, media store, and learning item DB]</p>
          <p>Finally, the learning endpoint is
the communication point between learner devices and the e-learning platform.</p>
          <p>
The media delivery platform corresponds to NinSuna4, which is a metadata-driven
media adaptation and delivery platform [25]. At its core are format-independent
modules for the temporal selection and packaging of media content.
Almost all existing media delivery channels are supported by NinSuna: RTSP,
RTMP, HTTP progressive download, and HTTP adaptive streaming. Moreover,
native support for Media Fragments 1.0 [24] is provided, which enables the
delivery of media fragments (i.e., temporal or track fragments) in a standardized
way [
            <xref ref-type="bibr" rid="ref28">15</xref>
            ]. Finally, NinSuna comes with an Adaptation Decision-Taking Engine
(ADTE), which is able to 1) detect the capabilities of the device issuing the
request and 2) take a decision regarding which quality version of the requested
media resource is the most adequate for the detected device. A more detailed
description of the NinSuna platform can be found in [25].
          </p>
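          <p>To make the notion of media fragments concrete, the following sketch constructs Media Fragment URIs for temporal and track fragments of the kind NinSuna can serve; the base URL and the track name are invented examples, not part of NinSuna's API:

```python
# Sketch: constructing W3C Media Fragments 1.0 URIs for temporal and track
# fragments, as delivered by platforms such as NinSuna. The base URL and
# track name below are invented examples.

def media_fragment_uri(base, start=None, end=None, track=None):
    """Append a Media Fragments 1.0 fragment (t=..., track=...) to a URI."""
    parts = []
    if start is not None and end is not None:
        parts.append("t=%g,%g" % (start, end))
    elif start is not None:
        parts.append("t=%g" % start)  # open-ended: from `start` to the end
    if track is not None:
        parts.append("track=%s" % track)
    return base + "#" + "&".join(parts) if parts else base

# A clip from second 10 to 20 of a (hypothetical) video resource:
uri = media_fragment_uri("http://example.org/video.mp4", start=10, end=20)
```

For instance, the call above yields a URI ending in #t=10,20, i.e., the standardized form of a temporal fragment.</p>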
          <p>The presented e-learning platform exposes its data (i.e., learning content and
accompanying media resources) as linked data. More specifically, it follows the
guidelines regarding the publication of linked data5: use dereferenceable HTTP
URIs as names for things, provide useful information using the standards (RDF,
SPARQL), and include links to other URIs. Hence, within our platform, the
learning items and learner profiles are available through a SPARQL endpoint,
while the metadata of the media resources are published as RDF URIs. This
way, services such as the reasoner and the ADTE can rely on the linked data
and can start reasoning over it.
4 http://ninsuna.elis.ugent.be
5 http://www.w3.org/DesignIssues/LinkedData.html</p>
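          <p>As an illustration of how a service could consume this linked data, the sketch below builds a SPARQL query over the learning item store. Only the mplc and llomp namespaces come from the paper's listings; the endpoint URL and the exact graph patterns are assumptions made for illustration:

```python
# Sketch of querying the platform's SPARQL endpoint for learning items.
# The mplc/llomp namespaces appear in the paper's listings; the endpoint
# URL and the graph patterns are illustrative assumptions.
from urllib.parse import urlencode

PREFIXES = (
    "PREFIX mplc: <http://multimedialab.elis.ugent.be/organon/ontologies/maple/content#>\n"
    "PREFIX llomp: <http://multimedialab.elis.ugent.be/organon/ontologies/maple/llomp#>\n"
)

def items_by_theme_query(theme):
    """Build a SPARQL SELECT for items whose learning component has `theme`."""
    return PREFIXES + (
        "SELECT ?item WHERE {\n"
        "  ?item llomp:educational ?edu .\n"
        "  ?edu llomp:learningComponent ?lc .\n"
        "  ?lc llomp:theme \"%s\" .\n"
        "}" % theme
    )

def query_url(query, endpoint="http://example.org/sparql"):
    """Encode the query as a GET request URL (the endpoint is a placeholder)."""
    return endpoint + "?" + urlencode({"query": query})
```

</p>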
          <p>A typical e-learning scenario using this architecture is then as follows:
(1) the learner logs in to the Web-based e-learning application using his/her mobile
device, which contacts the learning endpoint of the e-learning platform; the
endpoint approaches the reasoner, which provides a personalized overview
of the available courses;
(2) based on the course selected by the learner, the reasoner selects an exercise
from the learning item store, taking into account the learner profile and the
available exercises within that course;
(3) when the selected exercise contains media content (audio, video, or images),
the ADTE of NinSuna is contacted in order to select the media resource
version that fits best for the current device;
(4) the learning endpoint renders the selected exercise in HTML and sends the
response to the learner;
(5) while the learner is solving the selected exercises, his/her answers and
behaviour in terms of clicks and timing are logged and sent back to the
e-learning platform;
(6) the received answers and behaviour information are used to update the
learner's profile.</p>
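          <p>The six steps above can be sketched, in highly simplified form, as the following control flow; all component interfaces are hypothetical Python callables introduced purely to illustrate the order of the interactions:

```python
# Highly simplified sketch of the six-step scenario; the reasoner, ADTE,
# and renderer interfaces are hypothetical, not the platform's actual API.

def handle_session(reasoner, adte, render, learner, course_choice):
    courses = reasoner.course_overview(learner)            # step 1
    course = course_choice(courses)
    exercise = reasoner.select_exercise(learner, course)   # step 2
    if exercise.get("media"):                              # step 3
        exercise["media"] = adte.best_version(exercise["media"], learner["device"])
    html = render(exercise)                                # step 4
    answers, behaviour = learner["solve"](html)            # step 5
    reasoner.update_profile(learner, answers, behaviour)   # step 6
    return html
```

</p>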
          <p>In the next sections, more detailed information regarding a number of
components in the architecture is provided.</p>
          <p>3 Data Models and Instance Generation
A number of different data models need to be developed in order to structure
and define the content used on the e-learning platform. More specifically, we
need the following data models:
- a model for the learning items and their metadata (e.g., question, possible
answers, difficulty level);
- a model for the learning domain;
- a model for the metadata of the media resources (e.g., bit rate);
- a model for the learner profile;
- a model for the logging.</p>
          <p>In the following subsections, we provide more information regarding these
different models and how they are populated. Note that all ontologies are modelled
in OWL and published online.</p>
          <p>3.1 Model for learning items and their metadata
The model for learning items consists of two ontologies: one for the learning items
themselves6 and one for their metadata7. An example instance of a learning
6 http://multimedialab.elis.ugent.be/organon/ontologies/maple/content
7 http://multimedialab.elis.ugent.be/organon/ontologies/maple/llomp</p>
          <p>Listing 1.1. Representing a learning item and its metadata in RDF (in Turtle).
@prefix mplc: &lt;http://multimedialab.elis.ugent.be/organon/ontologies/maple/content#&gt; .
@prefix llomp: &lt;http://multimedialab.elis.ugent.be/organon/ontologies/maple/llomp#&gt; .
@prefix xsd: &lt;http://www.w3.org/2001/XMLSchema#&gt; .</p>
          <p>
            @prefix dc: &lt;http://purl.org/dc/terms/&gt; .
item modelled according to our model is shown in Listing 1.1. We explain and
illustrate both ontologies based on this example. The model is heavily based
on the Learning Object Metadata standard (LOM, [
            <xref ref-type="bibr" rid="ref13 ref15 ref2">2</xref>
            ]). LOM specifies a conceptual data
scheme and the corresponding XML binding for metadata of learning items.
We started from LOM and defined a number of extensions in order to provide
improved support for learning subject, feedback, and scoring, as well as better
integration with media resources. Further, as mentioned before, we split our
model between learning items and their metadata.
          </p>
          <p>We describe not only the metadata of learning items, but also the
exercises themselves. This way, they are formally represented, independent of any
rendering. Moreover, they can be easily integrated with their metadata and
corresponding media resources. Also, the reasoner (Section 4) will not only rely on
the learning item metadata, but also on the items themselves (e.g., this type of
exercise is preferred by the learner). For the moment, six mplc:exerciseTypes
are supported (focussed on language learning):
- Multiple Choice: given a number of answers, the learner has to choose exactly
one answer;
- Multiple Response: given a number of answers, the learner has to choose one
or more answers;
- Fill Gaps: given a text with some gaps, the learner needs to fill in missing
text in text boxes;
- Dropdown: same as Fill Gaps, but instead of free text fields, the learner can
choose between a number of predefined answers;
- Click on Text: given a text, the learner needs to click/tap on one or more
words;
- Click on Zone: given an image or video, the learner needs to click/tap on one
or more regions within the image or video.</p>
          <p>Note that media elements can also occur within the first five types of exercises.
For instance, a movie can be played, followed by the question to solve. Only the
last type (Click on Zone) uses multimedia in an interactive way as described
in [19].</p>
          <p>In Listing 1.1, a multiple choice exercise is used as example (line 9). A link
to a movie fragment is provided via the mplc:media property (line 10), which
takes as value a Media Fragment URI (see Section 3.3). The mplc:task
description (line 11) provides the question or task in multiple languages (based on the
level of the learner, the reasoner can choose whether the task is presented in the
native language of the learner or not). Further, the mplc:answerSpace (line 13)
corresponds to the zone where the learner can enter his/her answers. Within such
an answer space, mplc:input is provided (line 14), where each mplc:answer
corresponds to one possible answer. In case of a multiple choice type, each
answer corresponds to an mplc:Choice. It contains information such as `is this
possible answer the correct one?', `how much does the learner score when (s)he
selects this one?', and the possible answer itself. LOM-specific elements such
as llomp:lifeCycle (line 40) and llomp:educational (line 34) are present as
well.</p>
          <p>As a part of the aforementioned LOM extensions, we added the learning
component property to the educational component. Since the MAPLE project
focusses on language learning, we extended this learning component property
with specific support for language learning. The learning component is split up
into three separate subcomponents: target language, theme, and language
component. The latter component can have one or more of the following subproperties:
- knowledge property: vocabulary, pronunciation, etc.;
- skill property: reading, listening, writing, or speaking.
We also defined a hierarchical structure for the range of the knowledge property,
based on which the exact knowledge URIs can be deduced. This was done in
a language-independent way, extendable with language-specific elements.</p>
          <p>Listing 1.2. Representing a learning component in RDF (in Turtle).
@prefix lang: &lt;http://kuleuven-kortrijk.be/itec/ext/ontologies/itec_elearning_ontology/languagecomponent/#&gt; .
@prefix llomp: &lt;http://multimedialab.elis.ugent.be/organon/ontologies/maple/llomp#&gt; .
&lt;http://ninsuna.elis.ugent.be/rdf/resource/maple/learningComponent_40001&gt;
    a llomp:LearningComponent ;
    llomp:theme "agriculture" ;
    llomp:targetLanguage "en-UK" ;
    llomp:languageComponent [
        a llomp:LanguageComponent ;
        llomp:knowledge &lt;http://kuleuven-kortrijk.be/itec/ext/ontologies/itec_elearning_ontology/languagecomponent/grammar/partsOfSpeech/substantive&gt; ;
        llomp:skill lang:writing
    ] .</p>
          <p>
As the skill and knowledge properties exist next to each other, it is possible to specify
the subject of an exercise very accurately. In Listing 1.2, an example instance
of a learning component can be found. The exercise in this instance trains the
writing skill of substantives related to agriculture.</p>
          <p>Within the MAPLE project, we use learning items from Televic Education
(TEDU)8. Currently, TEDU stores their learning items and accompanying
metadata in an SQL store. Through XML feeds, the store can be accessed from outside.
Hence, we implemented a converter taking as input the XML feeds and
producing RDF learning items according to the model described above.</p>
          <p>3.2 Model for the learning domain
The learning items are not physically arranged into courses. Which learning
objects belong together is determined by the metadata, namely the learning
component within the educational component of each item. The domain model consists
of two types of relations: prerequisite and hierarchical relations. In the project,
the domain model is intended to be simple. It is a three-level hierarchical model
in which the items are first distinguished by their target language, secondly by
their theme, and thirdly by their language component. Additionally, there exist
prerequisite requirements between the language components, expressing that one
language component depends on the knowledge of another. The reasoner will take
these prerequisites into account when determining which courses are available for
the learner.
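As a toy illustration of this three-level hierarchy and its prerequisite relation (the concrete components and prerequisite pairs below are invented examples, not the project's actual data):

```python
# Toy sketch of the three-level domain model (target language / theme /
# language component) plus a prerequisite check; all data below is invented.

domain = {
    "fr": {"General": ["imparfait", "passe compose"]},
    "en-UK": {"agriculture": ["substantive"]},
}

# prerequisite relation: component -> components whose knowledge it depends on
prerequisites = {"imparfait": ["passe compose"]}

def available_components(language, theme, mastered):
    """Components whose prerequisites are all in the `mastered` set."""
    return [c for c in domain[language][theme]
            if all(p in mastered for p in prerequisites.get(c, []))]
```
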
</p>
          <p>
            3.3 Model for media metadata
To model media resources, we rely on the W3C Media Annotations ontology [
            <xref ref-type="bibr" rid="ref11 ref24">11</xref>
            ],
which is supposed to foster the interoperability among various kinds of metadata
formats currently used to describe media resources on the Web. Moreover, it
8 http://www.televic-education.com/en/
          </p>
          <p>Listing 1.3. Representing a learner profile in RDF (in Turtle).
@prefix itec: &lt;http://kuleuven-kortrijk.be/itec/ext/ontologies/itec_elearning_ontology#&gt; .
@prefix foaf: &lt;http://xmlns.com/foaf/0.1/&gt; .
@prefix mplc: &lt;http://multimedialab.elis.ugent.be/organon/ontologies/maple/content#&gt; .
already contains mappings to many other existing metadata formats. Further,
the ontology also provides support for Media Fragment URIs.</p>
          <p>
            3.4 Model for the learner profile
In order to steer the decision making of the reasoner, an up-to-date learner
profile is required for each of the learners in the learning system. This profile
holds proficiency score estimations for each of the appropriate learning subjects.
Each of these values is supplemented with a reliability parameter, namely the
variance of the estimator. As we focus on language learning, the proficiency scores
are expressed on a continuous scale based on the discrete European Language
Levels [
            <xref ref-type="bibr" rid="ref17 ref4">4</xref>
            ]. The level A1 corresponds to a score of 0, A2 to 1, B1 to 2, etc. Also,
the profile keeps a list of the learning goals which were set for that learner. An
example of such a learning goal could be "Achieve the B2 level for the French
verb form imparfait". The type of learning items the learner prefers can also be
saved in the profile. An example instance can be found in Listing 1.3.
          </p>
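          <p>A minimal sketch of this continuous scale, assuming the mapping extends linearly up to C2 (only A1 to B1 are stated explicitly above):

```python
# Continuous proficiency scale over the discrete European Language Levels.
# A1->0, A2->1, B1->2 are given in the text; the extension to C2 is our
# assumption.

LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def level_to_score(level):
    """Discrete European Language Level -> continuous base score."""
    return float(LEVELS.index(level))

def score_to_level(score):
    """Continuous score -> the closest discrete level, clamped to the scale."""
    idx = min(max(int(round(score)), 0), len(LEVELS) - 1)
    return LEVELS[idx]
```

The continuous values in between (e.g., 2.6) are what the reasoner's estimator tracks, together with its variance.</p>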
          <p>The properties in the model will be captured either automatically or by
means of preference settings. The learner's favourite learning item types can be
edited through a preference menu, and the learner's proficiency scores will be
updated by a module of the reasoner. Additionally, the ontological model
supports properties like motivation, learning style, learner strategy, and cognitive
abilities, but currently these are not used in the MAPLE e-learning platform.</p>
          <p>Listing 1.4. Representing a logging abstract in RDF (in Turtle).
@prefix itec: &lt;http://kuleuven-kortrijk.be/itec/ext/ontologies/itec_elearning_ontology#&gt; .
@prefix learners: &lt;http://kuleuven-kortrijk.be/itec/instances/maple/learners#&gt; .
@prefix log: &lt;http://kuleuven-kortrijk.be/itec/instances/maple/logging#&gt; .
@prefix maple: &lt;http://ninsuna.elis.ugent.be/rdf/resource/maple/&gt; .</p>
          <p>3.5 Model for logging the learner's activity
Finally, we developed a model for describing logging information. For instance,
the model is able to express information such as the start and stop of a learner
session or the learner's course selection. Once the learner has chosen a course, a
learning session is initiated in which the reasoner successively selects new
learning items, each time resulting in a learning item session which lasts for the time
the learner interacts with the item. During such an item session, a learner can give
an answer, request a hint, or change his/her mind by changing his/her answer. All these
interactions are logged by the system. This results in a large amount of
information, which is consumed in two ways. Firstly, a part of the logging information is
used at run-time by the reasoner. For instance, a score attained by the learner
will affect the proficiency score of a learner's profile through the functionality of
the reasoner's proficiency manager. Secondly, after run-time, the logged
information will be used as input for statistical research tracing how certain interactions
of the learner give information about the learning process. In Listing 1.4, an
example instance can be found. The learner session, the learning session, and the
item session are interconnected by the itec:hasSubSession and the
itec:hasItemObjectSession relations, respectively.</p>
          <p>These resulting triples are partially generated in the core of the reasoner,
e.g., the start and stop of the learner and learning sessions. The low-level
interactions concerning one specific exercise are generated at the client and sent
back to the reasoner, which processes the logging and stores it in the learner
profile RDF store.</p>
          <p>4 The Reasoner
The reasoner, introduced in Section 2, is a crucial component within the MAPLE
learning system architecture, as it is responsible for the adaptive learning item
selection. When a learner logs in, the reasoner will first of all provide a short list
of courses from which the learner can choose. As the reasoner is aware of the
learning goals for each learner through the learner profile model, only courses
that contribute to the not yet attained learning goals can be selected. Next, once
the learner has chosen a course, the reasoner will start up a learning session and
will successively decide on the exact exercise to deliver to the learner.</p>
          <p>
            The reasoner takes into account the learner profile as well as some real-time
environmental properties. For the environmental adaptivity, both the screen
size and connection quality of the user's device are sources of adaptivity. In
case the screen size is too small, the reasoner will avoid the use of exercises with
media. A slow network connection will also result in avoiding media exercises.
For the learner profile adaptivity, there are two main policies which can steer
the decision process. The first one is based on a theory stating that the
exercise difficulty needs to be increased each time a learner has answered a series of
four exercises correctly. Similarly, when four consecutive exercises are answered
incorrectly, it should go down [
            <xref ref-type="bibr" rid="ref12 ref25">12</xref>
            ]. The second policy is based on a pedagogical
theory which tries to keep the learner's motivation high by chasing a predefined
(e.g., 70%) correct-answer probability. This probability can be estimated based
on Item Response Theory (IRT, [
            <xref ref-type="bibr" rid="ref18 ref5">5</xref>
            ]) by combining the current proficiency estimation with the
level and difficulty of the exercise [
            <xref ref-type="bibr" rid="ref20 ref7">28, 7</xref>
            ]. The aforementioned policies are
supplemented with an event-driven feedback system. The system allows the sequencer
to shift in a feedback item (instead of an exercise) to explain a learning subject
once a specific and predefined condition is met. For instance, "the learner made
five errors in a row against the same learning subject". This feedback item is
chosen based on the learning component property which both the feedback and
the exercise item have in their metadata. For both policies, the preferred
exercise types of the learner are also taken into account by favouring them, though
not completely cold-shouldering the other exercise types.
          </p>
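          <p>The two policies can be sketched as follows; the streak length (4) and the target probability (70%) come from the text, but the one-parameter (Rasch) IRT model used here is our assumption, as the paper does not specify the exact parametrisation:

```python
# Sketch of the two sequencing policies, assuming a one-parameter (Rasch)
# IRT model; the model parametrisation is an assumption for illustration.
import math

def adjust_difficulty(difficulty, last_answers, streak=4):
    """Policy 1: raise/lower difficulty after `streak` identical outcomes."""
    if len(last_answers) >= streak and all(last_answers[-streak:]):
        return difficulty + 1
    if len(last_answers) >= streak and not any(last_answers[-streak:]):
        return difficulty - 1
    return difficulty

def p_correct(proficiency, difficulty):
    """Rasch estimate of the correct-answer probability."""
    return 1.0 / (1.0 + math.exp(difficulty - proficiency))

def pick_exercise(proficiency, difficulties, target=0.7):
    """Policy 2: choose the difficulty whose estimate is closest to target."""
    return min(difficulties, key=lambda d: abs(p_correct(proficiency, d) - target))
```

For a learner with proficiency 2.0 and candidate difficulties 0 to 3, the second policy selects difficulty 1, whose estimated correct-answer probability (about 0.73) is closest to the 70% target.</p>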
          <p>To fulfil the aforementioned tasks, the architecture of the reasoner (shown
in Fig. 2) consists of six modules, supplemented by a facade for communicating
with the learning endpoint. The six reasoner modules are the Learner manager,
Environment manager, Learning task decision manager, Sequence manager,
Logging manager, and Proficiency manager. We elucidate the functionality of these
modules by means of the following example.</p>
          <p>Fig. 2. The reasoner architecture (components: Learner profile DB, Learning item DB, Environment manager, Sequence manager, Learner manager, Learning task decision manager, Proficiency manager, Logging manager, Facade, Learning endpoint)</p>
          <p>Suppose a learner's initial profile was set by a teacher, thereby providing the
learning goal "Achieve the B2 level for the French verb form imparfait" and also
providing an estimation of the learner's initial level, namely A2, for the French
verb form imparfait. When the learner logs in, the Learner manager produces
a learner session. Consequently, the Learning task decision manager loads the
learner's learning goals in order to compose a three-level tree representation of all
courses relevant for this learner, as explained in Section 3.2. This tree is sent to
the learning endpoint, which produces a representation such that the learner can
navigate through the tree. Let us assume that the learner first selects `French',
followed by the theme `General', and finally the language component `Imparfait'.
In addition, the learner opens the preferences menu and sets the dropdown exercise
type as his/her favourite one.</p>
          <p>Next, the Learning task decision manager composes a learning task object,
which is sent to the Sequence manager. Here, the learning task is sequencing the
items (exercises and feedback) with the first policy of adaptivity, starting from
level A2, having as a stop criterion the achievement of level B2, and taking
into account the learner's preferred exercise types and environmental
properties. Subsequently, the Sequence manager loads the sequencer necessary for the
learning task. To this end, the sequencer makes use of the Environment manager,
which is an access point for information on the current connection quality and
the screen size of the learner's device. At this point, the sequencer can
successively decide on the id of the next item and pass its choice to the learning
endpoint, which automatically generates a visual representation and makes use
of the delivery platform in case media are present.</p>
          <p>Once the learner finishes the exercise, or has read the feedback in case of a
feedback item, the logging information about the interactions of the learner with
the item is sent back to the Logging manager of the reasoner. The latter sends
this information as a specific logging object to a number of observer objects, which
all have different functionalities. For instance, there is an observer writing these
logs to the learner profile RDF store. Another observer warns the sequencer when,
for example, four exercises have been consecutively answered correctly, and yet
another sends the learner's score to the Proficiency manager together with the
level, difficulty, and the learning subject of the answered exercise. The Proficiency
manager keeps the proficiency scores up to date. Prior to every decision of the
sequencer, the stop criterion is tested based on a proficiency that is retrieved from
the Proficiency manager. If this criterion is reached, the sequencer sequences a
special concluding feedback item announcing the end of the learning session to
the learner.
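The dispatch of logging objects to observers described here follows the classic observer pattern; a minimal sketch, in which the observer classes and the shape of the logging object are hypothetical:

```python
# Minimal observer-pattern sketch of the logging dispatch; class names and
# the shape of the logging object are hypothetical.

class StreakObserver:
    """Warns the sequencer after `streak` consecutive correct answers."""
    def __init__(self, sequencer, streak=4):
        self.sequencer, self.streak, self.correct_run = sequencer, streak, 0

    def notify(self, log_entry):
        self.correct_run = self.correct_run + 1 if log_entry["correct"] else 0
        if self.correct_run == self.streak:
            self.sequencer.raise_difficulty()
            self.correct_run = 0

class LoggingManager:
    """Forwards each received logging object to all registered observers."""
    def __init__(self):
        self.observers = []

    def register(self, observer):
        self.observers.append(observer)

    def process(self, log_entry):
        for obs in self.observers:
            obs.notify(log_entry)
```
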
</p>
          <p>
            5 Related Work
The architecture of the reasoner builds on existing proposals for generic
learning system architectures such as in [20]. These architectures, however, have
mostly been designed with an adaptive hypermedia learning system in mind.
Even though most systems currently developed are based on providing learner
control through adaptive links, e.g., [
            <xref ref-type="bibr" rid="ref16 ref3">3</xref>
            ], our system is specialized in adaptive
curriculum sequencing, meaning that the learning objects are sequenced in an
automated way. The use of ontologies to create adaptive learning systems
has often been proposed in the literature, e.g., in [
            <xref ref-type="bibr" rid="ref21 ref8">23, 17, 8</xref>
            ]. We partially rely
on existing ontologies and data models, and introduced new data models such as
a model for describing learning exercises and language-learning-specific
information. The latter were all designed in collaboration with educationalists. Additionally,
both the delivery platform and the reasoner take into account connection quality
and screen size, either to choose the right video format or to avoid sending
any media to a device if they cannot be delivered in an optimal way. This way,
our system implements a part of the context-awareness which has been claimed
to be crucial in mobile learning [23, 27].
          </p>
          <p>
            The ontology for the learner profile is a compact, non-exhaustive synopsis of
the most common learner characteristics found in the literature [
            <xref ref-type="bibr" rid="ref10 ref23 ref26">21, 13, 10</xref>
            ] which
can be used in steering an adaptive learning system. For the preservation of
the learner's knowledge, we used what is classified as an overlay model in [
            <xref ref-type="bibr" rid="ref26">13</xref>
            ].
Until now, the IEEE Learning Object Metadata standard (LOM) has been considered
the standard for many repositories storing thousands of learning objects with
metadata. There have been attempts to transform the LOM metadata model
into an RDF version (e.g., [18]). However, the model provided by LOM was not
sufficient for our purposes. Hence, we adopted part of the LOM model (by relying on previous
LOM RDF efforts) and extended it according to our own needs.
          </p>
          <p>
            Our realizations in this project largely replace the functionality of the
restrictive SCORM standard [
            <xref ref-type="bibr" rid="ref1 ref14">1</xref>
            ]. SCORM, an abbreviation for Sharable Content Object
Reference Model, is a collection of specifications imposing a format for bundling
Web-based exercises into courses, thereby imposing LOM for the metadata, as
well as a data model for communicating learning scores between server and client.
The standard was updated in 2004, now supporting a limited set of instructions
for adaptive behavior. In practice, however, the imposed syntax for adaptivity
has low expressivity while remaining very complicated [
            <xref ref-type="bibr" rid="ref27">14</xref>
            ]. Although in the past
SCORM had an important impact on the sharing of bundled learning courses on
the Web, and although many have tried to improve the SCORM standard [16, 22, 29],
we think its starting point has become outdated. After all, we believe grouping
learning objects in a container format conflicts with the principle of the Semantic
Web of data, in which objects are scattered over the Web. Additionally, its
extensibility turned out to be low [
            <xref ref-type="bibr" rid="ref19 ref27 ref6">14, 6</xref>
            ], and the data model for exchanging learning
results is limited to the exchange of a single score, thereby not fulfilling our
needs for more advanced reporting of a learner's interactions with the exercises.
Our formalized representation model for recording scores and interactions with
exercises makes it possible to develop truly interoperable exercises that are able
to report learning results in a universal way. Until now, the importance for
adaptive learning systems of having an extensible yet universally understandable
learning result reporting system has largely been ignored.
          </p>
          <p>
            Gang et al. proposed a framework for mobile learning in [
            <xref ref-type="bibr" rid="ref22 ref9">9</xref>
            ] that approaches
the challenges similarly to the way we did here. More specifically, a media delivery
system was developed, as well as an adaptive module for learning item selection.
However, they relied on MPEG-21 technology, while we use the NinSuna
platform, which is based on MPEG-21 principles but has proven to be more efficient
and generic [26]. Further, their learning item selection is not based on educational
properties such as skills or experience, but solely on environmental properties.
          </p>
          <p>6 Conclusions and Future Work
In order to exploit the possibilities of Web-based e-learning environments, we
proposed an e-learning architecture that is able to provide rich, personalized
e-learning experiences to a wide range of devices. We discussed the various data
models used within the e-learning framework. Moreover, we provided details of
the reasoner, a crucial component that selects learning items based on the
usage environment and the learner profile.</p>
          <p>Future work consists of exploiting the possibilities of the Semantic Web even
more by linking learning items to the Linked Open Data cloud. Further, data
models could be optimized and linked to upcoming efforts (e.g., how to represent
the life cycle of a learning item as provenance information on the Web). Also,
more detailed domain models should be investigated. Regarding the reasoner,
future work consists of taking into account more information obtained from the
logging framework, as well as investigating how error-specific feedback could be
generated (e.g., linking frequently occurring errors to answers).</p>
          <p>Acknowledgments
The research activities described in this paper were funded by Ghent
University, the Interdisciplinary Institute for Broadband Technology (IBBT, 50%
co-funded by industrial partners), the Institute for the Promotion of Innovation
by Science and Technology in Flanders (IWT), the Fund for Scientific
Research-Flanders (FWO-Flanders), and the European Union.</p>
          <p>References</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Belluck</surname>
          </string-name>
          , P. To Really Learn,
          <source>Quit Studying and Take a Test. New York Times. January 20th</source>
          ,
          <year>2011</year>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Karpicke</surname>
            ,
            <given-names>J. D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Blunt</surname>
            ,
            <given-names>J. R.</given-names>
          </string-name>
          <string-name>
            <surname>Retrieval</surname>
          </string-name>
          <article-title>Practice Produces More Learning than Elaborative Studying with Concept Mapping</article-title>
          . Science. (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Gilbert</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gale</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Warburton</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wills</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <article-title>Report on Summative E-Assessment Quality (REAQ)</article-title>
          .
          <source>Joint Information Systems Committee</source>
          , Southampton. (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Aldabe</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          , Lopez de Lacalle,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Maritxalar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Martinez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            ,
            <surname>Uria</surname>
          </string-name>
          ,
          <string-name>
            <surname>L.</surname>
          </string-name>
          <article-title>Arikiturri: an Automatic Question Generator Based on Corpora and NLP techniques</article-title>
          ,
          <source>ser. Lecture Notes in computer science</source>
          , vol.
          <volume>4053</volume>
          , pp.
          <fpage>584</fpage>
          -
          <lpage>594</lpage>
          . Springer, Heidelberg (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J. S. Y.</given-names>
          </string-name>
          <article-title>Automatic correction of grammatical errors in non-native English text</article-title>
          .
          <source>PhD dissertation</source>
          at the Massachusetts Institute of Technology. (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Goto</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kojiri</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Watanabe</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Iwata</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Yamada</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <article-title>Automatic Generation System of Multiple-Choice Cloze Questions and its Evaluation</article-title>
          .
          <source>Knowledge Management &amp; E-Learning: An International Journal (KM&amp;EL)</source>
          ,
          <volume>2</volume>
          (
          <issue>3</issue>
          ),
          <fpage>210</fpage>
          . (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Karamanis</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ha</surname>
            ,
            <given-names>L. A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Mitkov</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <article-title>Generating multiple-choice test items from medical text: a pilot study</article-title>
          .
          <source>In Proceedings of the Fourth International Natural Language Generation Conference</source>
          , pp.
          <fpage>111</fpage>
          -
          <lpage>113</lpage>
          . (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>Y.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sung</surname>
            ,
            <given-names>L.C.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>M.C.</given-names>
          </string-name>
          <article-title>An Automatic Multiple-Choice Question Generation Scheme for English Adjective Understanding</article-title>
          . Workshop on Modeling, Management and Generation of Problems/Questions in eLearning,
          <source>the 15th International Conference on Computers in Education (ICCE 2007)</source>
          , pages
          <fpage>137</fpage>
          -
          <lpage>142</lpage>
          . (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Brown</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frishkoff</surname>
            ,
            <given-names>G. A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Eskenazi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>Automatic question generation for vocabulary assessment</article-title>
          .
          <source>In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing</source>
          (pp.
          <fpage>819</fpage>
          -
          <lpage>826</lpage>
          ). (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Sung</surname>
            ,
            <given-names>L.-C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>Y.-C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>M. C.</given-names>
          </string-name>
          <article-title>The Design of Automatic Quiz Generation for Ubiquitous English E-Learning System</article-title>
          .
          <source>Technology Enhanced Learning Conference (TELearn 2007)</source>
          , pp.
          <fpage>161</fpage>
          -
          <lpage>168</lpage>
          , Jhongli, Taiwan. (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Linnebank</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liem</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Bredeweg</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <article-title>Question generation and answering</article-title>
          .
          <source>DynaLearn, EC FP7 STREP project 231526, Deliverable D3.3</source>
          . (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <article-title>SARAC: A Framework for Automatic Item Generation</article-title>
          .
          <source>In 2009 Ninth IEEE International Conference on Advanced Learning Technologies</source>
          (pp.
          <fpage>556</fpage>
          -
          <lpage>558</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          1.
          <source>SCORM 2004 4th Edition version 1.1 overview</source>
          , http://www.adlnet.gov/Technologies/scorm/SCORMSDocuments/2004%204th%20Edition/Overview.aspx
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          2.
          <article-title>Standard for learning object metadata</article-title>
          , http://standards.ieee.org/findstds/standard/1484.12.1-2002.html
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          3. Grapple,
          <article-title>a generic responsive adaptive personalized learning environment</article-title>
          , http://www.grapple-project.org (Jun
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          4.
          <article-title>European language levels - self assessment grid</article-title>
          (
          <year>2011</year>
          ), http://europass.cedefop.europa.eu/LanguageSelfAssessmentGrid/en
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          5.
          <string-name>
            <surname>Baker</surname>
            ,
            <given-names>F.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>S.H.</given-names>
          </string-name>
          (eds.):
          <article-title>Item Response Theory: Parameter Estimation Techniques, Second Edition (Statistics: A Series of Textbooks and Monographs)</article-title>
          . CRC Press, 2nd edn. (July
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          6.
          <string-name>
            <surname>Bohl</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scheuhase</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sengler</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Winand</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>The sharable content object reference model (scorm) - a critical review</article-title>
          .
          <source>In: Computers in Education, 2002. Proceedings. International Conference on</source>
          , pp.
          <fpage>950</fpage>
          -
          <lpage>951</lpage>
          , vol.
          <volume>2</volume>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          7.
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>C.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>H.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>Y.H.</given-names>
          </string-name>
          :
          <article-title>Personalized e-learning system using item response theory</article-title>
          .
          <source>Computers &amp; Education</source>
          <volume>44</volume>
          (
          <issue>3</issue>
          ),
          <fpage>237</fpage>
          -
          <lpage>255</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          8.
          <string-name>
            <surname>Chi</surname>
            ,
            <given-names>Y.L.</given-names>
          </string-name>
          :
          <article-title>Ontology-based curriculum content sequencing system with semantic rules</article-title>
          .
          <source>Expert Syst. Appl</source>
          .
          <volume>36</volume>
          ,
          <fpage>7838</fpage>
          -
          <lpage>7847</lpage>
          (May
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          9.
          <string-name>
            <surname>Gang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zongkai</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Learning Resource Adaptation and Delivery Framework for Mobile Learning</article-title>
          .
          <source>In: Frontiers in Education, 2005. FIE '05. Proceedings 35th Annual Conference (October</source>
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          10.
          <string-name>
            <surname>Jia</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhong</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>The construction and evolution of learner model in adaptive learning system</article-title>
          .
          <source>Computer Technology and Development, International Conference on 1, 148-152</source>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          11.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , Burger, T.,
          <string-name>
            <surname>Sasaki</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Malaisé</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stegmaier</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , Soderberg, J. (eds.):
          <article-title>Ontology for Media Resource 1.0</article-title>
          . W3C Working Draft, World Wide Web Consortium (June
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          12.
          <string-name>
            <surname>Leutner</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Instructional design principles for adaptivity in open learning environments</article-title>
          . Curriculum, Plans, and Processes in Instructional Design: International Perspectives pp.
          <fpage>289</fpage>
          -
          <lpage>307</lpage>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          13.
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Do</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Learner model in adaptive learning</article-title>
          .
          <source>World Academy of Science, Engineering and Technology</source>
          <volume>45</volume>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          14.
          <string-name>
            <surname>Mackenzie</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Scorm 2004 primer, a (mostly) painless introduction to scorm</article-title>
          .
          <source>Tech. rep. (</source>
          <year>2004</year>
          ), http://www.pro-ductivity.com/Compliance21CFR/ CTMW/scormintro.pdf
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          15.
          <string-name>
            <surname>Mannens</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Deursen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Troncy</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pfeiffer</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parker</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lafon</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jansen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hausenblas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van de Walle</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>A URI-Based Approach for Addressing Fragments of Media Resources on the Web</article-title>
          . To appear in Multimedia Tools and Applications - Special Issue on Multimedia Data Semantics
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>