                     Adaptive Evaluation Based on Competencies

        Beatriz E. Florián G. a,b, Silvia M. Baldiris a and Ramón Fabregat Gesa a

   a Institute of Informatics and Applications (IIiA), University of Girona, Spain
   b Escuela de Ingeniería de Sistemas y Computación, Universidad del Valle, Colombia


          Abstract. Lifelong competence development and the transfer of acquired competencies
          are global trends, particularly in e-learning. An instructional process based on
          competencies generally consists of four sub-processes: competence definition,
          competence development, competence assessment and certification. This paper presents
          a competence-based adaptive assessment process for judging learners' competencies in
          the context of a virtual learning environment. The goal of our research is to build
          repositories of items linked to competence definitions and rule specifications in
          order to generate adaptive evaluations. The assessment process covers different
          evaluation types in the virtual learning environment, linking the repositories with
          the appropriate assessment engine tools. The approach provides more accurate
          estimations of a student's competence level and a stronger relation between
          knowledge, activities, learning resources and types of evaluation tools, thereby
          supporting automatic assessment and learning design generation. The process relies
          on educational standards and specifications and on integral user modeling.

          Keywords. Competencies, Adaptive Evaluation, Competence Assessment Process,
          Assessment Repositories, Virtual Learning Environment



Introduction

Lifelong competence development is a global trend, and e-learning is used with the purpose of
eliminating space and time barriers. In this context, new pedagogical models supported by new
assessment process models are necessary.
     In order to integrate assessment properly within the learning process, some proposals put
forward two main ideas: 1) introduce assessment as another key element of the learning process,
and 2) link each learning objective or competence with one or more kinds of assessment. In this
way, assessment becomes a spiral measurement of the student's learning achievement. Consequently,
assessment turns into a good source of feedback to learners, of recommendations and of adaptations
in the learning environment.
     In this context we have analysed several proposals for new competence assessment process
models and software tools. This paper presents a characterization of these models and tools.
     We propose two different approaches to improve the competence e-assessment process. The first
approach is the generation of an adaptive assessment structure in the learning design based on the
competence element definition. The second
is to introduce new meta-information on the evaluation items in order to hold competence
information within the assessment repositories.
     An Adaptive Evaluation Engine Architecture (AEEA) is also proposed to fully support the
improved competence e-assessment process.
     Both approaches and the AEEA take into account different methods of assessment in order to
monitor the evolution of the student's competence knowledge and to produce adaptive changes in
assessment and learning design. Our goal is to integrate both approaches into the open source
learning management system dotLRN.
     This paper is structured as follows. In Section 1, the context and background of the proposal
are described. In Section 2, an extension of the assessment process based on the competence
evidence definition is proposed. In Section 3, the data model for competence assessment based on
item meta-data is presented. In Section 4, the assessment process model adopted and the AEEA
proposal are presented. In Section 5, we outline some concluding remarks and future work.


1. Context and Background

Competencies are complex processes that people put into play in order to solve problems and carry
out activities (both in everyday life and at the workplace) [1]. Users and their characteristics
are key elements in the development of a competence-based learning process, especially considering
that the evolution of those characteristics over time reflects the expected acquisition of the
users' competencies.
    There are different standards and specifications to support competence definitions; some of
them include elements about the competence development process and the associated actors. Table 1
describes three of the most important approaches.

                              Table 1. Competence definition models

IMS Reusable Definition of Competency or Educational Objectives (RDCEO). Author: IMS Learning
Consortium. A minimalist but extensible description of competencies and educational objectives. It
considers basic elements such as the competence title and description, and it also offers the
possibility of extending the competence information through a general element to which specific
elements of the competence definition can be added. The RDCEO schema can be used in both academic
and business contexts.

HR-XML Competence Definition. Author: HR-XML Consortium. The objective of this project is the
creation of an XML schema that provides trading partners with a standardized and practical means
to exchange information about competencies within a variety of business contexts [2]. In addition
to the general information of the RDCEO specification, this approach explicitly defines two
specific elements in the competence definition: Evidence, used to capture information that
substantiates the existence, sufficiency or level of a competence, and Weight, used to capture
information on the relative importance of the competency in different respects.

Ontology-Based Competency Management: Infrastructures for the Knowledge Intensive Learning
Organization. Author: University of Alcalá. The approaches above focus only on the information in
the competence definition. This approach was created to support competence management, and it
therefore takes into account elements such as the actors in the business process and the job
situations in which the competence should be demonstrated. Its main purpose is to offer a complete
framework to support human-management decisions in the business context.
     In some countries the trend is to build and evaluate higher-education academic curricula
driven by competence definitions. For example, in Colombia the
National Ministry of Education has developed, through the Colombian Institute for the Promotion of
Higher Education (ICFES), a quality-standard reference for higher education that measures the
degree of competence development in students attending the final year of undergraduate programmes.
This standard is called ECAES (its Spanish acronym, Exámenes de Calidad de Educación Superior en
Colombia) [3]. In ECAES, a series of competencies is defined for each academic curriculum within
its specific knowledge context.
     Interoperability, reusability, efficiency and abstract modeling have always been the main
characteristics of e-learning design and e-assessment standards and specifications. In particular,
IMS Question and Test Interoperability (QTI) [4] is an open technical e-learning specification
that supports the interoperability of systems and the reusability of assessment resources. With
QTI, assessment items and tests can be expressed and interchanged. IMS Learning Design (LD) [5] is
a specification for a meta-language that enables the modeling of learning processes and is
designed to express many different pedagogies. The activities to be carried out in a learning
design can be expressed with LD.
     The current need to evaluate competencies in lifelong learning processes has exposed some
shortcomings of these standards and specifications. IMS QTI is just a specification for question
definitions and response processing, and has nothing to do with teaching and learning activities
[6]. Conversely, LD supports teaching-learning processes, but cannot explicitly support assessment
[6].
     In order to support the measurement of competence development within an e-assessment process,
new assessment types are required. Table 2 presents a taxonomy of these new assessment types.
          Table 2. Taxonomy of new assessment types required in a competence e-learning process

Summative Assessment: After a period of work, the learner takes a test; the teacher then marks the
test and assigns a score. The test aims to summarize learning up to that point.

Formative Assessment: An assessment is considered formative when the feedback from learning
activities is used to adapt teaching to the learner's needs or to help students take control of
their own learning.

Portfolio Assessment: Emphasizes and evidences the learning process as an active demonstration of
knowledge. It is used to evaluate learning processes and learning outcomes and to encourage
students' involvement in their assessment and their interaction with other students, teachers,
parents and the larger community.

Self Assessment: Assessment in which students make judgments about their own work, critiquing it
and forming judgments about its strengths and weaknesses.

Peer Assessment: Students assess other students' work, both formatively and summatively.

360 Degree Feedback: Feedback that comes from all around the student; the name refers to the 360
degrees of a circle with the student at the center. Feedback is provided by subordinates, peers
and teachers, and also includes a self assessment and, in some cases, feedback from external
sources.

Specific Competencies Assessment: Specific competencies are directly related to a specific
occupation and are focused on the "know" and "do". Individual competencies are a particular type
of specific competence.

Transversal Competencies Assessment: Transversal competencies affect various fields and are
transferable to a multitude of functions or training programs. They are focused on the "to be".
Collaborative competencies are a special type of transversal competence; they allow a group of
individuals to carry out a job as the result of joint effort and cohesion towards a common goal.
     Some research efforts have produced software tools to support new specific types of
assessment in e-learning, such as [7]-[16]. Table 3 summarizes some
tools that support new types of assessment in e-learning environments. Nevertheless, their data
models are not based on standards, which results in a loss of interoperability and reusability.
     Therefore, in order to support a competence e-assessment process while preserving
interoperability, reusability, efficiency and abstract modeling, new models that extend the
current specifications are required. The first approaches in this direction [17] [18] propose
extensions that provide insight into the gaps between these specifications; a UML model is
proposed to extend and combine the QTI and LD specifications. Later research clarifies the
technical mechanism to do so: for example, it is possible to combine QTI and LD by specifying how
a QTI outcome variable can be coupled to an LD property and by integrating assessment application
tools into LD as services [19]-[24].
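     As an illustration of this coupling mechanism, the following minimal Python sketch shows how a
value computed by QTI response processing could be propagated into an LD property so that LD
conditions can branch on it. The identifiers (SCORE-COMPETENCE-1, learner-competence-1-level) are
hypothetical; the actual bindings in [19]-[24] are declared at the specification level, not in
code.

    # Minimal sketch (hypothetical identifiers): propagate a QTI outcome variable
    # into an IMS LD property so that the learning design can adapt on it.

    # Mapping from QTI outcome identifiers to LD property identifiers; in practice
    # this binding would be declared in the unit of learning, not hard-coded.
    OUTCOME_TO_PROPERTY = {
        "SCORE-COMPETENCE-1": "learner-competence-1-level",
    }

    # LD property store for one learner.
    ld_properties = {"learner-competence-1-level": 0.0}

    def on_qti_response_processed(outcome_id: str, value: float) -> None:
        """Called when the QTI engine finishes response processing for an item or test."""
        prop = OUTCOME_TO_PROPERTY.get(outcome_id)
        if prop is not None:
            ld_properties[prop] = value  # LD conditions can now branch on this value

    # Example: the assessment service reports a test score of 0.72.
    on_qti_response_processed("SCORE-COMPETENCE-1", 0.72)
    assert ld_properties["learner-competence-1-level"] == 0.72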
     The most recent proposal building on the initial idea of extending the current specifications
promotes creating a new layer over QTI and LD, establishing a new specification by building a
high-level assessment process modeling meta-language [3] [25].
     Another kind of proposal has arrived with the LAMS project [26], in which the LD and QTI
specifications are the basis but a totally new specification is being built in order to support
the whole range of possibilities in e-assessment.
     Table 4 summarizes the most important new models for the e-assessment process, focusing on
the use of specifications and on the support of traditional and new types of assessment.
           Table 3. E-assessment tools based on their own data models for new types of assessment

Peers [7]: Peer Assessment
Peer Grader [8]: Peer Assessment
Net Peas [9]: Peer Assessment
eSPARK [10]: Peer Assessment
Espace [11]: Peer Assessment
Turnitin Peer Review [12]: Peer Assessment
SEUV [13]: ECAES [3]
TELOS [14]: Portfolio Assessment, Specific Competencies Assessment
Coala [15]: Specific Competencies Assessment in programming
Middleware connecting the APIS QTI engine and Google Maps [16]: Specific Competencies Assessment
in the handling of maps
                  Table 4. New models for the e-assessment process that extend QTI and LD

OUNL/CITO Assessment Model [17] [18]. Type of model: UML model. New types of assessment validated:
Peer Assessment.

TENCompetence Assessment Model [19]-[24]. Type of model: UML model; data-centric model using XML.
New types of assessment validated: 360 Degree Feedback, Portfolio Assessment and Peer Assessment.

APS [3] [25]. Type of model: high-level assessment-specific process modeling language adopting a
domain-specific modeling approach; aggregation model; conceptual structure model; process
structure model. New types of assessment validated: Peer Assessment.

LAMS Model [26]. Type of model: UML model; database model; data-centric model using XML. New types
of assessment validated: Peer Assessment, Summative Assessment and Formative Assessment.
                     Table 5. E-assessment tools based on the new models for e-assessment

360 degree editor/runtime [19] [21]: 360 Degree Feedback (TENCompetence Assessment Model)
Portfolio assessment tool [19] [21]: Portfolio Assessment (TENCompetence Assessment Model)
LAMS [26]: Peer Assessment, Summative Assessment and Formative Assessment (LAMS Model)

     Table 5 summarizes the new e-assessment tools based on these new models. Tools have been
developed only for the TENCompetence Assessment Model and the LAMS Model. No tools were developed
for the OUNL/CITO assessment model, perhaps because the TENCompetence Assessment Model is a
reduced version of it and research has concentrated on this smaller version.
     According to this analysis of the state of the art in the competence e-assessment process,
there are several open questions in this research area, such as: How can all types of assessment
task be expressed in a standard learning design? What types of assessment are most appropriate for
the educational objectives of a learning experience? How can these types of assessment be
customized to a specific learning context and to the expected benefits of a particular learning
experience? What are the strategies for monitoring, assessment and evaluation? What adaptive
strategies should an e-assessment process provide? In particular, we are interested in the
automatic generation of adaptive assessment structures in a learning design, the support of the
whole e-assessment process, and the support of all kinds of assessment types.
     We propose two different ways to address the problem. The first uses the competence element
definition, specifically the evidence definition, to decide how the assessment structure can be
generated. Our second approach describes a data model for competence assessment based on item
meta-data, which is the input to an adaptive retrieval process. Finally, the AEEA proposal
incorporates the two approaches mentioned above into a competence e-assessment process.


2. Assessment Process Based on Competencies Evidence Definition

Our interest in this first approach was to define a particular model for competence definition
that permits us to specify the elements necessary to generate adaptive learning designs in the
context of learning management systems [27].
     IMS-RDCEO was the specification selected because it offers the possibility of completely
defining the elements necessary for learning design generation, which were identified by analyzing
different curricular design methodologies.
     The competence definition consists of elements such as learning results, essential knowledge,
evidences and competence context. Each of them has a specific identifier in the definition.
     The proposal to extend [27] is that each learning resource has an associated competence
element; in particular, those whose type is assessment hold a reference to the identifier of the
evidence element in the competence definition. This association supports the generation of the
assessment structure in the learning design, as sketched below.
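     The following minimal Python sketch illustrates this association under a deliberately
simplified dictionary representation (it is not the literal RDCEO schema, and all identifiers are
hypothetical): assessment-type resources reference evidence identifiers, and the assessment
structure of the learning design is generated by grouping those resources per evidence.

    # Simplified, illustrative competence definition (not the literal RDCEO schema).
    competence = {
        "id": "comp-01",
        "learning_results": ["lr-01"],
        "essential_knowledge": ["ek-01", "ek-02"],
        "evidences": [
            {"id": "ev-01", "type": "performance", "description": "Design a relational schema"},
            {"id": "ev-02", "type": "knowledge", "description": "Explain normalization"},
        ],
        "context": "Databases course",
    }

    # Learning resources; those of type "assessment" reference an evidence identifier.
    resources = [
        {"id": "res-10", "type": "content", "evidence_ref": None},
        {"id": "res-11", "type": "assessment", "evidence_ref": "ev-01"},
        {"id": "res-12", "type": "assessment", "evidence_ref": "ev-02"},
    ]

    def generate_assessment_structure(competence, resources):
        """Group assessment resources by the evidence they substantiate, yielding
        one assessment activity per evidence element for the learning design."""
        structure = []
        for evidence in competence["evidences"]:
            linked = [r["id"] for r in resources
                      if r["type"] == "assessment" and r["evidence_ref"] == evidence["id"]]
            if linked:
                structure.append({"evidence": evidence["id"], "activities": linked})
        return structure

    print(generate_assessment_structure(competence, resources))
    # [{'evidence': 'ev-01', 'activities': ['res-11']},
    #  {'evidence': 'ev-02', 'activities': ['res-12']}]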
     Figure 1 shows an example of a generated learning design; the performance evidence activities
are the generated assessment structures.
                          Figure 1. Learning Design with Assessment Structure
    This first approach relies on the existence of repositories of different types of evidence and
on the association of this evidence with the competence definition.


3. Data Model for Competencies Assessment based on Item’s Meta-data

In this second approach we begin by proposing a modification of the dotLRN Assessment package,
which is an implementation of the QTI Lite specification.
     The Assessment package offers users the possibility of adding general information about the
items, such as the item description, whether the item is required or not, the feedback for the
student, the associated points and the question type.
     Our interest is to improve the Assessment package in order to support a retrieval process
based on the item meta-data.
     To this end, we propose to associate information about the competence definition with the
item meta-data. Table 6 describes the information we propose to add.
     With this extra information and the existing data, we are testing some vectorial algorithms
to support the assessment construction step, as sketched after Table 6.

                         Table 6. Competence information added to the item meta-data

Competence knowledge. Description: the main content that needs to be addressed in order to be
included in the adapted learning design supporting competence acquisition. Objective: implement a
retrieval process based on the knowledge domain.

Competence context. Description: the environment in which the competence should be demonstrated.
Objective: implement a retrieval process based on the associated business context.
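     As a minimal illustration of such a retrieval step, the following Python sketch ranks items by
a plain bag-of-words cosine similarity between the item meta-data of Table 6 and a target
competence. This is only one of the vectorial algorithms under test, and the item bank shown is
invented for illustration.

    from collections import Counter
    from math import sqrt

    def vectorize(text: str) -> Counter:
        """Bag-of-words vector for a free-text meta-data field."""
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # Illustrative item bank annotated with the Table 6 meta-data.
    items = [
        {"id": "item-1", "knowledge": "relational model normalization", "context": "database design"},
        {"id": "item-2", "knowledge": "sql query joins aggregation", "context": "database administration"},
    ]

    def retrieve(target_knowledge: str, target_context: str, items, k=1):
        """Rank items by similarity of their competence meta-data to the target competence."""
        target = vectorize(target_knowledge) + vectorize(target_context)
        scored = [(cosine(target, vectorize(i["knowledge"]) + vectorize(i["context"])), i["id"])
                  for i in items]
        return sorted(scored, reverse=True)[:k]

    print(retrieve("normalization of relational schemas", "database design", items))
    # [(score, 'item-1')], i.e. item-1 is the closest match for this competence.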



4. Adaptive Evaluation Engine Architecture (AEEA)

The proposed adaptive assessment process is based on the e-assessment process model proposed in
[17] and [29], which defines six steps. We group the steps into two main stages: design time and
run time. Design time involves the first three process steps and run time involves the last three.
We also propose that the adaptive decision can affect not only the feedback of the first step but
also the feedback of the fourth step. Figure 2 shows the e-assessment process model adopted.

     Design Time Process Steps:
    1. Competencies Assessment Plan Design: selecting the sequence of assessment types
       appropriate for eliciting the student's competencies, and constructing and defining
       the decision rules and assessment policies for adaptation.
    2. Items Construction: preparing evaluation items in different assessment authoring
       software tools.
    3. Tests Construction: building units of assessment for each type of assessment proposed
       in the assessment plan. Each unit must ensure the type and value of the response
       expected in the plan.

    Run Time Steps:
    4. Assessment Execution: displaying the tests according to the assessment plan and
       managing the students' answers.
    5. Qualification, Classification and Response: calculating the rubric score of the tests
       and the indicator score of the competence assessment for each student.
    6. Adaptive Decision Making: following the assessment plan rules to make adaptive changes
       for each student. In some cases the adaptations affect the execution of the next
       tests; in other cases they imply updates of the assessment plan (see the sketch after
       this list).
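     The following Python sketch illustrates step 6 under the assumption that the assessment plan
rules can be expressed as threshold conditions over the competence indicator score computed in
step 5; the rules and thresholds below are illustrative, not taken from an actual plan.

    # Illustrative adaptive-decision step: the indicator score of a competence
    # (computed in step 5) is checked against the assessment plan rules.
    PLAN_RULES = [
        # (condition over the indicator score, adaptation to apply)
        (lambda s: s < 0.5, "insert a remedial formative assessment before the next test"),
        (lambda s: s >= 0.8, "skip the next summative test and advance to peer assessment"),
    ]

    def adaptive_decision(indicator_score: float) -> list:
        """Return the adaptations triggered by the plan rules for one student."""
        return [action for condition, action in PLAN_RULES if condition(indicator_score)]

    print(adaptive_decision(0.42))  # ['insert a remedial formative assessment before the next test']
    print(adaptive_decision(0.91))  # ['skip the next summative test and advance to peer assessment']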

     In our previous work [28] a first version of the AEEA was proposed. In accordance with the
new e-assessment process model adopted, a second version has been produced. The AEEA is composed
of two packages: the Author Assessment Package and the Monitoring Assessment Package. Figure 3
shows the new AEEA proposal.




              Figure 2. E-assessment process model adopted. Based on [15] and [17]
                      Figure 3. Adaptive Evaluation Engine Architecture


     The Author Assessment Package supports the first three steps of the e-assessment process at
design time. The Monitoring Assessment Package supports the last three steps at run time.
     In the first step, Competencies Assessment Plan Design, an LD editor software tool is used to
configure the LD assessment plan, where QTI outcome variables can be coupled to LD properties. The
result is the Competencies Assessment Plan, supported by the LD specification and by XML meta-data
with competence information. Additionally, the Competencies Data Model and the Student's User
Model are also designed in this step.
     In the second and third steps, Items Construction and Tests Construction, items and tests are
designed using assessment editor software tools and communication with external test/item
repositories. The complete result is the Competencies Assessment Data Model, which is composed of
four elements: a Teacher's Formative Assessment Model, a Summative Assessment Model, a Self
Assessment Model and a Peer Assessment Model. This data model is based on specifications such as
QTI and uses XML meta-data to keep the relation between competencies and assessment items, as in
the sketch below.
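     As an illustration, the following sketch shows the kind of record that could relate a QTI item
to a competence and an evidence element in this data model; the field names are ours and
hypothetical, not part of QTI or of the dotLRN schema.

    from dataclasses import dataclass

    # Illustrative record of the meta-data kept next to each QTI item in the
    # Competencies Assessment Data Model (field names are hypothetical).
    @dataclass
    class ItemCompetenceLink:
        item_id: str          # identifier of the QTI item in the repository
        competence_id: str    # competence the item gives evidence for
        evidence_id: str      # evidence element within the competence definition
        assessment_type: str  # "formative" | "summative" | "self" | "peer"

    bank = [
        ItemCompetenceLink("item-1", "comp-01", "ev-01", "summative"),
        ItemCompetenceLink("item-2", "comp-01", "ev-02", "peer"),
    ]

    # Items available for the peer-assessment unit of competence comp-01:
    peer_items = [link.item_id for link in bank
                  if link.competence_id == "comp-01" and link.assessment_type == "peer"]
    print(peer_items)  # ['item-2']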
     The Monitoring Assessment Package provides assessment software tools as services to LD for
monitoring the user's assessment tasks and updating the Student's User Model; it executes adaptive
transformations according to the LD assessment plan and delivers recommendations. In order to
produce adaptive transformations, the Competencies Assessment Plan rules are checked and the
Student's User Model is modified.
     The AEEA has been conceived to support new types of assessment, in particular summative
assessment, self assessment, teacher's formative assessment and peer assessment. Most importantly,
assessment objectives are integrated with the other key elements of the learning design through
the Competencies Assessment Plan and the monitoring process, which delivers feedback to learners
in all assessment tasks.
5. Conclusions

Assessment plays a significant role in the competence development process, and consequently there
is a clear need to run interoperable and adaptive assessment tests in e-learning systems.
     In this paper we have looked at the problems associated with adaptive e-assessment systems.
Through an analysis of QTI and LD, we found that a combination and extension of both, together
with a service-oriented approach, can meet the technical requirements for supporting new forms of
e-assessment.
     We have proposed two different approaches to supporting the competence e-assessment process:
first, the generation of an adaptive assessment structure in the learning design based on the
competence element definition; second, new meta-data on the evaluation items to maintain
competence information. The AEEA proposal is based on new models for the e-assessment process that
extend the LD and QTI specifications. The proposed AEEA can give direction to the use of the LD
and QTI specifications to align teaching, learning and assessment. This educational model has been
constructed to match the new approach to assessment and can be used to describe new assessment
types. Our approach has advantages in supporting interoperability, flexibility and seamless
integration with learning activities.
     Our current work focuses on the first part of the AEEA implementation, in particular the
development of the assessment editor software tools, the preparation of evaluation items in
different repositories and the testing of some vectorial algorithms.
     As future work, we plan the implementation of the assessment software tools as dotLRN
services and the testing of the architecture at design time and run time.


Acknowledgments

The authors would like to thank the LASPAU Program (Academic and Professional Programs for the
Americas, scholarship No. 20080847) and the Alban Program, the European Union Programme of High
Level Scholarships for Latin America (scholarship No. E06D103680CO). Thanks also go to Universidad
del Valle, Colombia, for its support, and to the Spanish Ministry of Science and Education for the
financial support of the A2UN@ project (TIN2008-06862-C04-02/TSI).


References

[1] S. Tobón. Formación basada en competencias, ECOE Ediciones, Colombia, 2005
[2] C. Allen, Competencies 1.1 (measurable characteristics). HR-XML Recommendation. 2003.
[3] Antecedentes y Marco Legal ECAES, retrieved February 2009,
      http://200.26.128.174/web/index.php?option=com_docman&task=doc_view&gid=693&Itemid=59
[4] IMS QTI, retrieved April 2009, http://www.imsglobal.org/question/
[5] IMS LD, retrieved April 2009, http://www.imsglobal.org/learningdesign/
[6] Y. Miao, P. B. Sloep, R. Koper, Modeling Units of Assessment for Sharing Assessment Process
      Information: towards an Assessment Process Specification, Advances in Web Based Learning -
      Proceedings of the 7th International Conference on Web-based Learning (ICWL 2008). Jinhua, China,
      2008, 132-144
[7] A.H.H. Ngu, J. Shepherd, Engineering the ‘Peers’ system: the development of a computer-assisted
      approach to peer assessment. Research and Development in Higher Education 18 (1995), 582-587.
[8] E. F. Gehringer. Electronic peer review and peer grading in computer-science courses. Proceedings of
     the 32nd SIGCSE Technical Symposium on Computer Science Education, Charlotte, North Carolina.
     (2001)
[9] Z. Liu, S. Lin, S. Yuan. (2001) Experiencing NetPeas: Another way of learning, Lecture notes in
     computer science, Vol. 2198, 584-588
[10] J. Lockyer, Multisource feedback in the assessment of physician competencies. Journal Contin Educ.
     Health Prof., 23(1): 4-12 (2003)
[11] M. D. Volder, M. Rutjens, A. Slootmaker, H. Kurvers, M. Bitter, R. Kappe, H. Roossink, J. Goeijen, H.
     Reitzema. Espace: A new web-tool for peer assessment with in-built feedback quality system, in
     Proceedings of ABR &TLC Conference, Hawaii, USA, 2007.
[12] Turnitin, retrieved April 2009, http://turnitin.com/static/peerreview.html
[13] Bustos, J.J., Uribe F., “Aplicación Web para realizar exámenes tipo ECAES de las asignaturas y planes
     de pregrado de la Universidad del Valle”. Work of degree, Universidad del Valle, Cali, Colombia, Jun.
     2007.
[14] G. Paquette, “An Ontology and a Software Framework for Competency Modeling and Management.”
     Educational Technology & Society (IFETS), 10 (2), 1-21, 2007
[15] F. Jurado, O. C. Santos, M. A. Redondo, J. G. Boticario, M. Ortega, Providing Dynamic Instructional
     Adaptation in Programming Learning, Lecture Notes in Artificial Intelligence 5271, pp. 329-336,
     Berlin, Germany: Springer, 2008.
[16] Bouzo, J., Batlle, H., & Blat, J. (2007). Enhancing IMS QTI assessment with web maps. Paper
     presented at the International workshop on Current research on IMS Learning Design and Lifelong
     Competence Development Infrastructures: The 3rd TENCompetence workshop, Barcelona, Spain.
[17] H. Hermans, J. Burgers, I. Latour, D. Joosten-ten Brinke, B. Giesbers, J. Van Bruggen, R. Koper,
     Educational model for Assessment. Retrieved Feb 2009 from: http://dspace.ou.nl/handle/1820/559
[18] D. Joosten-ten Brinke, J. Van Bruggen, H. Hermans, J. Burgers, B. Giesbers, R. Koper, I. Latour,
     Modeling Assessment for Re-use of Traditional and New Types of Assessment, Computers in Human
     Behavior 23 (2007), 2721-2741.
[19] M. Petrov, A. Aleksieva-Petrova, K. Stefanov, J. Schoonenboom, Y. Miao, TENCompetence
     Assessment Model and Related Tools for Non Traditional Methods of Assessment. In H. W. Sligte & R.
     Koper (Eds). Proceedings of the 4th TENCompetence Open Workshop. Empowering Learners for
     Lifelong Competence Development: pedagogical, organisational and technological issues. April, 10-11,
     2008, Madrid, Spain: SCO-Kohnstamm Instituut, Amsterdam, The Netherlands. 2008. 91-96
[20] M. Petrov, A. Aleksieva-Petrova, K. Stefanov, J. Schoonenboom, Y. Miao, Evaluation of
     TENCompetence proof of concept assessment tools. In H. W. Sligte & R. Koper (Eds.). Proceedings of
     the 4th TENCompetence Open Workshop. Empowering Learners for Lifelong Competence
     Development: pedagogical, organisational and technological issues. April, 10-11, 2008, Madrid, Spain:
     SCO-Kohnstamm Instituut, Amsterdam, The Netherlands, 2008, 97-101
[21] M. Petrov, A. Aleksieva-Petrova, Developing Software Tools for Nontraditional Methods of
     Assessment, International Scientific Conference Computer Science, Vol. 2 (2008), 490-495
[22] Miao, Y., & Koper, R. An Efficient and Flexible Technical Approach to Develop and Deliver Online
     Peer Assessment. Paper presented at the CSCL. (2007)
[23] Y. Miao, C. Tattersall, J. Schoonenboom, K. Stefanov, A. Aleksieva-Petrova. Using Open Technical e-
     Learning Standards and Service Orientation to Support New Forms of e-assessment. Paper presented at
     the International Workshop on Service Oriented Approaches and Lifelong Competence Development
     Infrastructures: The 2nd TENCompetence workshop Manchester. UK. (2007)
[24] Y. Miao, H. Vogten, H. Martens, R. Koper, The Complementary Roles of IMS LD and IMS QTI in
     Supporting Effective Web-based Formative Assessment. Paper presented at the Computers and
     Advanced Technology in Education 2007, Beijing, China, (2007).
[25] Y. Miao, T. Sodhi, F. Brouns, P. B. Sloep, R. Koper, Bridging the Gap between Practitioners and E-
     learning Standards: A Domain-Specific Modeling Approach. In P. Dillenbourg & M. Specht (Eds.),
     Times of Convergence. Technologies Across Learning Contexts - Proceedings of the Third European
     Conference on Technology Enhanced Learning, EC-TEL 2008. September, 16-19, 2008, Maastricht,
     The Netherlands, (2008), 284-289
[26] LAMS, retrieved April 2009, http://www.lamsinternational.com/
[27] S. Baldiris, O. C. Santos, R. Fabregat, J. G. Boticario, Definición de Competencias basada en IMS –
     RDCEO para apoyar Procesos de Aprendizaje Adaptativos, ACOFI 2008. XXVIII Reunión Nacional.
     Cartagena de Indias (Colombia). September 2008
[28] B. Florián, S. M. Baldiris, R. Fabregat, Adaptive Integral Assessment Package for the A2UN@ Project,
     Proceedings of EAEEIE 2009, June 22-24, 2009, Valencia, Spain.