<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>What's Inside the Box? An Open Student Modeling Approach in a Museum Context</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Diego Zapata-Rivera</string-name>
          <aff>Educational Testing Service, Rosedale Road, Princeton, NJ</aff>
          <email>Dzapata@ets.org</email>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2009</year>
      </pub-date>
      <fpage>2</fpage>
      <lpage>3</lpage>
      <abstract>
        <p>Adaptive learning environments and technology-rich assessments capture evidence of students' skills, knowledge, and other attributes and use it to adapt their interaction or to support assessment claims. Data captured to support assessment claims or to implement adaptive behavior can include responses to predefined questions as well as process data. However, students are not always aware of the type of data being captured or how these data are used by such systems. An open student modeling system, implemented as a museum exhibit called “What's Inside the Box?”, has been designed to provide students with information about how a technology-rich assessment system makes use of both response and process data to support assessment claims. In this paper we describe the “What's Inside the Box?” system and report the results of a small-scale study aimed at evaluating the system's usability and perceived value.</p>
      </abstract>
      <kwd-group>
        <kwd>Open student models</kwd>
        <kwd>technology-rich</kwd>
        <kwd>adaptive assessment systems</kwd>
        <kwd>museum exhibits</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>
        Open student models can be used to share response and process
data with students, teachers, or parents/guardians in informal
learning environments such as museums. In fact, student/user
models have been used to generate personalized museum tours,
predict users’ locations, and provide additional information based
on users’ interests, background information, and path history
[27]. Stock et al. [
        <xref ref-type="bibr" rid="ref7">8</xref>
        ] describe a framework for implementing user
modeling applications in museums. This framework includes
animated agents that motivate visitors and provide
recommendations, adaptive video documentaries, and visit
summaries. Visitors' user models can be made available to them
for adjustments, which may result in better recommendations
and an enhanced visitor experience [
        <xref ref-type="bibr" rid="ref5">6</xref>
        ]. Cramer et al. [
        <xref ref-type="bibr" rid="ref6">7</xref>
        ] showed
that user/student model transparency increases user understanding
and acceptance of a system’s recommendations.
      </p>
      <p>
        The open student modeling approach presented in this paper has
been implemented as a museum exhibit (“What’s Inside the
Box?”). Data for the student model were collected using a
technology-rich assessment system (the Technology-Rich
Environment; TRE) [
        <xref ref-type="bibr" rid="ref8">9</xref>
        ]. The “What’s Inside the Box?” system
was designed to show students how their response and process
data are used by the system to make assessment claims.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. OPEN STUDENT MODELING IN MUSEUMS</title>
      <p>
        Informal education contexts such as museums impose particular
measurement challenges that can hinder the creation of, maintenance
of, and interaction with student/user models. These
measurement challenges include [
        <xref ref-type="bibr" rid="ref9">10</xref>
        ]: (a) a high degree of
freedom and flexibility, which makes it difficult to isolate, track,
and measure individual learning; (b) interactions that vary in
duration, type of activity, and number of people involved; and (c)
interactions that may include emergent behavior and unpredictable
exchanges with other visitors and facilitators.
      </p>
      <p>
        Several strategies can be used to address some of these issues.
For example, it is possible to initialize the student model with data
from other visitors who share some of the characteristics of the
intended audience, or to borrow information from existing student
models that were created in other contexts [
        <xref ref-type="bibr" rid="ref5">6</xref>
        ]. Once a
student/user model is available, it can be used to
integrate additional evidence of students’ skills, knowledge, and
other attributes, based on their interactions with the exhibits at the
museum, using a variety of sensors and tracking mechanisms
[28].
      </p>
      <p>To the extent that a student/user model is available, different
types of recommendations can be implemented. Exhibits
can also use information in the student/user model to adapt their
interaction to a particular individual, which can result in an
improved user experience. Explaining to individuals why
particular recommendations are offered, or how exhibits adapt
their interaction, becomes an interesting challenge, since the
adaptation can involve data gathered before or during the
museum visit.</p>
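      <p>As a minimal sketch of this kind of adaptation (the model fields, the recommendation rules, and all names are hypothetical illustrations; the paper does not specify an implementation), an exhibit could consult a visitor's model and keep the evidence behind each decision so that the adaptation can later be explained to the visitor:</p>
```python
# Hypothetical sketch: an exhibit consults a visitor's student/user model
# to adapt its content. Field names and rules are illustrative only.
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    interests: set = field(default_factory=set)        # e.g. {"physics"}
    visited_exhibits: list = field(default_factory=list)
    skill_levels: dict = field(default_factory=dict)   # topic -> level 1..4

def adapt_exhibit(model, exhibit_topic):
    """Return an adaptation decision plus the evidence behind it,
    so the choice can later be explained to the visitor."""
    decision = {"difficulty": "standard", "evidence": []}
    level = model.skill_levels.get(exhibit_topic)
    if level is not None and level >= 3:
        decision["difficulty"] = "advanced"
        decision["evidence"].append(f"prior {exhibit_topic} skill level {level}")
    if exhibit_topic in model.interests:
        decision["evidence"].append(f"declared interest in {exhibit_topic}")
    return decision

model = StudentModel(interests={"physics"}, skill_levels={"physics": 4})
print(adapt_exhibit(model, "physics")["difficulty"])  # advanced
```
      <p>Recording the evidence alongside the decision is what makes the adaptation explainable, in the spirit of the open modeling approach discussed here.</p>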
      <p>
        By keeping track of individual interactions in the museum, it is
possible to gather data about how successful particular exhibits
are at adapting their interaction and keeping individuals engaged.
Information in the student/user model can also potentially be
used to assess student learning at the museum [
        <xref ref-type="bibr" rid="ref9">10</xref>
        ]. Students can
use this information to plan future visits or follow up on particular
topics. Teachers could receive a report on, or explore, how their
students interacted with the exhibits and use this information to
plan instructional and debriefing activities in the classroom.
      </p>
      <p>An interactive museum exhibit featuring an open student model
approach serves as a testbed for exploring some of the challenges
of implementing student/user models in informal environments.
The “What’s Inside the Box?” exhibit was designed to show students how
their response and process data are used by the system to make
assessment claims (levels on student model variables and
supporting evidence).</p>
    </sec>
    <sec id="sec-4">
      <title>3. WHAT’S INSIDE THE BOX?</title>
      <p>
        Recommendations for designing museum exhibits include [
        <xref ref-type="bibr" rid="ref10">11</xref>
        ]:
(a) design them with a specific learning goal in mind; (b) make
them interactive; (c) provide multiple ways of experiencing
concepts, practices, and phenomena; (d) provide support for
participants to interpret their learning experience; (e) build on
participants’ prior learning and interests; and (f) encourage
participants to extend their learning outside the museum
experience. Following these recommendations, we
implemented the “What’s Inside the Box?” system.
      </p>
      <p>
        The contents and data used in the "What's Inside the Box?"
system are based on one of three simulation problems used in the
TRE project; the problem is intended to elicit evidence for two scientific
inquiry skills: Scientific Exploration and Scientific Synthesis [
        <xref ref-type="bibr" rid="ref8">9</xref>
        ].
In the TRE assessment system, students are scored based on both
their responses to particular questions and the process (actions) used to
arrive at the answers.
      </p>
      <p>The “What’s Inside the Box?” system is intended to be used as a
standalone museum exhibit offering the public a view of how
student problem-solving in science can be measured using
computer-based simulation tasks. Students are asked to solve a
scientific problem. Students can witness how the computer (the
"Box") takes into account both their interactions with the
simulation and their responses to particular questions to make
evidence-based claims about what they know and are able to do.
Students take from 5 to 10 minutes to complete the activity.</p>
      <p>Figure 1 shows a screenshot of the system. The screen is divided
into two basic areas: the experiment area on the left side of the
screen, which is used to select payload mass values, set parameters
for a table and a graph that will show the data collected, and run
the simulation; and the “What’s Inside the Box” area on the right,
which dynamically updates its content based on how students
interact with the system. The “Box” starts closed, but students can
click on it to see its contents at any time. The contents of this area
are shown to students once they complete the experiments and
after they answer data interpretation questions. A glossary of
relevant terms is available for students to inspect at any time.
Several hidden sound effects and humorous remarks were added to
encourage students to explore different areas of the screen.</p>
      <p>At the beginning, students view a short introduction describing the
different parts of the system. Students are told that they can click
on the “What’s Inside the Box?” area at any time to see how
the system measures their problem-solving skills as they solve the
problem. To solve the simulation problem (“How do different
payload masses affect the altitude of a helium balloon?”), students
can try up to five experiments. Students can make use of a table and
a graph to record their data. In each experiment, students may
choose a payload mass value (one of nine possible values,
from 10 lbs to 90 lbs in increments of 10 lbs) and
select variables to include as columns in the table and/or as axes
of the graph. After the student selects a payload mass value and
clicks on “try it,” they see an animation of the balloon moving
upward while values for the variables at the bottom of the
balloon area are calculated. The contents of both the table and the
graph are updated after each experiment. If, after running two
experiments, the student has not selected variables for the table or
the graph, a hint is presented (“Here’s a hint. You may want to
make a table and a graph”).</p>
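      <p>The experiment flow just described (up to five runs, nine payload values from 10 to 90 lbs, and a hint after two runs without a table or graph) can be sketched as follows; the altitude function and all names are invented placeholders, not the exhibit's actual code:</p>
```python
# Illustrative sketch of the experiment flow described above.
# The payload values and hint trigger follow the text; the altitude
# function and all names are invented placeholders.
PAYLOAD_VALUES = list(range(10, 100, 10))  # nine values: 10 lb .. 90 lb
MAX_EXPERIMENTS = 5

def simulate_altitude(payload):
    # Placeholder physics: heavier payloads reach lower altitudes.
    return max(0, 1000 - 10 * payload)

def run_session(chosen_payloads, made_table=False, made_graph=False):
    hints, results = [], []
    for i, payload in enumerate(chosen_payloads[:MAX_EXPERIMENTS], start=1):
        assert payload in PAYLOAD_VALUES, "payload must be one of nine values"
        results.append((payload, simulate_altitude(payload)))
        # After two experiments with no table or graph, offer the hint.
        if i == 2 and not (made_table or made_graph):
            hints.append("Here's a hint. You may want to make a table and a graph")
    return results, hints

results, hints = run_session([10, 50, 90])
print(len(results), hints)
```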
      <p>Figures 2 and 3 show student model information in the “What’s
Inside the Box” area. This information includes skill level ranges
for two student model variables, Scientific Exploration and
Scientific Synthesis, as well as the evidence used to support those
ranges. The evidence is represented by levels for relevant
observables (i.e., student actions) that are linked to particular
student model variables, together with their corresponding explanations.</p>
      <p>As mentioned earlier, the system automatically shows the student
model at these two particular moments. However, students are
free to open the "Box" at any time to see the status of the student
model. By showing the contents of the student model (skill level
ranges, observable levels, and explanations) at these two particular
moments, the system provides students with the opportunity to
reflect on how their recent actions are used by the system to assess
their performance so far.</p>
      <p>
        Relevant observables, their levels, and their explanations were
determined through a study with a nationally representative sample
of 2,134 8th-grade students [
        <xref ref-type="bibr" rid="ref8">9</xref>
        ]. In this study, several
process data features were extracted and evaluated for their
correctness using scoring criteria called “evaluation rules.”
Summary scores were created using Bayesian networks.
Based on the student’s actions, skill level information and
corresponding explanations are determined and presented to the
student in the “Box” area. It is worth noting that each skill and
observable has a “Not Enough Info” option. This option was
included because, at some point during the interaction with the
system, there may not be enough student data
for the system to determine an observable level or skill level
range.
      </p>
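      <p>As an illustration of how an evaluation rule might map logged actions to an observable level, including the “Not Enough Info” fallback just described, here is a hypothetical sketch; the thresholds, the “essential” payload values, and the function name are invented, and the actual TRE scoring aggregated many such features with Bayesian networks rather than a single rule:</p>
```python
# Hypothetical evaluation rule: maps logged experiment actions to a level
# for one observable ("range of payload values explored"). Thresholds,
# names, and levels are invented; TRE's real rules and Bayesian-network
# aggregation are not reproduced here.
NOT_ENOUGH_INFO = "Not Enough Info"

def payload_range_level(payloads_tried):
    """Score the breadth of payload values a student explored."""
    if len(payloads_tried) == 0:
        return NOT_ENOUGH_INFO           # no actions logged yet
    spread = max(payloads_tried) - min(payloads_tried)
    essential = {10, 90}                 # hypothetical "essential" endpoints
    if essential.issubset(payloads_tried) and len(payloads_tried) >= 3:
        return 4                         # wide range incl. essential values
    if spread >= 40:
        return 3                         # wide range, essentials missed
    return 2                             # narrow range

print(payload_range_level(set()))        # Not Enough Info
print(payload_range_level({10, 50, 90})) # 4
```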
      <p>This is the information used to explain the meaning of scores to
the students (see Figures 2 and 3). Students can use this
information to improve on their performance during the current
interaction with the system or the next time they visit the exhibit.</p>
      <p>The score scale for each observable ranges from “Not Enough Info”
through levels 1 to 4, and each level comes with an explanation. For
example: “A score of 3 indicates that you did not gather enough
information. Although you ran enough experiments with a wide range of
payload values, you did not choose all of the essential payload values
needed,” and “A score of 4 indicates that you were successful in
gathering enough information to answer the question.”</p>
    </sec>
    <sec id="sec-5">
      <title>4. USABILITY STUDY</title>
      <p>A usability study was carried out to identify major accessibility,
readability, and navigation problems as well as to gather feedback
on the perceived value of this type of tool.</p>
    </sec>
    <sec id="sec-6">
      <title>4.1 Participants</title>
      <p>Participants were 11 students in grades 6-10 (7 female and 4
male). Participants received a $15 gift card for their participation
in the study. All participants were familiar with museum exhibits
and had taken computer-based tests in the past.</p>
    </sec>
    <sec id="sec-7">
      <title>4.2 Procedure</title>
      <p>Students completed a brief background questionnaire about their
experience with museum exhibits and use of computers for
learning and testing. Participants interacted with the system on a
40-inch, touch-screen monitor. Participants were asked to “think
aloud” while interacting with the system.</p>
      <p>The interaction with the system involved the following activities:
going through the initial short introduction describing the different
parts of the system; working on the helium balloon problem (by
choosing payload values, running the simulation, and selecting variables
for the table and graph); exploring the student model on demand
(by clicking on the "Box") or when the system made it available
(after the experiments phase and after answering multiple-choice
questions about the experiments); interacting with the glossary (if
needed); listening to sound effects and humorous remarks when
clicking on some areas of the interface; receiving hints; and
responding to open questions about the experiments.</p>
      <p>A facilitator stayed with the student, took notes, and answered
clarifying questions. One additional observer took notes through a
two-way mirror. At the end of the interaction with the system,
students were given the option to try again.</p>
      <p>Finally, students completed a usability survey about their
experiences with the system. After the students completed the
survey, observers had the opportunity to ask students clarification
questions.</p>
    </sec>
    <sec id="sec-8">
      <title>5. RESULTS</title>
      <p>Participants generally enjoyed the activity and found the system
informative and easy to use. Nine or more students agreed or
strongly agreed with the following statements: “I liked creating
and running the experiments with the balloon,” “The demo was
entertaining,” “The introduction at the beginning helped me
understand what I would be doing,” “The directions on the screen
were easy to understand,” “The vocabulary was easy to
understand,” and “The touch-screen was easy to use.”</p>
      <p>Some features students thought could be improved or were not
useful included the sound effects and voices: “I liked the sounds
and voices in the demo” (5 disagreed or strongly disagreed), and
the glossary: “The glossary of definitions helped me better
understand the demo” (6 disagreed or strongly disagreed).</p>
      <p>Students seemed to understand and find the student model
information useful: “By using this demonstration, I learned about
how a computer-based test measures a student’s skills” (9 agreed
or strongly agreed), “I understand why I received the scores I did”
(10 agreed or strongly agreed), and “I understand how the
computer calculated my scientific skill range levels” (9 agreed or
strongly agreed).</p>
      <p>Students provided some suggestions for improving the system,
including: reducing the length of the introduction, adding hints to
encourage students to open the "Box," adding words to the
glossary, and making hidden sounds easier to find and placing them
in areas relevant to the task.</p>
      <p>Additional observations include the following. Only one student opened the
“Box” before completing the experiments. This student used the
information in the student model to help him choose the variables
for the table and graph. All of the students left the "Box" open after the
experiments. Although students were informed about the
availability of tools such as a table and a graph, they were allowed
to proceed with the experiments without using them. Results
showed that all students created a table: one student created it
before the first experiment, 4 after the first experiment but before the
hint, and 6 after receiving the hint. All students made a graph;
however, most of them made it after they heard the hint.</p>
      <p>When asked “Do you understand the relationship between what
you did in the experiment and how that was reflected in your skill
ranges on the right side of the screen?” most of the students
responded affirmatively. Some of the explanations provided
include: “I understand that every answer I got wrong or right was
recorded and deciphered and matched to form my skill range,”
“Yes, the results were explained clearly, although I’m not sure
that a younger child might know that the independent variable
goes on the x-axis and dependent on the y-axis,” “Yes, I
understand how my scores and skill levels were determined based
on the choices of my experiments and my answers to the final
questions,” “It has a scale from high to low and it shows how you
do and how you could have done, and you see explanations after
you see score ranges,” and “I was measured based upon relevancy
of the topic of the tables and graphs I made and also on the
accuracy of my answers when questions were asked.”</p>
      <p>Three of the students spontaneously decided to try again, citing the
following reasons: “I hope to improve my score,” “I would try
again because I didn’t get the highest score,” and “this is kind of
fun. I would continue until I get a perfect score.”</p>
      <p>Some students' reactions to the information found in the “What’s
Inside the Box?” area include: “OK, that makes me feel better”
(this student received perfect scores), “lots of room for
improvement” (this student tried twice), and “4 is good for the
experimental choices. [Table] Oh, I didn’t choose 2 headings.
[Graph] only one of the two” (this student used the feedback to
do a better job during the second round).</p>
      <p>Finally, although the student model was produced based on data
collected from 8th graders using the TRE system, we decided to
open the usability study to 6th-10th grade students, since the
content of the simulation problem implemented in the exhibit was
accessible to all of them. We did not observe any major
differences in the way these students used the system.</p>
    </sec>
    <sec id="sec-9">
      <title>6. DISCUSSION AND FUTURE WORK</title>
      <p>
        The results of the study provide initial evidence on how students
interact with an open student model museum exhibit based on a
technology-rich assessment system. Students seemed to
understand and value the features included in the “What’s Inside
the Box?” system. We believe that open student models and
embedded assessments have potential to support student learning
and reflection in informal educational contexts [
        <xref ref-type="bibr" rid="ref9">10</xref>
        ].
      </p>
      <p>Explanations provided by students about how their actions were
used to update the skill level ranges and observable levels indicate
that the information in the student model was understood as
intended. We argue that these students may be better prepared to
interact with adaptive learning and assessment systems that make
use of response and process data.</p>
      <p>Results showed the desire of some students to try again and use
the student model information to improve their scores. This is
interesting since museum exhibits compete with other exhibits for
visitors' attention. Also, students may have more opportunities to
practice their science inquiry skills and appreciate the open
student model.</p>
      <p>Even though only one student opened the “Box” before the system
made it available after completing the experiments, all of them
left it open after that. This seems to indicate that once students are
aware of the availability of this information, they want to keep it
on the screen.</p>
      <p>Suggestions provided by students indicate that they would
welcome more hints encouraging use of the student model.
These hints could be implemented as system alerts or as
an artificial agent that accompanies the student
during the interaction with the system and alerts the student to
important changes in the student model.</p>
      <p>
        The approach described here demonstrates how open student
models can be used to create museum exhibits that help students
become aware of how some technology-rich assessment systems
use their response and process data to support assessment claims.
By gathering student model information before students interact
with the exhibit (in this case by building upon the results of the
TRE study), it is possible to create user/student models that can be
used to create adaptive museum exhibits. Open student modeling
applications in this context can provide information about why
particular recommendations are made, what data are used to
support these recommendations, and how museum exhibits adapt
their interaction based on the information in the user/student
model. By keeping this information in a generalized/life-long
student model [
        <xref ref-type="bibr" rid="ref11">12</xref>
        ], the benefits of the museum visit can be
transferred to other contexts such as the classroom, and vice versa.
      </p>
      <p>Teachers can also benefit from understanding how these types of
systems use response and process data. This information can be
useful in understanding how their students solve problems and the
types of responses they provide to particular questions. Engaging
teachers and students as participants can also contribute to the
acceptance and adoption of adaptive, technology-rich learning and
assessment systems in the classroom.</p>
      <p>This work may inform related research in areas such as exploring
approaches for helping people understand computer science ideas
in museum contexts and designing and evaluating interactive
reporting tools and other materials to explain assessment concepts
to various audiences.</p>
      <p>Additional work in this area involves: exploring the types of
graphical representations/guidance mechanisms that should be
used to externalize student model information in museums and
other contexts (e.g., after-school programs and school events such
as school assemblies and science fairs); investigating whether the
benefits of open student models in general, and the feedback provided
by this museum exhibit in particular, transfer to other
technology-rich adaptive assessment systems; and exploring how other
audiences (e.g., teachers and parents) interact with these types of
systems.</p>
    </sec>
    <sec id="sec-10">
      <title>7. ACKNOWLEDGMENTS</title>
      <p>We thank Margaret Vezzu, Margaret Redman, Debbie Pisacreta
and Tom Florek for their work in this project, and Randy Bennett,
Eric Hansen, Irv R. Katz, Lei Liu, Hilary Persky, James Carlson,
Lucia Sanin and two anonymous reviewers for their comments
and suggestions.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Bull</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Kay</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2007</year>
          .
          <article-title>Student models that invite the learner in: the SMILI open learner modelling framework</article-title>
          .
          <source>International Journal of Artificial Intelligence in Education</source>
          ,
          <volume>17</volume>
          (
          <issue>2</issue>
          ),
          <fpage>89</fpage>
          -
          <lpage>120</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Bohnert</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zukerman</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Berkovsky</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baldwin</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Sonenberg</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>Using interest and transition models to predict visitor locations in museums</article-title>
          .
          <source>AI Communications</source>
          ,
          <volume>21</volume>
          (
          <issue>2-3</issue>
          ),
          <fpage>195</fpage>
          -
          <lpage>202</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Bright</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kay</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ler</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ngo</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Niu</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Nuguid</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>Adaptively recommending museum tours</article-title>
          .
          <source>In Proceedings of the Ubicomp Workshop Smart Environments and Their</source>
          Applications to Cultural Heritage.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Lane</surname>
            ,
            <given-names>H.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Noren</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Auerbach</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Birch</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Swartout</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          <year>2011</year>
          .
          <article-title>Intelligent Tutoring Goes to the Museum in the Big City: A Pedagogical Agent for Informal Science Education</article-title>
          . In G. Biswas,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kay</surname>
          </string-name>
          &amp; A.
          <string-name>
            <surname>Mitrovic</surname>
          </string-name>
          (Eds.),
          <source>Artificial Intelligence in Education: 15th International Conference</source>
          (Vol.
          <volume>6738</volume>
          , pp.
          <fpage>155</fpage>
          -
          <lpage>162</lpage>
          ): Springer Berlin / Heidelberg.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Kay</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lum</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Niu</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>A scrutable museum tour guide system</article-title>
          .
          <source>In Proceedings of the 2nd Workshop on Multi-User and Ubiquitous User Interfaces</source>
          (pp.
          <fpage>19</fpage>
          -
          <lpage>20</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Cramer</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Evers</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ramlal</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>van Someren</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rutledge</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stash</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aroyo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Wielinga</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>The effects of transparency on trust in and acceptance of a content-based art recommender</article-title>
          .
          <source>User Model. User-Adapt. Interact</source>
          .
          <volume>18</volume>
          (
          <issue>5</issue>
          ):
          <fpage>455</fpage>
          -
          <lpage>496</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Stock</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zancanaro</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Busetta</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Callaway</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krüger</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kruppa</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kuflik</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Not</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Rocchi</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2007</year>
          .
          <article-title>Adaptive, intelligent presentation of information for the museum visitor in PEACH</article-title>
          .
          <source>User Model. User-Adapt. Interact</source>
          .
          <volume>17</volume>
          (
          <issue>3</issue>
          ),
          <fpage>257</fpage>
          -
          <lpage>304</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Bennett</surname>
            ,
            <given-names>R. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Persky</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weiss</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Jenkins</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <year>2007</year>
          .
          <article-title>Problem solving in technology-rich environments: A report from the NAEP technology-based assessment project</article-title>
          .
          <source>NCES 2007-466</source>
          , U.S. Department of Education, National Center for Education Statistics, U.S. Government Printing Office, Washington, DC.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Zapata-Rivera</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2012</year>
          .
          <article-title>Embedded Assessment of Informal and Afterschool Science Learning</article-title>
          .
          <source>Summit on Assessment of Informal and After-School Science Learning</source>
          . Retrieved from http://www7.nationalacademies.org/bose/1Informal_Ed_ZapataRivera_2012_Paper.pdf
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [11] National Research Council.
          <year>2009</year>
          .
          <source>Learning Science in Informal Environments: People, Places, and Pursuits</source>
          . Committee on Learning Science in Informal Environments. Philip Bell, Bruce Lewenstein,
          <string-name>
            <given-names>Andrew W.</given-names>
            <surname>Shouse</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Michael A.</given-names>
            <surname>Feder</surname>
          </string-name>
          , Editors.
          <source>Board on Science Education</source>
          , Center for Education.
          <source>Division of Behavioral and Social Sciences and Education</source>
          . Washington, DC: The National Academies Press.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Kay</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>Lifelong Learner Modeling for Lifelong Personalized Pervasive Learning</article-title>
          .
          <source>IEEE Transactions on Learning Technologies</source>
          .
          <volume>1</volume>
          (
          <issue>4</issue>
          ):
          <fpage>215</fpage>
          -
          <lpage>227</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>