<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Adapting Smart Graphics' Behaviour to Users' Characteristics</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Christophe Piombo</string-name>
          <email>christophe.piombo@enseeiht.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Romulus Grigoras</string-name>
          <email>romulus.grigoras@enseeiht.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vincent Charvillat</string-name>
          <email>vincent.charvillat@enseeiht.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>IRIT - University of Toulouse</institution>
          ,
          <addr-line>2 rue Charles Camichel, 31071 Toulouse Cedex 7</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Many existing web-based systems aim at making interfaces more user-friendly. Web content designers commonly use graphical components to illustrate concepts or to present numerical data. Dynamically adapting these components to the context in which they are used has led to the development of smart graphics. Some common context features, such as platform and network capabilities, are typically considered. Few systems take users' characteristics into account in order to provide more interactivity and flexibility. The objective of our work is to investigate this latter issue. We are currently developing a user model based on several characteristics, including preferences and motivation factors. To structure the user model data and support knowledge retrieval, we propose an ontology-based smart graphics framework. The methodology includes validating this model through an experimental study and developing an adaptive hypermedia e-commerce system that automatically learns users' characteristics and adapts graphical content accordingly. This paper presents an overview of the objectives and the methodology of this work.</p>
      </abstract>
      <kwd-group>
        <kwd>adaptation</kwd>
        <kwd>user model</kwd>
        <kwd>ontology</kwd>
        <kwd>framework</kwd>
        <kwd>smart graphics</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Web designers have used rich graphical components for such purposes as illustrating concepts in a web site,
visually depicting numerical data, or making interfaces more user-friendly. However, the graphics themselves
were static, which has limited their usefulness. A convergence of computer graphics and artificial intelligence
technologies is leading to the development of smart graphics [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which recognize basic user-environment characteristics,
such as platform and network capabilities, and adapt themselves accordingly.
      </p>
      <p>
        Today, the smart graphics community, enriched by researchers and practitioners from the fields of cognitive
sciences, graphic design and user interfaces, has raised a new challenge: framing its investigations in a
human-centred way, presenting content that engages the user, effectively supports human cognition [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and is
aesthetically satisfying [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The ultimate objective is to demonstrate the utility of adapting graphical objects' behaviours
and visual display to individual users. For example, the authors of [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] discuss the usefulness of considering sequence and timing to improve the effectiveness of ad banners
on a commercial web site. Their results show that varying a banner's format and its display within a session
has an impact on the level of users' interest and on session duration.
      </p>
      <p>
        The advent of the Internet has improved delivery and management issues. Considering the evolution of the
web technology, powerful CPUs and graphics accelerators, as well as abundant memory, it becomes possible to
envisage adaptive hypermedia systems that allow web content designers to develop graphical components that
can be personalised to users’ profiles. User adaptive systems have been largely studied by the user modelling
community in the field of adaptive hypermedia [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and traditional [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] web sites. Some research has
considered the problem of adapting Web 3D content and presentation [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] in virtual environment context [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] to
different web application areas [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], such as education and training [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], e-commerce [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], architecture and
tourism, virtual communities and virtual museum [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Today, smart-graphics-based web systems inherit the user-model
representation techniques used in 2D web sites and 3D worlds [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], improving the organization and presentation of content for the end-user.
Implementing smart graphics therefore facilitates users' understanding and assimilation.
      </p>
      <p>
        Such smart components inherit agent and smart-object architectures, which are composed of many
parts, such as an action model [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] or a domain model [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. A standardisation effort has been started to develop marketable
and interoperable smart graphics systems [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>This paper is composed of two parts: the first presents an overview of the different use cases of
smart graphics; the second describes the objectives and the methodology of our approach.</p>
    </sec>
    <sec id="sec-2">
      <title>Using Smart Graphics</title>
      <p>
        Smart graphics are used in different domains but share the same objective: to offer the end-user the best way to
accomplish a task with a given tool (Fig. 1). In data-intensive decision-making processes, end-users have to make an
effort to craft a meaningful visualization. These users are usually domain experts with marginal knowledge of
visualization techniques. When exploring data, they typically know what questions they want to ask, but often do
not know how to express these questions in a form that is suitable for a given analysis tool, such as specifying a
desired graph type for a given dataset, or assigning the proper data fields to certain visual parameters. In [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ],
the authors propose a semi-automated visual analytics model, Articulate. This smart-graphics-based system is
guided by a conversational user interface that allows users to verbally describe, and then manipulate, what they want
to see. Natural language processing and machine learning methods are used to translate the imprecise sentences
into explicit expressions. A heuristic graph-generation algorithm is then used to create a suitable visualization.
      </p>
      <p>
        In other applications, such as tutoring or e-commerce, smart graphics aim to increase user satisfaction and to build
customer loyalty by addressing the interests and preferences of each individual user. The literature describes
systems with different levels of adaptation. Customisable systems offer basic forms of personalization: users
are limited to setting user-interface parameters and a few other preferences, such as platform and network
capabilities. This type of adaptation requires explicit choices from the user, which together constitute a user
profile or model. These choices are stored within the system and used to adapt its environment. This technique assumes
that all adaptable aspects are understandable to the user, who can clearly identify his/her preferences, and that all
preferences can be derived from a questionnaire [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Obviously, this approach cannot cope with complex user
models, nor with systems in which behaviours must be embedded within each component distributed over the web.
      </p>
      <p>
        Consequently, a new generation of adaptive systems, based on the use of smart components, is being
developed. These systems can adapt the behaviour of each component to each individual user's
needs by analysing logs or by monitoring user interactions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ][
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. 3D content is increasingly employed in these
systems, which the authors of [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] divide into two broad categories:
- sites that display interactive 3D models of objects embedded into web pages, such as e-commerce sites
allowing customers to examine 3D models of products;
- sites that are mainly based on a 3D virtual environment displayed inside the web browser, such
as tourism sites allowing users to navigate inside a 3D virtual city.
      </p>
      <p>
        They use essentially two adaptation techniques: adaptive navigation support and adaptive presentation [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
Systems that support adaptive navigation structure their content to guide the user towards the 3D objects
that are most suitable; the system grabs the user's attention by visually highlighting those 3D objects. Two
techniques inherited from adaptive hypermedia systems are used to implement adaptive navigation: adaptive
annotation and curriculum sequencing. The first changes the order or availability of objects inside a
3D scene, whereas the second decides which object (or which details of an object) to display next,
depending on prerequisites and achievements. For example, in the Educational Virtual Environment proposed by
[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], the student is assessed against learning objectives that evaluate his/her knowledge of an X3D
language feature. If the student fails the test, he/she is not allowed to browse 3D objects with more complicated
features. The results of such assessments are also used to update the student's profile. Most of these approaches
focus exclusively on the student's level of knowledge; they do not consider other factors, especially
cognitive ones, that differentiate learners. Systems that support adaptive presentation often offer a choice between
different media when presenting material (such as text and audio); for 3D objects,
adaptive presentation consists of removing or adding visual details and behaviours of an object.
      </p>
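<p>The curriculum-sequencing idea described above can be illustrated with a short sketch: the next 3D object to display is the first one whose prerequisites the user has already mastered. This is a minimal, hypothetical illustration; the object identifiers and data structure are assumptions, not taken from any cited system:</p>
<p>
```python
# Hypothetical sketch of curriculum sequencing for a 3D scene: each object is
# gated by prerequisites checked against the learner's passed assessments.

def next_object(objects, passed):
    """Return the id of the first unseen object whose prerequisites are all passed."""
    for obj in objects:
        if obj["id"] not in passed and all(p in passed for p in obj["requires"]):
            return obj["id"]
    return None  # nothing left to unlock

# illustrative scene: each entry names the objects it depends on
scene = [
    {"id": "cube_basic",    "requires": []},
    {"id": "cube_textured", "requires": ["cube_basic"]},
    {"id": "cube_animated", "requires": ["cube_basic", "cube_textured"]},
]

print(next_object(scene, passed={"cube_basic"}))  # cube_textured
```
</p>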
      <p>Most of these techniques are limited when applied to advanced smart-graphics-enabled systems. The
human-centred adaptation process is complex and requires taking into consideration various individual parameters that
go beyond the assessment of the user's achievements and simple user preferences.</p>
    </sec>
    <sec id="sec-3">
      <title>The Proposed Approach</title>
      <p>
        We address the problem of adapting smart graphics behaviours and visual display to the users’ profile.
Estimating user characteristics is essential for systems that require adaptation. For example, in adaptive tutoring
systems, the learning style influences the learning behaviour [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] and in e-commerce the style of buying
influences the buying behaviour [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Therefore, we define a user's profile as the way an individual tackles a
contextual task with a specific tool. This profile depends on various factors including cognition, preferences,
motivations, interests, skills and social aspects. Three main aspects are considered in this work: modelling the
user's profile using an ontology representation (see 3.1), developing a smart graphics framework that automatically
assesses and uses such a profile (see 3.2), and contributing to the standardisation effort started within the
smart graphics community by proposing a smart graphics ontology to improve interoperability (see 3.3).
      </p>
      <sec id="sec-3-1">
        <title>Users’ Profile Ontology</title>
        <p>The Semantic Web has made available the tools needed to handle computer-understandable semantics. These
tools, generally evolving from XML, are used to enrich the description of web pages, giving a deeper
understanding of the relations between concepts. OWL (the Web Ontology Language) and RDF (the Resource
Description Framework) are among the most widely used representations. Various definitions and models have
been proposed for users' profiles.</p>
        <p>
          The Digital Item Adaptation part of the MPEG-21 Multimedia Framework provides a rich set of standardized
tools, such as the Usage Environment Description Tools, to depict user characteristics. Usually, however, the user's
profile mainly describes preferences about the various properties of the usage environment, originating from the
user, to accommodate transmission, storage and consumption. For example, in [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ], the authors consider that the user-characteristics
parameters represent the user's quality preferences on the graphics components of geometry, material
and animation, as well as a 3D-to-2D conversion preference.
        </p>
        <p>
          Recently, some researchers have started using ontology formalism to investigate how user preferences,
interests, disinterests and personal information could be stored into a semantic user profile [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. They argue that
techniques like RDF and OWL, together with ontologies, are the key elements in the development of the next
generation of user profiles. In this approach, the user profile is divided into domain sub-models and
conditional sub-models, each containing particular information about the user's behaviour or the context in which a set
of preferences should be applied. These models are named User Profile Ontology with Situation-Dependent
Preferences Support (UPOS).
        </p>
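<p>The sub-model resolution behind such situation-dependent profiles can be sketched in a few lines: conditional sub-models override the default preferences whenever their context condition holds. This is only an illustration of the principle; the dictionary structure and the keys are assumptions, not part of UPOS itself:</p>
<p>
```python
# Minimal sketch of situation-dependent preference resolution: conditional
# sub-models apply only when every key in their "when" clause matches the
# current context. All keys and values here are illustrative.

profile = {
    "default": {"presentation": "3d_object", "detail": "high"},
    "conditional": [
        {"when": {"platform": "smartphone"}, "prefs": {"detail": "low"}},
        {"when": {"activity": "training"},   "prefs": {"presentation": "3d_object"}},
        {"when": {"activity": "trading"},    "prefs": {"presentation": "2d_image"}},
    ],
}

def resolve(profile, context):
    """Start from the default preferences and apply every matching sub-model."""
    prefs = dict(profile["default"])
    for sub in profile["conditional"]:
        if all(context.get(k) == v for k, v in sub["when"].items()):
            prefs.update(sub["prefs"])
    return prefs

print(resolve(profile, {"platform": "smartphone", "activity": "trading"}))
# {'presentation': '2d_image', 'detail': 'low'}
```
</p>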
        <p>Our objective is to develop a user-profile ontology based on UPOS which integrates various individual
characteristics, such as perception, thinking style, social aspects and motivation factors, associated with a context
(e.g. platform, activity). Using context-aware semantic reasoning, we will be able to adapt certain features of
the smart graphics. For example, when a user looks at a camera within a training activity on a laptop, versus within a
trading activity on a smartphone, the smart graphic does not offer the same features and functionalities:
in the first case, the user wants to learn to manipulate the device; in the second, the user wants to
know the price and which zoom lenses are compatible.</p>
        <p>The objective of this phase is to propose a general user ontology for web sites using smart graphics that can
dynamically author material depending on the user's characteristics (e.g. thinking style, preferences) and on
context features such as the web site's domain and activities (e.g. training, simulation, trading) or material
capabilities (e.g. platform, network). This will lead to the creation of a semantic description of a user
environment model.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Smart Graphics Framework</title>
        <p>We will design a component architecture based on the concept of a smart component that can adapt its behaviour
to individual users. Smart components are often represented as being able to interact with their environment
through sensors and actuators (Fig. 2). Sensors produce perceptions that update the smart component's beliefs,
in compliance with its environment model. The smart component can reason about its beliefs and plan the optimal
action sequence to achieve a given goal. Based on its actions model, the smart component adapts the action
sequence to play.</p>
        <p>[Fig. 2: architecture of a smart component, composed of an environment model, an actions model, a decision
engine, sensors, context perception, an optimal action sequence and an adaptation engine.]</p>
        <sec id="sec-3-2-6">
          <title>Adaptation Engine</title>
          <p>
            The main advantage of this approach is that all the information needed to interact with the component is
located at the component level and not at the application level [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ]. We argue that this solution can be used to
design the architecture of web sites using smart graphics, facilitating the reuse of components and thereby addressing
marketability. In addition, we believe that defining a framework is needed to facilitate software
development by allowing designers and programmers to devote their time to meeting software requirements
rather than dealing with the standard low-level details of providing a working system, thereby reducing
overall development time.
          </p>
          <p>
            In [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ], the authors propose an enhancement of the MVC architecture for smart graphics. This approach enables
interactive systems to use different views of the same model at the same time and to keep them synchronously
updated. The visual display evolves from a simple presentation into an intelligent visualization that evaluates the data
and presents only the results relevant to the user. Today, 3D objects are often used as the visual display of a smart
component. 3D computer graphics description languages (e.g. X3D) are used to describe their characteristics (e.g.
shape, position, orientation, appearance). Encoding X3D content using an XML-based syntax makes it
possible to transform it into smart graphics more suitable for visualization using XSL transformations [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ].
          </p>
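<p>XSLT is one way to perform such transformations; the same idea can be sketched with a simple tree rewrite. The fragment below, a hypothetical illustration rather than a real X3D document, strips high-detail shapes from a scene to produce a lighter variant for constrained devices (node names and the DEF labels are assumptions):</p>
<p>
```python
# Sketch of adapting an X3D-like scene by removing high-detail nodes.
# The scene is built programmatically; names and labels are illustrative.
import xml.etree.ElementTree as ET

scene = ET.Element("Scene")
ET.SubElement(scene, "Shape", DEF="body")          # coarse geometry, always kept
ET.SubElement(scene, "Shape", DEF="fine_detail")   # expensive geometry, droppable

def strip_detail(scene, drop=("fine_detail",)):
    """Remove every child Shape whose DEF label is in the drop list."""
    for shape in list(scene):
        if shape.get("DEF") in drop:
            scene.remove(shape)
    return scene

strip_detail(scene)
print([s.get("DEF") for s in scene.iter("Shape")])  # ['body']
```
</p>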
          <p>
            A smart visualization framework, called IMPROVISE, has been proposed to tailor a system's visual responses to
the user's interaction context [
            <xref ref-type="bibr" rid="ref21">21</xref>
            ]. The system catches a user request and dynamically decides on the proper response
content. Using example-based visualization sketch design, the proper visual metaphor for the given content is
chosen. An adaptation layer then transforms the display using constraints associated with a context model (user,
environment, etc.).
          </p>
          <p>
            These approaches lack the high-level semantic description needed to enable smart graphics to interact with their
environment, which prevents the interoperability required by smart web-based systems to share and reuse
smart components. Some authors [
            <xref ref-type="bibr" rid="ref22">22</xref>
            ] propose using semantic web technology to create a formal specification of
smart components, improving their perception of, understanding of and interaction with their environment.
          </p>
          <p>Fig. 3 presents our ontology-based smart graphics framework. The main idea of the framework is to use
semantic web technology to semantically enrich the pure geometric data with information about how to interact
with the smart graphic, based on knowledge of the user environment model. We propose to consider a smart
graphics component as an agent related to its virtual representation: an avatar. Two parts will therefore be designed: a
smart graphics core, which encompasses the core functionality provided by an agent, and a smart graphics avatar,
which is its virtual representation, defining a visual display and behaviours. The interface of the smart graphics to
the environment is realized by sensors and actuators. Sensors provide context perception from the current
environment. Actuators are the behaviours offered by the component.</p>
          <p>Consider a web site with smart graphics components embedded in its pages. When a user connects to
the website for the first time, the decision engine retrieves semantic knowledge of the user environment model
(e.g. platform and network capabilities, user preferences) and uses predefined rules, maintained by the
semantic knowledge component, to define the optimal avatar display and behaviours. The adaptation engine
then derives an adapted avatar from the original avatar stored in the content database, using adaptation rules.</p>
          <p>While manipulating the smart graphics, the user is monitored by the perception component of the decision
engine, which observes usage and updates the component's beliefs. The component is thus able to dynamically
learn the user's preferences. The automatic learning process will be continuous and reinforcement-based. During the
user's activities, the semantic knowledge component maintains a history of usage, and the perception
component updates the user environment model information, such as the user's preferences.</p>
          <p>The decision engine will use an adaptation algorithm to match the user's preferences to the web site's objectives
(e-commerce, training, simulation) and environment. Among other aspects, the basic interactions (e.g. zoom,
editing, querying, tutoring), the level of object detail, the control of the camera path (e.g. free, constrained,
predefined), the lighting of a region of interest, the overall navigation to related objects and the mode of presentation
(e.g. 2D image, 3D object, 3D mesh, sound, video) will be decided to form the optimal avatar. An adaptation engine
will then dynamically generate the adapted avatar content, compliant with the original avatar content.</p>
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>Smart Graphics Ontology</title>
        <p>
          Semantic representations are usually distinguished by the use of an ontology, which aims at specifying concepts.
Some research has been conducted in the autonomous agents and avatars community to describe such smart
objects using a regular vocabulary and a simplified representation [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]. Fig. 4 shows a restricted view of a smart
object.
        </p>
        <p>The objective of that work is to determine how the features of Virtual Humans, considered a kind of smart object,
can be “labeled” in computational systems in order to facilitate their interchange, scalability and adaptability
according to specific needs. In addition, the authors demonstrate that it is possible to construct the graphical
representation of a Virtual Human from its semantic descriptors.</p>
        <p>Semantic descriptions of multimedia items have mainly been developed for audio, video and still images. These
descriptions are defined in order to categorize, retrieve and reuse multimedia elements. The MPEG-7 standard,
formally named the Multimedia Content Description Interface, provides a rich set of standardized tools to describe
multimedia content, but little attention has been given to interactive 3D items.</p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], the authors propose a set of metadata to describe smart graphics in a standard way. The Smart Graphics
data model based on these metadata describes the configurations of a set of Smart Graphics, whether they are in a
single file or in multiple files. It includes some basic tag values, such as ID, name, description and highlights.
        </p>
        <p>This description is not rich enough to manage smart adaptation of the graphics, such as control of the camera path,
light sources or behaviours.</p>
        <p>Our aim is to pursue and extend this work and thereby contribute to the upcoming standardisation effort that
aims to develop marketable and interoperable smart graphics systems. We propose to define an ontology of smart
graphics (Fig. 5). The semantic description will cover several fields of knowledge, including geometry,
behaviour, display and sensors. This semantic description of smart graphics will be compliant with
our smart graphics framework (Fig. 3). It will contribute to a common understanding among the different research
fields that aim at creating an advanced smart graphics model.</p>
        <p>
          Fig. 6 shows a partial view of an OWL version of our smart graphics ontology. A smart
graphic is a subclass of the smart object defined by [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]. The smart graphic class has several properties, such as
a behaviour controller, which will be used to manage both object animations and the interactive functionalities
offered to the user. The sensor will interact with the user environment model through an event model to adapt the
display of the 3D item. For example, the display controller will be associated with a camera path manager that
produces relevant camera paths around the target object (camera pose and zoom sequences). A good path may
chain good viewing positions learnt by crowdsourcing, and different user profiles might lead to learning, and then
selecting, different camera paths. This principle will also be used to manage light sources and the object geometry
in order to strategically highlight regions of interest.
        </p>
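<p>The camera-path idea can be sketched as follows: keep the top crowd-rated viewpoints around the object and order them into a smooth orbit. The viewpoint encoding (azimuth, zoom, rating) and all the numbers are illustrative assumptions, not data from any experiment:</p>
<p>
```python
# Sketch of building a camera path from crowd-rated viewpoints.
# Each viewpoint is (azimuth_degrees, zoom, rating); values are made up.

def camera_path(viewpoints, k=3):
    """Keep the k best-rated viewpoints, ordered by azimuth for a smooth orbit."""
    best = sorted(viewpoints, key=lambda v: v[2], reverse=True)[:k]
    return sorted((az, zoom) for az, zoom, _ in best)

views = [(0, 1.0, 0.9), (45, 1.2, 0.4), (90, 1.5, 0.8),
         (180, 1.0, 0.7), (270, 1.1, 0.2)]
print(camera_path(views))  # [(0, 1.0), (90, 1.5), (180, 1.0)]
```
</p>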
        <p>&lt;Ontology xmlns="http://www.w3.org/2002/07/owl#"
    xml:base="http://www.semanticweb.org/ontologies/2010/9/SmartGraphics.owl"&gt;
  &lt;Declaration&gt;&lt;Class IRI="#BehaviourController"/&gt;&lt;/Declaration&gt;
  &lt;Declaration&gt;&lt;Class IRI="#SmartGraphic"/&gt;&lt;/Declaration&gt;
  &lt;Declaration&gt;&lt;Class IRI="#SmartObject"/&gt;&lt;/Declaration&gt;
  &lt;Declaration&gt;&lt;ObjectProperty IRI="#hasBehaviourController"/&gt;&lt;/Declaration&gt;
  &lt;SubClassOf&gt;
    &lt;Class IRI="#SmartGraphic"/&gt;
    &lt;Class IRI="#SmartObject"/&gt;
  &lt;/SubClassOf&gt;
  &lt;ObjectPropertyDomain&gt;
    &lt;ObjectProperty IRI="#hasBehaviourController"/&gt;
    &lt;Class IRI="#SmartGraphic"/&gt;
  &lt;/ObjectPropertyDomain&gt;
  &lt;ObjectPropertyRange&gt;
    &lt;ObjectProperty IRI="#hasBehaviourController"/&gt;
    &lt;Class IRI="#BehaviourController"/&gt;
  &lt;/ObjectPropertyRange&gt;
&lt;/Ontology&gt;</p>
        <p>On today's e-commerce sites, integrating interactive 3D objects into web pages, rather than providing a full 3D
store environment, is the common approach. We will therefore conduct an experimental study on e-commerce web sites
to evaluate the sales performance of our ontology-based smart graphics framework.</p>
        <p>
          Our study will be conducted with a significant number of participants and will help us to:
- develop and validate the user environment model, based on a questionnaire filled in by each
participant; this questionnaire will measure the user's characteristics, such as perception, thinking style, social
aspects, motivation factors and purchasing behaviour;
- assess the ability of our framework to detect users' characteristics and to adapt the 3D objects'
visual display and behaviours during a shopping session. To support this experiment, we will use the
platform presented in [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], which enables us to conduct multivariate tests on web sites.
        </p>
        <p>The target population will be chosen to be as diverse as the audience of an e-commerce site: a wide age range,
both males and females, various socio-professional categories, etc.</p>
        <p>To make our platform as interoperable as possible, we will build on standards whenever possible.
For example, we will use OWL to describe the semantic aspects of smart graphics and users' profiles using
the ontology formalism, and X3D to manage the visual display and behaviours of 3D objects. Web technologies will be
used to develop the engines and the ontology management system appearing in the framework architecture.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>This paper has first presented a survey of the different use cases of smart graphics. We then introduced a new
framework to both describe and use smart graphics in many applications, including e-commerce. This work
ultimately aims at adapting graphics to an individual user's profile by using web usage mining techniques. Three
complementary aspects are addressed. First, we model users using a user-profile ontology with
situation-dependent preferences support. Second, we propose a smart graphics framework that automatically learns
the user profile and adapts the visual display and behaviours of the smart graphics. Last but not least, this proposal
could contribute to an upcoming standardisation effort and bring an advanced smart graphics ontology that meets the
interoperability challenges.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Edwards</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dailey Paulson</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Smart graphics: a new approach to meeting user needs</article-title>
          ,
          <source>Computer</source>
          , vol.
          <volume>35</volume>
          , no.
          <issue>5</issue>
          ,
          <fpage>18</fpage>
          --
          <lpage>21</lpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Hammond</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Prasad</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dixon</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          : Art 101:
          <article-title>Learning to Draw through Sketch Recognition, Smart Graphics</article-title>
          , vol.
          <volume>6133</volume>
          ,
          <fpage>277</fpage>
          --
          <lpage>280</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Mashio</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yoshida</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Takahashi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Okada</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Automatic Blending of Multiple Perspective Views for Aesthetic Composition</article-title>
          ,
          <source>Smart Graphics</source>
          , vol.
          <volume>6133</volume>
          ,
          <fpage>220</fpage>
          --
          <lpage>231</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Jorissen</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lamotte</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>A Framework Supporting General Object Interactions for Dynamic Virtual Worlds</article-title>
          ,
          <source>Smart Graphics</source>
          , vol.
          <volume>3031</volume>
          ,
          <fpage>154</fpage>
          --
          <lpage>158</lpage>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Mahler</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fiedler</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weber</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>A Method for Smart Graphics in the Web</article-title>
          ,
          <source>Smart Graphics</source>
          , vol.
          <volume>3031</volume>
          ,
          <fpage>146</fpage>
          --
          <lpage>153</lpage>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Jack</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Content &amp; Smart Graphic Communication</article-title>
          , AICC Management and Processes Subcommittee
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Fraysse</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Designing Smart Graphics “simple scenarios” with IMS Simple Sequencing</article-title>
          , AICC Management and Processes Subcommittee
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Piombo</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Batatia</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ayache</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Réseau bayésien pour la modélisation de la dépendance entre complexité de la tâche, style d'apprentissage et approche pédagogique</article-title>
          ,
          <source>SETIT 2005</source>
          , Tunisia (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Brusilovsky</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Adaptive hypermedia</article-title>
          .
          <source>User Modeling and User-Adapted Interaction</source>
          , vol.
          <volume>11</volume>
          ,
          <fpage>87</fpage>
          --
          <lpage>110</lpage>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Perkowitz</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Etzioni</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          :
          <article-title>Adaptive Web Sites</article-title>
          ,
          <source>Communications of the ACM</source>
          , vol.
          <volume>43</volume>
          ,
          <fpage>152</fpage>
          --
          <lpage>158</lpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Chittaro</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ranon</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <article-title>Dynamic Generation of Personalized VRML Content: a General Approach and its Application to 3D E-Commerce</article-title>
          ,
          <source>Proceedings of Web3D 2002: 7th International Conference on 3D Web Technology</source>
          , pp.
          <fpage>145</fpage>
          --
          <lpage>154</lpage>
          , ACM Press, New York (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Chittaro</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ieronutti</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ranon</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <article-title>Navigating 3D Virtual Environments by Following Embodied Agents: a Proposal and its Informal Evaluation on a Virtual Museum Application</article-title>
          ,
          <source>PsychNology Journal (Special issue on Human-Computer Interaction)</source>
          , Vol.
          <volume>2</volume>
          , No 1.,
          <fpage>24</fpage>
          --
          <lpage>42</lpage>
          (
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Chittaro</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ieronutti</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ranon</surname>
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Adaptable visual presentation of 2D and 3D learning materials in web-based cyberworlds</article-title>
          .
          <source>The Visual Computer</source>
          , Vol.
          <volume>22</volume>
          , No.
          <issue>12</issue>
          , pp.
          <fpage>1002</fpage>
          --
          <lpage>1014</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Chittaro</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ranon</surname>
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Adaptive 3D Web Sites</article-title>
          . In Brusilovsky, P.,
          <string-name>
            <surname>Kobsa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nejdl</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          (eds.):
          <source>The Adaptive Web: Methods and Strategies of Web Personalization, Lecture Notes in Computer Science</source>
          , Vol.
          <volume>4321</volume>
          . Springer-Verlag,
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Chittaro</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ranon</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Web3D Technologies in Learning, Education and Training: Motivations, Issues, Opportunities</article-title>
          ,
          <source>Computers &amp; Education</source>
          , Vol.
          <volume>49</volume>
          , No 2,
          <fpage>3</fpage>
          --
          <lpage>18</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Chittaro</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ranon</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <article-title>New Directions for the Design of Virtual Reality Interfaces to E-Commerce Sites</article-title>
          ,
          <source>Proceedings of AVI 2002: 5th International Conference on Advanced Visual Interfaces</source>
          , ACM Press,
          <fpage>308</fpage>
          --
          <lpage>315</lpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Chittaro</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ranon</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <article-title>Adaptive Hypermedia Techniques for 3D Educational Virtual Environments</article-title>
          ,
          <source>IEEE Intelligent Systems</source>
          , vol.
          <volume>22</volume>
          , no.
          <issue>4</issue>
          ,
          <fpage>31</fpage>
          --
          <lpage>37</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leigh</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Johnson</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Articulate: A Semi-Automated Model for Translating Natural Language Queries into Meaningful Visualizations</article-title>
          ,
          <source>Smart Graphics</source>
          , vol.
          <volume>6133</volume>
          ,
          <fpage>184</fpage>
          --
          <lpage>195</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Baccot</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Choudary</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grigoras</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Charvillat</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>On the impact of sequence and time in rich media advertising</article-title>
          ,
          <source>MM '09: Proceedings of the seventeenth ACM international conference on Multimedia</source>
          ,
          <fpage>849</fpage>
          --
          <lpage>852</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Moebs</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piombo</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Batatia</surname>
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weibelzahl</surname>
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>A Tool Set Combining Learning Styles Prediction, a Blended Learning Methodology and Facilitator Guidebooks - Towards a best mix in blended learning</article-title>
          , ICL (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Wen</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>M.X.</given-names>
          </string-name>
          : IBM Research Center, http://domino.research.ibm.com/comm/research_projects.nsf/pages/ria.Focused%20Areas.html
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Nesbigall</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Warwas</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kapahnke</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schubotz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klusch</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fischer</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Slusallek</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Intelligent Agents for Semantic Simulated Realities - The ISReal Platform</article-title>
          , ICAART, vol.
          <volume>2</volume>
          ,
          <fpage>72</fpage>
          --
          <lpage>79</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Stan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Egyed-Zsigmond</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joly</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maret</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>A User Profile Ontology For Situation-Aware Social Networking</article-title>
          ,
          <source>3rd Workshop on Artificial Intelligence Techniques for Ambient Intelligence (AITAmI)</source>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Garcia-Rojas Martinez</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Semantics for virtual humans</article-title>
          , PhD thesis no.
          <volume>4301</volume>
          , École Polytechnique Fédérale de Lausanne (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>H.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>N.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>J.W.</given-names>
          </string-name>
          :
          <article-title>3D Graphics Adaptation System on the Basis of MPEG-21 DIA</article-title>
          ,
          <source>Smart Graphics</source>
          , vol.
          <volume>2733</volume>
          ,
          <fpage>283</fpage>
          --
          <lpage>313</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Charvillat</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grigoras</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Reinforcement learning for dynamic multimedia adaptation</article-title>
          .
          <source>J. Network and Computer Applications</source>
          <volume>30</volume>
          (
          <issue>3</issue>
          ):
          <fpage>1034</fpage>
          --
          <lpage>1058</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>