=Paper=
{{Paper
|id=None
|storemode=property
|title=Adapting Smart Graphics Behaviour to Users Characteristics
|pdfUrl=https://ceur-ws.org/Vol-680/paper_5.pdf
|volume=Vol-680
}}
==Adapting Smart Graphics Behaviour to Users Characteristics==
Adapting Smart Graphics’ Behaviour to Users’ Characteristics
Christophe Piombo, Romulus Grigoras, Vincent Charvillat
IRIT - University of Toulouse, 2 rue Charles Camichel, 31071 Toulouse Cedex 7, France
{christophe.piombo, romulus.grigoras, vincent.charvillat}@enseeiht.fr
Abstract. Many existing web-based systems aim at making interfaces more user-friendly. Web content
designers commonly use graphical components to illustrate concepts or to present numerical data. Dynamically
adapting these components to the context in which they are used has led to the development of smart
graphics. Commonly considered context features include platform and network capabilities; few
systems consider users' characteristics in order to provide more interactivity and flexibility. The objective of
our work is to investigate this latter issue. We are currently developing a user model based on several
characteristics that include preferences and motivation factors. To structure the user model data and support
knowledge retrieval, we propose an ontology-based smart graphics framework. The methodology includes
validating this model through an experimental study and developing an adaptive hypermedia e-commerce
system that automatically learns users' characteristics and adapts graphical content accordingly. This paper
presents an overview of the objectives and the methodology of this work.
Keywords: adaptation, user model, ontology, framework, smart graphics
1 Introduction
Web designers have used rich graphical components for such purposes as illustrating concepts in a web site,
visually depicting numerical data, or making interfaces more user-friendly. However, the graphics themselves
were static, which has limited their usefulness. A convergence of computer graphics and artificial intelligence
technologies is leading to the development of smart graphics [1], which recognize some basic user environment
characteristics such as platforms and network capabilities to adapt themselves accordingly.
Today, the smart graphics community, enriched by researchers and practitioners from the fields of cognitive
science, graphic design and user interfaces, has raised a new challenge: framing its investigations in a human-
centred way, presenting content that engages the user, effectively supports human cognition [2], and is
aesthetically satisfying [3]. The ultimate objective is to prove the utility of adapting graphical object behaviours
and visual display to individual users. For example, in [19], the authors discuss the usefulness of
considering sequence and timing for improving the effectiveness of ad banners on a commercial web site.
Their results show that varying the format of a banner and the way it is displayed within a session has an impact
on users' interest and on session duration.
The advent of the Internet has eased delivery and management issues. Given the evolution of web
technology, powerful CPUs and graphics accelerators, as well as abundant memory, it becomes possible to
envisage adaptive hypermedia systems that allow web content designers to develop graphical components that
can be personalised to users' profiles. User-adaptive systems have been widely studied by the user modelling
community in the fields of adaptive hypermedia [9] and traditional [10] web sites. Some research has
considered the problem of adapting Web 3D content and presentation [11], in virtual environment contexts [13],
for different web application areas [14] such as education and training [15], e-commerce [16], architecture and
tourism, virtual communities and virtual museums [12]. Today, smart-graphics-based web systems inherit the user
model representation techniques used in 2D web sites and 3D worlds [1], improving the organization and presentation
of content for the end-user. Implementing smart graphics therefore facilitates users' understanding and
assimilation.
Such smart components inherit agent and smart object architectures, which are composed of several
parts such as an action model [4] or a domain model [5]. A standardisation effort has been started to develop marketable
and interoperable smart graphics systems [6][7].
This paper is composed of two parts. The first presents an overview of the different use cases of
smart graphics; the second describes the objectives and methodology of our approach.
2 Using Smart Graphics
Smart graphics are used in different domains but share the same objective: offering the end-user the best way to
accomplish a task with a tool (Fig. 1). In data-intensive decision-making processes, end-users have to make
an effort to craft a meaningful visualization. The users are usually domain experts with marginal knowledge of
visualization techniques. When exploring data, they typically know what questions they want to ask, but often do
not know how to express these questions in a form that is suitable for a given analysis tool, such as specifying a
desired graph type for a given dataset or assigning proper data fields to certain visual parameters. In [18], the
authors proposed a semi-automated visual analytics model, Articulate. This smart-graphics-based system is
guided by a conversational user interface that allows users to verbally describe and then manipulate what they want
to see. Natural language processing and machine learning methods translate the imprecise sentences
into explicit expressions, and a heuristic graph generation algorithm then creates a suitable visualization.
In other applications, such as tutoring or e-commerce, smart graphics aim to increase user satisfaction and to build
customer loyalty by addressing the interests and preferences of each individual user. The literature contains
systems with different levels of adaptation. Customisable systems offer basic forms of personalization: users
are limited to setting user interface parameters and a few other preferences such as platform and network
capabilities. This type of adaptation requires explicit choices from the user, which constitute a user
profile or model; these choices are stored within the system and used to adapt its environment. This technique assumes
that all adaptable aspects are understandable to the user, who can clearly identify his/her preferences, and that all
preferences can be derived from a questionnaire [8]. Obviously, this approach cannot cope with complex user
models or with systems in which behaviours must be embedded within each component distributed over the web.
Consequently, a new generation of adaptive systems, based on the use of smart components, is being
developed. These systems are able to adapt the behaviour of each component to each individual user's
needs by analysing logs or by monitoring user interactions [5][26]. 3D content is increasingly employed in these
systems, which the authors of [14] divided into two broad categories:
- sites that display interactive 3D models of objects embedded into web pages, such as e-commerce sites
allowing customers to examine 3D models of products,
- sites that are mainly based on a 3D virtual environment which is displayed inside the web browser, such
as tourism sites allowing users to navigate inside a 3D virtual city.
These systems essentially use two adaptation techniques: adaptive navigation support and adaptive presentation [9].
Systems that support adaptive navigation structure their content so that the user is guided towards the 3D objects
that are most suitable; the system grabs the user's attention by visually highlighting those objects. Two
techniques inherited from adaptive hypermedia systems are used to implement adaptive navigation: adaptive
annotation and curriculum sequencing. The first changes the order or availability of objects inside a
3D scene, whereas the second decides which object (or which details of an object) to display next
depending on prerequisites and achievements. For example, in the Educational Virtual Environment proposed by
[17], the student is assessed against learning objectives which evaluate the level of knowledge of an X3D
language feature. If the test is failed, the user is not allowed to browse 3D objects with more complicated
features. The results of such assessments are also used to update the student's profile. Most of these approaches
focus exclusively on the student's level of knowledge; they do not consider other factors, especially
cognitive ones, that differentiate learners. Systems that support adaptive presentation often offer choices between
different media when presenting material (such as text and audio); applied to 3D object technology,
adaptive presentation consists in removing or adding visual details and behaviours of an object.
Fig. 1. Using Smart Graphics
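To make the curriculum sequencing technique described above concrete, the following is a minimal, illustrative Python sketch (it is not taken from any of the cited systems); the scene objects, prerequisite sets and achievement tracking are hypothetical.

```python
# Illustrative sketch (hypothetical data): curriculum sequencing over 3D scene
# objects, choosing which objects to offer next from prerequisites and achievements.

from dataclasses import dataclass, field


@dataclass
class SceneObject:
    name: str
    prerequisites: set = field(default_factory=set)  # objects to master first


def next_objects(scene, achievements):
    """Return objects whose prerequisites are all satisfied; adaptive annotation
    would then highlight these, while the others stay hidden or dimmed."""
    return [o for o in scene
            if o.prerequisites <= achievements and o.name not in achievements]


scene = [
    SceneObject("basic_shape"),
    SceneObject("textured_shape", {"basic_shape"}),
    SceneObject("animated_shape", {"basic_shape", "textured_shape"}),
]

achievements = {"basic_shape"}  # updated from the learner's assessments
print([o.name for o in next_objects(scene, achievements)])  # ['textured_shape']
```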
Most of these techniques are limited when applied to advanced smart-graphics-enabled systems. The human-
centred adaptation process is complex and requires taking into account various individual parameters that
go beyond the assessment of the user's achievements and simple user preferences.
3 The Proposed Approach
We address the problem of adapting smart graphics behaviours and visual display to the user's profile.
Estimating user characteristics is essential for systems that require adaptation. For example, in adaptive tutoring
systems the learning style influences the learning behaviour [20], and in e-commerce the buying style
influences the buying behaviour [16]. We therefore define the user's profile as the way an individual tackles a
contextual task with a specific tool. This profile depends on various factors, including cognitive aspects, preferences,
motivations, interests, skills and social aspects. Three main aspects will be considered in this work: modelling the
user's profile using an ontology representation (see 3.1), developing a smart graphics framework that automatically
assesses and uses such a profile (see 3.2), and contributing to the standardisation effort started within the smart graphics
community by proposing a smart graphics ontology to increase interoperability (see 3.3).
3.1 Users’ Profile Ontology
The semantic web has provided the tools needed to handle computer-understandable semantics. These
tools, generally evolving from XML, are used to enrich the description of web pages, giving a deeper
understanding of the relations between concepts. OWL (Web Ontology Language) and RDF (Resource
Description Framework) are among the most widely used representations. Various definitions and models have
been proposed for users' profiles.
The Digital Item Adaptation part of the MPEG-21 Multimedia Framework provides a rich set of standardized
tools, such as the Usage Environment Description Tools, to depict user characteristics. Usually, however, the user
profile mainly describes preferences about the various properties of the usage environment, originating from
users, to accommodate transmission, storage and consumption. For example, in [25], the authors consider that user
characteristics parameters represent the user's quality preferences for the geometry, material and animation of
graphics components, as well as a 3D-to-2D conversion preference.
Recently, some researchers have started using the ontology formalism to investigate how user preferences,
interests, disinterests and personal information could be stored in a semantic user profile [23]. They argue that
techniques like RDF and OWL, together with ontologies, are the key elements in the development of next-
generation user profiles. In this approach, the user profile is divided into particular domain sub-models and
conditional sub-models, each containing particular information about the user's behaviour or the context in which a set
of preferences should be applied. This kind of model is named User Profile Ontology with Situation-
Dependent Preferences Support (UPOS).
Our objective is to develop a user profile ontology based on UPOS which integrates various individual
characteristics such as perception, thinking style, social aspects and motivation factors, associated with a context
(e.g. platform, activity…). Using context-aware semantic reasoning, we will be able to adapt some features of
the smart graphics. For example, when a user looks at a camera within a training activity on a laptop or within a
trading activity on a smart phone, the smart graphic does not offer the same features and functionalities: in the
first case the user wants to learn how to manipulate the device, while in the second the user wants to
know the price and the compatible zoom options.
The objective of this phase is to propose a general user ontology for web sites using smart graphics that can
dynamically author material depending on user characteristics (e.g. thinking style, preferences…) and on
context features such as the web site's domain area and activities (e.g. training, simulation, trading…) or hardware
capabilities (e.g. platform, network…). This will lead to a semantic description of a user
environment model.
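As an illustration of how such a situation-dependent profile could be stored and queried with semantic web tools, the following is a minimal sketch using Python and rdflib; the namespace, property names and SPARQL query are assumptions for the example, not the actual UPOS vocabulary.

```python
# Minimal sketch (assumed vocabulary): a situation-dependent user profile built
# with rdflib and queried for a given context (activity = training).

from rdflib import Graph, Literal, Namespace, RDF

UP = Namespace("http://example.org/userprofile#")  # hypothetical namespace

g = Graph()
g.bind("up", UP)

# A user with an individual characteristic and a context-dependent preference.
g.add((UP.alice, RDF.type, UP.User))
g.add((UP.alice, UP.thinkingStyle, Literal("visual")))
g.add((UP.alice, UP.hasPreference, UP.pref1))
g.add((UP.pref1, UP.activity, Literal("training")))
g.add((UP.pref1, UP.platform, Literal("laptop")))
g.add((UP.pref1, UP.preferredPresentation, Literal("interactive-3D")))

# Context-aware retrieval: which presentation does this user prefer when training?
query = """
PREFIX up: <http://example.org/userprofile#>
SELECT ?mode WHERE {
  up:alice up:hasPreference ?p .
  ?p up:activity "training" ;
     up:preferredPresentation ?mode .
}
"""
for row in g.query(query):
    print(row.mode)  # interactive-3D
```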
3.2 Smart Graphics Framework
We will design a component architecture based on the concept of a smart component that can adapt its behaviour
to individual users. Smart components are often represented as being able to interact with their environment
through sensors and actuators (Fig. 2). Sensors produce perceptions that update the smart component's beliefs
according to its environment model. The smart component can reason about its beliefs and plan an optimal
action sequence to achieve a given goal. Based on its action model, the smart component adapts the action
sequence to play.
[Figure: a smart component combines an environment model, a decision engine and sensors, which yield a context perception and an optimal action sequence, with an adaptation engine and an action model, which produce the adapted action sequence executed through actuators.]
Fig. 2. Smart Component Schema
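The following is a minimal sketch of the perceive/decide/adapt loop of Fig. 2, with purely hypothetical names and a lookup-table decision engine standing in for real reasoning.

```python
# Illustrative sketch of the smart component loop (hypothetical names): sensors
# update beliefs, a decision engine plans an action sequence from the environment
# model, and an adaptation engine restricts it to what the action model supports.

class SmartComponent:
    def __init__(self, environment_model, action_model):
        self.environment_model = environment_model  # what the component knows of its context
        self.action_model = action_model            # actions it can actually perform
        self.beliefs = {}

    def perceive(self, sensor_readings):
        """Sensors produce perceptions that update the component's beliefs."""
        self.beliefs.update(sensor_readings)

    def decide(self, goal):
        """Decision engine: pick an action sequence for the goal, given the
        beliefs and the environment model (here, a simple lookup)."""
        return self.environment_model.get((goal, self.beliefs.get("platform")), [])

    def adapt(self, actions):
        """Adaptation engine: keep only actions supported by the action model."""
        return [a for a in actions if a in self.action_model]


component = SmartComponent(
    environment_model={("show_product", "smartphone"): ["display_2d", "zoom"],
                       ("show_product", "laptop"): ["display_3d", "rotate", "zoom"]},
    action_model={"display_2d", "display_3d", "zoom"},
)
component.perceive({"platform": "laptop"})
print(component.adapt(component.decide("show_product")))  # ['display_3d', 'zoom']
```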
The main advantage of this approach is that all the information needed to interact with the component is
located at the component level and not at the application level [4]. We argue that this solution could be used to
design the architecture of web sites using smart graphics, facilitating component reuse and thereby addressing
marketability. In addition, we believe that a framework is needed to facilitate software
development by allowing designers and programmers to devote their time to meeting software requirements
rather than dealing with the standard low-level details of providing a working system, thereby reducing
overall development time.
In [5], the authors propose an enhancement of the MVC architecture for smart graphics. This approach enables
interactive systems to use different views of the same model at the same time and to keep them synchronously
updated. The visual display evolves from a simple presentation into an intelligent visualization that evaluates data
and presents only the results relevant to the user. Today, 3D objects are often used as the visual display of a smart
component. 3D computer graphics description languages (e.g. X3D) describe their characteristics (e.g.
shape, position, orientation, appearance…). Because X3D content is encoded with an XML-based syntax, it can be
transformed, using XSL transformations, into smart graphics better suited for visualization [15].
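As a rough illustration of this idea, the sketch below applies an XSL transformation to a simplified X3D fragment with Python and lxml; the stylesheet, the detail attribute and the adaptation rule are assumptions chosen only for the example.

```python
# Minimal sketch (assumed markup): an XSL transformation over simplified X3D
# content that drops Shape nodes marked as high-detail, as one possible
# adaptive-presentation step.

from lxml import etree

x3d = etree.fromstring("""<X3D><Scene>
  <Shape detail="low"><Box/></Shape>
  <Shape detail="high"><IndexedFaceSet/></Shape>
</Scene></X3D>""")

xslt = etree.fromstring("""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- identity template: copy everything by default -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- drop high-detail shapes -->
  <xsl:template match="Shape[@detail='high']"/>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt)
print(etree.tostring(transform(x3d), pretty_print=True).decode())
```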
A smart visualization framework called IMPROVISE has been proposed to tailor system visual responses to
the user interaction context [21]. The system catches a user request and dynamically decides the proper response
content. Using example-based visualization sketch design, the proper visual metaphor for the given content is
selected, and an adaptation layer transforms the display using constraints associated with a context model (user,
environment…).
These approaches lack the high-level semantic description needed to enable smart graphics to interact with their
environment, which prevents the interoperability required in smart web-based systems to share or reuse
smart components. Some authors [22] propose using semantic web technology to create a formal specification of
smart components, increasing their perception of, understanding of, and interaction with their environment.
Fig. 3 presents our ontology-based smart graphics framework. The main idea of the framework is to use
semantic web technology to enrich the pure geometric data with information about how to interact
with the smart graphic, based on knowledge of the user environment model. We propose to consider a smart
graphics component as an agent related to its virtual representation, an avatar. Two parts will therefore be designed: a
smart graphics core, which encompasses the core functionality provided by an agent, and a smart graphics avatar,
which is its virtual representation defining a visual display and behaviours. The interface of the smart graphics to
the environment is realized by sensors and actuators: sensors provide context perception from the current
environment, while actuators are the behaviours offered by the component.
[Figure: the smart graphics component couples a smart graphics avatar (displays, behaviours, sensors, actuators) with a smart graphics core. The core contains a semantic knowledge component (component description data such as colour and shape, and usage data with a history and predefined rules), a decision engine whose perception component observes the usage and updates the component's beliefs to select the optimal avatar display and behaviours, and an adaptation engine whose behaviour component adapts the visual display and behaviours of the original avatar (X3D description in a content database) into an adapted avatar. The core is linked to semantic descriptions of the user environment model and of the advanced smart graphics model.]
Fig. 3. Ontology based Smart Graphics Framework
Consider a web site with smart graphics components embedded in its pages. When a user connects to the
web site for the first time, the decision engine retrieves semantic knowledge of the user environment model
(e.g. platform and network capabilities, user preferences…) and uses the predefined rules maintained by the
semantic knowledge component to define the optimal avatar display and behaviours. The adaptation engine
then derives an adapted avatar from the original avatar stored in the content database, using adaptation rules.
While the user manipulates the smart graphics, the perception component of the decision engine observes the
usage and updates the component's beliefs, so that the component can dynamically learn the user's preferences.
This automatic learning process will be continuous and based on reinforcement. During the user's activities, the
semantic knowledge component maintains a history of usage, and the perception component updates the user
environment model information, such as user preferences.
The decision engine will use an adaptation algorithm to match the user's preferences to the web site's objectives
(e-commerce, training, simulation) and environment. Among other aspects, the basic interactions (e.g. zoom,
editing, querying, tutoring), the level of object detail, the control of the camera path (e.g. free, constrained,
predefined), the lighting of a region of interest, the navigation to related objects and the mode of presentation (e.g.
2D image, 3D object, 3D mesh, sound, video) will be decided to form the optimal avatar. The adaptation engine will
then dynamically generate the adapted avatar content, consistent with the original avatar content.
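As one possible sketch of such continuous, reinforcement-based learning (in the spirit of [26], not the algorithm we will actually deploy), an epsilon-greedy bandit could pick the presentation mode and update its estimates from an observed engagement signal; the modes and the reward definition below are hypothetical.

```python
# Illustrative sketch only: epsilon-greedy choice of a presentation mode, with the
# reward standing in for an engagement signal observed by the perception component.

import random

MODES = ["2d_image", "3d_object", "video"]
value = {m: 0.0 for m in MODES}  # estimated engagement per mode
count = {m: 0 for m in MODES}
EPSILON = 0.1                    # exploration rate


def choose_mode():
    if random.random() < EPSILON:
        return random.choice(MODES)            # explore
    return max(MODES, key=lambda m: value[m])  # exploit current belief


def update(mode, reward):
    """Reward could be normalised interaction time or clicks for the session."""
    count[mode] += 1
    value[mode] += (reward - value[mode]) / count[mode]  # incremental mean


# One adaptation step during a session:
mode = choose_mode()
observed_engagement = 0.7  # hypothetical normalised engagement
update(mode, observed_engagement)
```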
3.3 Smart Graphics Ontology
Semantic representations are usually distinguished by the use of ontologies, which aim at specifying concepts.
Some research has been conducted in the autonomous agents and avatars community to describe these smart
objects using a regular vocabulary and a simplified representation [24]. Fig. 4 shows a restricted view of a smart
object.
The objective of that work is to find out how features of Virtual Humans, considered as a kind of smart object,
can be "labeled" in computational systems in order to facilitate their interchange, scalability, and adaptability
according to specific needs. The authors also demonstrated that it is possible to construct the graphical
representation of a Virtual Human from its semantic descriptors.
Fig. 4. Semantic for Smart Object
Semantic descriptions of multimedia items have been mainly developed for audio, video and still images. These
descriptions are defined in order to categorize, retrieve and reuse multimedia elements. The MPEG-7 standard,
formally named Multimedia Content Description Interface, provides a rich set of standardized tools to describe
multimedia content, but only limited attention has been given to interactive 3D items.
In [6][7], the authors propose a set of metadata to describe smart graphics in a standard way. The Smart Graphics
data model based on this metadata describes the configuration of a set of Smart Graphics, whether they are in a
single file or in multiple files. It includes some basic tag values such as ID, Name, Description and Highlights.
This description is not rich enough to support a smart adaptation of the graphics, such as control over the camera path,
light sources or behaviours.
Our aim is to pursue and extend this work and thereby contribute to the upcoming standardisation effort that
aims to develop marketable and interoperable smart graphics systems. We propose to define an ontology of smart
graphics (Fig. 5). The semantic description will cover several fields of knowledge, such as geometry,
behaviour, display and sensors, among others. This semantic description of smart graphics will be compliant with
our smart graphics framework (Fig. 3) and will contribute to a common understanding among the different research
fields aiming to create an advanced smart graphics model.
Fig. 5. Smart Graphics Ontology
Fig. 6 shows a partial view of an OWL version of our smart graphics ontology. A smart
graphic is a subclass of the smart object defined by [24]. The smart graphic class has several properties, such as a
behaviour controller, which will be used to manage both object animations and the interactive functionalities
offered to the user. The sensor will interact with the user environment model through an event model to adapt the
display of the 3D item. For example, the display controller will be associated with a camera path manager that
produces relevant camera paths around the target object (camera pose and zoom sequences). A good path may
chain good viewing positions learnt by crowdsourcing, and different user profiles might lead to learning and then selecting
different relevant camera paths. This principle will also be used to manage light sources and the object geometry
in order to strategically highlight regions of interest.
Fig. 6. Partial view of Smart Graphics ontology with OWL format
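The following is a minimal sketch of the kind of OWL statements Fig. 6 hints at, written with Python and rdflib; the namespaces and property names are placeholders for illustration, not our final vocabulary.

```python
# Minimal sketch (assumed namespaces and names): a SmartGraphic class declared as
# a subclass of a smart object, with a few object properties, serialized as Turtle.

from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

SG = Namespace("http://example.org/smartgraphics#")  # hypothetical namespace
SO = Namespace("http://example.org/smartobject#")    # stands in for the model of [24]

g = Graph()
g.bind("sg", SG)

g.add((SG.SmartGraphic, RDF.type, OWL.Class))
g.add((SG.SmartGraphic, RDFS.subClassOf, SO.SmartObject))  # subclass of smart object

for prop in (SG.hasBehaviourController, SG.hasDisplayController, SG.hasSensor):
    g.add((prop, RDF.type, OWL.ObjectProperty))
    g.add((prop, RDFS.domain, SG.SmartGraphic))

g.add((SG.hasDisplayController, RDFS.range, SG.CameraPathManager))

print(g.serialize(format="turtle"))
```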
On today's e-commerce sites, integrating interactive 3D objects into web pages, rather than full 3D store
environments, is the common approach. We will therefore conduct an experimental study on e-commerce web sites
to evaluate the sales performance of our ontology-based smart graphics framework.
Our study will be conducted on a significant number of participants to help us:
- Develop and validate the user environment model, based on a questionnaire filled in by each
participant. This questionnaire will measure user characteristics such as perception, thinking style, social
aspects, motivation factors and purchasing behaviour;
- Assess the ability of our framework to detect users' characteristics and to adapt the 3D objects'
visual display and behaviours during a shopping session. To support this experiment, we will use the
platform presented in [19], which makes it possible to conduct multivariate tests on web sites.
The target population will be chosen to be as diverse as the audience of an e-commerce site: wide age range,
men and women, varied socio-professional categories, etc.
To make our platform as interoperable as possible, we will base our work on standards whenever possible.
For example, we will use OWL to describe the semantic aspects of smart graphics and user profiles with an
ontology formalism, and X3D to manage the visual display and behaviours of 3D objects. Web technologies will be
used to develop the engines and the ontology management system appearing in the framework architecture.
4 Conclusion
This paper has first presented a survey of different use cases of smart graphics. We then introduced a new
framework to both describe and use smart graphics in many applications, including e-commerce. This work
ultimately aims at adapting graphics to individual user profiles by using web usage mining techniques. Three
complementary aspects are addressed. First, we model users with a user profile ontology with situation-
dependent preferences support. Second, we propose a smart graphics framework that automatically learns the user
profile and adapts the visual display and behaviours of the smart graphics. Last but not least, this proposal could
contribute to an upcoming standardisation effort and bring an advanced smart graphics ontology that meets the
interoperability challenges.
References
1. Edwards, J., Dailey Paulson, L.: Smart graphics: a new approach to meeting user needs, Computer, vol.
35, no. 5, 18--21 (2002)
2. Hammond, T., Prasad, M., Dixon, D.: Art 101: Learning to Draw through Sketch Recognition, Smart
Graphics, vol. 6133, 277--280 (2010)
3. Kairi, M., Kenichi, Y., Shigeo, T., Masato, O.: Automatic Blending of Multiple Perspective Views for
Aesthetic Composition, Smart Graphics, vol. 6133, 220--231 (2010)
4. Jorissen, P., Lamotte, W.: A Framework Supporting General Object Interactions for Dynamic Virtual
Worlds, Smart Graphics, vol. 3031, 154--158 (2004)
5. Mahler, T., Fiedler, S., Weber, M.: A Method for Smart Graphics in the Web, Smart Graphics, vol.
3031, 146--153 (2004)
6. Jack, H. : Content & Smart Graphic Communication, AICC Management and Processes Subcommittee
(2004)
7. Fraysse, S.: Designing Smart Graphics “simple scenarios” with IMS Simple Sequencing, AICC
Management and Processes Subcommittee (2006)
8. Piombo, C., Batatia, H., Ayache, A.: Réseau bayésien pour la modélisation de la dépendance entre
complexité de la tâche, style d’apprentissage et approche pédagogique, SETIT2005, Tunisie (2005).
9. Brusilovsky, P. : Adaptive hypermedia. User Modeling and User Adapted Interaction, vol. 11, 87--110
(2001)
10. Perkowitz, M., Etzioni, O.: Adaptive Web Sites, Communication of the ACM, vol. 43, 152--158 (2000)
11. Chittaro L., Ranon R., Dynamic Generation of Personalized VRML Content: a General Approach and
its Application to 3D E-Commerce, Proceedings of Web3D 2002: 7th International Conference on 3D
Web Technology, pp. 145-154, ACM Press, New York (2002)
12. Chittaro L., Ieronutti L., Ranon R., Navigating 3D Virtual Environments by Following Embodied
Agents: a Proposal and its Informal Evaluation on a Virtual Museum Application, PsychNology Journal
(Special issue on Human-Computer Interaction), Vol. 2, No 1., 24--42 (2004).
13. Chittaro L., Ieronutti L., Ranon R. Adaptable visual presentation of 2D and 3D learning materials in
web-based cyberworlds. The Visual Computer, Vol. 22, No. 12, pp. 1002--1014 (2006)
14. Chittaro L., Ranon R. Adaptive 3D Web Sites. In Brusilovsky, P., Kobsa, A., Nejdl, W. (eds.): The
Adaptive Web: Methods and Strategies of Web Personalization, Lecture Notes in Computer Science,
Vol. 4321. Springer-Verlag, (2007)
15. Chittaro L., Ranon R. Web3D Technologies in Learning, Education and Training: Motivations, Issues,
Opportunities, Computers & Education Journal, Vol. 49, No 2, 3--18 (2007)
16. Chittaro L., Ranon R., New Directions for the Design of Virtual Reality Interfaces to E-Commerce
Sites, Proceedings of AVI 2002: 5th International Conference on Advanced Visual Interfaces, ACM
Press, 308--315 (2002)
17. Chittaro L., Ranon R., Adaptive Hypermedia Techniques for 3D Educational Virtual Environments,
IEEE Intelligent Systems, vol. 22, no. 4, 31--37 (2007)
18. Sun, Y., Leigh, J., Johnson, A., Lee, S.: Articulate: A Semi-Automated Model for Translating Natural
Language Queries into Meaningful Visualizations, Smart Graphics, vol. 6133, 184--195 (2010)
19. Baccot, B., Choudary, O., Grigoras, R., Charvillat, V.: On the impact of sequence and time in rich
media advertising, MM '09: Proceedings of the seventeenth ACM international conference on
Multimedia, 849--852 (2009)
20. Moebs S., Piombo C., Batatia H., Weibelzahl S.: A Tool Set Combining Learning Styles Prediction, a
Blended Learning Methodology and Facilitator Guidebooks – Towards a best mix in blended learning,
ICL (2007)
21. Wen, Z., X Zhou, M.: IBM Research Center, http://domino.research.ibm.com/comm/research_projects.
nsf/pages/ria.Focused%20Areas.html
22. Nesbigall, S., Warwas, S., Kapahnke, P., Schubotz, R., Klusch, M., Fischer, K., Slusallek, P.:
Intelligent Agents for Semantic Simulated Realities - The ISReal Platform, ICAART, vol. 2, 72--79
(2010)
23. Stan, J., Egyed-Zsigmond, E., Joly, A., Maret, P.: A User Profile Ontology For Situation-Aware Social
Networking, 3rd Workshop on Artificial Intelligence Techniques for Ambient Intelligence
(AITAmI),(2008)
24. Garcia-Rojas Martinez, A.: Semantics for virtual humans, thèse n° 4301, école polytechnique fédérale
de Lausanne (2009)
25. Kim, H.K., Lee, N.Y., Kim, J.W. : 3D Graphics Adaptation System on the Basis of MPEG-21 DIA,
Smart Graphics, vol. 2733, 283--313 (2003)
26. Vincent Charvillat, Romulus Grigoras: Reinforcement learning for dynamic multimedia adaptation. J.
Network and Computer Applications 30(3): 1034-1058 (2007)