<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Graph Object Oriented Database for Semantic Image Retrieval</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Eugen</forename><surname>Ganea</surname></persName>
							<email>ganea_eugen@software.ucv.ro</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Craiova</orgName>
								<address>
									<addrLine>Bd. Decebal 107</addrLine>
									<settlement>Craiova</settlement>
									<country key="RO">Romania</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Marius</forename><surname>Brezovan</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Craiova</orgName>
								<address>
									<addrLine>Bd. Decebal 107</addrLine>
									<settlement>Craiova</settlement>
									<country key="RO">Romania</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Graph Object Oriented Database for Semantic Image Retrieval</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">3F02B218131DD767AC29677B9875AA21</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T12:11+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Graph oriented object</term>
					<term>object oriented database</term>
					<term>image processing</term>
					<term>image retrieval</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper presents a new method for image retrieval that uses a graph object-oriented database to process the information extracted from an image through segmentation and through the semantic interpretation of that information. The object-oriented database schema is structured as a class hierarchy built on a graph data structure. A graph structure is used in all phases of image processing: image segmentation, image annotation, image indexing and image retrieval. The experiments showed that retrieval can be performed with good results and that the method has good time complexity.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Image retrieval systems have been developed using a variety of technologies across several disciplines of computer science. In this paper, we apply the concepts of object-oriented programming to object recognition applications. The object model used for storing images is based on a complex structure that differs from image to image, which does not allow a simple data model with predefined data structures such as those used in relational databases. Relational databases have several limitations in representing an image. From the perspective of the data representation model, links between two records in a relational database are realized through primary-key and foreign-key attributes: records sharing the same key values are logically related although they are not physically linked (logical references). In an Object-Oriented Database (OODB), a relation is expressed by a reference to an object identifier (OID), which serves as the association key between records. In addition, the object-oriented model, unlike the relational model, supports complex objects such as sets, lists, trees and other advanced data structures. It also allows methods to be defined through which messages are exchanged between objects, and it implements the inheritance mechanism, which lets new classes be defined on the basis of existing ones. Effective object recognition imposes new requirements on the database: it must exceed the role of a simple storage medium and provide efficient retrieval and management capabilities for the image information content. An image can be stored as a set of objects with attributes and characteristics that describe it. To identify an unknown object, the object recognition system queries the database and checks the feature-based similarities between the unknown object and each object in the database. 
The interface between the database and the object recognition system is realized by exchanging messages, which offers high flexibility in how the messages are processed. In an object-oriented database, each real-world object can be modeled directly as an instance of a class; each instance has an OID and is associated with a simple or complex object. The OID stored in the database never changes, while the other fields of the associated object can be modified. This identity provides good support for object sharing and updates, and simplifies management. The inheritance offered by the object-oriented paradigm provides a powerful mechanism for organizing data: it allows the user to define classes incrementally by specializing existing classes. In <ref type="bibr" target="#b0">[1]</ref> a model for OODB representation based on a graph data structure was introduced (GOOD - Graph-Oriented Object Database), in which operations on database objects are translated into transformations of the graph. In line with the approach used for image processing (segmentation and annotation), our database has a schema similar to the GOOD model. In addition, the topological relations between the simple objects present in an image are represented by graph edges, while the objects themselves are the graph nodes. The paper is organized as follows: Subsection 1.2 presents the graph model for image representation and the method for constructing the hexagonal structure on the image pixels; Section 2 describes the process of image annotation based on ontologies; Section 3 presents the structure of the graph object-oriented database; Section 4 describes our experimental results; and Section 5 concludes the paper.</p></div>
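The OID-based linking described above can be sketched in Java; the class names (OidSketch, DbObject) and fields are illustrative assumptions of ours, not part of the paper's system:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: objects reference each other directly by identity (OID),
// instead of joining on primary/foreign key values as in a relational model.
public class OidSketch {
    static int nextOid = 1;

    static class DbObject {
        final int oid = nextOid++;        // immutable object identifier
        String data;                      // mutable payload
        final List<DbObject> links = new ArrayList<>(); // direct references

        DbObject(String data) { this.data = data; }
    }

    public static void main(String[] args) {
        DbObject region = new DbObject("sky region");
        DbObject concept = new DbObject("concept: sky");
        concept.links.add(region);        // link by reference, no key lookup
        region.data = "blue sky region";  // payload changes, OID stays stable
        System.out.println(concept.links.get(0).oid == region.oid); // true
    }
}
```

The point of the sketch is that updating an object's payload never invalidates existing links, since the links hold the object's identity rather than a value-based key.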
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1">Related Work</head><p>In this section we briefly review the related work most relevant to our approach. In the image segmentation area, most graph-based segmentation methods attempt to find certain structures in an edge-weighted graph constructed on the image pixels, such as a minimum spanning tree <ref type="bibr" target="#b1">[2]</ref> or a minimum cut <ref type="bibr" target="#b2">[3]</ref>. The central concept used in graph-based clustering algorithms is the homogeneity of regions. For color segmentation algorithms, the homogeneity of regions is color-based, so the edge weights are based on color distance. For image annotation, a graph-based approach is presented in <ref type="bibr" target="#b3">[4]</ref>, where the learning model is constructed simply by exploring the relationships among all images in the feature space and among all annotated keywords. The Nearest Spanning Chain method is proposed to construct a similarity graph that can locally adapt to complicated data distributions. A recent work <ref type="bibr" target="#b4">[5]</ref> associates labels with regions detected in the training set, which poses a major challenge for the learning strategy. The authors use a novel graph-based semi-supervised learning approach to image annotation with multiple instances, extending conventional semi-supervised learning to the multi-instance setting by introducing a two-level bag generator method. Object-oriented database models <ref type="bibr" target="#b5">[6]</ref> are based on object-oriented techniques; their goal is to represent data as a collection of objects organized in a class hierarchy with complex values associated with them. 
Graph database models are an alternative to the limitations of traditional database models for capturing the inherent graph structure of data in applications such as geographic database systems, where the interconnectivity of data is an important aspect. The first object-oriented database model based on a graph structure, O2, was introduced in <ref type="bibr" target="#b6">[7]</ref>. An explicit model named GraphDB is presented in <ref type="bibr" target="#b7">[8]</ref>; it allows simple modeling of graphs in an object-oriented environment. The model permits an explicit representation of graphs by defining object classes whose instances can be viewed as the nodes, edges and explicitly stored paths of a graph. Reference <ref type="bibr" target="#b8">[9]</ref> describes the use of an OODB in content-based medical image retrieval; the proposed approach accelerates image retrieval by distributing the workload of the image processing methods to storage time. In this paper we use, as the core of the proposed management system, HyperGraphDB <ref type="bibr" target="#b9">[10]</ref>, a database based on a hypergraph structure and developed on top of BerkeleyDB <ref type="bibr" target="#b10">[11]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2">The Image Graph Model</head><p>The construction of the initial graph is based on a new use of the image pixels, which are integrated into a network-type graph. We use a hexagonal network structure on the image pixels to represent the graph G = (V, E), and we also consider the graph edges joining the pseudo-gravity centers of the hexagons belonging to the hexagonal network, as presented in Fig. <ref type="figure" target="#fig_0">1</ref>. As can be seen, a triangulation of the image is obtained: a decomposition of the image into a collection of triangles whose vertices form the set V of nodes of the graph G. The conditions for a triangulation are satisfied, namely the collection of triangles is mutually exclusive (no overlapping triangles) and fully exhaustive (the union of all triangles covers the original image). If the edges joining the pseudo-gravity centers of the hexagons are considered, a Delaunay triangulation is obtained <ref type="bibr" target="#b11">[12]</ref>; the grid-graph is a Delaunay graph, and based on the planarity condition (|E| ≤ 3|V| − 6) we showed that the time complexity of the segmentation algorithm is O(n log n). The segmentation algorithms and the complexity proof are presented in <ref type="bibr" target="#b16">[18]</ref>. In the hexagonal structure, each hexagon h has 6 neighboring hexagons in a 6-connected sense, and determining the indexes of the 6 neighbors from the index of the current hexagon is very simple. The main advantage of using hexagons instead of pixels as the elementary unit of information is the reduction of the time complexity of the algorithms. The list of hexagons is stored as a vector of integers 1 . . . 
N, where N, the number of hexagons, is determined by the formula:</p><formula xml:id="formula_0">N = ((H − 1)/2) × ((W − (W mod 4))/4 + (W − (W mod 4) − 4)/4)<label>(1)</label></formula><p>where H represents the height of the image and W its width. Each hexagon in the set has two important associated attributes: its dominant color and its pseudo-gravity center. To determine these attributes we use eight pixels: the six pixels on the hexagon frontier and two interior pixels of the hexagon. The dominant color of a hexagon is the mean color vector of the eight colors of its associated pixels. We split the image pixels into two sets, a set of pixels that represent the vertices of hexagons and the set of complementary pixels; the two lists are used as inputs for the segmentation algorithm. The mapping of the pixel network onto the hexagon network is immediate and not time consuming: </p><formula xml:id="formula_1">f_v = 2 × h + (2 × (h − 1))/(columnNb − 1)<label>(2)</label></formula><p>where f_v represents the index of the first vertex, h the index of the hexagon and columnNb the number of columns of the hexagon network. To represent the output of the image segmentation process we used the Attributed Relational Graph (ARG) <ref type="bibr" target="#b12">[13]</ref>.</p><p>The result of the segmentation algorithm is stored as a graph in which the nodes represent the regions and the edges represent the neighborhood relations: G = (V_r, E_r), where V_r is the set of vertices corresponding to the detected regions and E_r is the set of edges describing the neighborhood relations. The spatial relations between regions fall into 3 categories: distance relations, direction relations and topological relations. 
For determining these types of relations we choose, for each region, the following relevant geometric features: the pseudo-gravity center; the distance between two neighboring regions; the length of the common boundary of two regions; and the angle formed by two regions.</p></div>
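Equation (1) can be sketched in a few lines of Java; the method name hexagonCount and the sample image dimensions are our own illustrative assumptions, and integer division is assumed for the fractions:

```java
// Sketch of Eq. (1): number of hexagons N covering an H x W pixel image.
public class HexGrid {
    public static int hexagonCount(int h, int w) {
        int w4 = w - (w % 4);  // width rounded down to a multiple of 4
        return (h - 1) / 2 * (w4 / 4 + (w4 - 4) / 4);
    }

    public static void main(String[] args) {
        // e.g. a 9 x 12 image: (9-1)/2 * (12/4 + 8/4) = 4 * 5 = 20 hexagons
        System.out.println(hexagonCount(9, 12)); // 20
    }
}
```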
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Image Annotation Based on Ontologies</head><p>In the image annotation process we use two types of ontologies: a visual ontology (or object ontology), which provides an intermediate level connecting low-level features to high-level concepts, and domain ontologies <ref type="bibr" target="#b13">[14]</ref>, which are used for image content annotation. The management of the ontologies used to annotate images involves two closely associated hierarchical levels. The first, low-level layer contains image-specific properties such as color, texture and shape, and the second level contains the image semantics as perceived by a human user. An ontology management system should model the low level so that it supports retrieval and inference over the content. One usage scenario for such a system is that a user loads all the ontologies of a given area, selects different objects in images and correlates them with the concepts of the ontologies. To annotate the simple objects we used learning algorithms based on decision trees, namely the Decision Tree based Semantic Templates algorithm (DT-ST) <ref type="bibr" target="#b14">[15]</ref>. The DT-ST induction method for learning image semantics differs from classical algorithms in that it uses semantic templates for the continuous feature values of the regions. An ST feature is the representative of a concept, built from the set of features extracted from the regions of the training images. The decision tree is built to assign high-level concepts, attached to the leaf nodes of the tree, to low-level features, so that each useful concept of the ontology corresponds to at least one leaf node. In the developed system, each image is divided into a number of regions to which semantic meaning can be attached, and each extracted region has as a member an instance of the class CFeatureVector. 
For each concept we consider a vector with 7 components, [H S V perimeter compactness eccentricity area], whose normalized values are used to construct the decision tree. The components H, S, V correspond to the HSV color space. The obtained decision tree is translated into a system of rules using the Jess inference engine (Java Expert System Shell) <ref type="bibr" target="#b15">[16]</ref>.</p></div>
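The normalization of the 7-component feature vector can be sketched as a per-component min-max scaling; the class name FeatureVector and the sample bounds are our own assumptions, since the paper does not specify the normalization scheme:

```java
public class FeatureVector {
    // Components: [H, S, V, perimeter, compactness, eccentricity, area].
    // Min-max normalization of each component into [0, 1], given per-component
    // bounds (e.g. estimated over the training images).
    public static double[] normalize(double[] v, double[] min, double[] max) {
        double[] out = new double[v.length];
        for (int i = 0; i < v.length; i++) {
            double range = max[i] - min[i];
            out[i] = range == 0 ? 0 : (v[i] - min[i]) / range;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] v = normalize(
            new double[]{180, 0.5, 0.5, 120, 0.8, 0.3, 400},     // raw features
            new double[]{0, 0, 0, 0, 0, 0, 0},                   // assumed minima
            new double[]{360, 1, 1, 1000, 1, 1, 10000});         // assumed maxima
        System.out.println(v[0]); // hue 180 of 360 -> 0.5
    }
}
```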
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Ontologies and RDF Format</head><p>To specify the ontologies and the corresponding graph structure of the segmented and annotated images we used RDF (Resource Description Framework). RDF is a specification for metadata processing that provides interoperability between different applications, allowing information exchange while preserving semantics. To apply the described reasoning method to ontology-based knowledge bases, we used the RDF2Jess model, a hybrid model that fills the gap between RDF and Jess. Based on domain knowledge and using the Protégé [17] ontology editor, this method turns the RDF format into Jess facts using XSL transformations on the XML syntax, together with additional rules in Jess. With these rules, redefined on the basis of RDF semantics, the Jess inference system is used to implement the reasoning. Predefined rules are used to check consistency and to determine the characteristics of the RDF vocabulary. The deduced Jess assertions are helpful in the domain ontology modeling phase to assess and refine the ontology. Depending on the required level of expressiveness, RDF2Jess could be extended to SWRL2Jess, where SWRL extends the set of axioms to include Horn rules. The syntactic and semantic conversion of RDF into Jess allows an ontology in RDF format to be processed with the Jess reasoning engine. The conversion of an ontology into Jess facts and rules is done in four steps:</p><p>The first step is the ontology construction; the ontology editor used, Protégé, provides an RDF plug-in to support ontology development. The taxonomy of knowledge into classes, features and restrictions was accepted by both domain experts and software developers, because this paradigm is very similar to object-oriented modeling (UML). 
Lately, most ontologies have been formalized using the standardized RDF(S) and can be reused to extend the rules if necessary. The second step is the transformation of the RDF syntax into Jess syntax using XSLT; the output file consists of Jess facts. If the semantics of the supported ontology language is specified as Jess rules, matching specific keywords is no longer necessary in the transformation. The third step is to combine the Jess file containing the XSLT transformation result with the predefined RDF rules; moreover, external queries and Jess rules can also be added to the composition in a similar way.</p><p>The last step is running the system of rules in the Jess inference engine. The defined rules perform classification and consistency checking. The output contains error messages indicating the presence of incorrect syntax in the processed RDF ontology.</p><p>To implement the RDF semantics, additional facts and their relations with RDFS primitives, such as rdfs:Class, are defined and represented.</p></div>
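The second conversion step (RDF syntax to Jess facts via XSLT) can be illustrated with a minimal Java sketch built on the standard javax.xml.transform API. The toy stylesheet and the fact format `(assert (resource ...))` are our own simplifications, not the actual RDF2Jess rules:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Minimal sketch of the XSLT step of RDF2Jess: each rdf:Description element
// is rewritten as one Jess fact (toy fact format, for illustration only).
public class Rdf2JessSketch {
    static final String XSLT =
        "<xsl:stylesheet version='1.0' " +
        "xmlns:xsl='http://www.w3.org/1999/XSL/Transform' " +
        "xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'>" +
        "<xsl:output method='text'/>" +
        "<xsl:template match='rdf:Description'>" +
        "(assert (resource <xsl:value-of select='@rdf:about'/>))\n" +
        "</xsl:template>" +
        "</xsl:stylesheet>";

    public static String toJess(String rdfXml) {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(XSLT)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(rdfXml)),
                        new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String rdf = "<rdf:RDF xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'>"
                   + "<rdf:Description rdf:about='Ball'/></rdf:RDF>";
        System.out.print(toJess(rdf)); // (assert (resource Ball))
    }
}
```

The real pipeline would of course emit facts matching the Jess templates of the ontology, but the mechanics (a stylesheet driving a text-output transformation) are the same.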
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Using Object-Oriented Graph Grammars for Learning Complex Concepts</head><p>Graph grammars were first used for image representation in <ref type="bibr" target="#b17">[19]</ref>. They have been applied in several areas, such as the recognition of musical notation and biology. In recent research, graph grammars have been used to define visual languages <ref type="bibr" target="#b18">[20]</ref>. By using graph grammars for the syntactic representation and analysis of images, the spatial relationships between the regions of an image are defined; Fig. <ref type="figure" target="#fig_1">2</ref> shows an example of a graph grammar production rule. Specifying the model involves determining the syntactic graph grammar appropriate for a particular domain. The inference algorithm is described below and uses as input the graphs obtained by the automatic segmentation and annotation of simple regions. We used the graph grammar induction system SubdueGL <ref type="bibr" target="#b19">[21]</ref> with little change in terms of performance, adding variants of generally optional features to the original system. The SubdueGL algorithm was developed on the basis of Subdue <ref type="bibr" target="#b20">[22]</ref> and uses a compression-oriented approach to sub-graphs, which focuses on compressing the graph data set, as opposed to finding the most frequent sub-graphs. Although compression-based and frequency-based approaches to sub-graphs are closely related, they may produce different results, since a less frequent sub-graph can nevertheless produce a good overall compression of the data set. Using a graph-growth process, SubdueGL generates candidate sub-structures that can be used to compress the graph data set. The original method uses a compression value, the minimum description length (MDL), to compare the compression achieved on the data set by each candidate sub-structure. 
The sub-structure with the best MDL value is used in the data-set compression stage. The process is repeated until the data set is fully compressed (a single node remains), no further sub-structure can be found, or a specified number of iterations is reached. The compression process yields a hierarchical reduction in which the directly connected sub-graphs correspond to grammar production rules. Fig. <ref type="figure" target="#fig_2">3</ref> presents an example of a graph grammar determined by applying the SubdueGL algorithm. For a grammar determined in this way, we developed a syntactic analyzer that attaches to each production semantic actions used to determine the semantic concepts. To implement the syntactic analyzer we used the CUP Parser Generator for Java - JavaCup <ref type="bibr" target="#b21">[23]</ref>.</p></div>
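The compression criterion used by Subdue/SubdueGL can be illustrated with a deliberately simplified description-length computation; the real MDL encoding is more elaborate, and the one-unit-per-element cost model below is our own simplification:

```java
// Simplified sketch of the Subdue-style compression value: a substructure S
// is worth keeping when DL(G) / (DL(S) + DL(G compressed by S)) > 1.
public class SubdueSketch {
    // Toy description length: one unit per vertex and per edge.
    static int dl(int vertices, int edges) { return vertices + edges; }

    // gV/gE: size of the full graph; sV/sE: size of the substructure;
    // count: number of (non-overlapping) occurrences of S in G.
    public static double compression(int gV, int gE, int sV, int sE, int count) {
        int compressedV = gV - count * sV + count; // each occurrence becomes one node
        int compressedE = gE - count * sE;         // internal edges disappear
        return (double) dl(gV, gE) / (dl(sV, sE) + dl(compressedV, compressedE));
    }

    public static void main(String[] args) {
        // A 3-node/3-edge pattern repeated 4 times in a 20-node/30-edge graph:
        System.out.println(compression(20, 30, 3, 3, 4) > 1.0); // true: compressing pays off
    }
}
```

This mirrors the trade-off described above: a sub-graph occurring only once rarely improves the ratio, while a moderately frequent one can, even if it is not the most frequent.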
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Graph Object Oriented Database</head><p>The semantic information corresponds to concepts of the domain ontology on the one hand and to elements of the visual ontology on the other. The visual concepts determined automatically in the post-processing phase of the segmentation results are stored implicitly in the ARG structure representing the relations between the regions of an image. Each semantic object has an attribute that points to the interpreted region object (its OID).</p><p>The OID of a semantic object is the same as the identifier of the corresponding synset from WordNet <ref type="bibr" target="#b22">[24]</ref>. In this way we preserve the uniqueness of OID attribute values, and we provide the link for annotations that use different but synonymous concepts. This approach prevents duplication of semantic information in the database; an overview of how the links between visual concepts/domain concepts and the related regions are stored is given in Fig. <ref type="figure">4</ref>.</p></div>
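The synset-as-OID scheme can be sketched as a registry keyed by synset identifier, so that repeated annotations of the same concept resolve to one stored object. The class name SemanticStore and the synset id format are our own illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: the OID of a semantic object is its WordNet synset identifier,
// so two annotations of the same concept share a single stored object.
public class SemanticStore {
    private final Map<String, Object> objects = new HashMap<>();

    public Object getOrCreate(String synsetId) {
        return objects.computeIfAbsent(synsetId, id -> new Object());
    }

    public int size() { return objects.size(); }

    public static void main(String[] args) {
        SemanticStore store = new SemanticStore();
        Object a = store.getOrCreate("ball.n.01"); // hypothetical synset id
        Object b = store.getOrCreate("ball.n.01"); // same concept annotated again
        System.out.println(a == b && store.size() == 1); // true: no duplication
    }
}
```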
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Indexing Graph Object Oriented Database</head><p>The indexing problem is approached using graph theory: the indexing relationship assigns indexes to classes, forming a directed graph. Other approaches <ref type="bibr" target="#b23">[25]</ref> refer to the database schema in order to obtain an optimal index selection time. For storing and linking the indexes we use a hypergraph <ref type="bibr" target="#b25">[27]</ref>, implemented by the CHypergraph class; accordingly, the output of the indexing algorithm (indexThis) is an instance of CHypergraph. The hypergraph structure was chosen to represent the indexes because it is well suited to browsing and retrieving the images corresponding to the processed graphs. The hyperGraphGroup function of the algorithm is presented in Algorithm 2.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm 2: Function hyperGraphGroup</head><p>Input: the index of the current instance, the current attribute.</p><p>For the index management of the OODB we build a system of indexes based on the geometric and semantic attributes of the shapes. The object-oriented database allows an index to be created on a specific field or group of fields. The use of indexes improves query performance, but at the same time the indexes are also stored in the database, and the resulting growth in size can decrease storage performance. As a result of our tests we consider two groups of indexes: the first (geometric) group is used only for the training images in the off-line phase of system utilization (the learning phase), and the second (semantic) group is used for all other images in the online phase of system utilization (the symbolic query phase). The first group of indexes is built on the attributes extracted after image segmentation: the perimeter, the gravity center, the compactness of the shape, the eccentricity of the shape, the list of gravity centers of the hexagons on the contour and the syntactic characteristics of the boundary shape. This approach leads to a good optimization of the retrieval process for linking an image to a synset. In this stage the OODB contains only the information corresponding to the ontology, so the space taken by the system of indexes does not influence the storage performance. At the end of this phase the first group of indexes is deleted.</p></div>
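The two-phase index lifecycle described above (geometric indexes during learning, deleted afterwards; semantic indexes kept for querying) can be sketched as follows; the class IndexManager and its method names are our own illustrative assumptions:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the two index groups: the geometric group exists only during the
// off-line learning phase, the semantic group serves the online query phase.
public class IndexManager {
    private final Set<String> geometric = new HashSet<>();
    private final Set<String> semantic = new HashSet<>();

    public void buildLearningIndexes() {
        for (String f : new String[]{"perimeter", "gravityCenter", "compactness",
                                     "eccentricity", "contourCenters", "boundarySyntax"})
            geometric.add(f);
    }

    public void buildQueryIndexes() { semantic.add("synsetId"); }

    // "At the end of this phase the first group of indexes is deleted."
    public void endLearningPhase() { geometric.clear(); }

    public int activeIndexCount() { return geometric.size() + semantic.size(); }

    public static void main(String[] args) {
        IndexManager m = new IndexManager();
        m.buildLearningIndexes();
        m.buildQueryIndexes();
        m.endLearningPhase();
        System.out.println(m.activeIndexCount()); // 1: only the semantic group remains
    }
}
```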
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Retrieval Graph Object Oriented Database</head><p>The power of the native query is given by the flexibility of the object-oriented paradigm and by the possibility of using dynamic queries, which are easily implemented based on the properties of object-oriented languages. In this way the productivity of object-oriented programming is not affected by using a standard SQL query. Query expressions written in the symbolic language must be analyzed and converted to an equivalent native query format. In this process the relationships between the concepts of the ontology on the one hand, and between concepts and classes on the other, are used. The translation involves two stages: in the first step the WordNet taxonomy is used; in the second stage the mapping of concepts onto classes is used. For every word present in the query expression we search for a corresponding synset in the WordNet taxonomy and mark those synsets. When a word has no synset, we use the synonym relation of the taxonomy to retrieve one. If no synset is found, the word is returned to the user as having no relevance for the semantic query and is removed from the expression. After this stage, the initial list of query words has become a list of synsets. In the second step, for each synset returned after the first stage, we determine the corresponding class and create an instance of the class by calling the constructor that receives the name of the synset. All these instances and classes are used in the process of matching against the objects stored in the OODB. A native query is obtained in this way; after its execution we have a list of objects corresponding to the images whose semantic content matches the query expression. 
The name of the file containing the physical image is formed from the unique identifier attribute of each object. Through the graphical user interface these images are shown to the user, grouped in clusters according to the input list of synsets.</p></div>
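The first translation stage (query words to synsets, with a synonym fallback and discarding of unresolved words) can be sketched as follows; the class QueryTranslator, the toy lookup tables and the synset id strings are our own illustrative stand-ins for the WordNet taxonomy:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of stage one of the symbolic-query translation:
// word -> synset directly, or via a synonym, or dropped from the query.
public class QueryTranslator {
    private final Map<String, String> synsets = new HashMap<>();
    private final Map<String, String> synonyms = new HashMap<>();

    public QueryTranslator() {
        synsets.put("ball", "ball.n.01");  // hypothetical synset ids
        synsets.put("ring", "ring.n.01");
        synonyms.put("hoop", "ring");      // "hoop" resolves through a synonym
    }

    public List<String> toSynsets(String[] words) {
        List<String> result = new ArrayList<>();
        for (String w : words) {
            String s = synsets.get(w);
            if (s == null && synonyms.containsKey(w))
                s = synsets.get(synonyms.get(w));
            if (s != null) result.add(s);  // unresolved words are discarded
        }
        return result;
    }

    public static void main(String[] args) {
        QueryTranslator t = new QueryTranslator();
        // "red" resolves to no synset in this toy table and is dropped:
        System.out.println(t.toSynsets(new String[]{"red", "ball", "hoop"}));
        // prints [ball.n.01, ring.n.01]
    }
}
```

The second stage would then map each returned synset to a class and instantiate it with the synset name, as described above.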
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Experiments</head><p>A prototype system was designed and implemented in Java, using HyperGraphDB. We tested our system on the Princeton Event dataset <ref type="bibr" target="#b26">[28]</ref> and on the MPEG7 CE Shape-1 Part B dataset <ref type="bibr" target="#b27">[29]</ref>. The MPEG7 database consists of 70 classes with 20 shapes per class. The retrieval process involves two categories of experiments: a) retrieval based on symbolic language queries, and b) retrieval based on query-by-example.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Retrieving with Symbolic Language Query</head><p>In this case we considered pseudo-natural queries based on the concepts of the ontology. We use the Princeton Event dataset, which contains 8 sport activities with about 200 images per category: badminton, bocce, croquet, polo, rock-climbing, rowing, sailing and snowboarding. For each category we consider 25 representative images for the learning phase. The OODB "sports.hgdb" initially stores the information extracted by segmentation from the training images. Using indexes such as the perimeter, the pseudo-gravity center, the compactness of the shape, the eccentricity of the shape, the list of gravity centers of the hexagons on the contour and the syntactic characteristics of the boundary shape, we allocate and store in sports.hgdb all the images of the dataset. After the learning and storing phase the OODB is ready to be interrogated. The pseudo-natural query considered is: "red ball one hoop". Using the data from the ontology and the WordNet information, this query is translated into the equivalent object-oriented native query: CImage imageQuery = new CImage("croquet with red ball and one hoop"); List&lt;CImage&gt; resultsImage = db.query(new Predicate&lt;CImage&gt;() { public boolean match(CImage image) { return imageQuery.getName().equals(image.getName()); } }); Figure <ref type="figure" target="#fig_5">5</ref> shows the results of this query applied on our "sports.hgdb" database; we consider only the first 16 retrieved images. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Retrieving with Query-by-Example</head><p>In this case we considered query-by-example based on a query image; we used images from 16 classes (MPEG7 CE Shape-1 Part B) to evaluate the performance of the shape recognition system based on the retrieval rate. The OODB "shapes.hgdb" initially stores the information extracted by segmentation from all the images; the stored data are the perimeter, the gravity center, the compactness of the shape, the eccentricity of the shape, the list of gravity centers of the hexagons on the contour and the syntactic characteristics of the boundary shape. Fig. <ref type="figure" target="#fig_6">6</ref> shows the results of such a query applied on our "shapes.hgdb" database; we consider only the first 14 retrieved images, where the first image is the query image. </p></div>
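Query-by-example retrieval over the stored geometric attributes can be sketched as a nearest-neighbor search in feature space; the class ShapeMatcher, the reduced feature subset [perimeter, compactness, eccentricity] and the Euclidean metric are our own illustrative assumptions (the paper does not state the distance used):

```java
// Sketch: rank stored shapes by Euclidean distance to the query shape's
// geometric feature vector [perimeter, compactness, eccentricity].
public class ShapeMatcher {
    public static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }

    // Index of the stored feature vector closest to the query.
    public static int nearest(double[] query, double[][] stored) {
        int best = 0;
        for (int i = 1; i < stored.length; i++)
            if (distance(query, stored[i]) < distance(query, stored[best])) best = i;
        return best;
    }

    public static void main(String[] args) {
        double[][] db = {{100, 0.9, 0.1}, {200, 0.4, 0.8}, {105, 0.85, 0.15}};
        System.out.println(nearest(new double[]{102, 0.88, 0.12}, db)); // 0
    }
}
```

In practice the features would be normalized first (the perimeter otherwise dominates the distance), and the ranked list, rather than a single nearest item, would populate the result view of Fig. 6.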
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusions</head><p>In this paper we propose a method for image processing based on graph structures, with the aim of achieving good retrieval. The process has three phases: (I) an image segmentation stage based on graph structures and graph theory; (II) an adaptive visual-feature, object-oriented representation of image contents; and (III) the management of the ontologies used for annotating the objects in images. Using these three stages and an object-oriented wrapper for HyperGraphDB, the system allows two types of queries: query-by-example and, especially, queries based on a symbolic language. The experiments showed that the retrieval process can lead to good results regardless of the domain the images come from. Future work involves the description and use of the graph grammar with the goal of searching and retrieving complex images based on complex queries formulated in a symbolic language.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. The hexagonal structure on the image pixels</figDesc><graphic coords="4,156.22,74.24,173.00,118.32" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. Example of production rule</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. SubdueGL application example: a) original graph b) graph production recovered after determining P1 c) the final graph obtained after applying productions obtained: P1, P2, P3</figDesc><graphic coords="7,138.82,74.33,207.48,193.59" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. Triple link: visual concept/semantic concept/Region</figDesc><graphic coords="8,156.22,74.27,172.92,73.05" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head></head><label></label><figDesc>CImage imageQuery = new CImage("croquet with red ball and one hoop"); List&lt;CImage&gt; resultsImage = db.query(new Predicate&lt;CImage&gt;() { public boolean match(CImage image) { return imageQuery.getName().</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Fig. 5 .</head><label>5</label><figDesc>Fig. 5. The results images for the semantic query</figDesc><graphic coords="11,121.54,348.30,242.02,214.94" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Fig. 6 .</head><label>6</label><figDesc>Fig. 6. The results images for the query-by-example</figDesc><graphic coords="12,121.54,232.95,241.93,145.50" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A graph-oriented object database model</title>
		<author>
			<persName><forename type="first">M</forename><surname>Gyssens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Paredaens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Van Den Bussche</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">V</forename><surname>Gucht</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. on Knowl. and Data Eng</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">4</biblScope>
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Efficient Graph-Based Image Segmentation</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">F</forename><surname>Felzenszwalb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Huttenlocher</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Intl. Journal of Computer Vision</title>
		<imprint>
			<biblScope unit="volume">59</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="167" to="181" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Normalized cuts and image segmentation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Malik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="888" to="905" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">An adaptive graph model for automatic image annotation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W.-Y</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimedia Information Retrieval</title>
		<imprint>
			<biblScope unit="page" from="61" to="70" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A Novel Graph-based Image Annotation with Two Level Bag Generators</title>
		<author>
			<persName><forename type="first">X</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Qian</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computational Intelligence and Security, International Conference on</title>
				<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="71" to="75" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Object-Oriented Databases: Definition and Research Directions</title>
		<author>
			<persName><forename type="first">W</forename><surname>Kim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Knowledge and Data Engineering</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="327" to="341" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">an Object-Oriented Data Model</title>
		<author>
			<persName><forename type="first">C</forename><surname>Lécluse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Richard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Vélez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM SIGMOD Int. Conf. on Management of Data</title>
				<meeting>the ACM SIGMOD Int. Conf. on Management of Data</meeting>
		<imprint>
			<date type="published" when="1988">1988</date>
			<biblScope unit="page" from="424" to="433" />
		</imprint>
	</monogr>
	<note>O2</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">GraphDB: Modeling and Querying Graphs in Databases</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">H</forename><surname>Guting</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 20th Int. Conf. on Very Large Data Bases</title>
				<meeting>20th Int. Conf. on Very Large Data Bases</meeting>
		<imprint>
			<date type="published" when="1994">1994</date>
			<biblScope unit="page" from="297" to="308" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Content-based Medical Images Retrieval in Object Oriented Database</title>
		<author>
			<persName><forename type="first">C</forename><surname>Traina</surname><genName>Jr</genName></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J M</forename><surname>Traina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">Y</forename><surname>Senzako</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 10th IEEE Symposium on Computer-Based Medical System -Part II</title>
				<meeting>10th IEEE Symposium on Computer-Based Medical System -Part II</meeting>
		<imprint>
			<date type="published" when="1997">1997</date>
			<biblScope unit="page" from="67" to="72" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m">HyperGraphDB</title>
		<ptr target="http://www.kobrix.com/hgdb.jsp" />
		<imprint/>
	</monogr>
	<note>consulted 01/02/2010</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Berkeley DB</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Olson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Bostic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Seltzer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the FREENIX Track: USENIX Annual Technical Conference</title>
				<meeting>the FREENIX Track: USENIX Annual Technical Conference</meeting>
		<imprint>
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Sur la sphère vide</title>
		<author>
			<persName><forename type="first">B</forename><surname>Delaunay</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Izvestia Akademii Nauk SSSR, Otdelenie Matematicheskikh i Estestvennykh Nauk</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="793" to="800" />
			<date type="published" when="1934">1934</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Spatial pattern discovery by learning a probabilistic parametric relational graphs</title>
		<author>
			<persName><forename type="first">P</forename><surname>Hong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">S</forename><surname>Huang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Discrete Applied Mathematics</title>
		<imprint>
			<biblScope unit="volume">139</biblScope>
			<biblScope unit="page" from="113" to="135" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Language-based querying of image collections on the basis of an extensible ontology</title>
		<author>
			<persName><forename type="first">C</forename><surname>Town</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sinclair</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Image Vision Comput</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="251" to="267" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Integrating Semantic Templates with Decision Tree for Image Semantic Learning</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Computer Science</title>
		<idno type="ISSN">0302-9743</idno>
		<imprint>
			<biblScope unit="volume">4352</biblScope>
			<biblScope unit="page" from="185" to="195" />
			<date type="published" when="2007">2007</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><surname>Friedman-Hill</surname></persName>
		</author>
		<title level="m">Jess in Action : Java Rule-Based Systems</title>
				<imprint>
			<publisher>Manning Publications</publisher>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">A New Method for Segmentation of Images Represented in a HSV Color Space</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">D</forename><surname>Burdescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Brezovan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ganea</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Stanescu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advanced Concepts for Intelligent Vision Systems</title>
		<imprint>
			<date type="published" when="2009">2009</date>
			<pubPlace>Bordeaux, France</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Web grammars</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Pfaltz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rosenfeld</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Joint Conferences on Artificial Intelligence</title>
				<imprint>
			<date type="published" when="1969">1969</date>
			<biblScope unit="page" from="609" to="620" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">On a spatial graph grammar formalism</title>
		<author>
			<persName><forename type="first">J</forename><surname>Kong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Symposium on Visual Languages -Human Centric Computing (VLHCC&apos;04)</title>
				<meeting><address><addrLine>Washington, DC, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="102" to="104" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Concept Formation Using Graph Grammars</title>
		<author>
			<persName><forename type="first">I</forename><surname>Jonyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Holder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Cook</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the KDD Workshop on Multi-Relational Data Mining</title>
				<meeting>the KDD Workshop on Multi-Relational Data Mining</meeting>
		<imprint>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Empirical Substructure Discovery</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">B</forename><surname>Holder</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Sixth International Workshop on Machine Learning</title>
				<meeting>the Sixth International Workshop on Machine Learning</meeting>
		<imprint>
			<date type="published" when="1989">1989</date>
			<biblScope unit="page" from="133" to="136" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m">JavaCUP</title>
		<ptr target="http://www.cs.princeton.edu/appel/modern/java/CUP" />
		<imprint/>
	</monogr>
	<note>consulted 17/04/2009</note>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Nouns in WordNet: a Lexical Inheritance System</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Miller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Lexicography</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="245" to="264" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">An indexing model for object-oriented database systems</title>
		<author>
			<persName><forename type="first">R</forename><surname>Gagliardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zezula</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advanced Computer Technology, Reliable Systems and Applications: 5th Annual European Computer Conference Proceedings</title>
				<imprint>
			<date type="published" when="1991">1991</date>
			<biblScope unit="page" from="287" to="289" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">A survey of indexing techniques for object-oriented databases</title>
		<author>
			<persName><forename type="first">E</forename><surname>Bertino</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings Dagsthul Seminar Query Processing in Object-Oriented, Complex-Objectand Nested Relational Databases</title>
				<meeting>Dagsthul Seminar Query Processing in Object-Oriented, Complex-Objectand Nested Relational Databases</meeting>
		<imprint>
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Patterns, Hypergraphs and Embodied General Intelligence</title>
		<author>
			<persName><forename type="first">B</forename><surname>Goertzel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCNN, Neural Networks International Joint Conference on</title>
				<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="451" to="458" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">What, where and who? Classifying event by scene and object recognition</title>
		<author>
			<persName><forename type="first">L.-J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Fei-Fei</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Computer Vision (ICCV)</title>
				<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Analyzing Appearance and Contour Based Methods for Object Categorization</title>
		<author>
			<persName><forename type="first">B</forename><surname>Leibe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Schiele</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Computer Vision and Pattern Recognition</title>
				<meeting><address><addrLine>Madison, Wisconsin</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2003-06">June 2003</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
