<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>A. Woodruff, P. M. Aoki, E. Brewer, P. Gauthier, and L. A. Rowe. An investigation of documents from the World Wide Web. Computer Networks and ISDN Systems</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Toward a Structured Information Retrieval System on the Web: Automatic Structure Extraction of Web Pages</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Mathias Géry</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jean-Pierre Chevallet</string-name>
        </contrib>
<aff>Grenoble Cedex, France. E-mail: Mathias.Gery@imag.fr, Jean-Pierre.Chevallet@imag.fr</aff>
      </contrib-group>
      <pub-date>
        <year>1997</year>
      </pub-date>
      <volume>28</volume>
      <issue>1996</issue>
      <fpage>6</fpage>
      <lpage>15</lpage>
      <abstract>
<p>The World Wide Web is a distributed, heterogeneous and semi-structured information space. With the growth of available data, retrieving interesting information is becoming quite difficult, and classical search engines often give very poor results. The Web is changing very quickly, yet search engines mainly use old and well-known IR techniques. One of the main problems is the lack of explicit structure in HTML pages and, more generally, in Web sites. We show in this paper that it is possible to extract such a structure, which can be explicit or implicit: hypertext links between pages, implicit relations between pages, HTML tags describing structure, etc. We present some preliminary results of a Web sample analysis extracting several levels of structure (a hierarchical tree structure and a graph-like structure). The task of an Information Retrieval System (IRS) is to process a whole set of electronic documents (a corpus) so that users can retrieve those matching their information need. Unlike a Database Management System (DBMS), the user expresses with a query the semantic content of the documents that he seeks. We distinguish two principal tasks. Indexing: the extraction and storage of the documents' semantic content. This phase requires a representation model of these contents, called the document model.</p>
      </abstract>
      <kwd-group>
        <kwd>Web Information Retrieval</kwd>
        <kwd>Web Pages Analysis</kwd>
        <kwd>Structure Extraction</kwd>
        <kwd>Statistics</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
<p>Search engines have hundreds of millions of indexed pages. They are nevertheless very fast and are able to solve several thousands of queries
per second. In spite of all their efforts, the answers provided by these systems are generally not very
satisfactory. Preliminary results obtained with a test collection of the TREC conference Web Track showed
the poor result quality of 5 well-known Web search engines, compared to that of 6 systems taking part in
TREC [17].</p>
<p>In fact, most existing search engines use well-known techniques like those described by Salton 30
years ago [30]. Most of them prefer a wide coverage of the Web with low indexing quality to a better indexing
of a smaller part of the Web. In particular, they generally consider HTML pages as atomic and independent
documents, without taking into account the relations existing between them. The notion of document for a
search engine is reduced to its physical appearance, an HTML page. The Web's structure is used by only a few
Web search engines, such as Google [6].</p>
<p>With structured IR in mind, we wanted to determine which structure exists on the Web and which
structure it is possible to extract. This paper is organized as follows: after a presentation of related work in
section 2 (IR with structured documents, hypertexts and the Web), we present in section 3.1 our hypotheses about
what an ideal structure for the Web would be. In section 3.2 we propose an approach to validate
these hypotheses and check whether this kind of structure exists on the Web. Finally, we introduce
the Web sample that we have analysed in section 4.1 and some preliminary results of our experiments
in sections 4.2, 4.3 and 4.4, while section 6 concludes this work-in-progress and gives some future
directions.</p>
    </sec>
    <sec id="sec-2">
      <title>2 IR and structure on the Web</title>
<p>The Web is not only a simple set of atomic documents. The HTML standard allows the description of structured
multimedia documents, and it is widely used to publish on the Web. Furthermore, the Web is a hypertext, with
URLs (Uniform Resource Locators) used for the description of links. This structure has been used for IR, in the
context of structured documents as well as in the context of classical hypertexts. We distinguish 3 main approaches
proposing techniques for information access using structure: the navigation, DBMS and IR approaches.</p>
      <sec id="sec-2-1">
        <title>Navigation approach</title>
<p>Navigation is based on links, used for finding and consulting interesting information. In the case of
navigation within a hypertext composed of several hundreds of nodes, this solution can be useful. The
task is more difficult to achieve on larger hypertexts, mainly because of disorientation and cognitive-overload
problems. Furthermore, it is necessary to have the right links at the right place. A solution is proposed by
“Web directories” such as Yahoo or the Open Directory Project (http://www.yahoo.com, http://www.dmoz.org), which propose an organized hierarchy of several
millions of sites. These hierarchies are built and verified manually, and are thus expensive and difficult to
keep up-to-date. Furthermore, exhaustiveness is impossible to reach.</p>
      </sec>
      <sec id="sec-2-2">
        <title>DBMS approach</title>
<p>Documents are represented using a data schema encapsulated in a relational or object-oriented [10] data
schema. This allows interrogation using a declarative query language, based on exact matching and constrained
by the data-schema structure. The integration of hypertext structure into the database schema has been much
studied, for example by [25], [3] (ARANEUS project), [16] (TSIMMIS project), etc. Integration attempts
at the query-language level can be found in hyperpaths [1] or POQL [10]. In fact, these approaches are
extensions of the solutions proposed for the integration of document structure.</p>
      </sec>
      <sec id="sec-2-3">
        <title>IR approach</title>
<p>
          The IR approach deals with structured documents, promoting hierarchical indexing: during the indexing
process, information is propagated from document sections to the top of the document, along composition relations.
This method is refined by Lee [
          <xref ref-type="bibr" rid="ref5">22</xref>
          ], who distinguishes several ascent strategies. Paradis [28]
distinguishes several structure types, with data ascent depending on the link type.
        </p>
<p>
          Hypertext structure has also been taken into account at the indexing step. For example, the hypertext graph
can be incorporated into a global indexing schema using the conceptual graph model [
          <xref ref-type="bibr" rid="ref2">9</xref>
          ] or using inference
networks [
          <xref ref-type="bibr" rid="ref4">11</xref>
          ]. The World Wide Web Worm [
          <xref ref-type="bibr" rid="ref7">24</xref>
          ] enables the indexing of multimedia documents by using
the text surrounding anchors. Amitay [2] also promotes the use of a document's context. Marchiori [23] adds the
notion of “navigation cost”, which expresses the navigation effort needed to reach a given page.
        </p>
<p>
          SmartWeb [
          <xref ref-type="bibr" rid="ref3">13</xref>
          ] considers the information accessible from a Web page at indexing time, so page relevance
is evaluated considering not only the page content but also the content of the page's neighbors. Kleinberg (HITS [
          <xref ref-type="bibr" rid="ref8">18</xref>
          ])
promotes the use of both link directions: he introduces the concepts of hub pages (pages that reference many authorities) and authority pages (pages referenced by many hubs). For
automatic resource compilation, the CLEVER system [
          <xref ref-type="bibr" rid="ref1">8</xref>
          ], based on the same idea, obtains good results
compared to manually generated compilations (Yahoo!). Gurrin [
          <xref ref-type="bibr" rid="ref6">15</xref>
          ] has tried to improve Kleinberg's approach:
he distinguishes 2 link types (structural and functional) and uses only the structural ones. The well-known
Google search engine [6] uses the textual anchors of links to describe the pages those links reference.
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>Related Works : Discussion</title>
<p>We think that the navigation approach is well adapted to manually manageable collections, but the Web is too big
to be accessed by navigation alone. Navigation can, however, usefully complement other techniques, for example
to consult search results.</p>
<p>Concerning DBMS approaches, we think that a declarative query language is not adapted to the heterogeneity
of the Web. Moreover, these approaches rely on an underlying database schema, and Web pages have to be
expressed following this schema or following predefined templates. Following Nestorov [27], we think
that even if some Web pages are strongly structured, this structure is too irregular to be modeled efficiently
with structured models such as the relational or object models.</p>
<p>The IR approach enables natural-language querying, and considers relevance in a hypertext context. At
present, most IR approaches are based on the use of page connectivity, with the notion of relevance
propagation along links. The drawback is that this information is poorly exploited, because relations (links)
and nodes (documents) are not typed on the Web.</p>
<p>We think that these approaches are interesting and useful. The lack of explicit Web structure to improve
them encourages us to work on Web structure extraction. Several works have focused on statistical studies
[5], [4], [31], dealing with the use of HTML tags or with the distribution of links, which leads for example to the
notion of hub and authority pages. Pirolli [29] has categorized Web pages following 5 predefined categories
related to site structure, based on usage, site connectivity and content data. Broder [7] has studied
Web connectivity and extracted a macroscopic Web structure. But none of these works deals with the
extraction of Web structure (structured documents and hypertexts) related to IR objectives.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3 Is the Web well structured?</title>
<p>The main objectives of our Web sample analysis are to identify the explicit structure of the Web, and to extract its
implicit structure. Obviously, the Web is not structured in the database sense of the term. But
HTML allows people to publish structured sites. Thus we speak of a hierarchically structured Web as
well as of a Web structured in the hypertext sense. The question is: “Is the Web sufficiently structured (especially
hierarchically) to index it following a structured IR model?”. This main objective leads us to other
interesting questions such as “What is a document on the Web?” and “How can a Web link be classified?”.</p>
      <p>We present our approach to answer these questions. Firstly, in section 3.1 we present the kind of structure
that we wanted to identify/extract from the Web. We hypothesize that this ideal structure for the Web exists.
The underlying problematic is about a structured IR model: our final goal is to develop an IR model adapted
to Web.</p>
<p>Secondly, we present in section 3.2 our interpretation of HTML usage in relation to our hypotheses. The Web's
structure depends mainly on how site authors use HTML, so we finally present in section 4 some
preliminary results of a Web sample analysis.</p>
      <sec id="sec-3-1">
        <title>Hypotheses</title>
<p>We will try to check the following assumptions, directly related to the concept of information in a
semi-structured and heterogeneous context.
Hypothesis 1: information granularity. We think that information units on the Web can have different
granularities. By assembling these various constituents, one can build entities of larger
size. We distinguish at least 6 main granularities, and hypothesis 1 is detailed along these 6
granularities:
H1.1: elementary constituent. We think that the Web exhibits a notion of elementary
constituent, which can be a morpheme, a word or a sentence. In our approach, the
elementary constituent is at the sentence level.</p>
<p>H1.2: paragraph. By assembling sentences, one can build paragraph-sized entities. This is our first
level of structure. This structure is a list, reflecting the logical sequence needed to
constitute an understandable unit.</p>
<p>H1.3: document section. This second level includes all the elements that compose a “classical”
document, like sub-sections, sections, chapters, etc. All of them are built from paragraphs.
They also include some other attributes such as title, author, etc.</p>
<p>H1.4: document. This third level is the first one that introduces a tree-like structure, based on
document sections. Moreover, the reader should follow a reading direction for a better understanding: for
example, people generally read the “introduction” before the “conclusion”.</p>
<p>H1.5: hyper-document. This level loses the reading organization when gluing documents together. It
can be associated with parts of a hypertext, where a reading direction is no longer obligatory.
H1.6: cluster of hyper-documents. This last level is useful to glue together hyper-documents that have
some characteristics in common, such as theme or authors. It can be seen as the library-shelf
metaphor.</p>
<p>Hypothesis 2: relations. There are various relations between documents, whatever their granularity. We
distinguish at least 3 main relation types, and hypothesis 2 is detailed along these 3 types:
H2.1: composition. This relation expresses the hierarchical (tree-like) construction of a higher-granularity
entity. It is used in the first five levels of the granularity description above (e.g.
paragraphs are composed of sentences). Composition deals with attributes shared along
composition relations, for example the author name. It also deals with the lifetime of the compound element:
a paragraph no longer exists without its sentences. Composition can be split into weak
and strong composition according to the sharing status. The composition is weak if an element
can be shared; in this case the relation draws a lattice, otherwise we obtain a tree.</p>
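<p>As an illustration, the weak/strong distinction can be checked mechanically. The following sketch (in Python; the function name and the encoding of the relation as (parent, child) pairs are ours, not part of the model) classifies a composition relation as a tree or a lattice:</p>
<preformat>
```python
def composition_kind(edges):
    """Given a composition relation as (parent, child) pairs, decide
    whether it is 'strong' (no element is shared, so the relation
    draws a tree) or 'weak' (some element has several parents, so
    the relation draws a lattice)."""
    parents = {}
    for parent, child in edges:
        parents.setdefault(child, set()).add(parent)
    # an element shared by several parents makes the composition weak
    if any(len(ps) > 1 for ps in parents.values()):
        return "weak"
    return "strong"
```
</preformat>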
<p>H2.2: sequence. Certain document parts can be organized by the author in an orderly way: part B
precedes part C and follows part A. This order suggests a reading direction to the reader, for
a better understanding. This relation only concerns H1.1 to H1.4. It can be modeled using the
probability that a part B is best understood after the reading of a part A; this conditional
probability can be taken as the fuzzy value of the sequence from A to B.</p>
<p>H2.3: reference. This relation is weak, in the sense that it can link elements at any granularity level
that have something in common. For example, an author can refer to another
document for complementary information, or two documents can refer to each other because of their
similarity.</p>
<p>The next generation of Web search engines will have to consider all these granularities and relations. In
the next section, we interpret HTML usage on the Web in relation to these hypotheses.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Web analysis to validate assumptions</title>
<p>Our objective is to study different characteristics of the Web, with the aim of validating our hypotheses. Leaving
aside sub-sentence granularities, we have hypothesized that there exist 6 main granularities on
the Web (cf. section 3.1), from the sentence level up to the cluster-of-hyper-documents level. To validate hypotheses
1.1, 1.2 and 1.3, we have chosen HTML tags as descriptors of inside-page granularities.</p>
<p>H1.1 HTML makes it possible to describe elementary constituents, with tags such as &lt;ADDRESS&gt; or &lt;CODE&gt;. Several
are at the presentation level, others at the semantic level. We place our analysis at the sentence level,
and we have not found many tags that explicitly isolate a sentence as &lt;CITE&gt; does. All other tags
are sentence-internal elements.</p>
<p>H1.2 We propose to place at this level simple paragraphs and “bloc elements” such as &lt;TABLE&gt; or &lt;FORM&gt;.
There exist sub-bloc elements such as &lt;PRE&gt; that we also place at this level. Of course, we propose to use the
paragraph separators &lt;P&gt; and &lt;HR&gt;.</p>
<p>H1.3 To express document sections, one can use the HTML separators &lt;Hn&gt;. In fact, we could use the whole
Web page as a section.</p>
<p>H1.4 We propose to consider the physical HTML page as a document. But we could also take a set of
interconnected Web pages as a document, assuming that the links between them represent composition.
H1.5 Our first proposition is to consider a hyper-document to be an Internet site, defined
as a set of pages on the same host.</p>
<p>H1.6 To represent our cluster of hyper-documents, we propose the notion of Web domain (e.g. “.imag.fr”).</p>
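<p>The tag-to-level mapping of H1.1 to H1.3 can be sketched as a small counter over HTML start tags (a Python sketch; the exact tag sets are our reading of this section, not a fixed standard):</p>
<preformat>
```python
from html.parser import HTMLParser

# Hypothetical mapping from HTML tags to the granularity hypotheses
# above; the tag sets are an interpretation, not a standard.
TAG_LEVELS = {
    "cite": "H1.1",                                   # sentence level
    "p": "H1.2", "hr": "H1.2",                        # paragraph separators
    "table": "H1.2", "form": "H1.2", "pre": "H1.2",   # bloc elements
    "h1": "H1.3", "h2": "H1.3", "h3": "H1.3",
    "h4": "H1.3", "h5": "H1.3", "h6": "H1.3",         # section separators
}

class GranularityCounter(HTMLParser):
    """Count how many markers of each granularity a page contains."""
    def __init__(self):
        super().__init__()
        self.counts = {}

    def handle_starttag(self, tag, attrs):
        level = TAG_LEVELS.get(tag)
        if level is not None:
            self.counts[level] = self.counts.get(level, 0) + 1

def count_granularities(html_text):
    parser = GranularityCounter()
    parser.feed(html_text)
    return parser.counts
```
</preformat>
<p>Running such a counter over every page of a sample yields the per-level averages discussed in section 4.2.</p>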
<p>To validate hypothesis 2, we have tried to identify composition and sequence links; all unidentified
links are categorized as reference links. Implicit similarity and reference relations are not extracted.
H2.1 Composition can be identified inside pages by the H1.3 tags, representing strong composition. Also,
inside-site links can be identified as hierarchical, representing strong or weak composition.
H2.2 Sequence can be found by looking at the implicit position of a fragment relative to the following
text segment (inside pages). Also, some inside-site links from a page to one of its sisters can be
considered.</p>
<p>H2.3 All the remaining links are classified in this category. This type of relation can be represented on the
Web using hypertext links, but it can also be implicit, like quotations for example.</p>
<p>It is possible to describe such a structure, but is it a reality on the Web? We have to verify whether these sub-page
granularity tags are used by authors (H1.1 to H1.3), and we have to check whether the concepts of page, site and
domain are relevant on the Web (H1.4 to H1.6). For each page, we have to rebuild the hierarchical tree structure,
and to identify a structured-document hierarchy between HTML pages.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Experiments results</title>
<p>We present in this section some preliminary results of a Web sample analysis, and particularly the
statistics used to validate our assumptions.</p>
      <sec id="sec-4-1">
        <title>Web pages sample: IMAG collection</title>
<p>We have collected an “October 5, 2000 snapshot” Web sample, using our Web crawler “CLIPS-Index” (cf.
section 5). We have chosen to restrict our experiment to the Web pages of the IMAG domain (Institut
d'Informatique et de Mathématiques Appliquées de Grenoble: hosts whose names end in .imag.fr), which are
browsable starting from the URL “http://www.imag.fr”. These pages deal with several topics, but most of them
are scientific documents, particularly in the computer science field. The main characteristics of this collection are
summarized in figure 1.</p>
<p>Our spider collected, in less than 2 hours, almost 39,000 pages, identified by their URLs, from 39 hosts,
for a size of 443 Mb. It is not surprising that most of the pages are in HTML format (72%
of .html and .htm files, cf. figure 2). After analysis and textual extraction, about 140 Mb of textual
data remain, containing more than 241,000 distinct terms.</p>
      </sec>
      <sec id="sec-4-2">
        <title>Granularity analysis</title>
<p>We have extracted statistics related to the entity granularities described in section 3.1. It appears
that the ability of HTML to represent different inside-page granularities, as described in section 3.2, is
widely used: each page contains on average 17 level-1 objects, 17 bloc elements, 29 paragraph
separators and 3.3 sections (cf. figure 3).
Hypotheses 1.1, 1.2 and 1.3 seem to be correct, but manual experiments are needed to validate them.</p>
<sec id="sec-4-2-1">
          <title>Figure 3: levels and objects</title>
          <p>[Figure residue: figure 3 pairs the levels H1.1 to H1.6 with their objects: level-1 objects; bloc elements and paragraph separators; HTML separators; pages; sites; domains.]</p>
        </sec>
        <sec id="sec-4-2-2">
          <title>Page sizes and links</title>
<p>The average page size is 3.3 sections, or 11.65 Kb (cf. figure 1). This is greater than the results of other studies
(almost 7 Kb [31], [5]). The textual page size (pages without HTML tags) is on average 3.69 Kb. But these
statistics concern the physical aspects of documents; we have to consider how entities are linked to conclude
anything about their logical aspects. There are on average 37 links per page in our collection (cf. figure 4): if we
do not consider redundant links (same source and same destination), only 550,000 distinct links remain,
i.e. on average 14.11 per page, which is not far from other studies (13.9/page [31], 16.1/page [4]).</p>
<p>Web pages are heavily linked together, but without link categorization it is difficult to distinguish which
pages are hyper-documents, which are structured documents and which are sections (hypothesis 1.4).
In particular, we cannot confirm that a Web document is represented by an HTML page.</p>
<p>There are few outside-site links: only 2.6% of all links, contained in 2.4% of pages. Thus we think
that this site compactness validates hypothesis 1.5:
hyper-documents are represented by sites. Only
5.4% of these outside-site links are inside-domain
links: most sites are connected to outside-domain sites, so we conclude that a cluster of
hyper-documents is not represented by a Web domain
(hypothesis 1.6).</p>
        </sec>
        <sec id="sec-4-2-3">
<title>Rebuilding structure from pages and directories</title>
          <p>We have identified several internal page levels (cf. section 4.2):
section, paragraph, sentence and even sentence-internal elements. These
levels are defined by the writers of HTML pages. With these structure
elements (cf. figure 3), we are able to rebuild hierarchical tree
structures, which are relatively large (cf. figure 6).</p>
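<p>The rebuilding of the intra-page hierarchical tree from the &lt;Hn&gt; separators can be sketched as follows (a Python sketch; the node layout, as nested dictionaries, is illustrative, and the headings are assumed to be given as (level, title) pairs in document order):</p>
<preformat>
```python
def build_section_tree(headings):
    """Rebuild a hierarchical tree from a flat sequence of HTML
    heading separators, given as (level, title) pairs in document
    order."""
    root = {"title": None, "level": 0, "children": []}
    stack = [root]  # current chain of open sections, root first
    for level, title in headings:
        node = {"title": title, "level": level, "children": []}
        # pop until the top of the stack is a strict ancestor level
        while stack[-1]["level"] >= level:
            stack.pop()
        stack[-1]["children"].append(node)
        stack.append(node)
    return root
```
</preformat>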
<p>In particular, we have to extract most of the composition and
sequence relations. Composition relations are implicit, from a page
to all its sections, but also from sections to all their paragraphs,
etc. Sequence relations are also implicit, from each page
element (except the last one) to its physical successor. Hypertext
links that do not correspond to an extracted composition or
sequence relation are assumed to represent reference relations.
We make the assumption that the directory structure of a Web site carries some semantics that can be automatically
extracted. This semantics is proposed “a priori”, because we suppose that the way pages are placed
in the directory hierarchy follows the “principle of least effort”. We assume that the directory hierarchy
reflects the composition relation; this must of course be validated experimentally, using manual validation.
We examine all the ways links join pages across the directory hierarchy, and
we propose the following link categories: internal (inside-page), hierarchical and transversal (inside-site),
cross (outside-site) and out (outside-domain). Their distribution is given in figure 7: internal 118,248 links
(8.02%), hierarchical 880,421 (59.69%), transversal 438,069 (29.70%), cross 2,093 (0.14%), out 36,265 (2.46%).</p>
<p>We are interested in categorizing the relations represented by these links. We have interpreted each link
type as follows.
Internal 8% of links stay within the same page; we have no proposition for their category.
Hierarchical We call hierarchical those links whose source and target are in the same directory path.
These links are the most common in our sample, with 60%. If these links reflect the composition
structure, we can deduce that this sample is strongly structured.</p>
<p>Transversal The target of the link is neither in the ascendant nor in the descendant directories,
but is on the same site. 30% of links fall into this category. We can probably classify them as
weak composition or as reference links.</p>
        <p>Cross site The target is on another site: only 0.1% of links are concerned. They are candidates to be references.
Outside IMAG The target is outside the IMAG domain: only 2.5%; these are also candidates to be references.</p>
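<p>These link categories can be sketched as a small classifier over URL pairs (a Python sketch; the directory-prefix test used for “hierarchical” is our interpretation of “same directory path”, and the default domain is the IMAG one used in this study):</p>
<preformat>
```python
from urllib.parse import urlparse
import posixpath

def classify_link(source_url, target_url, domain=".imag.fr"):
    """Classify a hypertext link into the five categories of figure 7:
    internal, hierarchical, transversal, cross or out."""
    src = urlparse(source_url)
    tgt = urlparse(target_url)
    if not tgt.netloc.endswith(domain):
        return "out"                      # outside-domain link
    if src.netloc != tgt.netloc:
        return "cross"                    # outside-site, inside-domain
    if src.path == tgt.path:
        return "internal"                 # inside-page (anchor) link
    src_dir = posixpath.dirname(src.path)
    tgt_dir = posixpath.dirname(tgt.path)
    # one directory is an ancestor of the other: same directory path
    if src_dir.startswith(tgt_dir) or tgt_dir.startswith(src_dir):
        return "hierarchical"
    return "transversal"                  # same site, sibling directories
```
</preformat>
<p>Applying such a classifier to every extracted link yields the distribution of figure 7.</p>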
          <p>We detail hierarchical links in 3 categories: horizontal, up and down.</p>
<p>[Table residue: the table gives the number of hierarchical links per number of directory levels crossed, from the same directory up to 7 levels, with a total row.]</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Technical details: CLIPS-Index and Web pages analysis</title>
<p>We have developed a robot called CLIPS-Index in collaboration with Dominique Vaufreydaz (from the GEOD
team), with the aim of creating Web corpora. This spider crawls the Web, collecting and storing pages.
CLIPS-Index tries to collect the largest possible amount of information in this heterogeneous context, which is not
respectful of existing standards; collecting the Web correctly is an interesting problem in itself. In spite of this,
our spider is quite efficient: for example, we collected (October 5, 2000) 38,994 pages on the .imag.fr
domain, compared to Altavista, which indexed 24,859 pages (October 24, 2000) on the same domain, and
AllTheWeb, which indexed 21,208 pages (October 24, 2000). 3.5 million pages from French-speaking Web
domains were collected in 4 days, using a 600 MHz PC with 1 Gb of RAM. CLIPS-Index crawls this huge
hypertext without considering non-textual pages, and respects the robot exclusion protocol [19]. It does not
overload distant Web servers, despite launching several hundred simultaneous HTTP queries.
CLIPS-Index, running on an ordinary 333 MHz PC with 128 Mb of RAM costing less than 1,000 dollars, is able to
find, load, analyze and store millions of pages per day.</p>
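<p>The robot exclusion protocol check [19] can be illustrated with the Python standard library's parser (a minimal sketch; the robots.txt content is illustrative, and this covers only the robots.txt part of the protocol, not crawl-rate politeness):</p>
<preformat>
```python
from urllib import robotparser

# A minimal politeness check of the kind a crawler such as
# CLIPS-Index must perform before fetching a page; here we parse
# an example robots.txt directly instead of fetching it.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

def allowed(url, agent="CLIPS-Index"):
    """Return True if the robots.txt rules permit fetching url."""
    return rp.can_fetch(agent, url)
```
</preformat>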
<p>We have also developed several analysis tools (23,000 lines) in PERL (Practical Extraction and
Report Language) for HTML extraction, link analysis and typing, topology analysis, statistics extraction,
text indexing, language extraction, etc.</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusion and future works</title>
<p>We think that it is interesting and useful to use Web structure for IR. Because the Web lacks explicit
structure, we have to identify the explicit structure and extract the implicit one. We have proposed a framework
composed of 6 entity granularities and 3 main relation types. We have proposed some rules to extract
these granularities and relations, based mainly on the ability of HTML to describe structured elements, and
on a study of the relations existing between hypertext links and the directory hierarchy of Web servers.</p>
<p>Our first experiments show, on the one hand, that hypotheses H1.1, H1.2, H1.3 (internal structure levels)
and H1.5 (site granularity) seem to be correct and, on the other hand, that hypotheses H1.4 (page granularity)
and H1.6 (clusters of sites as Internet sub-domains) seem to be false.</p>
<p>It is possible to identify and extract structure from the Web: several granularities and several types of
relations. But we have to continue these experiments. Firstly, we have to improve our relation
categorization and our hierarchical structure extraction. Secondly, we need to check the extracted information manually,
to validate our hypotheses. Thirdly, we have to analyze bigger collections: several domains, more
heterogeneous pages. The IMAG collection is undoubtedly not very representative of the Web, because of its small
size compared to the French Web; moreover, it represents only a single Web domain. Finally, our main
objective is to propose a structured IR model based on these 6 granularity levels and 3 relation types. An
Information Retrieval System based on this model will use IR methods developed in the context of
structured documents and hypertexts [12]. It will actually use Web structure for IR, and thus will be able to help
face the IR problem on the Web. We are also working on the use of data mining techniques for extracting
knowledge useful for improving IR results [14].</p>
      <p>[1] B. Amann. Interrogation d'Hypertextes. PhD thesis, Conservatoire National des Arts et Métiers de Paris, 1994.</p>
      <p>[2] E. Amitay. Using common hypertext links to identify the best phrasal description of target Web document. In Conference on Research and Development in IR (SIGIR'98), Melbourne, Australia, 1998.</p>
      <p>[3] P. Atzeni, G. Mecca, and P. Merialdo. Semistructured and structured data in the Web: Going back and forth. In Workshop on Management of Semistructured Data, Tucson, 1997.</p>
      <p>[4] D. Beckett. 30% accessible - a survey of the UK Wide Web. In World Wide Web Conference (WWW'97), Santa Clara, California, 1997.</p>
      <p>[5] T. Bray. Measuring the Web. In World Wide Web Conference (WWW'96), Paris, France, May 1996.</p>
      <p>[6] S. Brin and L. Page. The anatomy of a large-scale hypertextual Web search engine. In World Wide Web Conference (WWW'98), Brisbane, Australia, 1998.</p>
      <p>[7] A. Z. Broder, R. Kumar, F. Maghoul, P. Raghavan, S. Rajagopalan, R. Stata, A. Tomkins, and J. Wiener. Graph structure in the Web. In World Wide Web Conference (WWW'00), Amsterdam, Netherlands, 2000.</p>
      <p>[10] V. Christophidès and A. Rizk. Querying structured documents with hypertext links using OODBMS. In European Conference on Hypertext Technology (ECHT'94), Edinburgh, Scotland, 1994.</p>
      <p>[12] F. Fourel, P. Mulhem, and M.-F. Bruandet. A generic framework for structured document access. In Database and Expert Systems Applications (DEXA'98), LNCS 1460, Vienna, Austria, 1998.</p>
      <p>[14] M. Géry and M. H. Haddad. Knowledge discovery for automatic query expansion on the World Wide Web. In Workshop on the World-Wide Web and Conceptual Modeling (WWWCM'99), LNCS 1727, Paris, France, 1999.</p>
      <p>[17] D. Hawking, N. Craswell, P. Thistlewaite, and D. Harman. Results and challenges in Web search evaluation. In World Wide Web Conference (WWW'99), Toronto, Canada, May 1999.</p>
      <p>[19] M. Koster. A method for Web robots control. Technical report, Internet Engineering Task Force (IETF), 1996.</p>
      <p>[23] M. Marchiori. The quest for correct information on the Web: Hyper search engines. In World Wide Web Conference (WWW'97), Santa Clara, California, 1997.</p>
      <p>[25] A. Mendelzon, G. Mihaila, and T. Milo. Querying the World Wide Web. In Conference on Parallel and Distributed Information Systems (PDIS'96), 1996.</p>
      <p>[29] P. Pirolli, J. Pitkow, and R. Rao. Silk from a sow's ear: extracting usable structures from the Web. In Conference on Human Factors in Computing Systems (CHI'96), Vancouver, Canada, 1996.</p>
      <p>[30] G. Salton. The SMART retrieval system: experiments in automatic document processing. Prentice Hall, 1971.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Chakrabarti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. E.</given-names>
            <surname>Dom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gibson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Raghavan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rajagopalan</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Tomkins</surname>
          </string-name>
          .
          <article-title>Spectral filtering for resource discovery</article-title>
          .
          <source>In Conference on Research and Development in IR (SIGIR'98)</source>
          , Melbourne, Australia,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chiaramella</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kheirbek</surname>
          </string-name>
          .
          <article-title>An integrated model for hypermedia and information retrieval</article-title>
          . In M. Agosti and
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Smeaton</surname>
          </string-name>
          , editors,
          <source>Information Retrieval and Hypertext</source>
          . Kluwer Academic Publishers,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Géry</surname>
          </string-name>
          .
          <article-title>SmartWeb : recherche de zones de pertinence sur le World Wide Web</article-title>
          .
          <source>In Congrès INFORSID'99</source>
          , La Garde, France,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>W. B.</given-names>
            <surname>Croft</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Turtle</surname>
          </string-name>
          .
          <article-title>A retrieval model for incorporating hypertext links</article-title>
          .
          <source>In ACM Conference on Hypertext (HT'89)</source>
          , Pittsburgh, USA,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lawrence</surname>
          </string-name>
          and
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Giles</surname>
          </string-name>
          .
          <article-title>Accessibility of information on the Web</article-title>
          .
          <source>Nature</source>
          ,
          <year>July 1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Y. K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-J.</given-names>
            <surname>Yoo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Yoon</surname>
          </string-name>
          .
          <article-title>Index structures for structured documents</article-title>
          .
          <source>In ACM Conference on Digital Libraries (DL'96)</source>
          , Bethesda, Maryland,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>C.</given-names>
            <surname>Gurrin</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Smeaton</surname>
          </string-name>
          .
          <article-title>A connectivity analysis approach to increasing precision in retrieval from hyperlinked documents</article-title>
          .
          <source>In Text REtrieval Conference (TREC'99)</source>
          , Gaithersburg, Maryland,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>A.</given-names>
            <surname>Moore</surname>
          </string-name>
          and
          <string-name>
            <given-names>B. H.</given-names>
            <surname>Murray</surname>
          </string-name>
          .
          <article-title>Sizing the internet</article-title>
          .
          <source>Technical report, Cyveillance</source>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>O. A.</given-names>
            <surname>McBryan</surname>
          </string-name>
          .
          <article-title>GENVL and WWWW: Tools for taming the Web</article-title>
          .
          <source>In World Wide Web Conference (WWW'94)</source>
          , Geneva, Switzerland,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Kleinberg</surname>
          </string-name>
          .
          <article-title>Authoritative sources in a hyperlinked environment</article-title>
          .
          <source>In ACM-SIAM Symposium on Discrete Algorithms (SODA'98)</source>
          , San Francisco, California,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>