=Paper= {{Paper |id=Vol-532/paper-4 |storemode=property |title=A Tag Recommender System Exploiting User and Community Behavior |pdfUrl=https://ceur-ws.org/Vol-532/paper4.pdf |volume=Vol-532 }} ==A Tag Recommender System Exploiting User and Community Behavior== https://ceur-ws.org/Vol-532/paper4.pdf
A Tag Recommender System Exploiting User and Community Behavior

Cataldo Musto, Fedelucio Narducci, Marco De Gemmis, Pasquale Lops, Giovanni Semeraro
Dept. of Computer Science, University of Bari 'Aldo Moro', Italy
cataldomusto@di.uniba.it, narducci@di.uniba.it, degemmis@di.uniba.it, lops@di.uniba.it, semeraro@di.uniba.it

ABSTRACT
Nowadays Web sites tend to be more and more social: users can upload any kind of information to collaborative platforms and can express their opinions about the content they enjoyed through textual feedback or reviews. These platforms allow users to annotate resources they like with freely chosen keywords, called tags. The main advantage of these tools is that they fit user needs well, since tags allow organizing information in a way that closely follows the user's mental model, making retrieval of information easier. However, the heterogeneity of the communities causes some problems in the activity of social tagging: some users annotate resources with very specific tags, others with generic ones, and so on. These drawbacks limit the exploitation of collaborative tagging systems for retrieval and filtering tasks. Therefore, systems that assist the user in the task of tagging are required. The goal of these systems, called tag recommenders, is to suggest a set of relevant keywords for the resources to be annotated. This paper presents a tag recommender system called STaR (Social Tag Recommender system). Our system is based on two assumptions: 1) the more similar two or more resources are, the more tags they share; 2) a tag recommender should be able to exploit the tags the user has already used in order to extract useful keywords for labeling new resources. We also present an experimental evaluation carried out on a large dataset gathered from Bibsonomy.

Categories and Subject Descriptors
H.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing: Indexing methods; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval: Information filtering

General Terms
Algorithms, Experimentation

Keywords
Recommender Systems, Web 2.0, Collaborative Tagging Systems, Folksonomies

1. INTRODUCTION
We are witnessing a transformation of the Web towards a more user-centric vision called Web 2.0. Through Web 2.0 applications users can publish self-produced content such as photos, videos, political opinions, and reviews; hence they are identified as Web prosumers: producers + consumers of knowledge. Recently the research community has thoroughly analyzed the dynamics of tagging, the act of annotating resources with free labels, called tags. Many argue that, thanks to the expressive power of folksonomies [17], collaborative tagging systems are very helpful to users in organizing, browsing and searching resources. This happens because, in contrast to systems where information about resources is provided only by a small set of experts, the model of collaborative tagging systems takes into account the way individuals conceive the information contained in a resource [18], so it fits user needs and the user's mental model well. Nowadays almost all Web 2.0 platforms embed tagging: examples include Flickr (http://www.flickr.com), YouTube (http://www.youtube.com), Del.icio.us (http://delicious.com/), Last.fm (http://www.last.fm/), Bibsonomy (http://www.bibsonomy.org/) and so on. These systems provide heterogeneous content (photos, videos, musical habits, etc.), but they all share a common core: they let users post new resources and annotate them with tags. Besides the simple act of annotation, the tagging of resources also has a key social aspect: the connections between users, resources and tags generate a tripartite graph that can be easily exploited to analyze the dynamics of collaborative tagging systems.
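The user-resource-tag connections just mentioned can be represented minimally as a set of triples. The sketch below (our own illustration; all data and names are hypothetical, not from the paper) shows how shared tastes can be read off such a tripartite structure.

```python
from collections import defaultdict

# A folksonomy as a set of (user, resource, tag) assignments (toy data).
assignments = {
    ("alice", "gazzetta.it", "sport"),
    ("alice", "gazzetta.it", "newspaper"),
    ("bob", "gazzetta.it", "sport"),
    ("bob", "inter.it", "football"),
}

# Users who applied the same tag to the same resource may share tastes.
by_resource_tag = defaultdict(set)
for user, resource, tag in assignments:
    by_resource_tag[(resource, tag)].add(user)

print(sorted(by_resource_tag[("gazzetta.it", "sport")]))  # ['alice', 'bob']
```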
For example, users that label the same resource with the same tags might have similar tastes, and items labeled with the same tags might share common characteristics.

Undoubtedly the power of tagging lies in the ability of people to freely determine the appropriate tags for a resource [10]. Since folksonomies do not rely on a predefined lexicon or hierarchy, they have the advantage of being fully free, but at the same time they generate a very noisy tag space that is hard to exploit for retrieval or recommendation tasks without some form of processing. Golder et al. [4] identified three major problems of collaborative tagging systems: polysemy, synonymy, and level variation. Polysemy refers to situations where tags can have multiple meanings: for example, a resource tagged with the term bat could be a news item from an online sports newspaper or a Wikipedia article about nature. We refer to synonymy when multiple tags share a single meaning: for example, there can be simple morphological variations (such as 'AI' and 'artificial intelligence' both identifying a scientific publication about Artificial Intelligence) but also lexical relations (like resources tagged with 'arts' versus 'cultural heritage'). Finally, the phenomenon of tagging at different levels of abstraction is defined as level variation. This happens when people annotate the same web page, containing for example a recipe for roast turkey, with the tag 'roast-turkey' but also with a simple 'recipe'.

Since these problems hinder full exploitation of the expressive power of folksonomies, in recent years many tools have been developed to assist the user in the task of tagging and, at the same time, to aid tag convergence [3]: we refer to them as tag recommenders. These systems work in a very simple way:

  1. a user posts a resource;
  2. depending on the approach, the tag recommender analyzes some information related to the resource (usually metadata or a subset of the relations in the aforementioned tripartite graph);
  3. the tag recommender processes this information and produces a list of recommended tags;
  4. the user freely chooses the most appropriate tags to annotate the resource.

Clearly, the more these recommended tags match the user's needs and mental model, the more she will use them to annotate the resource. In this way we can speed up tag convergence while filtering the noise of the complete tag space.

This paper presents the tag recommender STaR. When developing the model, we focused on two concepts:

  • resources with similar content should be annotated with similar tags;
  • a tag recommender needs to take into account the previous tagging activity of users, increasing the weight of the tags already used to annotate similar resources.

In this work, we identify two main aspects of the tag recommendation task: first, each user has a typical manner of labeling resources; second, similar resources usually share common tags.

The paper is organized as follows. Section 2 analyzes related work. Section 3 explains the architecture of the system and how the recommendation approach is implemented. The experimental evaluation is described in Section 4, while conclusions and future work are drawn in the last section.

2. RELATED WORK
Work in the tag recommendation area is usually divided into three broad classes: content-based, collaborative and graph-based approaches.

In the content-based approach, a system exploits Information Retrieval techniques to extract relevant unigrams or bigrams from text. Brooks et al. [2], for example, developed a tag recommender that exploits TF/IDF scoring [13] to automatically suggest tags for a blog post. In [5] a novel method for key term extraction from text documents is presented. First, every document is modeled as a graph whose nodes are terms and whose edges represent semantic relationships between them. These graphs are then partitioned using community detection techniques and weighted using information extracted from Wikipedia. The tags composing the most relevant communities (a set of terms related to the topic of the resource) are then suggested to the user.

AutoTag [11] is one of the most important systems implementing the collaborative approach to tag recommendation. It presents some analogies with collaborative filtering methods: just as collaborative recommender systems generate recommendations based on the ratings provided by similar users (called neighbors), AutoTag suggests tags based on the tags associated with similar posts. First, the tool exploits IR techniques to find similar posts and extracts the tags they are annotated with. All the tags are then merged, building a folksonomy that is filtered and re-ranked. The top-ranked tags are finally suggested to the user, who selects the most appropriate ones to attach to the post.

TagAssist [15] extends AutoTag's approach by introducing a preprocessing step (specifically, lossless compression over existing data) in order to improve the quality of the recommendations. The core of this approach is a Tag Suggestion Engine (TSE) which leverages previously tagged posts to provide appropriate suggestions for new content.

Marinho [9] investigates the user-based collaborative approach to tag recommendation. The main outcome of this work is that users with a similar tag vocabulary tend to tag alike, since the method produces good results when applied to the user-tag matrix.

The problem of tag recommendation through graph-based approaches was first addressed by Jäschke et al. in [7]. The key idea behind their FolkRank algorithm is that a resource tagged with important tags by important users becomes important itself. So, they build a graph whose nodes mutually reinforce each other by spreading their weights. They compared several recommendation techniques, including collaborative filtering, PageRank and FolkRank, showing that the FolkRank algorithm outperforms the other approaches. Furthermore, Schmitz et al. [14] proposed association rule mining as a technique that might be useful in the tag recommendation process.

In the literature we can also find other methods (called hybrid) which try to integrate two or more sources of knowledge (mainly content and collaborative ones) in order to improve the quality of recommended tags.

Heymann et al. [6] present a tag recommender that exploits social knowledge and textual sources at the same time. They produce recommendations exploiting both the HTML source code (extracting anchor texts and page texts) and the annotations of the community. The effectiveness of this approach is also confirmed by the use of a large dataset crawled from del.icio.us for the experimental evaluation.

Lipczak [8] proposes a similar hybrid approach. First, the system extracts tags from the title of the resource. Afterwards, it performs an analysis of co-occurrences in order to expand the set of candidate tags with the tags that usually co-occur with terms in the title. Finally, tags are filtered and re-ranked exploiting the information stored in a so-called "personomy", the set of tags previously used by the user.

Finally, in [16] the authors propose a model based on both the textual content and the tags associated with a resource. They introduce the concept of conflated tags to indicate a set of related tags (like blog, blogs, etc.) used to annotate a resource. Modeling the existing tag space in this way, they are able to suggest various tags for a given bookmark exploiting both user and document models.

3. STAR: A SOCIAL TAG RECOMMENDER SYSTEM
Following the definition introduced in [7], a folksonomy can be described as a triple (U, R, T) where:

  • U is a set of users;
  • R is a set of resources;
  • T is a set of tags.

We can also define a tag assignment function tas: U × R → T. So, a collaborative tagging system is a platform composed of users, resources and tags that allows users to freely assign tags to resources, while the tag recommendation task for a given user u ∈ U and a resource r ∈ R can be described as the generation of a set of tags tas(u, r) ⊆ T according to some relevance model. In our approach these tags are generated from a ranked set of candidate tags from which the top n elements are suggested to the user.

STaR (Social Tag Recommender) is a content-based tag recommender system developed at the University of Bari. The inceptive idea behind STaR is to improve the model implemented in systems like TagAssist [15] or AutoTag [11]. Although we agree that similar resources usually share similar tags, in our opinion Mishne's approach presents two important drawbacks:

  1. the tag re-ranking formula simply sums the occurrences of each tag among all the folksonomies, without considering the similarity with the resource to be tagged. In this way, tags often used to annotate resources with a low similarity level could be ranked first;
  2. the proposed model does not take into account the previous tagging activity performed by users. If two users bookmarked the same resource, they will receive the same suggestions, since the folksonomies built from similar resources are the same.

We try to overcome these drawbacks by proposing an approach that is firstly based on the analysis of similar resources and is also capable of leveraging the tags already selected by the user during her previous tagging activity, by putting them at the top of the tag rank. Figure 1 shows the general architecture of STaR. The recommendation process is performed in four steps, each handled by a separate component.

3.1 Indexing of Resources
Given a collection of resources (corpus) with some textual metadata (such as the title of the resource, the authors, the description, etc.), STaR first invokes the Indexer module to perform a preprocessing step on these data using Apache Lucene (http://lucene.apache.org). Obviously, the kind of metadata to be indexed depends strictly on the nature of the resources. For example, to recommend tags for bookmarks we could index the title of the web page and the extended description provided by users, while for BibTeX entries we could index the title of the publication and the abstract. Let U be the set of users and N the cardinality of this set; the indexing procedure is repeated N + 1 times: we build an index for each user (Personal Index) storing the information on the resources she previously tagged, and an index for the whole community (Social Index) storing the information about all the tagged resources, obtained by merging the single Personal Indexes.

Following the definitions presented above, given a user u ∈ U we define PersonalIndex(u) as:

    PersonalIndex(u) = {r ∈ R | ∃t ∈ T : tas(u, r) = t}    (1)

where tas is the tag assignment function tas: U × R → T which assigns tags to a resource annotated by a given user. The SocialIndex is the union of all the users' personal indexes:

    SocialIndex = ∪_{i=1..N} PersonalIndex(u_i)    (2)

3.2 Retrieval of Similar Resources
Next, STaR can take user requests into account in order to produce personalized tag recommendations for each resource. First, every user has to provide some information about the resource to be tagged, such as the title of the Web page or its URL, in order to crawl the textual metadata associated with it. Next, if the system can identify the user because she has already posted other resources, it exploits data about her (language, the tags she uses most, the number of tags she usually uses to annotate resources, etc.) in order to refine the query to be submitted against both the Social and Personal indexes stored in Lucene. We used as query the title of the web page (for bookmarks) or the title of the publication (for BibTeX entries). Obviously, before submitting the query we processed it by deleting useless characters and punctuation. In order to improve the performance of the Lucene querying engine, we replaced the original Lucene scoring function with an Okapi BM25 implementation (http://nlp.uned.es/~jperezi/Lucene-BM25/).
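The indexing scheme of Section 3.1 can be sketched as follows; this is a plain-Python stand-in for the Lucene indexes (the tas data and all names are our own, hypothetical), where the social index is simply the union of the per-user ones, as in Equations 1 and 2.

```python
# tas(u, r) -> set of tags user u assigned to resource r (toy data).
tas = {
    ("alice", "gazzetta.it"): {"sport", "newspaper"},
    ("alice", "tuttosport.com"): {"sport", "tuttosport"},
    ("bob", "corrieredellosport.it"): {"newspaper", "online"},
}

def personal_index(user):
    # PersonalIndex(u): resources the user annotated with at least one tag (Eq. 1).
    return {r for (u, r), tags in tas.items() if u == user and tags}

def social_index():
    # SocialIndex: union of all personal indexes (Eq. 2).
    users = {u for (u, _) in tas}
    return set().union(*(personal_index(u) for u in users))

print(sorted(personal_index("alice")))  # ['gazzetta.it', 'tuttosport.com']
```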
Figure 1: Architecture of STaR

BM25 is nowadays considered one of the state-of-the-art retrieval models by the IR community [12]. Let D be a corpus of documents and d ∈ D; BM25 returns the top-k resources with the highest similarity value given a resource r (tokenized as a set of terms t_1 ... t_m), and is defined as follows:

    sim(r, d) = Σ_{i=1..m} [ n_{t_i} / (k_1 · ((1 − b) + b · l) + n_{t_i}) ] · idf(t_i)    (3)

where n_{t_i} represents the occurrences of the term t_i in the document d, and l is the ratio between the length of the resource and the average length of resources in the corpus. Finally, k_1 and b are two parameters typically set to 2.0 and 0.75 respectively, and idf(t_i) represents the inverse document frequency of the term t_i, defined as follows:

    idf(t_i) = log( (N − df(t_i) + 0.5) / (df(t_i) + 0.5) )    (4)

where N is the number of resources in the collection and df(t_i) is the number of resources in which the term t_i occurs.

Given a user u ∈ U and a resource r, Lucene returns the resources whose similarity with r is greater than or equal to a threshold β. To perform this task Lucene uses both the PersonalIndex of the user u and the SocialIndex. More formally:

    PRes(u, q) = {r ∈ PersonalIndex(u) | sim(q, r) ≥ β}

    SRes(q) = {r ∈ SocialIndex | sim(q, r) ≥ β}

Figure 2 depicts an example of the retrieval step. In this case the target resource is Gazzetta.it, one of the most famous Italian sports newspapers. Lucene queries the SocialIndex and returns as the most similar resources an online newspaper (Corrieredellosport.it) and the official web site of an Italian football club (Inter.it). The PersonalIndex, instead, returns another online newspaper (Tuttosport.com). The similarity score returned by Lucene has been normalized.

Figure 2: Retrieval of Similar Resources

3.3 Extraction of Candidate Tags
The role of the Tag Extractor is to produce as output the list of so-called "candidate tags" (namely, the tags considered relevant by the tag recommender). In this step the system gets the most similar resources returned by the Apache Lucene engine and builds their folksonomies (namely, the tags they have been annotated with). Next, it produces the list of candidate tags by computing, for each tag in the folksonomy, a score obtained by weighting the similarity score returned by Lucene with the normalized occurrence of the tag. If the Tag Extractor also gets the list of the most similar resources from the user's PersonalIndex, it will produce two partial folksonomies that are merged, assigning a weight to each folksonomy in order to boost the tags previously used by the user.
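The scoring of Equations 3 and 4 and the β cut-off of Section 3.2 can be sketched as follows. This is an illustrative stand-in for the Lucene/BM25 machinery, not the actual implementation: the corpus, the whitespace tokenizer and the threshold value are all our own assumptions.

```python
import math

# Toy corpus of resource metadata (hypothetical data).
corpus = {
    "corrieredellosport.it": "italian sport newspaper online edition",
    "inter.it": "official site of the football club inter",
    "wikipedia-bat": "article about the bat a nocturnal animal",
}

k1, b = 2.0, 0.75  # parameter values reported in the paper

def idf(term):
    # Eq. 4: inverse document frequency over the corpus.
    n = len(corpus)
    df = sum(term in doc.split() for doc in corpus.values())
    return math.log((n - df + 0.5) / (df + 0.5))

def bm25(query, doc):
    # Eq. 3: term-frequency saturation with length normalization l.
    avg_len = sum(len(d.split()) for d in corpus.values()) / len(corpus)
    l = len(doc.split()) / avg_len
    score = 0.0
    for t in set(query.split()):
        tf = doc.split().count(t)
        if tf:
            score += tf / (k1 * ((1 - b) + b * l) + tf) * idf(t)
    return score

def retrieve(query, beta=0.1):
    # Keep resources whose similarity with the query is at least β (Sec. 3.2).
    return {r: s for r, d in corpus.items()
            if (s := bm25(query, d)) >= beta}

print(retrieve("sport newspaper"))
```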
Formally, for each query q (namely, the resource to be tagged), we define the set of tags to recommend by building two sets, candTags_p and candTags_s, as follows:

    candTags_p(u, q) = {t ∈ T | t = tas(u, r) ∧ r ∈ PRes(u, q)}

    candTags_s(q) = {t ∈ T | t = tas(u, r) ∧ r ∈ SRes(q) ∧ u ∈ U}

In the same way we can compute the relevance of each tag with respect to the query q as:

    rel_p(t, u, q) = ( Σ_{r ∈ PRes(u,q)} n_r^t · sim(r, q) ) / n_t    (5)

    rel_s(t, q) = ( Σ_{r ∈ SRes(q)} n_r^t · sim(r, q) ) / n_t    (6)

where n_r^t is the number of occurrences of the tag t in the annotations of resource r and n_t is the sum of the occurrences of tag t among all similar resources.

Finally, the set of candidate tags can be defined as:

    candTags(u, q) = candTags_p(u, q) ∪ candTags_s(q)    (7)

where for each tag t the global relevance is defined as:

    rel(t, q) = α · rel_p(t, u, q) + (1 − α) · rel_s(t, q)    (8)

where α (PersonalTagWeight) and 1 − α (SocialTagWeight) are the weights of the personal and social tags respectively.

Figure 3 depicts the procedure performed by the Tag Extractor: in this case we have a set of 4 social tags (Newspaper, Online, Football and Inter) and 3 personal tags (Sport, Newspaper and Tuttosport). These sets are then merged, building the set of candidate tags. This set contains 6 tags, since the tag newspaper appears among both the social and the personal tags. The system associates with each tag a score that indicates its effectiveness for the target resource. Besides, the scores for the candidate tags are weighted again according to the PersonalTagWeight (α) and SocialTagWeight (1 − α) values (in the example, 0.7 and 0.3 respectively), in order to boost the tags already used by the user in the final tag rank. Indeed, we can point out that the social tag 'football' gets the same score as the personal tag 'tuttosport', although its original weight was twice as high.

3.4 Tag Recommendation
Finally, the last step of the recommendation process is performed by the Filter. It removes from the list of candidate tags the ones not matching specific conditions, such as a relevance score lower than a given threshold γ. In the example in Figure 3, setting a threshold γ = 0.20, the system would suggest the tags sport and newspaper.

4. EXPERIMENTAL EVALUATION
We designed two different experimental sessions to evaluate the performance of the tag recommender. In the first session we performed a comparison between the original scoring function of Lucene and a novel BM25 implementation, while the second was carried out to tune the system parameters.

4.1 Description of the dataset
We designed the experimental evaluation by exploiting a dataset gathered from Bibsonomy. It contains 263,004 bookmark posts and 158,924 BibTeX entries submitted by 3,617 different users. For each of the 235,328 different URLs and the 143,050 different BibTeX entries some textual metadata were also provided (such as the title of the resource, the description, the abstract and so on).

We evaluated STaR by comparing the real tags (namely, the tags a user adopts to annotate an unseen resource) with the suggested ones. The accuracy was finally computed using classical IR metrics, such as Precision, Recall and F1-measure. Precision (Pr) is defined as the number of relevant recommended tags divided by the number of recommended tags. Recall (Re) is defined as the number of relevant recommended tags divided by the total number of relevant tags available. The F1-measure is computed by the following formula:

    F1 = (2 · Pr · Re) / (Pr + Re)    (9)

Table 1: Results comparing the Lucene original scoring function with BM25

    Scoring    Resource    Pr       Re       F1
    Original   bookmark    25.26    29.67    27.29
    BM25       bookmark    25.62    36.62    30.15
    Original   bibtex      14.06    21.45    16.99
    BM25       bibtex      13.72    22.91    17.16
    Original   overall     16.43    23.58    19.37
    BM25       overall     16.45    26.46    20.29

4.2 Experimental Session 1
Firstly, we tried to evaluate the predictive accuracy of STaR comparing different scoring functions (namely, the original Lucene one and the aforementioned BM25 implementation). We performed the same steps previously de-
                                                                   scribed, retrieving the most similar items using the two men-
as a threshold for the relevance score computed by the Tag
                                                                   tioned similarity functions and comparing the tags suggested
Extractor. Obviously, the value for the threshold and the
                                                                   by the system in both cases. Results are presented in Table
maximum number of tags to be recommend is strictly de-
                                                                   1.
pendent from the training data.
                                                                      In general, there is an improvement by adopting BM25
  Formally, given a user u ∈ U , a query q and a thresh-
                                                                   with respect to the Lucene original similarity function. We
old value γ, the goal of the filtering component is to build
                                                                   can note that BM25 improved the both the recall (+ 6,95%
rec(u, q) defined as follows:
                                                                   for bookmarks, +1,46% for BibTeXs entries) and the F1
                                                                   measure (+ 2,86% for bookmarks, +0,17% for BibTeXs en-
     rec(u, q) = {t ∈ candT ags(u, q)|rel(t, q) > γ}               tries).
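The merging and filtering steps above (Equations 7 and 8 plus the threshold γ) can be sketched in Python. This is a minimal illustration, not the actual STaR implementation; the function name and the example scores are invented for the purpose of the sketch.

```python
def recommend(personal_scores, social_scores, alpha=0.7, gamma=0.25):
    """Merge personal and social tag scores (Eq. 8) and filter by gamma.

    personal_scores / social_scores map each tag t to rel_p(t, q) / rel_s(t, q).
    alpha is the PersonalTagWeight; (1 - alpha) is the SocialTagWeight.
    """
    candidates = set(personal_scores) | set(social_scores)      # Eq. 7
    rel = {t: alpha * personal_scores.get(t, 0.0)
              + (1 - alpha) * social_scores.get(t, 0.0)
           for t in candidates}                                 # Eq. 8
    # Filter step: keep only tags whose global relevance exceeds gamma,
    # ranked by decreasing relevance.
    return sorted((t for t in rel if rel[t] > gamma),
                  key=lambda t: rel[t], reverse=True)

# Illustrative scores loosely following the Figure 3 example:
personal = {"sport": 0.6, "newspaper": 0.5, "tuttosport": 0.3}
social = {"newspaper": 0.8, "online": 0.4, "football": 0.7, "inter": 0.2}
print(recommend(personal, social))  # → ['newspaper', 'sport']
```

Note how, with α = 0.7, the social tag 'football' (0.3 · 0.7 = 0.21) ends up with the same global relevance as the personal tag 'tuttosport' (0.7 · 0.3 = 0.21), mirroring the observation made about Figure 3.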
                       Figure 3: Description of the process performed by the Tag Extractor
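For reference, the Okapi BM25 weighting scheme [12] compared against Lucene's default scoring has the standard form below. This is a generic sketch with common default parameters (k1 = 1.2, b = 0.75), not necessarily the exact implementation plugged into STaR.

```python
import math

def bm25_score(tf, df, N, dl, avgdl, k1=1.2, b=0.75):
    """Okapi BM25 weight of one term in one document.

    tf: term frequency in the document; df: number of documents
    containing the term; N: collection size; dl / avgdl: document
    length and average document length. k1 and b are the usual
    free parameters, set here to common defaults.
    """
    idf = math.log((N - df + 0.5) / (df + 0.5))
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avgdl))
```

A document's score for a query is then the sum of `bm25_score` over the query terms, which is the quantity used to rank the similar resources retrieved in the first step.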


4.3 Experimental Session 2

   Next, we designed a second experimental evaluation in order to compare the predictive accuracy of STaR under different combinations of system parameters, namely:

   • the maximum number of similar documents retrieved by Lucene;

   • the value of α for the PersonalTagWeight and SocialTagWeight parameters;

   • the threshold γ to establish whether a tag is relevant;

   • which fields of the target resource to use to compose the query;

   • the best scoring function between the Lucene standard one and Okapi BM25.

   First, tuning the number of similar documents to retrieve from the PersonalIndex and SocialIndex is very important, since a value that is too high can introduce noise into the retrieval process, while a value that is too low can exclude documents containing relevant tags. By analyzing the results returned by some test queries, we decided to set this value between 5 and 10, depending on the training data.
   Next, we tried to estimate the values for PersonalTagWeight (PTW) and SocialTagWeight (STW). A higher weight for the Personal Tags means that in the recommendation process the system will weigh more heavily the tags previously used by the target user, while a higher value for the Social Tags will give more importance to the tags used by the community (namely, the whole folksonomy) on the target resource. These parameters are biased by the user practice: if the tags often used by the user are very different from those used by the community, PTW should be higher than STW. We performed an empirical study, since it is difficult to infer the user behavior at run time. We tested the system setting the parameters with several combinations of values:
   i)   PTW = 0.7, STW = 0.3;
   ii)  PTW = 0.5, STW = 0.5;
   iii) PTW = 0.3, STW = 0.7.
   Another parameter that can influence the system performance is the set of fields used to compose the query. For each resource in the dataset there are many textual fields, such as title, abstract, description, extended description, etc. In this case we used as query the title of the webpage (for bookmarks) and the title of the publication (for BibTeX entries).
   The last parameter we needed to tune is the threshold γ to deem a tag as relevant. We performed some tests suggesting both 4 and 5 tags, and we decided to recommend only 4 tags, since the fifth was usually noisy. We also fixed the threshold value between 0.20 and 0.25.
   In order to carry out this experimental session, we used the aforementioned dataset both as training and test set. We executed the test over 50,000 bookmarks and 50,000 BibTeXs. For each resource randomly chosen from the dataset and for each combination of parameters, we executed the following steps:

   • query preparation;

   • Lucene retrieval function invocation;

   • building of the set of Candidate Tags;

   • comparing the recommended tags with the real tags associated by the user;

   • computing of Precision, Recall, and F1-measure.

   Results are presented in Table 2 and Table 3.

   Analyzing the results, it emerges that the approach we called user-based outperformed the other ones. In this configuration we set PTW to 1.0 and STW to 0, so we suggest only the tags already used by the user in tagging similar resources; no query was submitted against the SocialIndex. The first remark we can make is that each user has her own mental model and her own vocabulary: she usually prefers to tag resources with labels she has already used. Instead, getting tags from the SocialIndex only (as proved by the results of the community-based approach) often introduces some noise into the recommendation process. The hybrid approaches outperformed the community-based one, but their predictive accuracy is still worse when compared with the user-based approach.
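The per-resource evaluation loop described above can be sketched as follows; `recommend_tags` is a hypothetical stand-in for the whole STaR pipeline (query preparation, Lucene retrieval, candidate-tag building, and filtering).

```python
def evaluate(resources, recommend_tags):
    """Average Precision, Recall and F1 of recommended vs. real tags.

    resources: iterable of (query, real_tags) pairs; recommend_tags:
    a function mapping a query to the list of suggested tags
    (hypothetical stand-in for the full recommendation pipeline).
    """
    pr_sum = re_sum = n = 0
    for query, real_tags in resources:
        suggested = recommend_tags(query)
        if not suggested or not real_tags:
            continue  # nothing to score for this resource
        hits = len(set(suggested) & set(real_tags))
        pr_sum += hits / len(suggested)   # Precision for this resource
        re_sum += hits / len(real_tags)   # Recall for this resource
        n += 1
    pr, rec = pr_sum / n, re_sum / n
    f1 = 2 * pr * rec / (pr + rec) if pr + rec else 0.0   # Eq. 9
    return pr, rec, f1
```

Averaging per-resource Precision and Recall before combining them into F1 is one reasonable convention; micro-averaging over all tag decisions is an equally valid alternative, and the source does not specify which was used.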
   Finally, all the approaches outperformed the F1-measure of the baseline. We computed the baseline by recommending for each resource only its most popular tags. Obviously, for resources never tagged before we could not suggest anything.
   This analysis substantially confirms the results we obtained from other studies performed in the area of tag-based recommendation [1].

Table 2: Predictive accuracy of STaR over 50,000 bookmarks

   Approach      STW   PTW    Pr      Re      F1
   Comm.-based   1.0   0.0    23.96   24.60   24.28
   User-based    0.0   1.0    32.12   28.72   30.33
   Hybrid        0.7   0.3    24.96   26.30   25.61
   Hybrid        0.5   0.5    24.10   25.16   24.62
   Hybrid        0.3   0.7    23.85   25.12   25.08
   Baseline       -     -     35.58   10.42   16.11

Table 3: Predictive accuracy of STaR over 50,000 BibTeXs

   Approach      STW   PTW    Pr      Re      F1
   Comm.-based   1.0   0.0    34.44   35.89   35.15
   User-based    0.0   1.0    44.73   40.53   42.53
   Hybrid        0.7   0.3    32.31   38.57   35.16
   Hybrid        0.5   0.5    32.36   37.55   34.76
   Hybrid        0.3   0.7    35.47   39.68   37.46
   Baseline       -     -     42.03   13.23   20.13

5. CONCLUSIONS AND FUTURE WORK

   Collaborative Tagging Systems are powerful tools, since they let users organize information in a way that perfectly fits their mental model. However, the typical drawbacks of collaborative tagging systems represent a hindrance, since the complete tag space is too noisy to be exploited for retrieval and filtering tasks. Therefore, systems that assist users in the task of tagging, speeding up the tag convergence, are more and more required. In this paper we presented STaR, a social tag recommender system. The idea behind our work was to discover similarities among resources in order to exploit community and user tagging behavior. In this way our recommender system was able to suggest tags for users and items not yet stored in the training set. The experimental sessions showed that users tend to reuse their own tags to annotate similar resources, so this kind of recommendation model can benefit from using the user's personal tags before extracting the social tags of the community (we called this approach user-based). Next, we showed that the integration of a more effective scoring function (BM25) can also improve the overall accuracy of the system.
   This approach has a main drawback, since it cannot suggest any tags when the set of similar items returned by Lucene is empty. Therefore, we plan to extend the system in order to extract significant keywords from the textual content associated to a resource (title, description, etc.) that has no similar items, maybe exploiting structured data or domain ontologies. Furthermore, since tags usually suffer from typical Information Retrieval problems (namely, polysemy, synonymy, etc.), we will try to establish whether the integration of Word Sense Disambiguation tools or a semantic representation of documents could improve the performance of the recommender. Another issue to analyze is the application of our methodology in different domains, such as multimedia environments; in this field, discovering similarities among items just on the ground of textual content could be insufficient. Finally, we will also perform some studies in the area of tag-based recommendation, investigating the integration of tag recommenders into recommendation tasks, since reaching tag convergence more quickly could help to build better folksonomies and to produce more accurate recommendations.

6. REFERENCES
 [1] P. Basile, M. de Gemmis, P. Lops, G. Semeraro, M. Bux, C. Musto, and F. Narducci. FIRSt: a Content-based Recommender System Integrating Tags for Cultural Heritage Personalization. In P. Nesi, K. Ng, and J. Delgado, editors, Proceedings of the 4th International Conference on Automated Solutions for Cross Media Content and Multi-channel Distribution (AXMEDIS 2008) - Workshop Panels and Industrial Applications, Florence, Italy, Firenze University Press, pages 103–106, November 17-19, 2008.
 [2] C. H. Brooks and N. Montanez. Improved annotation of the blogosphere via autotagging and hierarchical clustering. In WWW '06: Proceedings of the 15th international conference on World Wide Web, pages 625–632, New York, NY, USA, 2006. ACM Press.
 [3] C. Cattuto, C. Schmitz, A. Baldassarri, V. D. P. Servedio, V. Loreto, A. Hotho, M. Grahl, and G. Stumme. Network properties of folksonomies. AI Communications, 20(4):245–262, December 2007.
 [4] S. Golder and B. A. Huberman. The Structure of Collaborative Tagging Systems. Journal of Information Science, 32(2):198–208, 2006.
 [5] M. Grineva, M. Grinev, and D. Lizorkin. Extracting key terms from noisy and multi-theme documents. In 18th International World Wide Web Conference, pages 651–661, April 2009.
 [6] P. Heymann, D. Ramage, and H. Garcia-Molina. Social tag prediction. In SIGIR '08: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, pages 531–538, New York, NY, USA, 2008. ACM.
 [7] R. Jäschke, L. Marinho, A. Hotho, L. Schmidt-Thieme, and G. Stumme. Tag recommendations in folksonomies. In A. Hinneburg, editor, Workshop Proceedings of Lernen - Wissensentdeckung - Adaptivität (LWA 2007), pages 13–20, September 2007.
 [8] M. Lipczak. Tag recommendation for folksonomies oriented towards individual users. In Proceedings of ECML PKDD Discovery Challenge (RSDC08), pages 84–95, 2008.
 [9] L. B. Marinho and L. Schmidt-Thieme. Collaborative tag recommendations. pages 533–540, 2008.
[10] A. Mathes. Folksonomies - cooperative classification
     and communication through shared metadata.
     Website, December 2004. http://www.adammathes.
     com/academic/computer-mediated-communication/
     folksonomies.html.
[11] G. Mishne. Autotag: a collaborative approach to
     automated tag assignment for weblog posts. In WWW
     ’06: Proceedings of the 15th international conference
     on World Wide Web, pages 953–954, New York, NY,
     USA, 2006. ACM.
[12] S. E. Robertson, S. Walker, M. H. Beaulieu, A. Gull,
     and M. Lau. Okapi at TREC. In Text REtrieval
     Conference, pages 21–30, 1992.
[13] G. Salton. Automatic Text Processing.
     Addison-Wesley, 1989.
[14] C. Schmitz, A. Hotho, R. Jäschke, and G. Stumme.
     Mining association rules in folksonomies. In
     V. Batagelj, H.-H. Bock, A. Ferligoj, and A. Žiberna,
     editors, Data Science and Classification (Proc. IFCS
     2006 Conference), Studies in Classification, Data
     Analysis, and Knowledge Organization, pages 261–270,
     Berlin/Heidelberg, July 2006. Springer. Ljubljana.
[15] S. Sood, S. Owsley, K. Hammond, and L. Birnbaum.
     TagAssist: Automatic Tag Suggestion for Blog Posts.
     In Proceedings of the International Conference on
     Weblogs and Social Media (ICWSM 2007), 2007.
[16] M. Tatu, M. Srikanth, and T. D'Silva. RSDC'08: Tag
     recommendations using bookmark content. In
     Proceedings of ECML PKDD Discovery Challenge
     (RSDC08), pages 96–107, 2008.
[17] T. Vander Wal. Folksonomy coinage and definition.
     Website, February 2007.
     http://vanderwal.net/folksonomy.html.
[18] H. Wu, M. Zubair, and K. Maly. Harvesting social
     knowledge from folksonomies. In HYPERTEXT ’06:
     Proceedings of the seventeenth conference on Hypertext
     and hypermedia, pages 111–114, New York, NY, USA,
     2006. ACM Press.