Exploiting Reviews to Guide Users’ Selections

Nevena Dragovic
Department of Computer Science, Boise State University, Boise, ID, USA
nevenadragovic@u.boisestate.edu

Maria Soledad Pera
Department of Computer Science, Boise State University, Boise, ID, USA
solepera@boisestate.edu

ABSTRACT
We introduce HRS, a recommender that exploits user reviews and identifies the features that are most likely appealing to users. HRS incorporates this knowledge into the recommendation process to generate a list of top-k recommendations, each of which is paired with an explanation that (i) showcases why a particular item was recommended and (ii) helps users decide which items, among the ones recommended, are best tailored towards their individual interests. Empirical studies conducted using the Amazon dataset demonstrate the correctness of the proposed methodology.

Categories and Subject Descriptors
H.3.3 [Information Storage and Retrieval]: Clustering, Information Filtering, Retrieval Models, Selection Process.

Keywords
Recommendation Engine, Explanations, Ranking.
1. INTRODUCTION
Recommendation systems aid users in locating items (either products or services) of interest [1]. Regardless of the domain, from shopping websites (e.g., Amazon, eBay) to news sites (e.g., Yahoo, CNN) and hotel or restaurant search (e.g., Yelp, hotels.com), recommenders have a huge influence on businesses’ success and users’ satisfaction. From a commercial standpoint, existing recommenders enable companies and items to get advertised by being offered to potential buyers. From a user perspective, these systems enhance users’ experience by assisting them in finding information pertaining to their interests, thus addressing the information overload concerns that web users have to deal with on a daily basis.

Suggestions generated by existing recommenders are not always personalized and diverse enough to expose users to a wide range of items within their realm of interest, not just popular ones [6]. This is because a common approach for generating recommendations is to rely on existing community data; suggesting the same items to similar users within a community can be vague and impersonal [2]. Newly-developed strategies take advantage of different types of user-generated data to better identify user preferences in an attempt to further personalize recommendations [1]. Another challenge faced by recommenders is to get users to trust them. In many cases, users need to see more details related to a suggestion than just “dry” recommendations to increase their perceived trust in the corresponding recommender [3]. Recent research has focused on explaining the generated recommendations [1]. Unfortunately, justifying the reasons why an item has been suggested to a user is not an easy task. Thanks to the growth of online sites that archive user reviews, researchers have suggested examining these reviews to enhance the recommendation process [7]. Nonetheless, a better understanding of the aspects or features of a particular item that appeal the most to an individual user, such as price in the case of restaurants or the pacing of the story in the case of a book, is yet to be accomplished.

In this paper, we present the initial research conducted to attempt to solve the issues mentioned above. We created the Honest Recommendation System (HRS), a novel recommender system that shows items in their real light. In developing HRS, we focus our efforts on using information collected from users’ reviews to generate personalized suggestions with their corresponding explanations. By incorporating into the recommendation process the feature preferences of an individual user (inferred from his reviews), we can get to know the user better than by simply considering his rating patterns. We strive to develop a recommender system a user trusts by providing information he is interested in, whether it has a positive or negative connotation. Our main contribution is increased effectiveness of, and satisfaction with, a domain-independent recommender. This is accomplished by giving users information they care about, which helps them make the best decision in terms of selecting the most adequate item among the recommended ones. Users’ overall satisfaction with a recommender is related to the perceived quality of its recommendations and explanations [1]. Consequently, users’ confidence is also increased.
Copyright is held by the author/owner(s).
RecSys 2015 Poster Proceedings, September 16–20, 2015, Vienna, Austria.

2. OUR PROPOSED RECOMMENDER
In this section we discuss HRS’s overall recommendation process. (Parameters used by HRS were empirically determined; details are omitted due to page constraints.)

Identify User’s Interest in Items. Consider a user U who is a member of a popular site, such as Yelp or Amazon, which archives U’s reviews and rating history. Given that we aim to provide U with information he values and needs to choose among suggested items, we examine reviews written by U and identify the set of features (i.e., traits) that U cares the most about. Since features are mainly expressed as nouns, we perform semantic analysis on reviews1 and consider the frequency of occurrence of the nouns U employs in his reviews. We rely on WordNet-based similarity measures (using the WS4J Java library, specifically the Wu-Palmer algorithm) to find and cluster similar terms, as different nouns can be used to express similar meanings. Each cluster contains a most-frequent term together with its closest words among the ones U uses in reviews. We do this to learn which item traits U most frequently mentions in his reviews and use that knowledge to predict which candidate items would be of U’s interest. The top-2 most frequently-used term clusters are treated as U’s preferred features. Note that each cluster is labeled using the most representative2 cluster term.

1 Using Stanford CoreNLP http://nlp.stanford.edu/software/.
2 Using WordNet, we generate a list of synonyms for each cluster term, such that the most frequent term among these synonym lists is treated as the corresponding cluster label.
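To make the feature-identification step concrete, below is a minimal sketch of how the noun extraction and term clustering could be implemented. It substitutes NLTK for the Stanford CoreNLP and WS4J components HRS relies on, and the 0.8 similarity threshold and greedy clustering strategy are illustrative assumptions, not the system’s actual parameters.

# Sketch (not HRS's actual code) of the feature-identification step: count the
# nouns a user employs in his reviews and greedily cluster similar nouns with
# Wu-Palmer similarity. NLTK stands in for Stanford CoreNLP and WS4J; the 0.8
# threshold is an assumption. Requires the NLTK data packages punkt,
# averaged_perceptron_tagger, and wordnet.
from collections import Counter
from nltk import pos_tag, word_tokenize
from nltk.corpus import wordnet as wn

def extract_noun_counts(reviews):
    """Frequency of the nouns U employs across his reviews."""
    counts = Counter()
    for review in reviews:
        for token, tag in pos_tag(word_tokenize(review.lower())):
            if tag.startswith("NN"):        # keep nouns only
                counts[token] += 1
    return counts

def wup(a, b):
    """Best Wu-Palmer similarity between any noun senses of a and b."""
    return max((s1.wup_similarity(s2) or 0.0
                for s1 in wn.synsets(a, pos=wn.NOUN)
                for s2 in wn.synsets(b, pos=wn.NOUN)), default=0.0)

def cluster_terms(noun_counts, threshold=0.8):
    """Group nouns around the most frequent ones; clusters come back ordered
    by the frequency of their seed term."""
    clusters = []                           # list of (seed, members)
    for term, _ in noun_counts.most_common():
        for seed, members in clusters:
            if wup(seed, term) >= threshold:
                members.append(term)
                break
        else:
            clusters.append((term, [term]))
    return clusters

reviews_by_u = ["The price was fair and the staff was friendly.",
                "Great cost for the quality, and lovely waiters."]
preferred = cluster_terms(extract_noun_counts(reviews_by_u))[:2]
print(preferred)                            # top-2 clusters = U's preferred features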
Generate Candidate Recommendations. We take advantage of U’s historical data (i.e., rated items) and employ the well-known matrix factorization strategy [4], based on the LensKit implementation, to generate a number of candidate suggestions for U.
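As an illustration of the candidate-generation step, the sketch below trains a matrix factorization model on a toy rating history and scores the items U has not yet rated. It uses the Surprise library’s SVD as a stand-in for the LensKit implementation HRS relies on; the toy ratings and the cut-off of 10 candidates are assumptions.

# Candidate generation via matrix factorization (sketch). Surprise's SVD
# stands in for LensKit; the ratings below are made-up toy data.
import pandas as pd
from surprise import Dataset, Reader, SVD

ratings = pd.DataFrame({
    "user":   ["U", "U", "V", "V", "W", "W"],
    "item":   ["i1", "i2", "i1", "i3", "i2", "i3"],
    "rating": [5, 3, 4, 2, 4, 5],
})
data = Dataset.load_from_df(ratings[["user", "item", "rating"]],
                            Reader(rating_scale=(1, 5)))
model = SVD().fit(data.build_full_trainset())

# Score every item U has not rated yet; the best-scored ones become candidates.
already_rated = set(ratings.loc[ratings.user == "U", "item"])
candidates = sorted((i for i in ratings["item"].unique() if i not in already_rated),
                    key=lambda i: model.predict("U", i).est, reverse=True)[:10]
print(candidates)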
Generate Top-k Recommendations. We examine archived reviews for each candidate item I and, following the same process defined for identifying features of interest to U, identify the top-2 features most frequently mentioned in reviews pertaining to I. Thereafter, we generate a ranking score for I, which shows the degree to which U’s preferred features are addressed in I’s reviews. This score is computed by averaging the degree of similarity (defined based on WordNet, using the RitaWordnet library) between all the words in the term clusters generated for U and I. This score represents the level of U’s interest in I and is used for ranking U’s candidate items, such that the top-k ranked candidate items are selected as the items to be recommended to U.
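A sketch of this ranking score follows: the average pairwise WordNet similarity between the words in U’s preferred-feature clusters and the words in the clusters mined from a candidate item’s reviews, reusing the wup() helper defined in the clustering sketch above. The example clusters are hypothetical.

# Ranking score (sketch): average pairwise Wu-Palmer similarity between U's
# cluster words and item I's cluster words; candidates are ranked by it.
from itertools import product

def ranking_score(user_words, item_words):
    pairs = list(product(user_words, item_words))
    return sum(wup(u, i) for u, i in pairs) / len(pairs) if pairs else 0.0

user_words = ["price", "cost", "staff"]           # U's preferred-feature clusters
item_clusters = {"i1": ["price", "value"],        # mined from each item's reviews
                 "i3": ["graphics", "installer"]}

top_k = sorted(item_clusters,
               key=lambda i: ranking_score(user_words, item_clusters[i]),
               reverse=True)[:2]
print(top_k)                                      # items best covering U's features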
Generate Explanations. We generate the corresponding explanation for each recommended item I by showing why I is likely appealing to U. We do so by extracting from archived reviews the descriptions other users provided on U’s preferred features pertaining to I. We identify sentences in reviews pertaining to I that include terms exactly matching (or highly similar to, as determined using WordNet) each of the labels generated for U’s clusters. In the explanation of each recommended item, HRS includes 3 sentences for each label. In doing so, HRS provides U with sufficient information about the recommendations without overwhelming U with too much text to read about the recommended items. As previously stated, we do not emphasize the sentiment of the features, since our intent is not to make U like one option more than another, but to save U time in identifying information important to him.
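The sentence-selection step can be sketched as below: for each cluster label, collect up to three sentences from the item’s reviews that mention the label exactly or a highly similar term (again reusing wup()). The sample reviews and the 0.8 similarity threshold are illustrative assumptions.

# Explanation generation (sketch): pick up to 3 sentences per preferred-feature
# label from item I's reviews that contain the label or a WordNet-similar word.
from nltk import sent_tokenize, word_tokenize

def build_explanation(item_reviews, labels, per_label=3, threshold=0.8):
    picked = {label: [] for label in labels}
    for review in item_reviews:
        for sentence in sent_tokenize(review):
            words = [w.lower() for w in word_tokenize(sentence)]
            for label in labels:
                if len(picked[label]) < per_label and (
                        label in words or
                        any(wup(label, w) >= threshold for w in words)):
                    picked[label].append(sentence)
    return picked

reviews_of_i = ["The price is unbeatable for what you get.",
                "Installation was quick and the cost was surprisingly low."]
print(build_explanation(reviews_of_i, labels=["price"]))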
3. EXPERIMENTAL RESULTS
We conducted initial experiments using the Software3 domain in the Amazon Review dataset [5], which consists of 68,464 users, 11,234 items, and 95,084 reviews. We evaluated the performance of HRS in terms of Normalized Discounted Cumulative Gain (NDCG), which considers the correctness of the recommendations and penalizes relevant recommendations positioned lower in the ranking. We compared HRS with a popular baseline algorithm: Matrix Factorization (SVD). As shown in Table 1, HRS outperforms SVD. The significant NDCG improvement demonstrates that, in general, recommendations provided by HRS are preferred over the ones provided by SVD, which does not consider users’ feature preferences.

Table 1. Performance of HRS compared to baseline algorithms
Metrics    SVD      HRS
NDCG       0.704    0.748

3 Note that we developed HRS to be a generic recommender, so it can be used on items in varied domains beyond the Software domain we considered only for initial assessment purposes.
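For reference, the sketch below shows how NDCG is computed for a single recommendation list with binary relevance judgments ordered as the recommender ranked them; the example relevance vector is made up.

# NDCG (sketch): discounted cumulative gain of the ranked list divided by the
# gain of the ideal ordering; relevant items ranked lower contribute less.
import math

def dcg(relevances):
    return sum(rel / math.log2(pos + 2) for pos, rel in enumerate(relevances))

def ndcg(relevances):
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(round(ndcg([1, 0, 1, 1, 0]), 3))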
4. CONCLUSION & FUTURE WORK
We developed a new recommendation system that takes advantage of ratings and reviews to create personalized suggestions. The first version of HRS, which showcases the main idea and purpose of the system, generated promising results, yet there are opportunities to explore in the future that will enhance its performance. Even though HRS did better than SVD, we plan to provide a deeper examination and comparisons with other baseline and state-of-the-art recommendation strategies. We will also analyze the effect of considering only candidate items with rating scores above 3, which we anticipate will improve the overall performance of HRS. We will further extend the performance evaluation by conducting online user studies to verify that HRS helps users in making appropriate choices among provided suggestions. One of the limitations of the current design of our recommender is that only nouns extracted from reviews are treated as features, which causes the loss of rich information from adjectives and verbs. To address this issue, we will conduct a more in-depth analysis of part-of-speech tags and type dependencies on sentences in reviews. We are aware that HRS, in its current state, does not entirely solve the “cold start” problem. We will consider adopting a hybrid recommendation strategy that considers general item metadata, along with the popularity of items, in addition to examining alternative ways to extract information from reviews, and further work towards eradicating the cold start problem.

5. REFERENCES
[1] F. Gedikli, D. Jannach, and M. Ge. How Should I Explain? A Comparison of Different Explanation Types for Recommender Systems. International Journal of Human-Computer Studies, 72:367-382, 2014.
[2] N. Good, J. B. Schafer, J. A. Konstan, A. Borchers, B. Sarwar, J. Herlocker and J. Riedl. Combining Collaborative Filtering with Personal Agents for Better Recommendations. In AAAI/IAAI, p. 439-446, 1999.
[3] S. Kanetkar, A. Nayak, S. Swamy and G. Bhatia. Web-Based Personalized Hybrid Book Recommendation System. In ICAETR, p. 1-5, 2014.
[4] Y. Koren, R. Bell and C. Volinsky. Matrix Factorization Techniques for Recommender Systems. IEEE Computer, 42(8):30-37, 2009.
[5] J.J. McAuley and J. Leskovec. Hidden Factors and Hidden Topics: Understanding Rating Dimensions with Review Text. In ACM RecSys, p. 165-172, 2013.
[6] M.S. Pera. Using Online Data Sources to Make Recommendations on Reading Materials for K-12 and Advanced Readers. PhD Dissertation, BYU, 2014.
[7] Y. Zhang, G. Lai, M. Zhang, Y. Zhang, Y. Liu and S. Ma. Explicit Factor Models for Explainable Recommendation Based on Phrase-level Sentiment Analysis. In ACM SIGIR, p. 83-92, 2014.