<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Opinionated Explanations of Recommendations from Product Reviews</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Khalil</forename><surname>Muhammad</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Insight Centre for Data Analytics</orgName>
								<orgName type="institution">University College Dublin</orgName>
								<address>
									<settlement>Belfield, Dublin</settlement>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<address>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Opinionated Explanations of Recommendations from Product Reviews</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">4BF25063BFF79C5B62B5D6BBE498F9FE</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T21:32+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract/>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Recommender systems are now mainstream, and people increasingly rely on them to make decisions in situations where there are too many options to choose from. Yet many recommender systems act like "black boxes", providing little or no transparency into the rationale of their recommendation process <ref type="bibr" target="#b0">[1]</ref>. Related research in the field of recommender systems has focused on developing and evaluating new algorithms that provide more accurate recommendations. However, the most accurate recommender systems may not be those that provide the most useful recommendations, due to the influence of how recommendations are presented and justified to users <ref type="bibr" target="#b1">[2]</ref><ref type="bibr" target="#b2">[3]</ref><ref type="bibr" target="#b3">[4]</ref>. Therefore, recommender systems must be able to explain what they do and justify their actions in terms that are understandable to the user. An explanation, in this context, is any added information presented with recommendations to help users better understand why and how a recommendation is made <ref type="bibr" target="#b4">[5]</ref>. Studies show that explanations help users make better decisions, and they are provided for many reasons <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>, which normally align with the objectives of the recommender system. 
Interestingly, explanations may sometimes be provided by the users (not by the recommender system) to justify their choices <ref type="bibr" target="#b7">[8]</ref>.</p><p>The availability of user-generated reviews that capture real experiences presents a new opportunity for recommender systems; yet existing methods for explaining recommendations hardly take into account the implicit opinions that people express in such reviews, even though studies show that users increasingly rely on reviews to make better choices <ref type="bibr" target="#b8">[9]</ref>. Moreover, explanations usually provide a post-hoc rationalisation for recommendations. This work, by contrast, is motivated by a more intimate connection between recommendations and explanations, which poses the question: can the recommendation process itself be guided by structures generated to explain recommendations to users?</p><p>This work builds on existing research in the areas of case-based reasoning, recommender systems and opinion mining to propose a novel approach for building explanations in recommender systems. We will also explore the potential of opinionated explanations to drive the recommendation process.</p><p>Copyright © 2015 for this paper by its authors. Copying permitted for private and academic purposes. In Proceedings of the ICCBR 2015 Workshops. Frankfurt, Germany.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Research Plan</head><p>The core focus of this work is to explore the role of opinions in explaining recommendations. Accordingly, we have identified the following areas of interest:</p><p>Ranking, filtering and evaluating feature quality. Feature-level opinion mining algorithms that are capable of extracting very granular opinions, such as <ref type="bibr" target="#b9">[10]</ref>, yield noisy features because they rely on shallow natural language processing (NLP) techniques. Ultimately, these features lack context and are too fine-grained to be intuitive to users. For instance, it would be nonsensical to explain a hotel recommendation as "because visitors liked the wire...", where 'wire' is a feature mined from reviews. Hence the research question: "how to rank, filter and evaluate features mined from reviews". We will use off-the-shelf opinion mining techniques but focus on developing methods for ranking features so that they can be filtered and evaluated for quality (i.e. the extent to which a feature is relevant and presentable to users in explanations). This involves creating new methods for summarising features so that only high-quality, comprehensible features are presented in explanations.</p><p>Generating opinionated explanations. Explanations normally demonstrate how one or more recommended items relate to a user's preferences, typically through an intermediary entity such as a user, item, or feature. For instance, Netflix may use the movies that a user has rated highly in the past to explain a movie recommendation. Since user ratings are often unable to fully represent user preferences, there is a place for the fine-grained opinions that users explicitly provide in textual reviews. We expect that explanations based on opinionated reviews will be more natural and convincing. Hence the research question: how to use such opinions to generate explanations of product recommendations? 
We will use opinions from reviews to generate explanations that justify a particular recommendation or set of recommendations, and we will conduct live-user trials to test their usefulness in decision-making.</p><p>Driving recommendations using explanations. To date, most recommender systems have treated explanations as an afterthought, presenting them alongside recommendations but with little connection to the recommendation process itself. This work will explore the potential of using explanations to drive the recommendation process so that, for example, an item is recommended because it can be explained in a compelling way. Hence the research question: how to use explanations to support similarity metrics and ranking strategies in a recommendation process?</p></div>
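The ranking-and-filtering step sketched above can be illustrated in code. The sketch below is an illustrative assumption, not the authors' actual method: it drops rarely mentioned features and ranks the survivors by a crude "opinionatedness" score (mean absolute sentiment), breaking ties by popularity. The data layout, function name and threshold are all hypothetical.

```python
from collections import Counter

def filter_and_rank_features(mentions, min_count=5):
    """Filter rare or noisy features and rank the rest.

    `mentions` is a hypothetical list of (feature, sentiment) pairs
    mined from review sentences, with sentiment assumed in [-1, 1].
    """
    counts = Counter(f for f, _ in mentions)
    # Drop features mentioned too rarely to be reliable.
    popular = {f for f, c in counts.items() if c >= min_count}
    # Collect absolute sentiment per surviving feature.
    totals = {}
    for f, s in mentions:
        if f in popular:
            totals.setdefault(f, []).append(abs(s))
    # Mean absolute sentiment as a crude "opinionatedness" score.
    scored = {f: sum(v) / len(v) for f, v in totals.items()}
    # Rank by opinionatedness, then by popularity.
    return sorted(scored, key=lambda f: (scored[f], counts[f]), reverse=True)
```

A feature such as 'wire', mentioned only a handful of times, would be filtered out before it could ever surface in an explanation, while frequently discussed, strongly opinionated features rise to the top.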
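The pro/con structure described under "Generating opinionated explanations" might look roughly as follows. This is again a hedged sketch rather than the paper's implementation: the sentiment threshold, the extra weight given to features the user has written about, and the data shapes are all illustrative assumptions.

```python
def explain(item_features, user_features):
    """Build a pro/con explanation for a recommended item.

    `item_features` (hypothetical) maps a feature name to a
    (sentiment, mention_count) pair aggregated over the item's reviews;
    `user_features` maps a feature name to how often the target user
    mentioned it in their own reviews.
    """
    pros, cons = [], []
    for feat, (sentiment, count) in item_features.items():
        # Prioritise features the user has written about themselves
        # (the 2x weight is an arbitrary illustrative choice).
        weight = count + 2 * user_features.get(feat, 0)
        # Classify as pro or con by sentiment (0.5 is an assumed cutoff).
        (pros if sentiment >= 0.5 else cons).append((weight, feat))
    pros.sort(reverse=True)
    cons.sort(reverse=True)
    return {"pros": [f for _, f in pros], "cons": [f for _, f in cons]}
```

Under this scheme a moderately mentioned feature that the user cares about (e.g. 'pool') can outrank a more popular one, which matches the goal of personalising the explanation rather than merely summarising the item.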
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Progress</head><p>To address the problem of feature quality, we used the approach in <ref type="bibr" target="#b9">[10]</ref> to mine opinions from a dataset of TripAdvisor hotel reviews. Then, using various lexical and frequency-based filtering techniques, we removed noisy, less opinionated and unpopular features. The remaining features were summarised into higher-level representations by clustering them based on the words they co-occur with in review sentences. This feature representation allows us to replace a low-level feature (e.g. 'orange juice') with a more meaningful higher-level one (e.g. 'breakfast') that is suitable for use in explanations.</p><p>We developed a new method for generating personalised explanations that highlight the pros and cons of a recommended item to a user. Our approach focuses on the features that the user has mentioned in their own reviews, and those mentioned about the recommended item by other users. In the explanation, we prioritise the features that are likely to be of interest to the user. Each feature is classified as a pro or a con based on its sentiment, and ranked by its popularity both with the user and in reviews of the recommended item.</p><p>We also developed another explanation strategy that explains a recommended item by comparison with other recommendations. That is, the explanation presents features of the recommended item that are better or worse than those of its alternatives.</p></div>		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgments. This work is supported by the Insight Centre for Data Analytics under grant number SFI/12/RC/2289.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Explaining Collaborative Filtering Recommendations</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Herlocker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Konstan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Riedl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2000 ACM Conference on Computer supported cooperative work</title>
				<meeting>the 2000 ACM Conference on Computer supported cooperative work</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="page" from="241" to="250" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Being Accurate is not Enough: How Accuracy Metrics Have Hurt Recommender Systems</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>McNee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Riedl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Konstan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CHI&apos;06 extended abstracts on Human factors in computing systems</title>
				<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="1097" to="1101" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Evaluating recommender systems from the user's perspective: survey of the state of the art</title>
		<author>
			<persName><forename type="first">P</forename><surname>Pu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">User Modeling and User-Adapted Interaction</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">4-5</biblScope>
			<biblScope unit="page" from="317" to="355" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Workshop on User-centric Evaluation of Recommender Systems and their Interfaces</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">P</forename><surname>Knijnenburg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Schmidt-Thieme</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">G</forename><surname>Bollen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the fourth ACM conference on Recommender systems</title>
				<meeting>the fourth ACM conference on Recommender systems</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="383" to="384" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Effective Explanations of Recommendations: User-Centered Design</title>
		<author>
			<persName><forename type="first">N</forename><surname>Tintarev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Masthoff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2007 ACM conference on Recommender systems</title>
				<meeting>the 2007 ACM conference on Recommender systems</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="153" to="156" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Designing and Evaluating Explanations for Recommender Systems</title>
		<author>
			<persName><forename type="first">N</forename><surname>Tintarev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Masthoff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Recommender Systems Handbook</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="479" to="510" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The Influence of Knowledgeable Explanations on Users&apos; Perception of a Recommender System</title>
		<author>
			<persName><forename type="first">M</forename><surname>Zanker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the sixth ACM conference on Recommender systems</title>
				<meeting>the sixth ACM conference on Recommender systems</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="269" to="272" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Investigating Explanations to Justify Choice</title>
		<author>
			<persName><forename type="first">I</forename><surname>Nunes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Miles</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Luck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>De Lucena</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">User Modeling, Adaptation, and Personalization</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="212" to="224" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">The Different Effects of Online Consumer Reviews on Consumers&apos; Purchase Intentions Depending on Trust in Online Shopping Malls: An Advertising Perspective</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">H</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Han</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Internet research</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="187" to="206" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Opinionated Product Recommendation</title>
		<author>
			<persName><forename type="first">R</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schaal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>O'Mahony</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>McCarthy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Smyth</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Case-Based Reasoning Research and Development</title>
				<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="volume">7969</biblScope>
			<biblScope unit="page" from="44" to="58" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
