RecTour 2019, September 19th, 2019, Copenhagen, Denmark.

Designing a Conversational Travel Recommender System Based on Data-Driven Destination Characterization

Linus W. Dietz, Saadi Myftija, Wolfgang Wörndl
Department of Informatics, Technical University of Munich, Garching, Germany
linus.dietz@tum.de, saadi.myftija@tum.de, woerndl@in.tum.de

Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

ABSTRACT
Recommending complex, intangible items in a domain with high consequences, such as destinations for traveling, requires additional care when deriving recommendations and confronting users with them. To address these challenges, we developed CityRec, a destination recommender that makes two contributions. The first is a data-driven approach to characterizing cities according to the availability of venues and travel-related features, such as the climate and costs of travel. The second is a conversational recommender system covering 180 destinations around the globe that builds on this data-driven characterization and provides prospective travelers with inspiration for and information about their next trip. An online user study with 104 participants revealed that the proposed system has a significantly higher perceived accuracy than the baseline approach, however, at the cost of ease of use.

KEYWORDS
Tourism recommendation, data mining, cluster analysis, conversational recommender systems

1 INTRODUCTION
In complex recommendation domains, such as the recommendation of tourist destinations, tweaking the algorithmic accuracy ad ultimo brings diminishing returns. It has been shown that embedding the algorithm in an adequate user interface is of similar importance [16].
Thus, in this paper, we present a data-driven conversational destination recommender system that makes two contributions: it introduces a novel, data-driven approach for characterizing destinations along user-understandable dimensions, and it shows how this characterization can be used in a conversational recommender. The approach can be seen as an evolution of Burke's FindMe approach [3] in the area of tourism. We thoroughly evaluated the system from the users' perspective to understand the effect of critiquing on the perceived accuracy of the recommendations and on the users' satisfaction with the system.

After the literature review in the subsequent section, we present the proposed method for characterizing destinations to realize content-based recommendations. Section 4 presents the design and evaluation of the conversational recommender system, which relies heavily on this characterization. We conclude our findings and point out future work in Section 5.

2 RELATED WORK
Tourism recommendation is inherently complex and has several facets. Borràs et al. enumerate four general functionalities of tourism recommender systems [2]: recommending travel destinations and tourist packages [17, 31], suggesting attractions [18], planning trips [10, 12], and supporting social aspects [13]. In this paper, we focus on the first aspect and acknowledge that there are further definitions [1]. Herein, "destination" refers to cities. The challenge in recommending cities to a user at home arises from the intangibility of the items and the high emotional involvement [33]. It has been shown that leisure travel has a positive effect on an individual's happiness; however, it does not impact overall life satisfaction, which has been attributed to poor tourism products [23]. An alternative conclusion could be that travelers visit the wrong places. This motivates research on improved destination recommender systems that can efficiently and effectively capture the user's preferences to overcome the cold start problem [5]. Given the characteristics of this domain, Burke and Ramezani suggested either the content-based [27] or the knowledge-based [3] paradigm [7].

In traditional information retrieval or static content-based recommendation, continuously querying for relevant items does not necessarily lead to better results [4]. Instead, a directed exploration of the search space using a conversational method is more promising [8, 11]. Burke et al. proposed and evaluated the FindMe approach [6], which allows the critiquing of single items so that the user can refine the recommendations iteratively until she is satisfied with the result. More advanced approaches on this topic are those of McCarthy et al., who propose a method to generate compound critiques [19], and McGinty and Smyth, who use an adaptive selection strategy to ensure diverse, yet fitting recommendations over the course of several critiquing cycles [21]. Recently, Xie et al. showed that incorporating the user experience into a critiquing system can improve the performance and the recommendations at a reduced effort for the user [35]. In this study, we present a recommender system leveraging the interplay between data science and user interface design. The items are characterized in a multidimensional space of features that are intuitively understandable by the user and can then be critiqued in any direction. To overcome the problem of skeptical users hesitating to reveal their complete preferences [29], and the observation that users find it difficult to assess their exact preferences until they are dealing with the actual set of offered options [26], the proposed method uses a mixture of explicit preference elicitation methods.

Using the content-based recommendation paradigm, one has to choose a domain model and a distance metric to compute the most fitting items for the user. Such models can be realized through ontologies, as done in SigTur [22] or in the work of Grün et al. [14]. The latter is an example of ontologies being used to refine user profiles by enriching the generic preferences of a tourist with more specific interests. More often, items are simply characterized using a multidimensional vector space model. In this case, the challenge is how to assign each item a value on each dimension, which is commonly done using expert knowledge. For instance, Herzog and Wörndl [15, 34] characterized regions using travel guides and their own expert knowledge. Neidhardt et al. developed the Seven Factor Model of tourist behavioral roles [24] based on the Big Five Factor Model [20] and a factor analysis of existing tourist roles [36]. Although they showed its merit in subsequent publications [25], a common drawback of approaches based on expert judgment is their scalability to large quantities of items and their dependency on the accuracy of human judgment. To overcome this, they proposed a strategy [32] for characterizing destinations within the Seven Factor Model. Using a huge data set of 16,950 destinations annotated with 26 motivational ratings and 12 geographical attributes, they proposed two competing methods, cluster analysis and regression analysis, to map the destinations to the vector space of the Seven Factor Model. In terms of destination characterization, this approach is the most similar to ours. The main difference is that our data model is defined directly from the data about the destinations and we are not dependent on expert ratings, which is an advantage when scaling the approach [9].
3 DESTINATION CHARACTERIZATION
The characterization of destinations such as regions or cities is a challenging task. What are the characteristics of a city on which tourists base their decision whether to visit it or not? Previous approaches have relied on expert assessment [15, 32], but their shortcomings are a potential lack of objectivity and of scalability, as it is quite costly to rate myriads of destinations around the world. Thus, we propose a data-driven approach to characterize cities on the basis of the variety of venues per category. The underlying assumption is that, in a city with many restaurants, travelers have plenty of options; thus, the quality of experience in the food category is high. Conversely, a city with very few cultural sites will be less interesting to a traveler who is interested in this topic. This section discusses how we collected data about venues and aggregated them to determine the touristic value of each city.

3.1 Collecting Venue Information
There are several providers of information about destinations. After comparing providers such as Google Maps, Facebook Places, Yelp, OpenStreetMap, and others, we decided to use the Foursquare Venue API (https://developer.foursquare.com/docs/api/venues/search), as it offers sufficient rate limits and allows us to specify the coordinates of a bounding box in the request parameters. The deciding argument for Foursquare was the detailed categorization of venues in its taxonomy (https://developer.foursquare.com/docs/resources/categories).

3.2 Characterizing Cities Based on Venue Data
We collected a data set of 5,723,169 venues in 180 cities around the world. Foursquare organizes its venues in a tree of 10 top-level categories; however, we only analyzed the ones relevant for characterizing cities for travelers: Arts & Entertainment, Food, Nightlife, and Outdoors & Recreation. We conceptualize these features as a multidimensional vector space model and represent each city as a point in this space. The characterization should approximate the experience that a tourist can expect at a city.

To determine a city's score for a feature, we analyzed the distribution of the venue categories. Using the distribution instead of the absolute number of venues per category eliminates the effect of city size on the category features. Thus, we obtained the ratio of each feature by dividing the number of venues in each top-level category by the total number of venues in that city. The underlying assumption is that these percentages are indicators of the city's level of association with the feature. This requires the cities to be of at least a certain size, as the venue distribution of small cities is less reliable. Thus, the smallest city considered had at least 1,000 venues, with the median being 7,137. We did not analyze the quality of the venues, e.g., through ratings, as we expected differences in the assessment of quality owing to cultural differences.

Characterizing the cities according to their attractions is a first step; however, further features are of interest to travelers. Using Climate-Data.org (https://en.climate-data.org), we characterized each city by its mean yearly temperature and mean yearly precipitation. Furthermore, we used Numbeo's "Cost of Living Index" (https://www.numbeo.com/cost-of-living/rankings.jsp), a relative cost indicator calculated by combining metrics such as the prices of consumer goods, restaurants, and transportation, as an approximate price level of visiting the city. Finally, to account for city size, we used the total number of venues as a proxy feature. Table 1 shows the raw values of these features for exemplary cities.

Table 1: Raw values of exemplary cities

City          Venues    Arts     Food     Nightlife   Outdoors   Cost Index   Temperature   Precipitation
Rome           36,848    1,995   12,264       2,063      3,482        69.03        15.7°C           798mm
Mexico City   213,612   12,158   83,225      16,780     19,330        34.18        15.9°C           625mm
Cologne        16,163      966    4,107       1,144      2,127        67.36        10.1°C           774mm
Penang         50,647    2,193   21,389       1,686      5,273        43.98        25.7°C         1,329mm
Cordoba         3,636      246    1,282         427        379        55.11        17.8°C           612mm
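As a concrete illustration of this feature computation, the sketch below derives the category ratios and assembles a raw feature vector for a single city. It is written in Python for brevity, although CityRec itself is a NodeJS application, and the function and field names are illustrative assumptions rather than identifiers from the actual codebase.

```python
# Illustrative sketch (not from the CityRec codebase): derive category ratios
# and assemble the raw, unscaled feature vector of one city.

CATEGORY_FEATURES = ["arts", "food", "nightlife", "outdoors"]

def city_feature_vector(venue_counts, total_venues, cost_index,
                        mean_temperature, mean_precipitation):
    """venue_counts maps the four analyzed top-level categories to the
    number of venues per category; total_venues is the city's overall
    venue count, which also serves as a proxy for city size."""
    features = {}
    # Category scores are ratios of the venue distribution, which removes
    # the effect of city size on these dimensions.
    for category in CATEGORY_FEATURES:
        features[category] = venue_counts[category] / total_venues
    # Travel-related features from external sources.
    features["cost_index"] = cost_index
    features["temperature"] = mean_temperature
    features["precipitation"] = mean_precipitation
    features["size"] = total_venues
    return features

# Example using the raw values of Rome from Table 1.
rome = city_feature_vector(
    {"arts": 1995, "food": 12264, "nightlife": 2063, "outdoors": 3482},
    total_venues=36848, cost_index=69.03,
    mean_temperature=15.7, mean_precipitation=798,
)
print(round(rome["food"], 2))  # 0.33: about a third of Rome's venues are food venues
```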
3.3 Cluster Analysis
To evaluate the characterization of the 180 cities, we performed a cluster analysis, an unsupervised learning method whose goal is to group data items such that items within the same group are similar to each other, whereas the groups themselves are dissimilar. Because the features of the destinations have different value ranges, we first applied min-max scaling to give each feature the same weight. To find the best segmentation, we experimented with common clustering algorithms, such as k-means, k-medoids, and hierarchical clustering. To evaluate the quality of the resulting clusters, we looked at metrics such as the within-cluster sum of squares and the average silhouette width [30]. The former is a measure of the variability of the instances within each cluster, whereas the latter is a measure of how well the instances fit into their assigned cluster, as opposed to all the other clusters.

Using a systematic approach, we obtained the best results with hierarchical clustering and five clusters. The clusters, named after the city closest to the respective centroid, are "Cologne, Germany," with 74 Central European and North American cities; "Rome, Italy," with 35 cities in the Mediterranean and Oceania; "Penang, Malaysia," with 48 destinations residing mostly in Asia; "Mexico City, Mexico," with five metropoleis all around the world; and "Cordoba, Spain," with 18 small and relatively warm cities on different continents. Figure 1 shows the normalized values of these five characteristic cities.

Figure 1: Normalized values of selected destinations
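A minimal sketch of such a clustering pipeline is shown below, assuming the 180 cities are available as a pandas DataFrame with the eight feature columns described above. The concrete pipeline behind the paper is not published in this form, so the library choices (scikit-learn's MinMaxScaler, AgglomerativeClustering, and silhouette_score) should be read as one possible realization rather than the authors' implementation.

```python
# Illustrative clustering sketch, assuming `cities` is a pandas DataFrame
# with one row per city and the eight numeric feature columns.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def cluster_cities(cities: pd.DataFrame, n_clusters: int = 5):
    # Min-max scaling gives every feature the same weight regardless of
    # its original range (ratios vs. °C vs. mm vs. cost index vs. counts).
    scaled = MinMaxScaler().fit_transform(cities)
    # Hierarchical (agglomerative) clustering, which yielded the best
    # segmentation in the comparison reported above.
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(scaled)
    # Average silhouette width as one of the cluster-quality criteria.
    return labels, silhouette_score(scaled, labels)

# Example: compare segmentations with three to eight clusters.
# for k in range(3, 9):
#     _, score = cluster_cities(cities, n_clusters=k)
#     print(k, round(score, 3))
```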
4 A DATA-DRIVEN CONVERSATIONAL DESTINATION RECOMMENDER SYSTEM
Having characterized the destinations on eight dimensions, we use this characterization in a content-based critiquing recommender system. CityRec is implemented as a web application using NodeJS (https://nodejs.org/en/), with ReactJS (https://reactjs.org/) in the frontend. The codebase comprises about 3,500 lines of code and is available on GitHub (https://github.com/divino5/cityrec-prototype). A demo can be viewed at http://cityrec.cm.in.tum.de.

4.1 User Interaction with CityRec
The recommender system has three steps: (1) initial preference elicitation, shown in Figure 2 (a); (2) refinement through critiquing, shown in Figure 2 (b); and (3) a results page. In Step (1), we obtain the initial scores for the user profile by asking the user to select the destinations that best reflect her preferences from a set of 12 cities. We then construct an initial user model by averaging the feature values of the selected cities. This initial seed of 12 destinations is not random, but a diverse representation of the data set. We fill the first nine slots by selecting two cities from each of the five previously established destination clusters (one in the case of the small "Mexico City" cluster). The remaining three slots are filled with randomly selected cities to account for the size differences of the clusters. Using this approach, we can generate numerous, diverse, but equivalent shortlists because each cluster is represented. From these 12 cities, the users may choose three to five that best reflect their preferences. If a user does not recognize many cities, she can request another set. Furthermore, a tooltip encourages the user to select cities that she finds generally interesting, including those she has already visited. This ensures that the system has enough data to work with for generating the initial user profile, but avoids cases where users select many of the displayed cities, which would result in generic profiles with averaged-out feature values. The result of this step is an initial profile of the user that resides in the same vector space as the items.

In Step (2), we display a set of four initial destinations, computed using the Euclidean distance. To give the users more control over their preference profile, we ask them to provide feedback on the initial recommendations by critiquing the cities' features one after another on a five-point Likert scale: "much lower," "lower," "just right," "higher," or "much higher." As can be seen in Figure 2 (b), the user now has more information about the cities, which establishes transparency and enables her to make more informed decisions compared to the first step. Using this feedback, we statically update the user profile scores by −0.2, −0.1, 0, 0.1, or 0.2 to attain a more refined preference model of the user.

Finally, in the last step, Step (3), the user is presented with a results page that shows a ranked list of the top five recommendations and their attributes, which can be explored. This page also contains the questionnaire for the evaluation.

Figure 2 (a): Selection of favorable cities, Step (1)
Figure 2 (b): Critiquing of initial recommendations, Step (2)
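The core of this interaction loop can be summarized in a few functions. The following Python sketch mirrors the description above (profile averaging, Euclidean ranking, and static critique updates); the actual system is a NodeJS web application, and the names, the dictionary-based data layout, and the clamping of scores to [0, 1] are illustrative assumptions.

```python
# Illustrative sketch of the recommendation logic of Section 4.1.
# Cities and user profiles are dicts over the same min-max-scaled feature space.
from math import sqrt

FEATURES = ["arts", "food", "nightlife", "outdoors",
            "cost_index", "temperature", "precipitation", "size"]

# Mapping of the five-point critique scale to static profile updates.
CRITIQUE_DELTA = {"much lower": -0.2, "lower": -0.1, "just right": 0.0,
                  "higher": 0.1, "much higher": 0.2}

def initial_profile(selected_cities):
    """Step (1): average the feature values of the 3-5 selected seed cities."""
    return {f: sum(city[f] for city in selected_cities) / len(selected_cities)
            for f in FEATURES}

def euclidean(profile, city):
    return sqrt(sum((profile[f] - city[f]) ** 2 for f in FEATURES))

def recommend(profile, cities, k=4):
    """Steps (2) and (3): rank cities by their distance to the profile."""
    return sorted(cities, key=lambda city: euclidean(profile, city))[:k]

def apply_critiques(profile, critiques):
    """Step (2): shift the profile according to the feedback, e.g.
    critiques = {"nightlife": "higher", "cost_index": "much lower"}.
    Scores are clamped to [0, 1] here, which is an assumption."""
    updated = {}
    for f in FEATURES:
        delta = CRITIQUE_DELTA[critiques.get(f, "just right")]
        updated[f] = min(1.0, max(0.0, profile[f] + delta))
    return updated
```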
4.2 Experimental Setup
The independent variable of the experiment is the version of the recommender system. Because we wanted to investigate the potential advantages and drawbacks of using critiquing in this domain, we created a baseline system in addition to the previously described critiquing-based recommender. The only difference in the baseline system was that the critiquing step, Step (2), is skipped entirely; that is, the outcome of the initial preference elicitation of Step (1) is the final result and is displayed in the same way as in Step (3).

The dependent variables are usage metrics, such as the choices made at each step, the time taken to specify the preferences, and the number of clicks. Furthermore, we asked the users to fill out a subset of the ResQue questionnaire, a validated, user-centric evaluation framework for recommender systems [28]:

(Q1) The travel destinations recommended to me by CityRec matched my interests
(Q2) The recommender system helped me discover new travel destinations
(Q3) I understood why the travel destinations were recommended to me
(Q4) I found it easy to tell the system what my preferences are
(Q5) I found it easy to modify my taste profile in this recommender system
(Q6) The layout and labels of the recommender interface are adequate
(Q7) Overall, I am satisfied with this recommender system
(Q8) I would use this recommender system again, when looking for travel destinations

4.3 Results
A total of 104 individuals participated in the online survey from December 2018 to March 2019. Participants (44% female, 56% male) were recruited by sharing the user study on social media and among groups of friends and colleagues. The self-reported ages were 0–20 (7%), 21–30 (69%), 31–40 (9%), and 41–50 (5%). Random assignment to the two systems was performed after a landing page and resulted in an almost equal (51% versus 49%) completion of the survey.

Table 2: Differences between the two systems

Variable              Baseline   Critiquing   p        W        Sig.
(Q1) Interest match       3.58         3.88   0.043    645      ∗
(Q2) Novelty              3.44         3.75   0.118    705      ns
(Q3) Understanding        3.46         3.77   0.073    673.5    ns
(Q4) Tell prefs.          3.73         3.90   0.328    775      ns
(Q5) Modify profile       3.24         3.48   0.17     723.5    ns
(Q6) Interface            4.15         3.62   0.009    1,044    ∗∗
(Q7) Satisfaction         3.66         3.92   0.037    649      ∗
(Q8) Future use           3.49         3.67   0.166    724      ns
Time to results         60.92s      184.07s   <0.001            ∗∗∗
Clicks                    6.32        21.35   <0.001            ∗∗∗
PCC Food                 -0.11        -0.01   0.341             ns
PCC Arts                  0.05         0.38   0.066             ns
PCC Outdoors              0.02         0.45   0.024             ∗
PCC Nightlife             0.20         0.57   0.028             ∗

Significance levels: ∗ p < 0.05; ∗∗ p < 0.01; ∗∗∗ p < 0.001

The upper part of Table 2 shows the differences in the mean values and the significance tests of the dependent variables. The mean values of the ordinal questionnaire answers (Q1–Q8) are for viewing purposes only; the test statistic was calculated using the Wilcoxon rank sum test with continuity correction for independent populations. The null hypotheses were that the medians of the variables of the two groups are equal. In three cases, (Q1), (Q6), and (Q7), we could reject the null hypothesis, which provides interesting insights into the users' assessment of the system.

In the survey, we also asked the participants to rate the personal importance of tourism-related aspects. Thus, we could compute the Pearson correlation coefficient (PCC) between the actual profile from the system and the self-assessment from the survey. The lower part of Table 2 shows these correlations per system and the result of the one-sided Fisher r-to-Z test for independent samples.
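For reference, the two significance tests reported in Table 2 can be sketched as follows. The per-participant responses are not published, so the inputs are placeholders, and the realization below (SciPy's mannwhitneyu, which is equivalent to the Wilcoxon rank sum test for independent samples, plus a hand-rolled Fisher r-to-Z comparison) is an assumed illustration rather than the authors' analysis script.

```python
# Illustrative sketch of the significance tests of Table 2 (placeholder inputs).
import numpy as np
from scipy import stats

def compare_ratings(baseline_answers, critiquing_answers):
    """Wilcoxon rank sum test (Mann-Whitney U) with continuity correction
    for two independent groups of ordinal questionnaire answers."""
    return stats.mannwhitneyu(critiquing_answers, baseline_answers,
                              use_continuity=True, alternative="two-sided")

def fisher_r_to_z(r_baseline, n_baseline, r_critiquing, n_critiquing):
    """One-sided Fisher r-to-Z test: is the correlation in the critiquing
    group significantly higher than in the baseline group?"""
    z1, z2 = np.arctanh(r_baseline), np.arctanh(r_critiquing)
    se = np.sqrt(1.0 / (n_baseline - 3) + 1.0 / (n_critiquing - 3))
    z = (z2 - z1) / se
    return 1.0 - stats.norm.cdf(z)  # one-sided p-value

# Example with the PCC Nightlife row of Table 2; the group sizes (about 52
# participants per system) are an assumption based on the roughly equal
# random assignment reported above.
# print(fisher_r_to_z(0.20, 52, 0.57, 52))
```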
4.4 Discussion
The significant difference in (Q1) shows that the perceived recommendation accuracy is higher when using the proposed critiquing recommender system, however, at the cost of a worse interface adequacy (Q6). This is attributable to the overhead of the critiquing step, Step (2), as it takes triple the time to complete the first two steps and more than triple the number of clicks. Interestingly, the users value higher accuracy more than the adequacy of the interface and the effort, as can be seen in the significantly higher user satisfaction (Q7) and the similar levels of potential future use (Q8). Furthermore, we observed that the user profiles of the critiquing system are significantly more highly correlated with the self-assessment in the case of Outdoors & Recreation and Nightlife. This is further evidence that the critiquing version performs better in capturing the preferences of the user. In conclusion, the critiquing version should be preferred, as it provides better recommendations from the users' perspective.

5 CONCLUSIONS
In this paper, we proposed an approach for tackling the problem of recommending complex items in the domain of travel recommendation. We characterized destinations around the globe in a user-understandable way and directly used this characterization in an online recommender system. From the evaluation experiments conducted, we discovered an interesting trade-off between the perceived recommendation accuracy and the perceived adequacy of the user interface; however, the users seemed to favor better recommendations over less effort to obtain them.

Because CityRec's source code has been released, it can also serve as a foundation for the community to investigate conversational recommender systems based on data-driven item characterization. The destination characterization showed decent results; however, it would be worthwhile to investigate further useful features of destinations that can be derived from other data sources. In this study, we found that, despite the higher perceived accuracy (Q1), the interface adequacy (Q6) was rated lower for the critiquing system. Thus, we regard this study as a first step that is to be extended with a more sophisticated preference elicitation approach using active learning. Furthermore, the behavior of the algorithm with respect to the diversity of the recommendations should be analyzed as well.

REFERENCES
[1] David Beirman. 2003. Restoring Tourism Destinations in Crisis: A Strategic Marketing Approach. Oxford University Press, Oxford, United Kingdom.
[2] Joan Borràs, Antonio Moreno, and Aida Valls. 2014. Intelligent tourism recommender systems: A survey. Expert Systems with Applications 41, 16 (Nov. 2014), 7370–7389. https://doi.org/10.1016/j.eswa.2014.06.007
[3] Robin D. Burke. 2000. Knowledge-based recommender systems. Encyclopedia of library and information science 69, 32 (2000), 180–200.
[4] Robin D. Burke. 2002. Interactive Critiquing for Catalog Navigation in E-Commerce. Artificial Intelligence Review 18, 3 (Dec. 2002), 245–267. https://doi.org/10.1023/A:1020701617138
[5] Robin D. Burke. 2007. Hybrid Web Recommender Systems. In The Adaptive Web: Methods and Strategies of Web Personalization, Peter Brusilovsky, Alfred Kobsa, and Wolfgang Nejdl (Eds.). Springer, Berlin, Heidelberg, 377–408. https://doi.org/10.1007/978-3-540-72079-9_12
[6] Robin D. Burke, Kristian J. Hammond, and Benjamin C. Young. 1997. The FindMe approach to assisted browsing. IEEE Expert 12, 4 (July 1997), 32–40. https://doi.org/10.1109/64.608186
[7] Robin D. Burke and Maryam Ramezani. 2011. Matching Recommendation Technologies and Domains. In Recommender Systems Handbook. Springer, Boston, MA, USA, 367–386. https://doi.org/10.1007/978-0-387-85820-3_11
[8] Li Chen and Pearl Pu. 2012. Critiquing-based recommenders: survey and emerging trends. User Modeling and User-Adapted Interaction 22, 1 (April 2012), 125–150. https://doi.org/10.1007/s11257-011-9108-6
[9] Linus W. Dietz. 2018. Data-Driven Destination Recommender Systems. In 26th Conference on User Modeling, Adaptation and Personalization (UMAP '18). ACM, New York, NY, USA, 257–260. https://doi.org/10.1145/3209219.3213591
[10] Linus W. Dietz and Achim Weimert. 2018. Recommending Crowdsourced Trips on wOndary. In RecSys Workshop on Recommenders in Tourism (RecTour '18). Vancouver, BC, Canada, 13–17.
[11] Mehdi Elahi, Francesco Ricci, and Neil Rubens. 2016. A survey of active learning in collaborative filtering recommender systems. Computer Science Review 20, Supplement C (May 2016), 29–50. https://doi.org/10.1016/j.cosrev.2016.05.002
[12] Damianos Gavalas, Charalampos Konstantopoulos, Konstantinos Mastakas, and Grammati Pantziou. 2014. A survey on algorithmic approaches for solving tourist trip design problems. Heuristics 20, 3 (June 2014), 291–328. https://doi.org/10.1007/s10732-014-9242-5
[13] Ulrike Gretzel. 2011. Intelligent systems in tourism: A Social Science Perspective. Annals of Tourism Research 38, 3 (July 2011), 757–779. https://doi.org/10.1016/j.annals.2011.04.014
[14] Christoph Grün, Julia Neidhardt, and Hannes Werthner. 2017. Ontology-Based Matchmaking to Provide Personalized Recommendations for Tourists. In Information and Communication Technologies in Tourism, Roland Schegg and Brigitte Stangl (Eds.). Springer, Cham, 3–16.
[15] Daniel Herzog and Wolfgang Wörndl. 2014. A Travel Recommender System for Combining Multiple Travel Regions to a Composite Trip. In CBRecSys@RecSys. Foster City, CA, USA, 42–48.
[16] Joseph A. Konstan and John Riedl. 2012. Recommender systems: from algorithms to user experience. User Modeling and User-Adapted Interaction 22, 1-2 (April 2012), 101–123. https://doi.org/10.1007/s11257-011-9112-x
[17] Qi Liu, Yong Ge, Zhongmou Li, Enhong Chen, and Hui Xiong. 2011. Personalized Travel Package Recommendation. In IEEE 11th International Conference on Data Mining (ICDM '11). IEEE, Vancouver, BC, Canada, 407–416. https://doi.org/10.1109/icdm.2011.118
[18] David Massimo and Francesco Ricci. 2018. Clustering Users' POIs Visit Trajectories for Next-POI Recommendation. In Information and Communication Technologies in Tourism, Juho Pesonen and Julia Neidhardt (Eds.). Springer, Cham, 3–14. https://doi.org/10.1007/978-3-030-05940-8_1
[19] Kevin McCarthy, James Reilly, Lorraine McGinty, and Barry Smyth. 2004. On the dynamic generation of compound critiques in conversational recommender systems. In International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems. Springer, Berlin, Heidelberg, 176–184.
[20] Robert R. McCrae and Oliver P. John. 1992. An Introduction to the Five-Factor Model and its Applications. Journal of Personality 60, 2 (June 1992), 175–215. https://doi.org/10.1111/j.1467-6494.1992.tb00970.x
[21] Lorraine McGinty and Barry Smyth. 2006. Adaptive Selection: An Analysis of Critiquing and Preference-Based Feedback in Conversational Recommender Systems. International Journal of Electronic Commerce 11, 2 (Dec. 2006), 35–57. https://doi.org/10.2753/jec1086-4415110202
[22] Antonio Moreno, Aida Valls, David Isern, Lucas Marin, and Joan Borràs. 2013. SigTur/E-Destination: Ontology-based personalized recommendation of Tourism and Leisure Activities. Engineering Applications of Artificial Intelligence 26, 1 (Jan. 2013), 633–651. https://doi.org/10.1016/j.engappai.2012.02.014
[23] Jeroen Nawijn. 2012. Leisure Travel and Happiness: An Empirical Study into the Effect of Holiday Trips on Individuals' Subjective Wellbeing. PhD thesis. Erasmus University Rotterdam, Rotterdam.
[24] Julia Neidhardt, Rainer Schuster, Leonhard Seyfang, and Hannes Werthner. 2014. Eliciting the Users' Unknown Preferences. In 8th ACM Conference on Recommender Systems (RecSys '14). ACM, New York, NY, USA, 309–312. https://doi.org/10.1145/2645710.2645767
[25] Julia Neidhardt, Leonhard Seyfang, Rainer Schuster, and Hannes Werthner. 2015. A picture-based approach to recommender systems. Information Technology & Tourism 15, 1 (March 2015), 49–69. https://doi.org/10.1007/s40558-014-0017-5
[26] John W. Payne, James R. Bettman, and Eric J. Johnson. 1993. The Adaptive Decision Maker. Cambridge University Press, Cambridge, United Kingdom.
[27] Michael J. Pazzani and Daniel Billsus. 2007. Content-Based Recommendation Systems. In The Adaptive Web: Methods and Strategies of Web Personalization, Peter Brusilovsky, Alfred Kobsa, and Wolfgang Nejdl (Eds.). Springer, Berlin, Heidelberg, 325–341. https://doi.org/10.1007/978-3-540-72079-9_10
[28] Pearl Pu, Li Chen, and Rong Hu. 2011. A User-centric Evaluation Framework for Recommender Systems. In Fifth ACM Conference on Recommender Systems (RecSys '11). ACM, New York, NY, USA, 157–164. https://doi.org/10.1145/2043932.2043962
[29] Francesco Ricci and Quang Nhat Nguyen. 2007. Acquiring and Revising Preferences in a Critique-Based Mobile Recommender System. IEEE Intelligent Systems 22, 3 (May 2007), 22–29. https://doi.org/10.1109/MIS.2007.43
[30] Peter J. Rousseeuw. 1987. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics 20 (Nov. 1987), 53–65. https://doi.org/10.1016/0377-0427(87)90125-7
[31] Mete Sertkan, Julia Neidhardt, and Hannes Werthner. 2017. Mapping of Tourism Destinations to Travel Behavioural Patterns. In Information and Communication Technologies in Tourism, Brigitte Stangl and Juho Pesonen (Eds.). Springer International Publishing, Cham, 422–434. https://doi.org/10.1007/978-3-319-72923-7_32
[32] Mete Sertkan, Julia Neidhardt, and Hannes Werthner. 2019. What is the "Personality" of a tourism destination? Information Technology & Tourism 21, 1 (March 2019), 105–133. https://doi.org/10.1007/s40558-018-0135-6
[33] Hannes Werthner and Francesco Ricci. 2004. E-commerce and Tourism. Commun. ACM 47, 12 (Dec. 2004), 101–105. https://doi.org/10.1145/1035134.1035141
[34] Wolfgang Wörndl. 2017. A Web-based Application for Recommending Travel Regions. In Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization (UMAP '17). ACM, New York, NY, USA, 105–106. https://doi.org/10.1145/3099023.3099031
[35] Haoran Xie, Debby D. Wang, Yanghui Rao, Tak-Lam Wong, Lau Y. K. Raymond, Li Chen, and Fu Lee Wang. 2018. Incorporating user experience into critiquing-based recommender systems: a collaborative approach based on compound critiquing. International Journal of Machine Learning and Cybernetics 9, 5 (May 2018), 837–852. https://doi.org/10.1007/s13042-016-0611-2
[36] Andrew Yiannakis and Heather Gibson. 1992. Roles tourists play. Annals of Tourism Research 19, 2 (Jan. 1992), 287–303. https://doi.org/10.1016/0160-7383(92)90082-z