=Paper= {{Paper |id=Vol-2903/IUI21WS-TExSS-1 |storemode=property |title=Making Business Partner Recommendation More Effective: Impacts of Combining Recommenders and Explanations through User Feedback |pdfUrl=https://ceur-ws.org/Vol-2903/IUI21WS-TExSS-1.pdf |volume=Vol-2903 |authors=Oznur Alkan,Massimiliano Mattetti,Sergio Cabrero Barros,Elizabeth M. Daly |dblpUrl=https://dblp.org/rec/conf/iui/AlkanMBD21 }} ==Making Business Partner Recommendation More Effective: Impacts of Combining Recommenders and Explanations through User Feedback== https://ceur-ws.org/Vol-2903/IUI21WS-TExSS-1.pdf
Making Business Partner Recommendation
More Effective: Impacts of Combining
Recommenders and Explanations through User
Feedback
Oznur Alkan, Massimiliano Mattetti, Sergio Cabrero Barros and Elizabeth
M. Daly
IBM Research, Europe


Abstract

Business partnerships can help businesses deliver on opportunities they might otherwise be unable to facilitate. Finding the right business partner (BP) involves understanding the needs of the businesses along with what they can deliver in a collaboration. BP recommendation meets this need by facilitating the process of finding the right collaborators to initiate a partnership. In this paper, we present a real-world BP recommender application which uses a similarity-based technique to generate and explain BP suggestions, and we discuss how this application is enhanced by integrating a solution that 1. dynamically combines different recommender algorithms, and 2. enhances the explanations of the recommendations, in order to improve the user's experience with the tool. We conducted a preliminary focus group study with domain experts which supports the validity of the enhancements achieved by integrating our solution and motivates further research directions.

Keywords

explanation, heterogeneous data sources, orchestration, interaction


Joint Proceedings of the ACM IUI 2021 Workshops, April 13-17, 2021, College Station, USA
oalkan2@ie.ibm.com (O. Alkan); massimiliano.mattetti@ibm.com (M. Mattetti); sergiocabrerobarros@gmail.com (S.C. Barros); elizabeth.daly@ie.ibm.com (E.M. Daly)
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)

1. Introduction and Background

Strategic partnerships are important for businesses to grow and explore more complex opportunities [1, 2], since these partnerships can open up possibilities to new products, services, markets and resources [2]. However, finding the right business partner (BP) with whom to form a partnership is challenging, since one has to face a large space of possible partners and process many different data sources to find the BPs that best suit one's requirements. BP recommendation systems can be a solution, as they help to analyze the available information around BPs. In this paper, our focus is on BP Connector, a real-world application that provides company to company recommendations, where the companies themselves become the subject items to recommend to each other, and the recommendations must suit the preferences of both parties involved. This setting is studied under reciprocal recommender systems research [3]; these systems have arisen as an extension to classical item-based recommendation processes to deal with scenarios where users become the item being recommended to other users.
In this context, both the end user and the user being recommended should accept the matching recommendation to yield a successful recommender performance [4]. Hence, for BP recommendations, both the users who ask for recommendations and the recommendation items themselves are BPs, and the goal is to satisfy the interests of the two sides of the partnership.

BP Connector has already been deployed by an organization with a large ecosystem of BPs to foster collaborations among them in order to create a virtuous cycle, where a successful engagement between BPs promotes the business interest of the instigating organization itself. The system defines two roles for the partnership: the beneficiary and the helper. The beneficiary refers to the company who is seeking assistance in a specific territory, technology, etc., whereas the helper refers to the company who states that it can provide assistance. The system allows companies to first specify whether they are offering help or asking for help, and then asks them to fill in a form to specify the details around their interests and expertise. Both the beneficiary and the helper complete the same forms, therefore providing information around the same features. These features constitute the BP profiles and are used as both the user and the item profiles by the underlying recommender to generate BP recommendations [5]. More specifically, a beneficiary requesting a BP connection is the user who is seeking recommendations of helper BPs, where the helper BPs constitute the items of this recommendation setting. The initial solution used a content-based recommender [6] which is based on the similarity between the profiles of the beneficiaries and the helpers to generate both the recommendations and the explanations, where the explanations reveal the degree of similarity between the two profiles. Therefore, the quality of the recommendations depends on the quality of the information that is completed through the web forms.

However, the information entered may not always be complete (users might have missed out some fields or sections), accurate (users might have mistakenly provided incorrect information) or recent (users might have provided information some time ago which may be outdated). This results in user and item profiles not reflecting the current interests and actual expertise of the BPs, which may degrade not only the quality of the recommendations but also the explanations. However, the organization deploying BP Connector has access to data around BPs, such as historical sales records and product certifications, which, if integrated into the recommender logic, would improve the quality of the recommendations and the explanations, and this can help users to make better decisions [7].

Although using more data has benefits, one important challenge is that data around BPs exists in different heterogeneous sources, and these data sources have different coverage. Moreover, there is a possibility that additional data sources may become available over time. To handle this, hybrid recommendation approaches can be used, which can essentially fuse the benefits of multiple data sources and leverage the complementary knowledge in order to provide better recommendations [8, 9]. Hybrid recommenders support combining different recommenders built on different data sources. For example, one model might be a collaborative filtering recommender that uses a ratings matrix including the feedback provided by the companies regarding their previous partnerships, whereas another model could be a content-based recommender. In such cases, it would be important to combine the explanations generated from different recommenders as well, which will assist users in the decision-making process.
Motivated by these discussions, in this paper we present our solution, called Multi Source Evidence Recommender and henceforth referred to as MSER, which is built to enhance the recommendation and the explanation facilities of BP Connector. MSER can ensemble different recommendation algorithms that are built on top of different data sources. Moreover, it can receive explanations from these different recommenders, which are presented to the user to support their decision making process. MSER can also re-rank and post-process the recommendations based on pre-configured business rules. When we developed MSER, we were aware that different companies may have different goals when seeking a partnership, and these goals strongly influence which features and which data sources may be the most relevant to support the recommendation process. For example, company A may need a local presence for a sales opportunity, therefore the location information may be the most important factor, whereas company B may be looking for an expert in a specific technology, therefore accurate information on product certifications and sales performance could be the most important factor. To support this, we designed MSER to enable users to provide feedback around the data sources they are interested in, in order to better align the recommendations with the users' dynamic interests.

Integrating MSER into the BP Connector application leads to substantial changes over the initial version. These changes led us to initially formulate two research questions: 1. What is the difference in subjective recommendation quality between the recommendations generated by a single recommender and the recommendations generated by MSER? 2. How do the users perceive the explanations generated by MSER? In order to investigate these research questions, a preliminary focus group study with domain experts was conducted, which motivates further research.

In the rest of the paper, we first present a brief review of the related art, and then describe the solution we designed for the BP Connector application in order to enhance its recommendation and explanation capabilities. Then, we present the initial focus group study and discuss our findings. We conclude with proposals for future research.

2. Related Work

BP recommendations have been studied considering different sources of data and different types of methods [10]. [1] presents a solution for recommending BPs to individual business users through combining item-based fuzzy semantic similarity and collaborative filtering techniques. In [11], the authors discuss the reciprocity aspect of BP recommendations, where they propose a machine learning approach to predict customer-supplier relationships. As discussed before, BP recommendations fall into the category of reciprocal recommender systems, which have been applied to many online social services such as online dating [12, 13], social media [14], recruitment [15] and online mentoring systems [16]. All these domains, including business partnership, increasingly rely on the concept of matching users with the right users. They differ from the traditional items-to-users recommendation as their goal is not just to predict a user's preference towards a passive item, but to find a match where the preferences of both sides are satisfied [17].

Our solution, MSER, orchestrates different recommender algorithms that run on disparate data sources, which relates our work to hybrid recommenders [18, 9]. MSER can be considered as a recommender ensemble [19], which is a particular type of hybrid recommender in which the recommender algorithms to combine are treated as black boxes.
In [20], the authors present several approaches for generating an ensemble of collaborative models based on a single collaborative filtering algorithm. In [21], the authors presented a hybrid recommender with an interactive interface which allows users to adjust the weights assigned to each recommender through sliders. This proposed system is designed to provide recommendations on media content leveraging multiple social sources. With the enhancements designed for BP Connector, we aim to enable users to interact with the recommenders. In this regard, our initial choice for a new interactive UI fell on a chatbot system. Among the possible interaction models, chatbot systems have seen a steep increase in popularity in recent years, driven by the wide adoption of mobile messaging applications [22]. They also represent a natural interface for conversational recommenders, which provide recommendations to the users through dialogue [23].

Considering the explanations, in [24] the authors reviewed the literature on explanations in decision-support systems, where they distinguished between variables such as the length of the explanations, their vocabulary and their presentation, and they concluded that additional studies are necessary to assess the impact of these variables. In [25], the authors introduced the concept of reciprocal explanation, where the user who is looking for a connection is also presented with an explanation on what would be the interest of the other party in establishing a mutual connection. Kouki et al. [26] studied how to provide useful hybrid explanations that capture informative signals from a multitude of data sources, and conducted a crowdsourced user study to evaluate several different design approaches for hybrid explanations. In another work [27], the authors proposed a taxonomy that categorizes different explainable recommenders, and they argue that future research should create new kinds of information, interaction, and presentation styles. To this end, MSER is designed to support combining explanations generated by different recommenders through dynamic user feedback, and it can support different explanation styles.

The primary contribution of this paper is to describe how the recommendation and explanation generation facilities of an existing recommender application, BP Connector, are enhanced through designing a solution called MSER, which combines recommendations and explanations through user feedback.

3. Proposed Solution: Multi-Source Evidence Recommender (MSER)

The enhancements designed for BP Connector are encapsulated within MSER, which is designed around four main components, Controller, Connector Layer, Rank-Combiner and Post-Processor, as depicted in Figure 1. The figure shows the high-level view of the components, in which the components' interactions are labelled in sequence to show the execution flow. Below, we summarize the details of these components.

Controller connects the client application with the underlying recommender logic, thus making it responsible for orchestrating the execution flow of MSER. It exposes a get_recommendations method, which takes two parameters: 1. query parameters, which specifies the properties of the recommendation request, 2. recommender weights, which determines the weights that should be assigned to the different recommender algorithms, where a weight of 0 indicates that the corresponding recommender should be excluded from the recommendation process.
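For illustration only, a minimal Python sketch of such an entry point is given below. The class, field and collaborator names (RecommendationRequest, connector_layer, rank_combiner, post_processor) are assumptions made for this sketch rather than the production MSER API, and the numbered comments refer to the interaction labels in Figure 1.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class RecommendationRequest:
    # "Query parameters": properties of the recommendation request, e.g. the
    # beneficiary and the fields filled in the connection request form.
    beneficiary_id: str
    form_fields: Dict[str, str] = field(default_factory=dict)


class Controller:
    """Orchestrates the Connector Layer, Rank-Combiner and Post-Processor."""

    def __init__(self, connector_layer, rank_combiner, post_processor):
        self.connector_layer = connector_layer
        self.rank_combiner = rank_combiner
        self.post_processor = post_processor

    def get_recommendations(self,
                            query_params: RecommendationRequest,
                            recommender_weights: Dict[str, float]) -> List[dict]:
        # A weight of 0 excludes the corresponding recommender altogether.
        active = {name: w for name, w in recommender_weights.items() if w > 0}
        # Steps (2)-(3): collect recommendations and explanations per recommender.
        responses = self.connector_layer.collect(query_params, list(active))
        # Step (4): combine the ranked lists using the recommender weights.
        ranked = self.rank_combiner.combine(responses, active)
        # Steps (5)-(6): apply business rules, merge explanations, return the list.
        return self.post_processor.apply(ranked, query_params)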
Figure 1: MSER - System architecture.



Once a recommendation request is received from the client application through calling the get_recommendations method of the Controller (1), the Controller first forwards this request to the Connector Layer (2), which in turn calls the configured recommender systems to receive the recommendations and the explanations (3). The responses received from the recommenders are then handed over to the Rank-Combiner together with the recommender weights. The Rank-Combiner computes the ranking of the final recommendation list using a linear combination of the recommendation scores [9, 28], where it adjusts the weighting based on the recommender weights it receives from the Controller (4). The ranked list is then processed by the Post-Processor, which applies the business rules (5). One example of a business rule that BP Connector enforces is to avoid recommending a firm to another firm if their business needs do not coincide or if they operate in different geographies. Each recommender can send an explanation associated with the recommended BP, which is also combined by the Post-Processor to present the final explanation in a way that is pre-configured within the solution. Lastly, the final list is returned to the client application (6).
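The following Python fragment is a simplified sketch, not the production implementation, of the weighted linear score combination performed by the Rank-Combiner and of one Post-Processor business rule; the score values, the recommender names used as dictionary keys, and the geography and business-need fields are invented for the example.

from collections import defaultdict


def combine_scores(responses, weights):
    # responses: {recommender_name: {bp_id: score}}; scores assumed normalised to [0, 1].
    combined = defaultdict(float)
    for name, scored_bps in responses.items():
        for bp_id, score in scored_bps.items():
            combined[bp_id] += weights.get(name, 0.0) * score
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)


def post_process(ranked, beneficiary, bp_catalog):
    # One example rule: drop helpers whose business need or geography
    # does not match the beneficiary's request.
    return [(bp_id, score) for bp_id, score in ranked
            if bp_catalog[bp_id]["geo"] == beneficiary["geo"]
            and bp_catalog[bp_id]["business_need"] == beneficiary["business_need"]]


# Example: SBR and ER enabled with equal weight, PR excluded (weight 0).
responses = {"SBR": {"bp1": 0.9, "bp2": 0.4},
             "ER": {"bp2": 0.8, "bp3": 0.6}}
weights = {"SBR": 1.0, "ER": 1.0, "PR": 0.0}
print(combine_scores(responses, weights))  # bp2 ranks first (0.4 + 0.8)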
Figure 2: BP Connector - Sample screenshot for the dialogue-based interface.



Integrating MSER into BP Connector. The initial version of BP Connector used a single Similarity-Based Recommender (SBR), and through the adoption of MSER, the solution has been enhanced with two additional recommenders: the Expert Recommender (ER) and the Performance Recommender (PR). ER has been serving a production application in the sales domain for more than two years; therefore, the existing recommender service was plugged into the BP Connector solution, whereas PR is specifically designed for BP Connector.

SBR computes the similarity between the features that the beneficiary and the helper specified in the initial web forms. To achieve this, SBR first represents the form parameters as a vector of weights, and then it computes the similarity between these vectors using the Cosine Similarity metric. The web form data represents a kind of explicit user profile [13], and SBR tries to connect a proactive user (beneficiary) with a reactive one (helper), so that the reciprocal recommendation satisfies the preferences of both sides.
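A toy Python sketch of this computation is shown below; the feature names and weights are invented for illustration, since the paper does not specify the exact feature encoding used by SBR.

import math


def cosine(u, v):
    # u, v: sparse weight vectors as {feature: weight} dictionaries.
    features = set(u) | set(v)
    dot = sum(u.get(f, 0.0) * v.get(f, 0.0) for f in features)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0


beneficiary = {"tech:Cloud": 1.0, "industry:Banking": 1.0, "need:Sales": 1.0}
helper = {"tech:Cloud": 1.0, "industry:Banking": 1.0, "need:Consulting": 1.0}
print(round(cosine(beneficiary, helper), 2))  # 0.67 for this toy pair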
ER formulates the recommendation problem as an Information Retrieval process [29], where the sales history of a BP corresponds to a document, an attribute of a sales opportunity is a field of the document (e.g. country, sector, product), and an attribute value corresponds to a term (e.g. United States for country; banking for sector). The beneficiary request form plays the role of the query, and a TF-IDF Similarity score¹ is computed for each document, which represents the proficiency score of the helper BP corresponding to the document.

¹ https://lucene.apache.org/core/8_7_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html
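As a rough analogue of this formulation, the sketch below scores invented helper "documents" against a query built from a beneficiary request, using scikit-learn's TF-IDF implementation; this only approximates the Lucene TFIDFSimilarity referenced in the footnote, and the helper names and sales records are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each helper's sales history is flattened into one "document" of terms.
sales_histories = {
    "helper_A": "United_States banking cloud cloud security",
    "helper_B": "Germany retail analytics analytics",
}
query = "United_States banking cloud"  # terms taken from the beneficiary request form

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(sales_histories.values())
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_matrix)[0]
for helper, score in zip(sales_histories, scores):
    print(helper, round(float(score), 3))  # higher score = more relevant expertise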
PR uses a machine learning model to predict the probability of an opportunity being won or lost by considering the expertise of a BP. It computes a probability score for a helper to win an opportunity whose characteristics match the requirements defined in the beneficiary request form. This recommender builds a Gradient Boosting Classifier [30] for each helper BP in the dataset using historical sales data.
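A minimal sketch of this idea follows, assuming a scikit-learn Gradient Boosting Classifier and an invented numeric feature encoding; the production feature pipeline and training data are not described in the paper.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy historical opportunities for one helper: [country_id, sector_id, deal_size]
X_history = np.array([[0, 1, 50], [0, 1, 200], [1, 2, 80], [1, 1, 30]])
y_won = np.array([1, 1, 0, 0])  # 1 = opportunity was won

model = GradientBoostingClassifier().fit(X_history, y_won)

# Opportunity characteristics taken from the beneficiary request form.
request = np.array([[0, 1, 120]])
win_probability = model.predict_proba(request)[0, 1]
print(round(float(win_probability), 2))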
Explanations. In addition to the recommendations, each of the three recommenders provides its own set of explanations, which are combined by MSER. As for the explanations, SBR provides the similarity score between the helper request and the beneficiary request as an explanation. Moreover, it provides four other scores, which represent the overlap between the beneficiary request and the helper request in terms of technology (e.g. Analytics, Cloud, Security, etc.), business need (e.g. Consulting, Marketing, Sales, etc.), industry (e.g. Banking, Education, Healthcare, etc.) and assistance type (e.g. developing a new sales relationship, creating new services, supporting new solutions, etc.). ER, on the other hand, provides the number of deals that a helper had in the past in the sector, industry, country, etc. listed in the beneficiary request form. PR establishes a baseline win rate given the parameters specified in the beneficiary request form. As an explanation, the performance of a helper is provided as a relative increment of the win rate over the baseline's. As it is relative to a baseline value, the performance can assume negative values as well.
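Purely for concreteness, the per-recommender explanation payloads described above could be represented roughly as follows; the field names and numbers are invented for this sketch, and only the PR formula (the relative increment of the predicted win rate over the baseline, which can be negative) follows directly from the description above.

def pr_explanation(predicted_win_rate, baseline_win_rate):
    # Relative increment over the baseline; negative when the helper
    # performs below the baseline win rate.
    return (predicted_win_rate - baseline_win_rate) / baseline_win_rate


explanations = {
    "SBR": {"match": 0.78, "technology": 0.9, "business_need": 0.6,
            "industry": 1.0, "assistance_type": 0.5},
    "ER": {"deals": {"country": 12, "sector": 7, "product": 5}},
    "PR": {"relative_win_rate": round(pr_explanation(0.30, 0.25), 2)},  # 0.2, i.e. +20%
}
print(explanations["PR"])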
User Interaction. The original form-based user interface of BP Connector limited the users to following a predefined set of steps. We aimed to increase the interactivity between the user and the application by designing a dialogue-based interface that sits next to the original interface. From this dialogue, beneficiaries can perform the following interactions: 1. fill in request details, 2. receive recommendations, 3. guide MSER to use the required recommenders, and 4. receive explanations. A sample screenshot for the third interaction listed is given in Figure 2. The dialogue is designed to elicit user preferences towards the recommendation algorithms. It assigns a weight of 1 to a recommender if the user expresses interest in it, or a weight of 0 if the user shows no interest towards it. At the beginning of the conversation, a weight of 1 is assigned to each recommender. The dialogue is built using Watson Assistant², an existing natural language understanding service for conversational question answering [31].

² https://www.ibm.com/cloud/watson-assistant/
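A toy sketch of this weight-elicitation logic is given below; the intent names are invented for illustration, and the production dialogue is implemented with Watson Assistant rather than this hand-written stub.

RECOMMENDERS = ["SBR", "ER", "PR"]
weights = {name: 1 for name in RECOMMENDERS}  # default: every recommender enabled


def update_weights(intent: str, recommender: str) -> None:
    # Map a detected dialogue intent onto the 1/0 weight of one recommender.
    if intent == "show_interest":
        weights[recommender] = 1
    elif intent == "show_no_interest":
        weights[recommender] = 0


update_weights("show_no_interest", "PR")  # user is not interested in past performance
print(weights)  # {'SBR': 1, 'ER': 1, 'PR': 0}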
4. Evaluation

Setup and Participants. We evaluated MSER as the new recommender behind BP Connector with two different groups. The first group involved 7 domain experts, and the second group included 5 active users of the application. Domain experts were employed by the organization deploying BP Connector and worked directly with BPs. They operated at a global scale (2 in North America, 1 in Europe, 1 in the Middle East and 3 in Asia). Active users included the users of the initial BP Connector before the MSER deployment. Domain experts participated in a remote briefing meeting to get information about the user study.
Figure 3: Screenshots from the BP Connector user study. Examples of match score (a), short (b) and detailed (c) explanations for the recommendations generated for a sample connection request. For the detailed explanation, only the explanations for BP2 are displayed.



Afterwards, they filled in a survey, which was the same for all of them, and then participated in a remote focus group to discuss the results and provide further feedback. Active users, on the other hand, answered a survey personalized to their company. This was performed through selecting one of their former requests made to the initial BP Connector and generating a new set of recommendations and explanations using MSER.

The surveys were similar for both groups. During the surveys, a partnership request was explained, and three companies were recommended as potential partners, where each recommended company had one explanation accompanying it.
We experimented with three types of explanations with different levels of detail: 1. match score, 2. short explanation, and 3. detailed explanation. The match score explanation includes only the percentage value representing how much the offer of help from a company fits the help request, which is generated by SBR, whereas the short explanation and the detailed explanation are formed using the explanations from all three recommenders, SBR, ER and PR. For the explanations generated by SBR, the short explanation includes only the percentage of match (the same as the match score explanation), whereas the detailed explanation presents the details of the overlap between the offer and the request of help considering the four dimensions: technology, business need, industry and assistance type, as discussed in Section 3. For the explanations generated by ER, the short explanation includes the total number of opportunities the helper BP had in the past with the products listed in the beneficiary request form, together with the product family that represents the main area of expertise of the helper BP. The detailed explanation, on the other hand, includes the details of this expertise, specifically, the number of opportunities for the different products, countries, sectors and the deal sizes requested by the beneficiary. Finally, the explanation generated by PR is the same for both types. Examples of the three types of explanations for the same request are given in Figure 3. If a recommender did not recommend a specific BP that appeared in the final recommendation list, its explanation was omitted from both the short and the detailed explanations.

A page of the survey showed all three companies with the same type of explanation. Subsequent survey pages showed different types. However, the order was always kept the same, as follows: 1. match score, 2. short explanation, and 3. detailed explanation, since each of the next explanations adds more information to the previous one. This allows us to explore the completeness principle as defined in [32], where each explanation includes more information than the previous one, in order to detect where information overload starts generating a problem.

Table 1: Recommendation quality perceived by the experts for each type of explanation

Exp. Type     Very good   Good   Neutral   Bad
Match score   0           5      1         1
Short         2           5      0         0
Detailed      1           4      0         1

Results and Discussion. To evaluate how participants perceive the recommendations from MSER, we examined their evaluation of the recommendations with each of the explanations provided with them. Table 1 summarizes the results for the group of experts. As can be seen from the table, the majority of the experts ranked the recommendations as Good independent of which explanation type was provided. However, when they were presented with more than just a match score, their ratings improved. One of the experts said "I like that I can understand the size of their experience." Users, on the other hand, responded as Neutral when a match score was provided to them; however, receiving either a short or a detailed explanation helped them to build more confidence in the recommendations. We observed that evaluating recommendations without explanations is difficult in this context, as one cannot quantify whether a partnership worked or not until after it really happens. In our evaluation, however, we could only evaluate the judgment that the users made of a potential partnership; therefore, providing users with valuable explanations was key to support their decisions.
Regarding the amount of information provided (explanation completeness), the preference of short versus detailed explanations was not homogeneous among participants. One participant mentioned "Of little value just showing a name and a percentage match" for the match score type, and another one said "I can get an idea of the experience and type of work of each partner." for the detailed type. Some declared that the detailed explanation shows too much information and is difficult to process, whereas others mentioned that they would like to have as much information as possible to decide on future partnerships. This aligns with findings in [25] about how the cost of the decision influences the explanation effectiveness. Apart from the personal preferences, the presentation mode was also important for our participants. When they were asked about interaction and visualization, personal preferences played an important role. Participants mentioned that interactivity with the system and graphical representations of the data presented for each company are desirable. The design could therefore include an interactive interface in which users initially receive a match score, ask for a short explanation, and are able to explore the detailed explanation of each dimension individually. This would allow users to find their own balance on the explanation completeness and information overload scale.

5. Conclusion
We presented MSER, which is built to enhance the recommendation and the explanation facilities of a real-world application, BP Connector, that provides company to company recommendations. An initial user study revealed that the extensions enabled by MSER can improve both the recommendation and the explanation capabilities of BP Connector, and the results motivate further research. As future work, we aim to evaluate the scalability of the solution by enlarging the recommender engine behind BP Connector with additional recommender systems based on additional data sources, such as data around product certifications, ratings given by the beneficiaries to the helpers they connected with, and implicit preferences based on users' behaviour [33] such as requests of connections and responses to matches.

6. Acknowledgements

We would like to acknowledge the support and collaboration provided by the IBM CAO team: Sanjmeet Abrol, Cindy Wu and Alice Chang.

References

[1] J. Lu, Q. Shambour, Y. Xu, Q. Lin, G. Zhang, A web-based personalized business partner recommendation system using fuzzy semantic techniques, Computational Intelligence 29 (2013) 37-69.

[2] W. Bergquist, J. Betwee, D. Meuel, Building strategic relationships: How to extend your organization's reach through partnerships, alliances, and joint ventures, 1995.

[3] J. Neve, I. Palomares, Hybrid reciprocal recommender systems: Integrating item-to-user principles in reciprocal recommendation, in: Companion Proceedings of the Web Conference 2020, WWW '20, Association for Computing Machinery, New York, NY, USA, 2020, pp. 848-854. URL: https://doi.org/10.1145/3366424.3383295. doi:10.1145/3366424.3383295.
[4] I. Palomares, C. Porcel, L. Pizzato, I. Guy, E. Herrera-Viedma, Reciprocal recommender systems: Analysis of state-of-art literature, challenges and opportunities towards social recommendation, Information Fusion 69 (2021) 103-127.

[5] J. Leskovec, A. Rajaraman, J. D. Ullman, Recommendation Systems, 2 ed., Cambridge University Press, 2014, pp. 292-324. doi:10.1017/CBO9781139924801.010.

[6] M. J. Pazzani, D. Billsus, Content-based recommendation systems, in: P. Brusilovsky, A. Kobsa, W. Nejdl (Eds.), The Adaptive Web, volume 4321 of Lecture Notes in Computer Science, Springer, Berlin/Heidelberg, 2007, pp. 325-341. URL: http://dx.doi.org/10.1007/978-3-540-72079-9_10. doi:10.1007/978-3-540-72079-9_10.

[7] D. Jannach, M. Jugovac, I. Nunes, Explanations and user control in recommender systems, in: Proceedings of the 23rd International Workshop on Personalization and Recommendation on the Web and Beyond, ABIS '19, Association for Computing Machinery, New York, NY, USA, 2019, p. 31. URL: https://doi.org/10.1145/3345002.3349293. doi:10.1145/3345002.3349293.

[8] C. C. Aggarwal, Ensemble-Based and Hybrid Recommender Systems, Springer International Publishing, Cham, 2016, pp. 199-224. URL: https://doi.org/10.1007/978-3-319-29659-3_6. doi:10.1007/978-3-319-29659-3_6.

[9] R. Burke, Hybrid recommender systems: Survey and experiments, User Modeling and User-Adapted Interaction 12 (2002). doi:10.1023/A:1021240730564.

[10] J. Bivainis, Development of business partner selection, Ekonomika 73 (2006) 7-18.

[11] J. Mori, Y. Kajikawa, H. Kashima, I. Sakata, Machine learning approach for finding business partners and building reciprocal relationships, Expert Systems with Applications 39 (2012) 10402-10407.

[12] P. Xia, B. Liu, Y. Sun, C. Chen, Reciprocal recommendation system for online dating, in: 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), 2015, pp. 234-241.

[13] L. Pizzato, T. Rej, T. Chung, I. Koprinska, J. Kay, Recon: A reciprocal recommender for online dating, in: Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys '10, Association for Computing Machinery, New York, NY, USA, 2010, pp. 207-214. URL: https://doi.org/10.1145/1864708.1864747. doi:10.1145/1864708.1864747.

[14] X. Cai, M. Bain, A. Krzywicki, W. Wobcke, Y. S. Kim, P. Compton, A. Mahidadia, Learning to make social recommendations: a model-based approach, in: International Conference on Advanced Data Mining and Applications, Springer, 2011, pp. 124-137.

[15] R. Liu, W. Rong, Y. Ouyang, Z. Xiong, A hierarchical similarity based job recommendation service framework for university students, Frontiers of Computer Science 11 (2016) 912-922.

[16] C.-T. Li, Mentor-spotting: recommending expert mentors to mentees for live trouble-shooting in codementor, Knowledge and Information Systems 61 (2019) 799-820.

[17] F. Vitale, N. Parotsidis, C. Gentile, Online reciprocal recommendation with theoretical performance guarantees, in: Advances in Neural Information Processing Systems, 2018, pp. 8257-8267.

[18] C. Aggarwal, Recommender Systems, 2016. doi:10.1007/978-3-319-29659-3.

[19] R. Cañamares, M. Redondo, P. Castells, Multi-armed recommender system bandit ensembles, in: Proceedings of the 13th ACM Conference on Recommender Systems, RecSys '19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 432-436. URL: https://doi.org/10.1145/3298689.3346984. doi:10.1145/3298689.3346984.

[20] A. Bar, L. Rokach, G. Shani, B. Shapira, A. Schclar, Improving simple collaborative filtering models using ensemble methods, in: International Workshop on Multiple Classifier Systems, Springer, 2013, pp. 1-12.

[21] S. Bostandjiev, J. O'Donovan, T. Höllerer, TasteWeights: a visual interactive hybrid recommender system, in: Proceedings of the Sixth ACM Conference on Recommender Systems, 2012, pp. 35-42.

[22] P. B. Brandtzaeg, A. Følstad, Why people use chatbots, in: International Conference on Internet Science, Springer, 2017, pp. 377-392.

[23] D. Jannach, A. Manzoor, W. Cai, L. Chen, A survey on conversational recommender systems, CoRR abs/2004.00646 (2020). URL: http://dblp.uni-trier.de/db/journals/corr/corr2004.html#abs-2004-00646.

[24] I. Nunes, D. Jannach, A systematic review and taxonomy of explanations in decision support and recommender systems, User Modeling and User-Adapted Interaction 27 (2017) 393-444. URL: https://doi.org/10.1007/s11257-017-9195-0. doi:10.1007/s11257-017-9195-0.

[25] A. Kleinerman, A. Rosenfeld, S. Kraus, Providing explanations for recommendations in reciprocal environments, in: Proceedings of the 12th ACM Conference on Recommender Systems, 2018, pp. 22-30.

[26] P. Kouki, J. Schaffer, J. Pujara, J. O'Donovan, L. Getoor, User preferences for hybrid explanations, in: Proceedings of the Eleventh ACM Conference on Recommender Systems, 2017, pp. 84-88.

[27] G. Friedrich, M. Zanker, A taxonomy for generating explanations in recommender systems, AI Magazine 32 (2011) 90-98.

[28] M. Claypool, A. Gokhale, T. Miranda, P. Murnikov, D. Netes, M. Sartin, Combining content-based and collaborative filters in an online newspaper, 1999.

[29] A. Costa, F. Roda, Recommender systems by means of information retrieval, in: Proceedings of the International Conference on Web Intelligence, Mining and Semantics, 2011, pp. 1-5.

[30] A. Natekin, A. Knoll, Gradient boosting machines, a tutorial, Frontiers in Neurorobotics 7 (2013) 21. doi:10.3389/fnbot.2013.00021.

[31] D. Braun, A. Hernandez-Mendez, F. Matthes, M. Langen, Evaluating natural language understanding services for conversational question answering systems, in: Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, Association for Computational Linguistics, 2017, pp. 174-185.

[32] T. Kulesza, S. Stumpf, M. Burnett, S. Yang, I. Kwan, W.-K. Wong, Too much, too little, or just right? Ways explanations impact end users' mental models, in: 2013 IEEE Symposium on Visual Languages and Human Centric Computing, IEEE, 2013, pp. 3-10.

[33] L. Pizzato, T. Chung, T. Rej, I. Koprinska, K. Yacef, J. Kay, Learning user preferences in online dating, in: Proceedings of the Preference Learning (PL-10) Tutorial and Workshop, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), Citeseer, 2010.