<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Intelligent Recommendations for Citizen Science</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Na'ama Dayan</string-name>
          <email>namadayan@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kobi Gal</string-name>
          <email>kobig@bgu.ac.il</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Avi Segal</string-name>
          <email>avisegal@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Guy Shani</string-name>
          <email>shanigu@bgu.ac.il</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Darlene Cavalier</string-name>
          <email>darlene@scistarter.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Arizona State University</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ben-Gurion University</institution>
          ,
          <country country="IL">Israel</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Ben-Gurion University</institution>
          ,
          <addr-line>Israel</addr-line>
          ,
          <institution>University of Edinburgh</institution>
          ,
          <country country="UK">U.K.</country>
        </aff>
      </contrib-group>
      <abstract>
        <p />
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>INTRODUCTION</title>
      <p>
        Citizen science refers to scientific research that is carried out by
volunteers, often in collaboration with professional scientists. The
spread of the internet has significantly increased the number of
citizen science projects and allowed volunteers to contribute to
these projects in dramatically new ways. For example, SciStarter,
our partner in this project, is an online portal that offers more than
3,000 affiliate projects and recruits volunteers through media and
other organizations, bringing citizen science to people. Given the
sheer number of available projects, finding the right project, one that
best suits the user's preferences and capabilities, has become a
major challenge and is essential for keeping volunteers motivated
and active. This paper addresses this challenge by
developing a system for personalizing project recommendations
in the SciStarter ecosystem. We adapted several recommendation
algorithms from the literature based on collaborative filtering and
matrix factorization. The algorithms were trained on historical data
of users’ interactions in SciStarter as well as their contributions to
different projects. The trained algorithms were deployed in
SciStarter in a study involving hundreds of users who were provided
with personalized recommendations for projects they had not
contributed to before. Volunteers were randomly divided into different
cohorts, which varied the recommendation algorithm used
to generate suggested projects. The results show that using the new
recommendation system led people to contribute to new projects
that they had never tried before and led to increased participation
in SciStarter projects when compared to cohorts that were
recommended the most popular projects or did not receive
recommendations. In particular, the cohort of volunteers receiving
recommendations created by an SVD algorithm (matrix
factorization) exhibited the highest levels of contributions to new projects,
when compared to the other cohorts. A follow-up survey conducted
with the SciStarter community confirms that users were satisfied
with the recommendation tool and claimed that the
recommendations matched their personal interests and goals. Based on the
positive results, our recommendation system is now fully integrated
with SciStarter. The research has transformed how SciStarter helps
projects recruit and support participants and better respond to their
needs.
      </p>
      <p>
        Citizen science engages people in scientific research by collecting,
categorizing, transcribing, or analyzing scientific data [
        <xref ref-type="bibr" rid="ref10 ref3 ref4">3, 4, 10</xref>
        ].
These platforms offer thousands of different projects that
advance scientific knowledge all around the world. Through citizen
science, people share and contribute to data monitoring and
collection programs, usually as unpaid
volunteers. Collaboration in citizen science involves scientists and
researchers working with the public. Community-based groups may
generate ideas and engage with scientists for advice, leadership, and
program coordination. Interested volunteers, amateur scientists,
students, and educators may network and promote new ideas to
advance our understanding of the world. Scientists can create a
citizen-science program to capture more, or more widely distributed,
data without additional funding. Citizen-science projects
may include wildlife-monitoring programs, online databases,
visualization and sharing technologies, or other community efforts.
      </p>
<p>For example, the citizen science portal SciStarter (scistarter.com),
which also serves as the platform for our empirical study, includes over
3,000 projects, and recruits volunteers through media and other
organizations (Discover, the Girl Scouts, etc). As of July, 2020, there are
82,014 registered users in SciStarter. Examples of popular projects
on SciStarter include iNaturalist 1 in which users map and share
observations of biodiversity across the globe; CoCoRaHS 2, where
volunteers share daily readings of precipitation; and Stall Catchers 3,
where volunteers identify blood vessels in the brain as flowing or stalled.
Projects can be done either online or at specific physical locations.
Users visit SciStarter in order to discover new projects to participate
in and keep up to date with the community events. Figure 1 shows
the User Interface of SciStarter.</p>
      <p>
        According to a report from the National Academies of Sciences,
Engineering, and Medicine [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], citizen scientists’ motivations are
“strongly affected by personal interests,” and participants who
engage in citizen science over a long period of time “have successive
opportunities to broaden and deepen their involvement.” Thus,
sustained engagement through the use of intelligent recommendations
can improve data quality and scientific outcomes for the projects
and the public.
      </p>
      <p>
Yet, finding the right project, one that matches interests and
capabilities, is like searching for a needle in a haystack [
        <xref ref-type="bibr" rid="ref24 ref5">5, 24</xref>
        ].
Ponciano et al. [
        <xref ref-type="bibr" rid="ref22">22</xref>
] characterized volunteers’ task execution
patterns across projects and showed that volunteers tend to explore
multiple projects in citizen science platforms but perform tasks
regularly in just a few of them. This result is also reflected in users’
participation patterns in SciStarter. Figure 2 shows a histogram of
the number of projects that users contributed to on the site between
2017 and 2019. As the figure shows, the majority of active users
in the SciStarter portal do not contribute to more than a single
project.
1https://scistarter.org/seek-by-inaturalist
2https://scistarter.org/cocorahs-rain-hail-snow-network
3https://scistarter.org/stall-catchers-by-eyesonalz
      </p>
      <p>SciStarter employs a search engine (shown in Figure 3) that
uses topics, activities, location and demographics (quantifiable
fields) to suggest project recommendations. However,
recommending projects based on this tool has not been successful. To begin
with, our analysis shows that about 80% of users do not use the
search tool. Second, for those who do use it, the results often fail to
reflect actual participation patterns. For example,
the CoCoRaHS project and Globe at Night, in which volunteers
measure and submit their night sky brightness observations. But
data shows that people who join CoCoRaHS are more likely to join
Stall Catchers, an indoor, online project to accelerate Alzheimer’s
research.</p>
      <p>
        We address this challenge by using recommendation algorithms
to match individual volunteers with new projects based on the past
history of their interactions on the site [
        <xref ref-type="bibr" rid="ref2 ref7">2, 7</xref>
        ]. Recommendation
systems have been used in other domains, such as e-commerce,
news, and social media [
        <xref ref-type="bibr" rid="ref13 ref8">8, 13</xref>
        ]. However, the nature of interaction
in citizen science is fundamentally different from these domains
in that volunteers are actively encouraged to contribute their time
and effort to solve scientific problems. Compared to clicking on an
advertisement or a product, as is the case for e-commerce and news
sites, considerably more effort is required from a citizen science
volunteer. Our hypothesis was that personalizing recommendations
would increase users’ engagement in the SciStarter portal, as
measured by the number of projects they contribute to following
the recommendations and the extent of their contributions.
      </p>
<p>We attempted to enhance participant engagement with SciStarter
projects by matching users with new projects based on the past history
of their interactions on the site. We adapted four different
recommendation algorithms to the citizen science domain. The input to the
algorithms consists of data representing users’ interactions with
affiliated projects (e.g., joining or contributing to a project) and
users’ interactions on the SciStarter portal (e.g., searching for a
project). The output of each algorithm is a function from a user profile
and past history of interactions on SciStarter to a ranking of 10
projects in order of inferred relevance for the user.</p>
<p>We measured two types of user interactions, which were taken
as the input to the algorithms: (1) interactions with projects: data
generated by users’ activities with projects, e.g., joining
a project, making a contribution, or participating in a
project; and (2) interactions on the SciStarter portal, such as searching
for a project or filling in a form about a project.</p>
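      <p>As an illustration, both interaction types can be folded into the single binary user-project matrix that the algorithms consume. This is a minimal sketch; the event names and record layout are our own, not SciStarter's actual schema.

```python
# Sketch: fold both interaction types into the binary user-project
# matrix consumed by the recommendation algorithms. Event names are
# illustrative, not SciStarter's actual schema.
events = [
    {"user": "u1", "project": "CoCoRaHS", "event": "search"},
    {"user": "u1", "project": "iNaturalist", "event": "contribution"},
    {"user": "u2", "project": "Stall Catchers", "event": "joined"},
    {"user": "u2", "project": "iNaturalist", "event": "contribution"},
]

def build_matrix(events):
    """Return (users, projects, matrix) where matrix[u][p] = 1 iff
    user u interacted with project p in any way (implicit feedback)."""
    users = sorted({e["user"] for e in events})
    projects = sorted({e["project"] for e in events})
    matrix = [[0] * len(projects) for _ in users]
    for e in events:
        matrix[users.index(e["user"])][projects.index(e["project"])] = 1
    return users, projects, matrix

users, projects, matrix = build_matrix(events)
```

Any interaction, whether with a project or on the portal, sets the same binary entry, matching the implicit-feedback setting described above.
      </p>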
<p>We conducted a randomized controlled study, in which hundreds
of registered SciStarter users were assigned recommendations by
algorithms using different approaches to recommend projects. The
first approach personalized projects to participants by using
collaborative filtering algorithms (item-based and user-based) and matrix
factorization (SVD) algorithms. These algorithms were compared to
two non-personalized algorithms: the first recommended
the most popular projects at that point in time, and the second
recommended three projects that were manually determined
by the SciStarter admins and subject to change during the study.
The results show that people receiving the personalized
recommendations were more likely to contribute to new projects that they
had never tried before and participated more often in these projects
when compared to participants who received non-personalized
recommendations or did not receive recommendations. In particular,
the cohort of participants receiving recommendations created by
the SVD algorithm (matrix factorization) exhibited the highest
levels of contributions to new projects, when compared to the other
personalized groups. A follow-up survey conducted with the
SciStarter community confirms that users were satisfied with the
recommendations. Based on the positive results,
our recommendation system is now fully integrated with SciStarter.
This research develops a recommendation system for the citizen
science domain. It is the first study to use AI-based recommendation
algorithms in a large-scale citizen science platform.</p>
    </sec>
    <sec id="sec-2">
      <title>RELATED WORK</title>
<p>This research relates to past work on using AI to increase
participants’ motivation in citizen science as well as work on
applying recommendation systems in real-world settings. We list
relevant work in each of these two areas.</p>
    </sec>
    <sec id="sec-3">
<title>Citizen Science: Motivation and Level of Engagement</title>
      <p>
        Online participation in citizen science projects has become very
common [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. Yet, most of the contributions rely on a very small
proportion of participants [
        <xref ref-type="bibr" rid="ref25">25</xref>
]. In SciStarter, fewer than 10% of
users contribute to more than 10 projects. Similarly, in most citizen
science projects, the majority
of participants carry out only a few tasks. Many studies have
explored the incentives and motivations of participants in order to
increase their engagement. Kragh et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] showed that
participants in citizen science projects are motivated by personal
interest and desire to learn something new, as well as by the
desire to volunteer and contribute to science. Earlier work by Raddick
et al. [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] also showed that participant engagement stems mainly
from interest in the project topic, such as astronomy
and zoology. Yet, when we tested this finding on our collected data, we
found that user interests are diverse and rarely limited to
one major topic of interest. Nov et al. [
        <xref ref-type="bibr" rid="ref21">21</xref>
] explored users’ different
motivations to contribute, separating the question into
quantity of contributions and quality of contributions. They showed
that quantity of contribution is mostly determined by the user’s
interest in the project and by social norms, while quality of
contribution is determined by understanding the importance of the
task and by the user’s reputation. In our work we aimed to increase
only the quantity of contributions, since data about the quality of
contributions was not available to us.
      </p>
      <p>
Significant prior work has sought to increase
participant engagement while taking user motivation
into account. Segal et al. [
        <xref ref-type="bibr" rid="ref29">29</xref>
] developed an intelligent approach
that combines model-based reinforcement learning with off-line
policy evaluation in order to generate intervention policies that
significantly increase users’ contributions. Laut et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
] demonstrated how participants are affected by virtual peers and
showed that participants’ contributions can be enhanced through
the presence of virtual peers.
      </p>
      <p>
        Ponciano et al. [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] characterized volunteers’ task execution
patterns across projects and showed that volunteers tend to explore
multiple projects in citizen science platforms, but they perform
tasks regularly in just a few of them. They also showed that
volunteers recruited from other projects on the platform tend to
become more engaged than those recruited outside the platform. This
finding is a strong incentive to increase user engagement on
SciStarter’s platform rather than on the projects’ sites directly, as we do
in our research.
      </p>
<p>In this research, we attempted to enhance participant
engagement with citizen science projects by recommending projects
that best suit each user’s preferences and capabilities.</p>
    </sec>
    <sec id="sec-4">
      <title>Increasing user engagement with recommendations</title>
      <p>
Similar to our work, other researchers have tried to increase user
engagement and participation through personalized recommendations.
Labarthe et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] built a recommender system for students in
MOOCs that recommends relevant and rich-potential contacts with
other students, based on user profile and activities. They showed
that by recommending this list of contacts, students were much
more likely to persist and engage in MOOCs. Subsequent work
by Dwivedi et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] developed a recommender system that
recommends online courses to students based on their grades in other
subjects. This recommender was based on collaborative filtering
techniques, particularly item-based recommendations. The
paper showed that users who interacted with the
recommendation system were 270% more likely to finish the MOOC
than users who did not interact with the recommendation
system.
      </p>
      <p>
Other studies of user engagement with
recommendation systems have shown that early intervention significantly
increases user engagement. Freyne et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
] showed that users who
received early recommendations in social networks are more likely
to continue returning to the site. They showed a clear difference
in retention rate between the control group, which lost 42% of
its users, and a group that interacted with the recommendations,
which lost only 24% of its users.
      </p>
      <p>
        Wu et al. [
        <xref ref-type="bibr" rid="ref32">32</xref>
], showed that tracking users’ clicks and return
behaviour in news portals increases user engagement
with their recommendation system. They formulated the
optimization of long-term user engagement as a sequential decision-making
problem, where a recommendation is based on both the estimated
immediate user click and the expected clicks resulting from the users’
future return.
      </p>
      <p>
        Lin et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], developed a recommendation system for
crowdsourcing which incorporates negative implicit feedback into a
predictive matrix factorization model. They showed that their models,
which consider negative feedback, produce better
recommendations than the original MF approach with implicit feedback. They
evaluated their findings in an experiment with data from Microsoft’s
internal Universal Human Relevance System and showed that their
models improve the quality of task recommendations. In
our work, we use only positive implicit feedback: user
traffic is low, so significant evidence of negative feedback is
hard to find.
      </p>
      <p>
        Recommendation algorithms are mostly evaluated by their
accuracy. The underlying assumption is that accuracy will increase
user satisfaction and ultimately lead to higher engagement and
retention rate. However, past research has suggested that accuracy
does not necessarily lead to satisfaction. Wu et al. [
        <xref ref-type="bibr" rid="ref31">31</xref>
] investigated
whether popular approaches such as collaborative filtering
and content-based methods have different effects on user
satisfaction. The results suggested that product awareness (the
set of products that the user is initially aware of before using any
recommender system) plays an important role in moderating the
impact of recommenders. In particular, for consumers with a relatively
niche awareness set, content-based systems are likely to
garner more positive responses and higher satisfaction. On
the other hand, users who are more aware of
popular items should be targeted with collaborative filtering
systems instead. Subsequent work by Nguyen et al. [
        <xref ref-type="bibr" rid="ref20">20</xref>
], showed that
individual users’ preferences for the level of diversity, popularity,
and serendipity in recommendation lists cannot be inferred from
their ratings alone. The paper suggested that user satisfaction can
be improved by integrating users’ personality traits, obtained through a
user study, into the process of generating recommendations.
      </p>
    </sec>
    <sec id="sec-5">
      <title>METHODOLOGY</title>
<p>Our goals for the research project were to (1) help users discover
new projects in the SciStarter ecosystem by matching them with
projects that suit their preferences; (2) learn user behavior
in SciStarter and develop a recommendation system that
helps increase the number of projects they contribute to; and (3) measure
users’ satisfaction with the recommendation system.</p>
      <p>
        We adopted several canonical algorithms from the
recommendation systems literature: CF user based [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ], CF item based [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ],
Matrix Factorization [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], Popularity [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These approaches were
chosen as they are all based on analyzing the interactions between
users and items and do not rely on domain knowledge, which is
lacking (such as a project’s location, needed materials, ideal age group,
etc.). Each algorithm receives as input a target user and the
number of recommendations to generate (N). The algorithm returns a
ranking of top N projects in decreasing order of relevance for the
user. We provide additional details about each algorithm below.
2.0.1 User-based Collaborative Filtering. In this algorithm, the
recommendation is based on user similarities [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ]. The ranking of
a project for a target user is computed by comparing users who
interacted with similar projects. We use a KNN algorithm [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] to
find similar users, where the similarity score between user vectors U1
and U2 from the input matrix is calculated with cosine
similarity.
      </p>
      <p>
sim(U1, U2) = (U1 · U2) / (||U1|| ||U2||)
We chose the value of k to be the minimal number such that the
number of new projects in the neighborhood of users similar to the
target user equaled the number of recommendations. In practice, k
was initially set to 100 and increased until this threshold was
met, so that there would always be a sufficient number
of projects to recommend to users.
2.0.2 Item-based Collaborative Filtering. In this algorithm the
recommendation is based on project similarity [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ]. The algorithm
generates recommendations based on the similarity between projects
calculated from people’s interactions with these projects. The similarity
score between project vectors P1 and P2 from the input
matrix is calculated with cosine similarity.
      </p>
<p>sim(P1, P2) = (P1 · P2) / (||P1|| ||P2||)</p>
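      <p>Both memory-based variants reduce to the cosine measure over rows (users) or columns (projects) of the binary matrix. The following is an illustrative pure-Python sketch of the cosine score and the user-based neighborhood ranking, not the deployed code:

```python
import math

def cosine(v1, v2):
    """Cosine similarity between two binary interaction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def user_based_recommend(matrix, target, k, n):
    """Rank projects the target user has not interacted with by the
    summed similarity of the k most similar users who did."""
    sims = sorted(
        ((cosine(matrix[target], row), u)
         for u, row in enumerate(matrix) if u != target),
        reverse=True)[:k]
    scores = {}
    for sim, u in sims:
        for p, val in enumerate(matrix[u]):
            if val and not matrix[target][p]:
                scores[p] = scores.get(p, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:n]

# Toy matrix: rows = users, columns = projects (1 = interacted).
m = [
    [1, 1, 0, 0],  # user 0
    [1, 1, 1, 0],  # user 1 (similar to user 0, also did project 2)
    [0, 0, 0, 1],  # user 2 (dissimilar)
]
ranked = user_based_recommend(m, target=0, k=2, n=3)  # project 2 ranks first
```

The item-based variant is symmetric: transpose the matrix and score unseen projects by their cosine similarity to the projects the user already interacted with.
      </p>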
      <p>
The algorithm then recommends the top-N most similar
projects to the set of projects the user has interacted with in the
past.
2.0.3 Matrix Factorization - SVD. The Matrix factorization
algorithm (SVD) directly predicts the relevance of a new project to a
target user by modeling the user-project relationship [
        <xref ref-type="bibr" rid="ref14 ref27">14, 27</xref>
        ]. This
model-based algorithm (as opposed to the two memory based
algorithms presented earlier) was chosen since it is one of the leading
recommendation system algorithms [
        <xref ref-type="bibr" rid="ref11 ref14 ref26">11, 14, 26</xref>
        ]. SVD uses a matrix
where the users are rows, projects are columns, and the entries
are values that represent the relevance of the projects to the users.
This users-projects matrix is often very sparse and has many
missing values, since users engage with a very small portion of all the
available items.
      </p>
      <p>The algorithm estimates the relevance of a target project for a
user by maintaining a user model and a project model that include
hidden variables (latent factors) that can affect how users choose
items. These variables have no semantics; they are simply numbers
in a matrix. In reality, aspects like gender, culture, and age may
affect the relevance, but we do not have access to them.</p>
<p>The singular value decomposition (SVD) of any matrix R is a
factorization of the form R = UΣVᵀ. This algorithm is used in
recommendation systems to find the product of the three
matrices U, Σ, Vᵀ that estimates the original matrix R and hence
to predict the missing values in the matrix. As mentioned above,
the matrix R includes missing values, as users did not participate
in all projects. We estimate the missing values, which reflect how
satisfied the user would be with an unseen project. In the setting
of a recommendation system, U is the left singular matrix,
representing the relationship between users and latent factors; Σ
is a rectangular diagonal matrix with non-negative real numbers
on the diagonal; and Vᵀ is the right singular matrix, indicating
the similarity between items and latent factors. SVD reduces the
dimension of the utility matrix R by extracting its latent factors. It
maps each user and item into a latent space with r dimensions, and
with this we can better understand the relationship between users
and projects and compare their vector representations.
Let Rˆ be the estimation of the original matrix R. Given Rˆ, which
includes predictions for all the missing values in R, we can rank
each project for a user by its score in Rˆ. The projects with the
highest ranking are then recommended to the user. In our setting,
as in the other algorithms described before, the input matrix R is binary.</p>
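      <p>The paper does not specify its SVD implementation. The following is a minimal sketch in the spirit of matrix-factorization recommenders, learning U and V by gradient descent on the observed entries (often called "Funk SVD"); the matrix sizes and hyperparameters are illustrative:

```python
import random

def factorize(ratings, n_users, n_items, r=2, lr=0.05, reg=0.02, epochs=500):
    """Learn latent factors U (users) and V (projects) so that the dot
    product U[u] . V[i] approximates each observed matrix entry."""
    random.seed(0)  # deterministic for the example
    U = [[random.uniform(-0.1, 0.1) for _ in range(r)] for _ in range(n_users)]
    V = [[random.uniform(-0.1, 0.1) for _ in range(r)] for _ in range(n_items)]
    for _ in range(epochs):
        for (u, i), val in ratings.items():
            err = val - sum(U[u][f] * V[i][f] for f in range(r))
            for f in range(r):
                uf, vf = U[u][f], V[i][f]
                U[u][f] += lr * (err * vf - reg * uf)
                V[i][f] += lr * (err * uf - reg * vf)
    return U, V

def predict(U, V, u, i):
    """Estimated relevance of project i for user u (an entry of R-hat)."""
    return sum(a * b for a, b in zip(U[u], V[i]))

# Two blocks of users/projects; the entry (user 1, project 1) is held out
# as a missing value to be predicted.
observed = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (2, 2): 1, (2, 3): 1,
            (3, 2): 1, (3, 3): 1, (0, 2): 0, (0, 3): 0, (1, 2): 0,
            (1, 3): 0, (2, 0): 0, (2, 1): 0, (3, 0): 0, (3, 1): 0}
U, V = factorize(observed, n_users=4, n_items=4)
# User 1 behaves like user 0, so the held-out project 1 should outrank
# the projects from the other block when ranking for user 1.
```

Ranking the unseen entries of the reconstructed matrix by their predicted score and taking the top N yields the recommendation list described above.
      </p>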
    </sec>
    <sec id="sec-6">
      <title>RESULTS</title>
<p>The first part of the study compares the performance of the different
algorithms on historical SciStarter data. The second part of the
study deploys the algorithms in the wild, actively assigning
recommendations to users via the different algorithms.</p>
<p>Of the 3,000 existing projects SciStarter offers, 153 are
affiliate projects. An affiliate project is one that uses a specific API
to report back to SciStarter each time a logged-in SciStarter user
has contributed or analyzed data on that project’s website or
app. As data on contributions and participation existed only for the
affiliate projects, we used only these projects in the study.</p>
    </sec>
    <sec id="sec-7">
<title>Offline Study</title>
      <p>
The training set for all algorithms consisted of data collected
between January 2012 and September 2019. It included 6,353 users who
contributed to 127 different projects. For the collaborative filtering
and SVD algorithms, we restricted the training set to users who performed
at least two activities during that time frame, whether contributing
to a project or interacting on the SciStarter portal. We
chronologically split the data into train and test sets such that the 10%
latest interactions from each user were selected for the test set and
the remaining 90% were used for the train set.
As a baseline, we also considered an algorithm that recommends
projects in decreasing order of popularity, measured by
the number of users who contribute to the project [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
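      <p>The per-user chronological split described above can be sketched as follows; the interaction tuples and field order are illustrative, not the actual data layout:

```python
from collections import defaultdict

def chronological_split(interactions, test_frac=0.1):
    """Per-user chronological split: the latest test_frac of each user's
    interactions go to the test set, the rest to the train set.
    interactions: iterable of (user, project, timestamp) tuples."""
    by_user = defaultdict(list)
    for user, project, ts in interactions:
        by_user[user].append((ts, project))
    train, test = [], []
    for user, events in by_user.items():
        events.sort()  # oldest first
        cut = max(1, round(len(events) * test_frac))  # at least one test item
        for ts, project in events[:-cut]:
            train.append((user, project, ts))
        for ts, project in events[-cut:]:
            test.append((user, project, ts))
    return train, test

# One user with ten timestamped interactions -> 9 train, 1 test (the latest).
data = [("u1", "p%d" % i, i) for i in range(10)]
train, test = chronological_split(data)
```

Splitting within each user, rather than globally by time, guarantees that every user appears in both sets, which matters given the two-activity minimum applied to the training users.
      </p>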
<p>We evaluated the top-N recommendation results using precision
and recall metrics with a varying number of recommendations.</p>
<p>Figure 4 shows the precision and recall curves for the four
examined algorithms. As can be seen from the figure, user-based
collaborative filtering and SVD are the best algorithms, and their
performance is higher than that of Popularity and item-based collaborative
filtering. The Popularity recommendation algorithm produced the
lowest performance.</p>
      <p>The second part of the study was an online experiment. Users
who logged on to SciStarter starting on December 2nd, 2019 were
randomly assigned to one of 5 cohorts, each providing recommendations
based on a different algorithm: (1) Item-based Collaborative
Filtering, (2) User-based Collaborative Filtering, (3) Matrix
Factorization, (4) Most popular projects, (5) Promoted projects. Promoted
projects were manually determined by SciStarter and often aligned
with social initiatives and current events. Among these projects are
GLOBE Observer Clouds 4, Stream Selfie 5 and TreeSnap 6. Another
example is FluNearYou, in which individuals report flu symptoms
online; it was one of the promoted projects during the COVID-19
outbreak. These projects are changed periodically by the SciStarter
administrators.</p>
      <p>The recommendation tool was active on SciStarter for 3 months.
Users who logged on during that time were randomly divided into
cohorts, each receiving recommendations from a different
algorithm. Each cohort had 42 or 43 users. The recommendations were
embedded in the user’s dashboard in decreasing order of relevance,
in sets of three, from left to right. Users could scroll to reveal more
projects in decreasing or increasing order of relevance. Figure 5
shows the top three recommended projects for a target user.</p>
      <p>All registered users in SciStarter received notification via email
about the study, stating that the “new SciStarter AI feature provides
personalized project recommendations based on your activity and
interests.” A link was provided to a blog post containing more detailed
explanations of the recommendation algorithms and their role in the study,
emphasizing that “all data collected and analyzed during this experiment
on SciStarter will be anonymized.” Users were also allowed to opt
4https://scistarter.org/globe-observer-clouds
5https://scistarter.org/stream-selfie
6https://scistarter.org/treesnap
out of receiving recommendations at any point by clicking the
link “opt out from these recommendations” in the recommendation
panel. In practice, none of the participants selected the opt-out
option at any point in time.</p>
<p>Figure 6 (top) shows the average click-through rate (defined as the
ratio of recommended projects that the users accessed) and Figure 6
(bottom) shows the average hit rate (defined as the percentage of
instances in which users accessed at least one project that was
recommended to them). As the figure shows, both measures
exhibit a consistent trend in which the user-based collaborative
algorithm achieved the best performance, while the baseline method
achieved the worst. Despite the trend, the differences
between conditions were not statistically significant at the p &lt; 0.05
level. We attribute this to the fact that we measured clicks on
recommended projects rather than actual contributions, which are
the most important aspect for citizen science.</p>
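      <p>The two engagement measures can be computed as in the following sketch; the per-user recommendation and click lists are illustrative:

```python
def click_through_rate(recommended, clicked):
    """Fraction of a user's recommended projects that the user accessed."""
    return len(set(recommended) & set(clicked)) / len(recommended)

def hit_rate(cohort):
    """Share of users who accessed at least one recommended project.
    cohort: list of (recommended, clicked) pairs, one per user."""
    hits = sum(1 for rec, cl in cohort if set(rec) & set(cl))
    return hits / len(cohort)

# Three users; the second accessed none of their recommendations.
cohort = [(["a", "b", "c"], ["b"]),
          (["d", "e", "f"], []),
          (["g", "h", "i"], ["g", "i"])]
```

Click-through rate is averaged over users within a cohort, while hit rate is already a cohort-level proportion, which is why the two plots in Figure 6 can diverge in magnitude while following the same trend.
      </p>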
      <p>
To address this gap we defined two new measures that consider
the contributions made by participants to projects, capturing the
system utility identified by Gunawardana and Shani [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
The measures include the average number of activities that users
carried out in recommended projects (RecE), and the average
number of activities that users carried out in non-recommended projects
(NoRecE). Figure 7 compares the diferent algorithms according
to these two measures. The results show that users assigned to
the intelligent recommendation conditions performed significantly
more activities in recommended projects than those assigned to
the Popularity and Baseline conditions. Also, users in the SVD
algorithm performed significantly less activities in non-recommended
projects than the Popularity and Baseline conditions. These results
were statistically significant according to Mann-Whitney tests (see
Appendix for details).
      </p>
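<p>The Mann-Whitney tests used here can be reproduced with any standard statistics package; as an illustration, a self-contained version using the normal approximation (our simplification: it omits the tie correction, which is adequate for moderately sized samples) is:</p>

```python
import math
from typing import Sequence, Tuple

def mann_whitney_u(x: Sequence[float], y: Sequence[float]) -> Tuple[float, float]:
    """Two-sided Mann-Whitney U test via the normal approximation.
    Returns (U, approximate p-value); no tie correction is applied."""
    n1, n2 = len(x), len(y)
    pooled = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    # Assign 1-based ranks, averaging over tied values.
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2.0   # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg_rank
        i = j
    rank_sum_x = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    u1 = rank_sum_x - n1 * (n1 + 1) / 2.0
    u = min(u1, n1 * n2 - u1)          # smaller of the two U statistics
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma               # z <= 0 since u <= mu
    p = 2.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return u, min(p, 1.0)

# Illustrative activity counts for two hypothetical conditions.
u_stat, p_value = mann_whitney_u([5, 7, 8, 9], [1, 2, 3, 6])
print(u_stat, p_value)
```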
      <p>Lastly, we measured the average number of sessions for users in
the different conditions, where a session is defined as a continuous
stretch of time in which the user is active in a project. Figure 8
shows the average number of sessions for users in the different
cohorts, alongside the number of sessions in the historical data
used to train the algorithms, in which no recommendations were
provided. The results show that users receiving recommendations
from the personalized algorithms performed more sessions than the
number of sessions in the historical data. These results are statistically
significant. Although there is a clear trend that users in the SVD
condition achieved the highest number of sessions, these results
were not significant at the p &lt; 0.05 level.</p>
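<p>Counting sessions of this kind from raw activity logs reduces to splitting a user's timestamps on inactivity gaps. The 30-minute gap threshold below is our illustrative choice, not a value taken from the study:</p>

```python
from datetime import datetime, timedelta
from typing import List

def count_sessions(timestamps: List[datetime],
                   gap: timedelta = timedelta(minutes=30)) -> int:
    """Number of continuous activity stretches; a new session starts
    whenever two consecutive events are more than `gap` apart."""
    if not timestamps:
        return 0
    ts = sorted(timestamps)
    sessions = 1
    for prev, cur in zip(ts, ts[1:]):
        if cur - prev > gap:
            sessions += 1
    return sessions

log = [datetime(2021, 5, 1, 10, 0), datetime(2021, 5, 1, 10, 10),
       datetime(2021, 5, 1, 14, 0)]
print(count_sessions(log))  # prints 2: a morning burst plus an afternoon visit
```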
      <p>
        To explain SVD’s good performance in the online
study, we note first that SVD is considered a leading algorithm
in the domain of recommendation systems [
        <xref ref-type="bibr" rid="ref11 ref26">11, 26</xref>
        ]. Second, in our
setting SVD tended to generate recommendations that participants
had not heard about before, which the survey reveals to be more
interesting to them. One participant remarked: "I did not click on
either project because I have looked at both projects (several times)
previously"; another wrote: "I am more interested in projects I didn’t
know exists before".
      </p>
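<p>To illustrate why an SVD-style latent-factor model surfaces projects a user has not yet seen, consider a toy interaction matrix factorized by stochastic gradient descent, a common training scheme for such models when most cells are unobserved. This sketch is ours, not the deployed implementation:</p>

```python
import random

def factorize(observed, n_users, n_items, k=2, lr=0.05, reg=0.02, epochs=200):
    """Latent-factor ("SVD-style") model fit by SGD on observed
    (user, item, value) triples; unobserved cells are simply skipped."""
    random.seed(0)
    P = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in observed:
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

def predict(P, Q, u, i):
    """Predicted affinity of user u for item i (dot product of factors)."""
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))

# 3 users x 3 projects; all observed interactions have value 1.
# User 0 never saw project 2, but users with similar histories engaged
# with it, so its predicted score comes out high.
observed = [(0, 0, 1), (0, 1, 1),
            (1, 0, 1), (1, 1, 1), (1, 2, 1),
            (2, 1, 1), (2, 2, 1)]
P, Q = factorize(observed, 3, 3)
print(round(predict(P, Q, 0, 2), 2))  # score for the unseen project
```

<p>Because user 0's factors are trained to match those of users with overlapping histories, the unseen project inherits a high score, which is exactly the mechanism behind SVD recommending projects participants had not heard about before.</p>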
      <p>Lastly, we note the obstacles we encountered when carrying
out the study. The first obstacle was the small
number of relevant projects that could be recommended. Out of
the almost 3,000 projects that SciStarter offers, we restricted ourselves
to about 120 affiliate projects that actively provide
data on users’ interactions. Another obstacle was that we were
constrained to the subset of users who log on to SciStarter and use
it as a portal for contributing to a project, rather than accessing
the project directly. Out of the 65,000 registered users of SciStarter,
only a small percentage are logged in to both SciStarter and an
affiliate project. As a result, relatively few users received
recommendations. In addition, some of SciStarter’s projects are
location-specific and can only be done by users in a particular physical
location (e.g., collecting a water sample from a lake located
in a particular city). We therefore kept track of users’ locations and
made the recommendation system location-aware, so that it only
recommends projects that users are actually able to participate in.</p>
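<p>The location restriction amounts to a filtering step applied before ranking. A minimal sketch, with an illustrative distance threshold and field names of our own choosing:</p>

```python
import math
from typing import Dict, List, Optional, Tuple

def haversine_km(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def eligible_projects(user_loc: Tuple[float, float],
                      projects: List[Dict],
                      max_km: float = 50.0) -> List[str]:
    """Keep globally doable projects plus location-specific ones in reach."""
    keep = []
    for p in projects:
        loc: Optional[Tuple[float, float]] = p.get("location")
        if loc is None or haversine_km(user_loc, loc) <= max_km:
            keep.append(p["name"])
    return keep

projects = [
    {"name": "globe-observer-clouds", "location": None},     # doable anywhere
    {"name": "lake-sampling", "location": (40.78, -73.97)},  # NYC-area lake
]
print(eligible_projects((34.05, -118.24), projects))  # LA user: global only
```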
    </sec>
    <sec id="sec-8">
      <title>User Study</title>
      <p>To learn users’ opinions of the recommendations and their level of
satisfaction, we conducted a survey of SciStarter’s users. The survey
was sent to the entire SciStarter community; 138 users completed it,
and each was asked about the recommendations presented by the
algorithm they had been assigned to. The survey included questions about
users’ overall satisfaction with the recommendation tool as well
as questions about their patterns of behavior before and after the
recommendations. The majority of users (75%) were very satisfied
with the recommendation tool and stated that the
recommendations matched their personal interests and goals. A majority of
users (54%) reported that they had clicked on the recommendations
and visited the project’s site, while only 8% of users neither clicked a
recommendation nor visited a project site. Interestingly, users who
were not previously familiar with the recommended projects clicked
more on the recommendations, as did users who had previously
contributed to a project.</p>
      <p>
        The users who did not click on the recommendations fall into three
main themes: (1) users who did not have the time at the moment or
intended to visit the project later; (2) users who felt the
recommendations did not suit their skills or materials: "Seemed
out of my league", "I didn’t have the materials to participate" (this
behavior was also discussed in [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ], where it was named "classification
anxiety"); and (3) users who felt the recommendations did not suit
their interests: "No interest in stall catchers", "The photos
and title didn’t perfectly match what I am looking for".
      </p>
      <p>The survey provided evidence of the positive impact of the
recommendation system in SciStarter, as illustrated by comments
such as: “I am very impressed by the new Artificial Intelligence
feature from SciStarter! Your AI feature shows me example projects
that I didn’t know before exist" and “I like how personalized
recommendations are made for citizen science users".</p>
    </sec>
    <sec id="sec-9">
      <title>CONCLUSION AND FUTURE WORK</title>
      <p>This work reports on the use of recommendation algorithms to
increase engagement of volunteers in citizen science, in which
volunteers collaborate with researchers to perform scientific tasks.
These recommendation algorithms were deployed in SciStarter, a
portal with thousands of citizen science projects, and were
evaluated in an online study involving hundreds of users who were
informed that they were participating in a study involving AI-based
recommendation of new projects. We trained different recommendation
algorithms using a combination of data including users’ behavior in
SciStarter as well as their contributions to specific projects. Our
results show that the new recommendation system led people
to contribute to new projects that they had never tried before and
increased participation in SciStarter projects when compared
to a baseline cohort that did not receive recommendations.
The outcome of this research project is the AI-powered
Recommendation Widget, which has been fully deployed in SciStarter. This
project has transformed how SciStarter helps projects recruit and
support participants and respond to their needs. It was so
successful in increasing engagement that SciStarter has decided to
make the widget a permanent feature of its site. This will help
support deeper, sustained engagement, increase the collective
intelligence capacity of projects, and generate improved scientific,
learning, and other outcomes. The results of this research have
been featured on DiscoverMagazine.com7. While we observed
significant engagement with the recommendation tool, adding
explanations to the recommendations may further increase the
system’s reliability and users’ satisfaction with it.
Moreover, we plan to extend the recommendation system to include
content-based algorithms and to test their performance against
the existing algorithms. We believe that integrating content in the
citizen science domain can be very beneficial: even though users
tend to participate in a variety of different projects, we want to
capture more intrinsic characteristics of the projects, such as
the type of task a user has to perform or the required effort.
7https://www.discovermagazine.com/technology/ai-powered-smart-projectrecommendations-on-scistarter</p>
    </sec>
    <sec id="sec-10">
      <title>APPENDIX</title>
    </sec>
    <sec id="sec-11">
      <title>Significance tests - number of activities</title>
      <p>A Mann-Whitney test was conducted to compare each pair of
conditions in the online experiment. Table 1 presents the results
of the pairwise tests for the RecE and NoRecE measures that were
significant.</p>
      <sec id="sec-11-1">
        <title>Condition1</title>
        <p>SVD</p>
      </sec>
      <sec id="sec-11-2">
        <title>Popularity</title>
      </sec>
      <sec id="sec-11-3">
        <title>Baseline</title>
      </sec>
      <sec id="sec-11-4">
        <title>Popularity</title>
      </sec>
      <sec id="sec-11-5">
        <title>Baseline</title>
      </sec>
      <sec id="sec-11-6">
        <title>Popularity</title>
      </sec>
      <sec id="sec-11-7">
        <title>Baseline</title>
      </sec>
      <sec id="sec-11-8">
        <title>Baseline</title>
        <p>U
SVD</p>
      </sec>
      <sec id="sec-11-9">
        <title>Popularity</title>
      </sec>
      <sec id="sec-11-10">
        <title>Baseline</title>
      </sec>
      <sec id="sec-11-11">
        <title>Condition2</title>
      </sec>
      <sec id="sec-11-12">
        <title>Past-Data</title>
      </sec>
      <sec id="sec-11-13">
        <title>Past-Data</title>
      </sec>
      <sec id="sec-11-14">
        <title>Past-Data</title>
      </sec>
      <sec id="sec-11-15">
        <title>Past-Data</title>
      </sec>
      <sec id="sec-11-16">
        <title>Past-Data</title>
        <p>U</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Hyung Jun</given-names>
            <surname>Ahn</surname>
          </string-name>
          .
          <year>2006</year>
          .
          <article-title>Utilizing popularity characteristics for product recommendation</article-title>
          .
          <source>International Journal of Electronic Commerce</source>
          <volume>11</volume>
          ,
          <issue>2</issue>
          (
          <year>2006</year>
          ),
          <fpage>59</fpage>
          -
          <lpage>80</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Xavier</given-names>
            <surname>Amatriain</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Big &amp; personal: data and models behind netflix recommendations</article-title>
          .
          <source>In Proceedings of the 2nd international workshop on big data</source>
          ,
          <article-title>streams and heterogeneous source Mining: Algorithms, systems, programming models and applications</article-title>
          .
          <source>ACM</source>
          , 1-
          <fpage>6</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Rick</given-names>
            <surname>Bonney</surname>
          </string-name>
          , Caren B Cooper, Janis Dickinson, Steve Kelling, Tina Phillips, Kenneth V Rosenberg, and
          <string-name>
            <given-names>Jennifer</given-names>
            <surname>Shirk</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Citizen science: a developing tool for expanding science knowledge and scientific literacy</article-title>
          .
          <source>BioScience</source>
          <volume>59</volume>
          ,
          <issue>11</issue>
          (
          <year>2009</year>
          ),
          <fpage>977</fpage>
          -
          <lpage>984</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Dominique</given-names>
            <surname>Brossard</surname>
          </string-name>
          , Bruce Lewenstein, and
          <string-name>
            <given-names>Rick</given-names>
            <surname>Bonney</surname>
          </string-name>
          .
          <year>2005</year>
          .
          <article-title>Scientific knowledge and attitude change: The impact of a citizen science project</article-title>
          .
          <source>International Journal of Science Education</source>
          <volume>27</volume>
          ,
          <issue>9</issue>
          (
          <year>2005</year>
          ),
          <fpage>1099</fpage>
          -
          <lpage>1121</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Hillary K</given-names>
            <surname>Burgess</surname>
          </string-name>
          , LB DeBey, HE Froehlich, Natalie Schmidt, Elli J Theobald, Ailene K Ettinger,
          <string-name>
            <given-names>Janneke</given-names>
            <surname>HilleRisLambers</surname>
          </string-name>
          , Joshua Tewksbury, and Julia K Parrish.
          <year>2017</year>
          .
          <article-title>The science of citizen science: Exploring barriers to use as a primary research tool</article-title>
          .
          <source>Biological Conservation</source>
          <volume>208</volume>
          (
          <year>2017</year>
          ),
          <fpage>113</fpage>
          -
          <lpage>120</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Sahibsingh A</given-names>
            <surname>Dudani</surname>
          </string-name>
          .
          <year>1976</year>
          .
          <article-title>The distance-weighted k-nearest-neighbor rule</article-title>
          .
          <source>IEEE Transactions on Systems, Man, and Cybernetics</source>
          <volume>4</volume>
          (
          <year>1976</year>
          ),
          <fpage>325</fpage>
          -
          <lpage>327</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Surabhi</given-names>
            <surname>Dwivedi</surname>
          </string-name>
          and
          <string-name>
            <given-names>VS Kumari</given-names>
            <surname>Roshni</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Recommender system for big data in education</article-title>
          .
          <source>In 2017 5th National Conference on E-Learning &amp; E-Learning Technologies (ELELTECH)</source>
          .
          <source>IEEE</source>
          , 1-
          <fpage>4</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Daniel M</given-names>
            <surname>Fleder</surname>
          </string-name>
          and
          <string-name>
            <given-names>Kartik</given-names>
            <surname>Hosanagar</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>Recommender systems and their impact on sales diversity</article-title>
          .
          <source>In Proceedings of the 8th ACM conference on Electronic commerce. ACM</source>
          ,
          <fpage>192</fpage>
          -
          <lpage>199</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Jill</given-names>
            <surname>Freyne</surname>
          </string-name>
          , Michal Jacovi, Ido Guy, and
          <string-name>
            <given-names>Werner</given-names>
            <surname>Geyer</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Increasing engagement through early recommender intervention</article-title>
          .
          <source>In Proceedings of the third ACM conference on Recommender systems. ACM</source>
          ,
          <fpage>85</fpage>
          -
          <lpage>92</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Cary</given-names>
            <surname>Funk</surname>
          </string-name>
          , Jeffrey Gottfried, and Amy Mitchell.
          <year>2017</year>
          .
          <article-title>Science news and information today</article-title>
          . Pew Research Center (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Stephen</given-names>
            <surname>Gower</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Netflix prize and SVD</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Asela</given-names>
            <surname>Gunawardana</surname>
          </string-name>
          and
          <string-name>
            <given-names>Guy</given-names>
            <surname>Shani</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>A survey of accuracy evaluation metrics of recommendation tasks</article-title>
          .
          <source>Journal of Machine Learning Research</source>
          <volume>10</volume>
          ,
          <issue>12</issue>
          (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J</given-names>
            <surname>Itmazi</surname>
          </string-name>
          and
          <string-name>
            <given-names>M</given-names>
            <surname>Gea</surname>
          </string-name>
          .
          <year>2006</year>
          .
          <article-title>The recommendation systems: Types, domains and the ability usage in learning management system</article-title>
          .
          <source>In Proceedings of the International Arab Conference on Information Technology (ACIT'</source>
          <year>2006</year>
          ), Yarmouk University, Jordan.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Yehuda</given-names>
            <surname>Koren</surname>
          </string-name>
          , Robert Bell, and
          <string-name>
            <given-names>Chris</given-names>
            <surname>Volinsky</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Matrix factorization techniques for recommender systems</article-title>
          .
          <source>Computer 42</source>
          ,
          <issue>8</issue>
          (
          <year>2009</year>
          ),
          <fpage>30</fpage>
          -
          <lpage>37</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Gitte</given-names>
            <surname>Kragh</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>The motivations of volunteers in citizen science</article-title>
          .
          <source>environmental SCIENTIST 25</source>
          ,
          <issue>2</issue>
          (
          <year>2016</year>
          ),
          <fpage>32</fpage>
          -
          <lpage>35</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Hugues</given-names>
            <surname>Labarthe</surname>
          </string-name>
          , François Bouchet, Rémi Bachelet, and
          <string-name>
            <given-names>Kalina</given-names>
            <surname>Yacef</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Does a Peer Recommender Foster Students' Engagement in MOOCs?</article-title>
          .
          <source>International Educational Data Mining Society</source>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Jeffrey</given-names>
            <surname>Laut</surname>
          </string-name>
          , Francesco Cappa, Oded Nov, and
          <string-name>
            <given-names>Maurizio</given-names>
            <surname>Porfiri</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Increasing citizen science contribution using a virtual peer</article-title>
          .
          <source>Journal of the Association for Information Science and Technology 68</source>
          ,
          <issue>3</issue>
          (
          <year>2017</year>
          ),
          <fpage>583</fpage>
          -
          <lpage>593</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Christopher H</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ece</given-names>
            <surname>Kamar</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Eric</given-names>
            <surname>Horvitz</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Signals in the silence: Models of implicit feedback in a recommendation system for crowdsourcing</article-title>
          .
          <source>In Twenty-Eighth AAAI Conference on Artificial Intelligence .</source>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19] National Academies of Sciences, Engineering, and Medicine, et al.
          <year>2018</year>
          .
          <article-title>Learning through citizen science: enhancing opportunities by design</article-title>
          .
          <source>National Academies Press</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Tien T</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F Maxwell</given-names>
            <surname>Harper</surname>
          </string-name>
          , Loren Terveen, and Joseph A Konstan
          .
          <year>2018</year>
          .
          <article-title>User personality and user satisfaction with recommender systems</article-title>
          .
          <source>Information Systems Frontiers</source>
          <volume>20</volume>
          ,
          <issue>6</issue>
          (
          <year>2018</year>
          ),
          <fpage>1173</fpage>
          -
          <lpage>1189</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Oded</given-names>
            <surname>Nov</surname>
          </string-name>
          , Ofer Arazy, and
          <string-name>
            <given-names>David</given-names>
            <surname>Anderson</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Scientists@ Home: what drives the quantity and quality of online citizen science participation</article-title>
          ?
          <source>PloS one 9</source>
          ,
          <issue>4</issue>
          (
          <year>2014</year>
          ),
          <year>e90375</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Lesandro</given-names>
            <surname>Ponciano</surname>
          </string-name>
          and Thiago Emmanuel Pereira.
          <year>2019</year>
          .
          <article-title>Characterising volunteers' task execution patterns across projects on multi-project citizen science platforms</article-title>
          .
          <source>In Proceedings of the 18th Brazilian Symposium on Human Factors in Computing Systems. ACM</source>
          ,
          <volume>16</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>M Jordan</given-names>
            <surname>Raddick</surname>
          </string-name>
          , Georgia Bracey, Pamela L Gay, Chris J Lintott, Phil Murray, Kevin Schawinski, Alexander S Szalay, and
          <string-name>
            <given-names>Jan</given-names>
            <surname>Vandenberg</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Galaxy zoo: Exploring the motivations of citizen science volunteers</article-title>
          .
          <source>arXiv preprint arXiv:0909.2925</source>
          (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>Francesco</given-names>
            <surname>Ricci</surname>
          </string-name>
          , Lior Rokach, and
          <string-name>
            <given-names>Bracha</given-names>
            <surname>Shapira</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Recommender systems: introduction and challenges</article-title>
          .
          <source>In Recommender systems handbook. Springer</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>34</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>Dana</given-names>
            <surname>Rotman</surname>
          </string-name>
          , Jenny Preece, Jen Hammock, Kezee Procita, Derek Hansen, Cynthia Parr,
          <string-name>
            <given-names>Darcy</given-names>
            <surname>Lewis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>David</given-names>
            <surname>Jacobs</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Dynamic changes in motivation in collaborative citizen-science projects</article-title>
          .
          <source>In Proceedings of the ACM 2012 conference on computer supported cooperative work</source>
          .
          <fpage>217</fpage>
          -
          <lpage>226</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>Rowayda A</given-names>
            <surname>Sadek</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>SVD based image processing applications: state of the art, contributions and research challenges</article-title>
          .
          <source>arXiv preprint arXiv:1211.7102</source>
          (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Badrul</surname>
            <given-names>Sarwar</given-names>
          </string-name>
          , George Karypis, Joseph Konstan, and
          <string-name>
            <given-names>John</given-names>
            <surname>Riedl</surname>
          </string-name>
          .
          <year>2002</year>
          .
          <article-title>Incremental singular value decomposition algorithms for highly scalable recommender systems</article-title>
          .
          <source>In Fifth international conference on computer and information science</source>
          , Vol.
          <volume>1</volume>
          . Citeseer.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>J Ben</given-names>
            <surname>Schafer</surname>
          </string-name>
          , Dan Frankowski, Jon Herlocker, and
          <string-name>
            <given-names>Shilad</given-names>
            <surname>Sen</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>Collaborative filtering recommender systems</article-title>
          .
          <source>In The adaptive web</source>
          . Springer,
          <fpage>291</fpage>
          -
          <lpage>324</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>Avi</given-names>
            <surname>Segal</surname>
          </string-name>
          , Kobi Gal, Ece Kamar, Eric Horvitz, and
          <string-name>
            <given-names>Grant</given-names>
            <surname>Miller</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Optimizing Interventions via Offline Policy Evaluation: Studies in Citizen Science</article-title>
          .
          <source>In Thirty-Second AAAI Conference on Artificial Intelligence.</source>
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>Avi</given-names>
            <surname>Segal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ya'akov</given-names>
            <surname>Gal</surname>
          </string-name>
          , Robert J Simpson, Victoria Homsy, Mark Hartswood, Kevin R Page, and
          <string-name>
            <given-names>Marina</given-names>
            <surname>Jirotka</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Improving productivity in citizen science through controlled intervention</article-title>
          .
          <source>In Proceedings of the 24th International Conference on World Wide Web</source>
          .
          <fpage>331</fpage>
          -
          <lpage>337</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>Ling-Ling</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Yuh-Jzer</given-names>
            <surname>Joung</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Jonglin</given-names>
            <surname>Lee</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Recommendation systems and consumer satisfaction online: moderating effects of consumer product awareness</article-title>
          .
          <source>In 2013 46th Hawaii International Conference on System Sciences. IEEE</source>
          ,
          <fpage>2753</fpage>
          -
          <lpage>2762</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>Qingyun</given-names>
            <surname>Wu</surname>
          </string-name>
          , Hongning Wang,
          <string-name>
            <given-names>Liangjie</given-names>
            <surname>Hong</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Yue</given-names>
            <surname>Shi</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Returning is believing: Optimizing long-term user engagement in recommender systems</article-title>
          .
          <source>In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. ACM</source>
          ,
          <fpage>1927</fpage>
          -
          <lpage>1936</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>