<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Model Threshold Optimization for Segmented Job-Jobseeker Recommendation System</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yichao Jin</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anirudh Alampally</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dheeraj Toshniwal</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhiming Xu</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ankush Girdhar</string-name>
        </contrib>
        <aff>Indeed.com, {jinyichao, aalampally, dtoshniwal, ankush}@indeed.com</aff>
      </contrib-group>
      <abstract>
        <p>Recently, job-jobseeker recommendation systems have played an important role in helping people get more timely and suitable jobs in the domain of HR technology. Most existing recommender systems propose a unified model to serve all the jobs and jobseekers from different backgrounds, while very limited work, if any, has paid attention to the possible performance gap among different segments. In this work, we use occupation data to define the job segments, and study the segment-level performance of an existing recommendation system within our organization. We then try to identify the possible causes, and make multiple attempts to deal with the problem. Finally, we adopt the most feasible approach and conduct per-segment model threshold optimization. In particular, we formulate a constrained optimization problem, and propose an efficient algorithm to speed up the threshold optimization process. Our prototype implementation enables online A/B tests. The experimental results from real online products indicate significant performance improvement in terms of both recommendation quality and coverage on a list of selected segments.</p>
      </abstract>
      <kwd-group>
        <kwd>Job-jobseeker Recommendation</kwd>
        <kwd>Segmentation</kwd>
        <kwd>Threshold Optimization</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Nowadays, online job marketplaces such as Indeed.com, CareerBuilder and LinkedIn are serving hundreds of millions of jobseekers by connecting them to the right job opportunities. The target jobseekers of such recommender systems should not be limited to any specific segments or groups. Instead, we should try to help all the jobseekers, with a variety of profiles, to get their best jobs in an efficient and scalable manner.</p>
      <p>The job-jobseeker recommendation platform is one of the most important engines that we are using to help people get jobs within our organization. There are multiple ways that we recommend either jobs or jobseekers to the other side across different surfaces. Specifically, on the jobseeker-facing side, we send invite-to-apply emails or app notifications to the jobseekers. We also display a list of recommended jobs on the homepage. On the employer-facing side, we provide instant candidate recommendations to the employers as soon as they publish a new job post.</p>
      <p>Underneath the recommendation platform, we have multiple match providers, where each provider has its own way to retrieve and rank matches. In this paper, we will mainly focus on the ranking stage of our longest-lived probabilistic-based match provider using Logistic Regression models. Specifically, we have a set of Logistic Regression models to predict each step along the application funnel for every single job-jobseeker pair. In particular, we have three models in tandem. The first model predicts the probability of receiving a response (either positive or negative) from the jobseeker, given the recommendation is made. The second one predicts the probability of getting a positive response (e.g., apply or enquiry), given receiving the jobseeker response. The third one predicts the probability of having a positive employer response (e.g., interview schedule or hire decision), given the application is made by the jobseeker. Each model has its own threshold to filter out certain matches with low scores, and the product of all the model scores will be used to rank the remaining matches.</p>
      <p>Currently we only have one set of models for all types of jobs and jobseekers, while we found the performance gap is huge across different job segments in terms of their occupation. Although we already use a few segment-specific data (e.g., job title, industry, etc.) as the model features, the data did not seem to be good enough to represent all the explicit or implicit features that are associated with the segment. There are certainly many ways to improve the per-segment performance, including adding more segment-specific features and training dedicated models for each segment. But choosing the best cut-off threshold score per segment turned out to be the most practicable and effective way to achieve the goal.</p>
      <p>In this work, we propose an efficient approach to optimize the segmented job-jobseeker recommendation performance by tuning the per-segment model thresholds.</p>
      <p>RecSys in HR’22: The 2nd Workshop on Recommender Systems for
Human Resources, in conjunction with the 16th ACM Conference on
Recommender Systems, September 18–23, 2022, Seattle, USA.</p>
      <p>© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.</p>
      <p>Specifically, we formulate a constrained optimization to identify the potential improvement space per segment. Three attempts are discussed, and the most feasible one is adopted in the production. We also apply a greedy search algorithm to speed up the segment-specific threshold tuning process. Our prototype implementation and the corresponding A/B tests on selected segments have suggested considerable improvements in terms of better recommendation quality and higher volume of both applies and positive outcomes from the job applications.</p>
      <p>In summary, the main contributions of this paper are as follows.</p>
      <p>• We report an occupation-based segment-level investigation using real-world data from our organization.
• We formulate a constrained optimization problem to facilitate our segmentation work in the job-jobseeker recommendation system.
• We propose an effective way to optimize the per-segment performance by tuning the thresholds of different models.
• We implement an automated model threshold tuning module into the pipeline, and the online experimental results from the real products indicate promising performance improvement on both recommendation quality and coverage.</p>
      <p>We hope our work can provide a reference for similar problems in the industry.</p>
      <p>There were also works focusing on jointly examining the resumes from jobseekers and the job descriptions from the job side, mostly for high-tech job profiles. Malinowski et al. [5] presented a probabilistic-based CV and job recommender that relied extensively on structured resume data from a limited number (i.e., 100) of high-skilled jobseekers. Javed et al. [6] used named entity recognition (NER) to explicitly extract the skills from resumes, and further used them to facilitate the recommender system. Qin et al. [7, 8] proposed a neural network based representation to embed the skills from resumes and job descriptions, and ranked the matches based on the vector similarities. Luo et al. [9] introduced adversarial learning to learn more expressive representations from similar sources. However, the majority of jobseekers in the labor market (e.g., truck drivers, retail sellers, etc.) do not have properly written resumes, if they have a resume at all. Consequently, such methods might not work well for these jobseekers. While most existing job recommendation systems [10, 11, 12] tried to have one model work for all different job profiles, very limited work noticed the significant differences among these job and jobseeker profiles. This work, on the contrary, attempts to identify such differences and make operational optimizations correspondingly, with the objective to improve the overall recommendation performance.</p>
    </sec>
    <sec id="sec-2">
      <title>3. Overview of Our Match Recommendation Platform</title>
      <p>This section presents an overview of the match
recommendation platform, and justifies the segment-level
optimization is needed. In particular, we first present our
probabilistic-based models that are still driving a
significant number of recommendations within our
organization. We then study the feature distribution from both
the job and jobseeker side, and identify the performance
gap across a variety of segments.</p>
      <sec id="sec-2-1">
        <title>3.1. Probabilistic-based Models</title>
        <p>Our recommendation match provider is built on top of a
series of probabilistic-based models. Each of them takes
care of a single step along the application funnel. In
particular, as depicted in figure 1, each model takes a
subset of features from job, employer, jobseekers’
contents (e.g., resumes, questionnaires, etc.) and behaviors
(e.g., apply history, feedback from previous applications,
responses to previous recommendations, and inferred
interests, etc.) as the input features. And the model outputs
the probability for its own step.</p>
        <p>More specifically, the first model focuses on whether the jobseeker responds to the recommendation, given that the recommendation has been sent out.</p>
        <p>The rest of the paper is organized in the following way. In Section 2, we discuss and review a few related works. In Section 3, we provide an overview of our existing recommendation platform. In Section 4, we describe the segment-level model threshold tuning that we use to optimize the recommendation performance. In Section 5, we illustrate the evaluation results on three selected segments. Finally, Section 6 concludes this work.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>2. Related Work</title>
      <p>Many existing works have studied the overall framework of efficient job recommendation systems. Kenthapadi et al. [1] discussed candidate selection, the personalized relevance model, and match redistribution as the three main sub-systems in the job recommendation system at LinkedIn. Lu et al. [2] presented a hybrid ranking system that combines the interaction-based and content-based features from both jobs and jobseekers, and calculates a ranking score accordingly. Shalaby et al. [3] built a graph-based job recommendation framework at CareerBuilder.com, using a similar hybrid approach that combines the behavior-based and content-based data into weighted scores for the ranking purpose. Diaby et al. [4] proposed a taxonomy-based job recommender system that segmented both jobs and jobseekers into a taxonomy system using their occupation data.</p>
      <p>The jobseeker response here can be any action, such as clicking the "apply job" or "not interested" button, unsubscribing, replying, or giving out a rating. The second model focuses on whether the jobseeker actually applied for the job, given he/she had made any kind of response to our recommendation. The third model deals with the probability further on the employer side, focusing on whether the employer sends any positive outcome to the submitted application, such as follow-up conversations to further understand the applicant, interview arrangements, or even making an offer.</p>
      <p>P(positive outcome | recommendation) = P(response | recommendation) × P(apply | response) × P(positive outcome | apply)    (1)</p>
      <p>We conduct both the scoring, as shown in Eq. 1, and the filtering, as shown in Eq. 2, based on this model chain. In particular, each logistic regression model follows the sigmoid function to generate the probability output P(y | x), where y is the ground truth of the stage that the model is dealing with, and the logit f(x) = Σ w_i x_i, where x refers to the input feature vector and w refers to the weights to be trained for each feature. At the same time, there is a customized threshold t_m for each model m, to filter out the matches having a low probability at that stage. Eventually, only the matches that pass all three cutoff thresholds will be assigned a non-zero score to represent the probability of having a positive outcome given a sent recommendation, P(positive outcome | recommendation). Finally, this score will be passed into the ranking and aggregation module as the next step. It is easy to see that the multiplication leads to higher precision but lower recall, because a job-jobseeker pair will be filtered out as long as it gets a low score at any stage in the chain.</p>
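The chained scoring and filtering described above can be sketched as follows; the weights, feature layout, and threshold values here are illustrative placeholders, not the production models:

```python
import math

# Hypothetical weights for the three logistic regression models;
# in practice these are trained on historical interaction data.
WEIGHTS = {
    "JR": [0.8, -0.3, 0.5],   # jobseeker-response model
    "JA": [0.6, 0.9, -0.2],   # jobseeker-apply model
    "PO": [0.4, 0.1, 0.7],    # positive-outcome model
}
# Illustrative per-model cutoff thresholds t_m.
THRESHOLDS = {"JR": 0.10, "JA": 0.05, "PO": 0.02}

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def model_score(stage, features):
    """P(y | x) for one stage: sigmoid of the weighted feature sum."""
    z = sum(w * x for w, x in zip(WEIGHTS[stage], features))
    return sigmoid(z)

def chain_score(features):
    """Eq. 1 and Eq. 2: multiply the three stage probabilities, but
    return 0 as soon as any stage falls below its cutoff threshold."""
    product = 1.0
    for stage in ("JR", "JA", "PO"):
        p = model_score(stage, features)
        if p > THRESHOLDS[stage]:
            product *= p
        else:
            return 0.0            # filtered out at this stage
    return product                # P(positive outcome | recommendation)

score = chain_score([1.0, 0.5, 0.2])
```

Only matches with a non-zero chain score reach the ranking and aggregation module; a low probability at any single stage zeroes out the pair.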
      <p>Score = P(positive outcome | recommendation) if P_m &gt; t_m for every model m, and 0 otherwise    (2)</p>
      <p>3.2. Performance Gap among Segments</p>
      <p>From our historical data, we found there are significant performance gaps across different segments in terms of their occupation, as we used to have one unified set of models, with the same threshold values, to serve all the jobs and jobseekers from different backgrounds. This observation is based on a segment-level comparison of the recommendation performance against other organic channels, where jobseekers search and find jobs by themselves. Ideally, we expected the recommender system to consistently perform better, because it should provide better matches with higher accuracy. However, such an assumption is not always true.</p>
      <p>Figure 2 studies the performance gap in terms of Apply start Rate (AR) and Positive outcome over Apply (PoA) for the biggest 16 occupations from our real-world data. The AR metric indicates the quality of the jobseeker engagement, while the PoA metric indicates the quality of the employer engagement. It is clear that a bunch of segments (mostly blue-collar jobs) have low AR but okay PoA, indicating the model gets higher precision but lower recall there, while a few other segments (mostly white-collar jobs) are suffering from low PoA but okay AR, indicating the model gets higher recall but lower precision. After all, we believe all these segments could have considerable room for improvement, but from different initial locations and in different directions towards the top right corner, as shown in Figure 2.</p>
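As a rough illustration of how these two metrics can be computed from interaction logs (the record schema and field names below are hypothetical, not the production data model):

```python
# Hypothetical interaction log: one record per sent recommendation.
logs = [
    {"segment": "Security Guard", "apply_start": True,  "positive_outcome": False},
    {"segment": "Security Guard", "apply_start": True,  "positive_outcome": True},
    {"segment": "Security Guard", "apply_start": False, "positive_outcome": False},
    {"segment": "Software Dev",   "apply_start": True,  "positive_outcome": False},
]

def segment_metrics(records, segment):
    """Apply start Rate (AR) = applies / recommendations sent;
    Positive outcome over Apply (PoA) = positive outcomes / applies."""
    rows = [r for r in records if r["segment"] == segment]
    applies = sum(r["apply_start"] for r in rows)
    positives = sum(r["positive_outcome"] for r in rows)
    ar = applies / len(rows) if rows else 0.0
    poa = positives / applies if applies else 0.0
    return ar, poa

ar, poa = segment_metrics(logs, "Security Guard")
```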
      <p>Note that we cannot directly compare the absolute metrics across different segments, because the performance can be affected by the segment nature, instead of the recommendation quality.</p>
      <p>The examination motivates us to further check whether these segments are big enough to try the segment-level optimization. And if so, we also would like to understand the reasons that lead to such performance gaps.</p>
      <sec id="sec-3-1">
        <title>3.3. Segment-level Investigation</title>
        <p>Figure 3 shows the segment distribution in terms of the
number of active jobs based on their top-level occupation
from our organization in 2022H1. We clearly serve a full
spectrum of jobs and jobseekers from a variety of
occupations, without any single occupation clearly dominating
the whole population. Every occupation-based segment
occupies a certain portion in the job market. As a result,
the segment-level optimization could have a reasonable
expectation to benefit the overall performance.</p>
        <p>We next want to examine whether the performance gap originates from the different feature distributions among different segments. Specifically, we look at a mixture of blue-collar and white-collar jobs, on both the job and jobseeker sides. As expected, the blue-collar jobseekers (e.g., delivery drivers, retail sellers, etc.) tend to have a much shorter resume, which in turn makes the skill and experience extraction, or even the resume embedding, less representative than for the white-collar jobseekers (e.g., software development, technical managers, etc.). A similar pattern can be observed on the job side too, where the white-collar jobs tend to list more job requirements in terms of hard skills and experiences, while blue-collar jobs tend to focus more on licences and soft skills.</p>
        <p>The observations lead us to reconsider whether our existing approach, using the same model set with the same threshold setting, is good enough to handle all these cases. Although we already use a few segment-specific data (e.g., job title, industry) as the model features, we suspect they might not be representative enough to properly differentiate the specific requirements. As a result, we work on a few different approaches to the segment-level optimization, and discuss their feasibility based on our real-world experiences in the next section.</p>
        <p>4. Segment-level Optimization</p>
        <p>There are a number of possible ways to do the segment-level optimization for our probabilistic-based recommendation system. In particular, we report three different attempts that we have tried in this section. For each attempt, we evaluate not only its effectiveness, but also its scalability in the long run.</p>
        <p>4.1. First Attempt: dedicated models per segment</p>
        <p>The most intuitive solution that first came to us was to build dedicated sets of models for each segment. We selected a list of low-performing occupation-based segments (i.e., Security Guard, Retail Store Manager, and Quick Service Server) according to Figure 2, and trained a dedicated set of models for each segment. As a result, every segment got three different models as shown in Figure 1, and they were trained using only the historical dataset from that specific segment.</p>
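As a minimal sketch of this attempt, one model can be fitted per segment on that segment's history only; the toy data, feature layout, and training hyper-parameters below are purely illustrative, and a tiny hand-rolled logistic regression stands in for the production training stack:

```python
import math
from collections import defaultdict

# Toy history: (segment, features, applied?). Layout is illustrative only.
ROWS = [
    ("Security Guard", (0.9, 0.1), 1), ("Security Guard", (0.1, 0.9), 0),
    ("Security Guard", (0.8, 0.2), 1), ("Security Guard", (0.2, 0.8), 0),
    ("Retail Store Manager", (0.7, 0.2), 1), ("Retail Store Manager", (0.2, 0.7), 0),
]

def fit_logistic(data, lr=0.5, epochs=500):
    """Minimal logistic regression via gradient descent (no regularization)."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y                       # gradient of log loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(model, x):
    w, b = model
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# One dedicated model per segment, trained only on that segment's history.
by_seg = defaultdict(list)
for seg, x, y in ROWS:
    by_seg[seg].append((x, y))
models = {seg: fit_logistic(data) for seg, data in by_seg.items()}

p = predict(models["Security Guard"], (0.85, 0.15))
```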
        <p>Surprisingly, the initial experimental results did not align well with our expectations, showing mixed signals in terms of the recommendation quality and volume. In particular, for all three experimented segments, we observed significant decreases in the Applystart Rate (AR) or Positive outcome over Apply (PoA), ranging from -9.8% to -15.6%, alongside an improvement in the number of apply starts and positive outcomes, ranging from 6.0% to 13.8%. However, our expectation for the dedicated models was to have considerable improvements on all the key metrics at the same time.</p>
        <p>After a close examination of the approach and the corresponding models, we found three major issues that led to the disappointing results. First, we did not set up a formulation to properly represent the overall objective. Consequently, we did not even have a clearly defined expectation and target for the optimization at the very beginning. Second, we over-emphasized the model training part, whereas we missed the fact that the cutoff thresholds play even more important roles in trading off precision and recall. Therefore, we believe the dedicated models still need careful threshold tuning to maximize their benefit. Lastly, we noticed that many other initiatives were ongoing (such as an alternative way of embedding features, or even adding new features, etc.) that kept improving the baseline models from other members within our organization, while our treatment models were kept unchanged during the experiment. This made the experimental comparison inconsistent over time. More importantly, we could not fix this easily, because the large-scale model auto-updates together with the parameter fine-tuning could be too expensive in terms of both the initial engineering efforts and the following infrastructural maintenance.</p>
      </sec>
      <sec id="sec-3-2">
        <title>4.2. Second Attempt: online reinforcement learning with multi-armed bandit</title>
        <p>By learning the lessons from our first attempt, we would like to formulate an optimization problem to appropriately capture our task and the objective. In particular, we want to simultaneously improve both recommendation quality and volume on all the key metrics, including the applystart volume, the positive outcome volume, the applystart rate, and the positive outcome over apply, while we can focus slightly more on AR for the low-AR segments, or on positive outcomes for the low-PoA segments. The control variables that we can operate here are the thresholds of each model per segment.</p>
        <p>max over t⃗:  F(t⃗) = Σ_m w_m · Δ_m(t⃗),  m ∈ {AS, AR, PO, PoA}    (3)
s.t.  0 &lt; t_k &lt; 1,  k ∈ {JR, JA, PO}    (4)
w_m &gt; 0,  m ∈ {AS, AR, PO, PoA}    (5)
Σ_m w_m = 1    (6)
C_c(t⃗) &lt; L_c − ε_c,  c ∈ {unsubscription, negative feedback}    (7)</p>
        <p>When tuning the thresholds online with multi-armed bandits, sampling methods such as Thompson sampling could speed up the convergence rate to some extent.</p>
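A minimal sketch of how such a bandit over threshold variants could work, assuming simulated Bernoulli feedback and Beta posteriors; the arms, success rates, and reward signal are all made up for illustration:

```python
import random

random.seed(7)

# Each arm is one candidate threshold setting; the hidden success rates
# simulate e.g. an apply-rate signal, purely for illustration.
ARMS = {"low": 0.30, "default": 0.50, "high": 0.65}

def thompson_round(state):
    """One Thompson sampling step with a Beta(a, b) posterior per arm."""
    # Sample a plausible success rate from each arm's posterior.
    draws = {arm: random.betavariate(a, b) for arm, (a, b) in state.items()}
    chosen = max(draws, key=draws.get)
    success = random.random() > 1.0 - ARMS[chosen]   # simulated feedback
    a, b = state[chosen]
    state[chosen] = (a + success, b + (not success)) # posterior update
    return chosen

state = {arm: (1, 1) for arm in ARMS}                # uniform priors
for _ in range(2000):
    thompson_round(state)

# Traffic should concentrate on the better-performing arm over time.
pulls = {arm: a + b - 2 for arm, (a, b) in state.items()}
```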
        <p>However, there was still a list of issues that prevented us from doing efficient multi-armed bandit tests for our segmented threshold optimization. First, the underlying baseline models were being iterated in parallel, resulting in inconsistent and unreliable comparisons among different treatment groups with fixed threshold settings. Second, the online reinforcement learning could take a long time to converge, especially when a few target segments have small sample sizes. Last but not least, we also suffered from the delayed data issue from the upstream data sources, considering the signals from the employer side (e.g., interview schedules and results, etc.) could take up to a few weeks to come back after the application had been made. Consequently, this attempt is unfortunately also impractical for our problem.</p>
      </sec>
      <sec id="sec-3-3">
        <title>4.3. Third Attempt: offline threshold tuning per segment</title>
        <p>By learning from the previous two failed attempts, we confirmed that fine-tuning the thresholds for each segment could be the feasible solution to optimize the performance. But it was not practical to find the optimal solution through the reinforcement learning approach over the online iterations. As a result, we came up with our third attempt, using a proper offline evaluation algorithm based on the historical job and jobseeker interaction data from all the channels.</p>
        <p>To this end, we formulate a constrained optimization problem as shown in Equations 3 to 7. Specifically, the objective function aims to maximize the weighted combination of all the key metrics, including the apply start volume, the apply start rate, the positive outcome volume, and the positive outcome over apply. For each metric, the weight w_m represents how much we shift focus between the low-AR and low-PoA segments.</p>
        <p>Algorithm 1 describes the proposed greedy searching process to find the optimal threshold settings per segment in an efficient manner. Specifically, the algorithm takes a few different inputs, including the models (i.e., the Jobseeker Response (JR) model, the Jobseeker Apply (JA) model, and the Positive Outcome (PO) model) as discussed in Section 1, the historical data for model performance evaluation per segment, and a default threshold setting. Δ_m indicates the corresponding performance improvement for each metric. In the meanwhile, there are a few Service Level Objectives (SLOs) that we must meet, including that the unsubscription rate must be lower than 0.05%, and the negative feedback ratio from jobseekers must be lower than 25% among all the feedback. These SLOs are hard requirements, so we even want to add a marginal buffer ε to the constraints. Both Δ and the SLO metrics can be affected by the threshold setting t⃗.</p>
        <p>With these metric definitions, our task is to find the optimal model thresholds for each segment that maximize the objective function while fulfilling all the constraints.</p>
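The objective and SLO checks of Equations 3 to 7 can be sketched as follows; the weights, SLO limits, buffers, and metric values below are hypothetical stand-ins for a real offline evaluation:

```python
# Weighted objective of Eq. 3, with the SLO constraints of Eq. 7.
WEIGHTS = {"AS": 0.2, "AR": 0.4, "POV": 0.2, "PoA": 0.2}   # all positive, sum to 1
SLO_LIMITS = {"unsub_rate": 0.0005, "neg_feedback": 0.25}  # hard SLO caps
EPSILON = {"unsub_rate": 0.0001, "neg_feedback": 0.02}     # marginal safety buffer

def objective(deltas):
    """F(t) = sum_m w_m * delta_m(t) over the four key metrics."""
    return sum(WEIGHTS[m] * deltas[m] for m in WEIGHTS)

def feasible(slo_values):
    """Every SLO metric must stay below its limit minus the buffer."""
    return all(SLO_LIMITS[c] - EPSILON[c] > slo_values[c] for c in SLO_LIMITS)

# Illustrative evaluation of one candidate threshold setting.
deltas = {"AS": 0.02, "AR": 0.05, "POV": 0.01, "PoA": 0.03}
slos = {"unsub_rate": 0.0003, "neg_feedback": 0.20}
f = objective(deltas)
ok = feasible(slos)
```

A candidate threshold setting is only kept when `feasible` holds, so the SLOs act as hard constraints rather than soft penalties.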
        <p>With the clearly defined objective (or reward) function and constraints, one possible way is to adopt multi-armed bandits as a reinforcement learning approach to find the optimal solution in the online environment. Specifically, we can set up multiple test groups in the production, where each group has a different threshold setting. Then we keep monitoring the performance on the objective value and the constraints, and gradually adjust the traffic allocation towards the better-performing variants.</p>
        <p>We select an upper bound u and a lower bound l for each model respectively, by adding and subtracting a range around the default value. We then follow a greedy searching process to find the optimal settings that can achieve the best performance on the four key metrics as defined in Equations 3 to 7. The expected outputs are the optimal threshold settings t⃗ per segment, which can achieve no worse performance than the default ones.</p>
        <p>The greedy part originates from the fact that the JA model threshold correlates well with the applystart rate, and the same pattern applies to the PO model threshold and the positive outcome over apply. On the other hand, when we increase any model threshold, the applystart and positive outcome volumes can only go down or at best stay flat. As a result, if we want to improve both quality and volume on the key metrics at the same time, we need to search the JA and PO model thresholds in different directions starting from the default values. In addition, once we reach the boundary, by either observing a volume that is smaller than the default when increasing the threshold, or a ratio that is smaller than the default when decreasing the threshold, we do not need to go further in the same direction, and the remaining areas can be ignored due to the negative change. However, the JR model does not display a clear relationship with our targeted key metrics; therefore, we still do a full grid search on the JR model in the outer loop.</p>
        <p>Algorithm 1: Greedy Threshold Searching Algorithm
Require: models JR, JA, PO; historical job-jobseeker interactions; default threshold set t⃗
function greedySearch(t⃗, model m, direction d):
    for t_m stepping from t⃗(m) towards the bound (u or l) in direction d:
        evaluate the key metrics on the historical data with t⃗(m) ← t_m
        if the guarded volume or ratio falls below its default value: break
        if the objective F improves and all constraints hold: keep t⃗(m) ← t_m
    return the updated t⃗ and F
for each segment:
    for t_JR in grid(l_JR, u_JR):
        t⃗, F ← greedySearch(t⃗, JA, upwards)
        t⃗, F ← greedySearch(t⃗, PO, downwards)
        record t⃗ if F exceeds the best found so far
return the optimal threshold set t⃗ per segment</p>
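A runnable sketch of this greedy search logic, simplified so that a single stubbed objective stands in for the per-metric evaluation and constraint checks on historical data; the evaluator, bounds, and step size are illustrative assumptions:

```python
def evaluate(t):
    """Offline evaluator stub for thresholds (t_jr, t_ja, t_po).
    A real implementation would replay historical interactions per
    segment and compute the constrained objective of Eq. 3 to 7."""
    t_jr, t_ja, t_po = t
    return 1.0 - (t_jr - 0.2) ** 2 - (t_ja - 0.3) ** 2 - (t_po - 0.1) ** 2

def grid(start, stop, step):
    """Inclusive float grid; works for negative steps (downward walks)."""
    n = int(round((stop - start) / step))
    return [round(start + i * step, 4) for i in range(n + 1)]

def greedy_search(default, lo, hi, step=0.05):
    """Full grid search on t_JR; greedy directional walks on t_JA
    (upwards) and t_PO (downwards), stopping at the first step that
    no longer improves the objective (boundary reached)."""
    best_t, best_f = tuple(default), evaluate(default)
    for t_jr in grid(lo[0], hi[0], step):
        t = [t_jr, default[1], default[2]]
        for idx, s in ((1, step), (2, -step)):      # JA up, PO down
            bound = hi[idx] if s > 0 else lo[idx]
            for v in grid(default[idx], bound, s):
                cand = list(t)
                cand[idx] = v
                if evaluate(t) > evaluate(cand):
                    break                           # stop walking this direction
                t = cand
        f = evaluate(t)
        if f > best_f:
            best_f, best_t = f, tuple(t)
    return best_t, best_f

best_t, best_f = greedy_search(default=[0.5, 0.3, 0.3],
                               lo=[0.1, 0.1, 0.05], hi=[0.9, 0.6, 0.5])
```

The early break in each directional walk is what cuts the search space compared with a full three-dimensional grid search.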
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5. Performance Evaluation</title>
      <p>By following the third attempt as discussed in the previous section, we further integrate the algorithm into our model training pipeline as a prototype implementation. The performance is evaluated through proper online A/B testing on real recommendation products. The experimental results demonstrate promising signals for all three selected segments. Moreover, the approach is generally applicable to all the segments.</p>
      <sec id="sec-4-1">
        <title>5.1. Prototype Implementation</title>
        <p>Under our current model pipeline, the unified model set is retrained upon either the regular daily update or a production release of various other model improvement initiatives. Previously, the retrained models would be put into the production model storage, so that they can be directly invoked by the online recommendation system.</p>
        <p>Table: segment-level results for Security Guard, Retail Store Manager, and Quick Service Server: +17.29%, +86.49% ↑, +1.02%.</p>
        <p>In this work, we presented an effective solution to improve the performance of our job-jobseeker recommendation system. Specifically, we started by identifying the performance gap among different segments, followed by segment-level investigations. We then reported three different attempts, and came up with the most feasible approach of tuning the model thresholds per segment. The detailed solution was presented, including a proper problem formulation as a constrained optimization problem, an efficient algorithm to speed up the threshold optimization process, and the prototype implementation. Finally, online A/B tests on real products proved the performance improvement in terms of both recommendation quality and quantity.</p>
      </sec>
      <sec id="sec-4-2">
        <title>5.2. Online AB Test Results</title>
        <p>We continue to focus on the same three low-performing segments (i.e., Security Guard, Retail Store Manager, and Quick Service Server), but with a more rigorous online A/B testing plan. In particular, we ran the online A/B experiments for two weeks, following a power analysis.</p>
        <p>Our future work will mainly follow three avenues. First, we are going to scale up the automatic threshold optimization to more segments, and also figure out a way to minimize the difference between the offline evaluation and the actual online performance. Second, we will evaluate whether similar segmentation work can benefit other match providers that are based on more sophisticated models (e.g., neural networks, or deep collaborative filtering).</p>
        <p>Third, we will extend our segmentation optimization work for the same match provider into our international markets, where the user behaviors and job requirements can become different even for the same occupation across different countries and markets.</p>
        <p>References
[1] K. Kenthapadi, B. Le, G. Venkataraman, Personalized job recommendation system at LinkedIn: Practical challenges and lessons learned, in: Proceedings of the Eleventh ACM Conference on Recommender Systems, 2017, pp. 346–347.
[2] Y. Lu, S. El Helou, D. Gillet, A recommender system for job seeking and recruiting website, in: Proceedings of the 22nd International Conference on World Wide Web, 2013, pp. 963–966.
[3] W. Shalaby, B. AlAila, M. Korayem, L. Pournajaf, K. AlJadda, S. Quinn, W. Zadrozny, Help me find a job: A graph-based approach for job recommendation at scale, in: 2017 IEEE International Conference on Big Data (Big Data), IEEE, 2017, pp. 1544–1553.
[4] M. Diaby, E. Viennet, Taxonomy-based job recommender systems on Facebook and LinkedIn profiles, in: 2014 IEEE Eighth International Conference on Research Challenges in Information Science (RCIS), IEEE, 2014, pp. 1–6.
[5] J. Malinowski, T. Keim, O. Wendt, T. Weitzel, Matching people and jobs: A bilateral recommendation approach, in: Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06), volume 6, IEEE, 2006, pp. 137–145.
[6] F. Javed, P. Hoang, T. Mahoney, M. McNair, Large-scale occupational skills normalization for online recruitment, in: Twenty-Ninth IAAI Conference, 2017, pp. 4627–4634.
[7] C. Qin, H. Zhu, T. Xu, C. Zhu, L. Jiang, E. Chen, H. Xiong, Enhancing person-job fit for talent recruitment: An ability-aware neural network approach, in: The 41st International ACM SIGIR Conference on Research &amp; Development in Information Retrieval, 2018, pp. 25–34.
[8] C. Qin, H. Zhu, T. Xu, C. Zhu, C. Ma, E. Chen, H. Xiong, An enhanced neural network approach to person-job fit in talent recruitment, ACM Transactions on Information Systems 38 (2020) 1–33.
[9] Y. Luo, H. Zhang, Y. Wen, X. Zhang, ResumeGAN: An optimized deep representation learning framework for talent-job fit via adversarial learning, in: Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2019, pp. 1101–1110.
[10] T. A.-O. Shaha, Y. Mourad, A survey of job recommender systems, International Journal of Physical Sciences 7 (2012) 5127–5142.
[11] F. Abel, A. Benczúr, D. Kohlsdorf, M. Larson, R. Pálovics, RecSys Challenge 2016: Job recommendations, in: Proceedings of the 10th ACM Conference on Recommender Systems, 2016, pp. 425–426.
[12] J. Dhameliya, N. Desai, Job recommender systems: A survey, in: 2019 Innovations in Power and Advanced Computing Technologies (i-PACT), volume 1, IEEE, 2019, pp. 1–5.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>