<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Matching: Leveraging Semantic Textual Relatedness and Knowledge Graphs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vadim Zadykian</string-name>
          <email>vadim.zadykian@mymtu.ie</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bruno Andrade</string-name>
          <email>bruno.andrade@mtu.ie</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Haithem Afli</string-name>
          <email>haithem.afli@mtu.ie</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ADAPT Centre, Munster Technological University</institution>
          ,
          <addr-line>Cork</addr-line>
          ,
          <country country="IE">Ireland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Semantic Textual Relatedness (STR) captures nuanced relationships between texts that extend beyond superficial lexical similarity. In this study, we investigate STR in the context of job title matching - a key challenge in resume recommendation systems, where overlapping terms are often limited or misleading. We introduce a self-supervised hybrid architecture that combines dense sentence embeddings with domain-specific Knowledge Graphs (KGs) to improve both semantic alignment and explainability. Unlike previous work that evaluated models on aggregate performance, our approach emphasizes data stratification by partitioning the STR score continuum into distinct regions: low, medium, and high semantic relatedness. This stratified evaluation enables a fine-grained analysis of model performance across semantically meaningful subspaces. We evaluate several embedding models, both with and without KG integration via graph neural networks. The results show that fine-tuned SBERT models augmented with KGs produce consistent improvements in the high-STR region, where the RMSE is reduced by 25% over strong baselines. Our findings highlight not only the benefits of combining KGs with text embeddings, but also the importance of regional performance analysis in understanding model behavior. This granular approach reveals strengths and weaknesses hidden by global metrics, and supports more targeted model selection for use in Human Resources (HR) systems and applications where fairness, explainability, and contextual matching are essential.</p>
      </abstract>
      <kwd-group>
        <kwd>STR</kwd>
        <kwd>job title matching</kwd>
        <kwd>knowledge graph</kwd>
        <kwd>explainability</kwd>
        <kwd>BERT</kwd>
        <kwd>KG</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Semantic Textual Relatedness (STR) is a nuanced and
context-dependent concept in Natural Language Processing
(NLP) that measures the degree to which two text segments
(words, sentences, or phrases) share semantically
meaningful connections. Unlike Semantic Textual Similarity (STS)
[
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ], which focuses primarily on surface-level closeness
(e.g., synonyms or paraphrases), STR captures more abstract
and associative relationships. For example, the words
“mitten” and “glove” are semantically similar, whereas “hand”
and “glove” are semantically related, yet dissimilar. STR
also differs from Semantic Lexical Relatedness (SLR) [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ],
which considers the relatedness of individual words rather
than broader concepts.
      </p>
      <p>
        While STS and SLR have been widely adopted in tasks
such as paraphrase detection, question-answering, and
summarization, STR remains underexplored, especially in
domain-specific contexts such as talent acquisition and job
recommender systems. In these settings, job titles often
exhibit significant lexical diversity while denoting functionally
similar or hierarchically related roles. Traditional
keyword- or syntax-based methods fail to account for this variation
[
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. For instance, “Chief Executive Officer” and
“Managing Director” may have no shared tokens but represent
nearly identical positions, whereas “Director of Sales” and
“Vice President, Marketing” are distinct but related roles.
This issue is compounded in global and multilingual hiring
scenarios, where terminology is inconsistent or localized.
      </p>
      <p>
        Another critical challenge in Human Resource (HR)
applications is explainability. Job Recommender Systems
(JRS) and Resume Recommender Systems (RRS) increasingly
influence hiring decisions and career mobility. However,
such systems rarely explain their outputs. To address this
gap, we present work that combines fine-tuned Sentence-BERT (SBERT) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
embeddings with domain-specific Knowledge Graphs (KGs).
This integration leverages the complementary strengths of
embeddings and KGs: embeddings capture contextual
semantics even in the absence of lexical overlap, while KGs
encode structured hierarchical and functional relationships
(e.g., career progressions, job categories). Critically, KGs
enable us to trace explicit reasoning paths behind
predictions (e.g., “Project Lead” → “Team Leadership Roles” →
“Program Manager”), thereby providing interpretability that
is crucial in HR contexts. Our work offers the following
contributions:
• we employ a self-supervised data pipeline that
eliminates the need for manually labeled similarity scores
by generating training pairs from cosine similarities
between job descriptions
• we construct a knowledge graph to represent
job-skill relationships and learn a neural mapping from
textual embeddings to graph embeddings
• we introduce a hybrid modeling approach that
integrates dense sentence embeddings with knowledge
graph embeddings derived from a structured skill
ontology
• we utilize a stratified evaluation framework by
partitioning similarity scores into three interpretable
regions: low, medium, and high STR. This
region-aware analysis enables fine-grained assessment of
model behavior that would otherwise be obscured by
global metrics such as RMSE or Pearson correlation
We hypothesize that STR-aware systems augmented with
KGs will produce more diverse and contextually relevant
job matches, while also meeting the explainability needs of
HR stakeholders. This positions STR and KGs not only as
technical improvements but also as key enablers of fairness,
trust, and transparency in modern workforce ecosystems.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <sec id="sec-2-1">
        <title>2.1. Semantic Textual Relatedness</title>
        <p>
          Semantic Textual Relatedness (STR) is often confused with
Semantic Textual Similarity (STS) [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. STS can be
considered a component of STR [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], as semantically similar texts
(e.g., paraphrases) are inherently related. However, STR
encompasses a broader range of semantic associations beyond
surface-level resemblance, including hierarchical, causal,
and contextual connections [
          <xref ref-type="bibr" rid="ref11 ref12 ref6">11, 12, 6</xref>
          ].
        </p>
        <p>
          Abdalla et al. [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] demonstrate that integrating STR into
search and recommendation pipelines enables retrieval of
thematically relevant content, even when explicit term
overlap is absent.
        </p>
        <p>
          Recommender systems utilizing STR generally
outperform those that rely solely on surface similarity [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. These
context-aware models can identify semantic connections
between user preferences and items [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], improve
personalization and diversity [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], and mitigate cold-start problems
[
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
        <p>
          Recent advances in contextual language models have
further propelled STR modeling. BERT [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] and Sentence-BERT
(SBERT) [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] effectively capture polysemy and co-reference
by leveraging bidirectional context.
        </p>
        <p>
          Within the context of JRS and RRS, various BERT-based
models are used for job title classification [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], resume
classification [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], job-resume matching [
          <xref ref-type="bibr" rid="ref19 ref20 ref21 ref22">19, 20, 21, 22</xref>
          ], Named
Entity Recognition [
          <xref ref-type="bibr" rid="ref18 ref23 ref24">18, 23, 24</xref>
          ], and semantic ranking of job
recommendations [
          <xref ref-type="bibr" rid="ref25 ref26 ref27">25, 26, 27</xref>
          ].
        </p>
        <p>
          In summary, STR has emerged as a key factor in the
development of intelligent systems that require deep
semantic understanding. Contextual language models, especially
fine-tuned SBERT variants, provide a solid foundation for
approximating STR [
          <xref ref-type="bibr" rid="ref10 ref28 ref29 ref30 ref31">10, 28, 29, 30, 31</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Explainability in HR Systems</title>
        <p>
          Transparency in HR systems is not only desirable, but it
may also be mandatory. For example, the EU AI Act [
          <xref ref-type="bibr" rid="ref32">32</xref>
          ]
classifies certain systems used in HR as “high-risk” and
subjects them to strict traceability and explainability
requirements. Explainability in HR systems, particularly in job
recommendation systems, is indispensable for fostering
fairness, transparency, loyalty, and trust [
          <xref ref-type="bibr" rid="ref33 ref34 ref35">33, 34, 35</xref>
          ], facilitating
informed decision-making [
          <xref ref-type="bibr" rid="ref34">34</xref>
          ], mitigating biases [
          <xref ref-type="bibr" rid="ref36 ref37">36, 37</xref>
          ],
and catering to diverse stakeholder needs [
          <xref ref-type="bibr" rid="ref35 ref38 ref39">35, 38, 39</xref>
          ], while
ensuring regulatory compliance.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Knowledge Graphs</title>
        <p>Knowledge graphs (KGs) have emerged as valuable tools for
enhancing the explainability of recommender systems.</p>
        <p>
          Recent works have demonstrated how combining KGs
with pre-trained language models (PLMs) — such as BERT
and SBERT — can improve both knowledge graph
completion and downstream semantic tasks [
          <xref ref-type="bibr" rid="ref40 ref41 ref42">40, 41, 42</xref>
          ].
        </p>
        <p>
          Building on these insights, our approach leverages
alignment between textual embeddings and structured
knowledge graphs to improve job-to-job and job-to-skill similarity
estimation. Inspired by the multi-task framework of Kim et al.
[
          <xref ref-type="bibr" rid="ref43">43</xref>
          ] and contextualized reasoning models [
          <xref ref-type="bibr" rid="ref44">44</xref>
          ], we aim to
train sentence encoders that not only capture linguistic
nuance but are also sensitive to the structural topology of skills
and industries.
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Job Title Matching</title>
        <p>
          Several recent studies have explored the use of
representation learning and transformer-based models in the
context of job matching and job title normalization. Zhang
et al. [
          <xref ref-type="bibr" rid="ref45">45</xref>
          ] introduce Job2Vec, a multi-view framework that
learns job title embeddings by integrating structured and
unstructured job-related data. Lavi et al. [
          <xref ref-type="bibr" rid="ref46">46</xref>
          ] propose
conSultantBERT, a fine-tuned Siamese SBERT model, which
improves job–candidate matching over keyword-based
approaches by leveraging domain-specific data. Building on
similar ideas, Kaya and Bogers [
          <xref ref-type="bibr" rid="ref47">47</xref>
          ] investigate
sentence-pair classification models using job titles as input signals and
highlight the effectiveness of fine-tuned BERT embeddings
for resume-to-job matching [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
        <p>
          Decorte et al. [
          <xref ref-type="bibr" rid="ref48">48</xref>
          ] explore job title normalization
using BERT-based models fine-tuned on recruitment
corpora, showing improved classification into standardized
taxonomies. Liu et al. [
          <xref ref-type="bibr" rid="ref49">49</xref>
          ] propose Title2Vec, a job title
embedding approach designed for Named Entity Recognition
(NER) and classification tasks. Zbib et al. [
          <xref ref-type="bibr" rid="ref50">50</xref>
          ] introduce
a weakly supervised method for learning job title
similarity by mining noisy skill co-occurrence patterns, offering a
scalable alternative to manually labeled training data. In a
related effort, Rosenberger et al. [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] present CareerBERT,
a transformer-based architecture that aligns resumes and
ESCO job descriptions for job recommendation tasks.
        </p>
        <p>Collectively, these studies demonstrate the growing
relevance of contextual and self-supervised embeddings in
addressing real-world challenges in job matching,
normalization, and recommendation. Our work builds upon these
foundations and contributes further by integrating semantic
representations with knowledge graph embeddings.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Experimental Framework</title>
      <sec id="sec-3-1">
        <title>3.1. Motivation and Research Objectives</title>
        <p>
          Job Recommender Systems leverage a variety of input
signals, including unstructured text (e.g., resumes or job
postings), structured data (e.g., user preferences and skills),
and collaborative features (e.g., interaction history between
users and job listings). One commonly available yet often
underutilized signal is the job title, which has been shown
to carry meaningful semantic information [
          <xref ref-type="bibr" rid="ref47">47</xref>
          ].
        </p>
        <p>
          While relying solely on job titles for matching would be
insufficient, understanding the semantic textual relatedness
(STR) between job titles can enhance filtering, refinement,
and personalization in recommendation systems. This
motivates our exploration of how job titles relate to one another
and how these relationships can be explained, thereby
contributing to more transparent and informed recommendations
[
          <xref ref-type="bibr" rid="ref34">34</xref>
          ].
        </p>
        <sec id="sec-3-1-1">
          <title>3.1.1. Research Objectives</title>
          <p>
• Develop a self-supervised approach for extracting
semantic representations of skills and job functions.
• Automatically generate labeled datasets for training
and evaluation.
• Assess text embedding strategies in capturing
semantic relationships between job titles.
• Evaluate graph-based models for their ability to
represent and explain job title relationships.
• Analyze model performance across the full STR
spectrum.
          </p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Proposed Method</title>
        <p>We propose a self-supervised pipeline for learning
semantic representations of job titles and their alignment with
skill knowledge graphs. The description of the pipeline is
presented in Algorithm 1.</p>
        <p>
          Algorithm 1 Self-Supervised Semantic Job Embedding and
Skill Mapping Pipeline
1: Input: Raw job titles &amp; descriptions, skill descriptions
2: Output: Trained job-title-to-skill-graph alignment model
3: Step 0: Summarization
4: Apply a pretrained BART model [
          <xref ref-type="bibr" rid="ref51">51</xref>
          ] to summarize each job description, removing boilerplate and retaining functional content.
5: Step 1: Job Embedding Generation
6: Encode each summary using SBERT to obtain auxiliary job embeddings.
7: Step 2: Pairwise Similarity Computation
8: Compute cosine similarity between all job embeddings to generate relatedness scores.
9: Step 3: STR Dataset Construction and SBERT Fine-Tuning
10: Construct a self-supervised dataset using the similarity scores. Split into train/eval sets with disjoint job titles. Fine-tune SBERT on the training set.
11: Step 4: Skill Embedding Generation
12: Encode textual descriptions of skills using a transformer model to obtain skill embeddings.
13: Step 5: Extraction of Job Functions and Skills
14: For each job, compute cosine similarity to skills. Select top-ranked skills as semantic matches.
15: Step 6: Knowledge Graph Construction and Embedding
16: Construct a bipartite graph of jobs and related skills. Learn node embeddings using a graph embedding model (e.g., RGCN, ComplEx).
17: Step 7: Embedding Alignment
18: Train a neural network to map SBERT job title embeddings to the graph embedding space.
19: Return: Fine-tuned SBERT model and trained graph model.
        </p>
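        <p>
          Steps 1–3 of the pipeline admit a compact sketch. In the snippet below the SBERT encoder is stubbed with random vectors purely for illustration (the function names and dimensions are ours, not part of the published implementation); in practice the embeddings would come from a sentence-transformers model.
        </p>

```python
import numpy as np

def cosine_sim_matrix(emb: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between row vectors (Step 2)."""
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return normed @ normed.T

def build_str_pairs(titles, emb):
    """Step 3: derive self-supervised (title_a, title_b, score) triplets."""
    sims = cosine_sim_matrix(emb)
    return [(titles[i], titles[j], float(sims[i, j]))
            for i in range(len(titles)) for j in range(i + 1, len(titles))]

rng = np.random.default_rng(0)
titles = ["Data Scientist", "ML Engineer", "Accountant"]
embeddings = rng.normal(size=(3, 8))  # stand-in for SBERT summary embeddings
pairs = build_str_pairs(titles, embeddings)
```

        <p>
          The resulting triplets can be fed directly to a cosine-similarity training objective, with train/eval splits kept disjoint by job title as described above.
        </p>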
        <p>
          Building upon insights from prior studies [
          <xref ref-type="bibr" rid="ref22 ref42 ref43 ref44 ref46 ref47 ref48 ref49 ref50">22, 42, 43, 44, 46, 47, 48, 49, 50</xref>
          ], our approach introduces several
methodological innovations in the domain of job title similarity
and normalization. We extend existing work by integrating
self-supervised learning with pretrained language models
and leveraging a skill-centric knowledge graph to enhance
interpretability.
        </p>
        <p>
          Departing from the Job2Vec framework [
          <xref ref-type="bibr" rid="ref45">45</xref>
          ], which relies
on co-occurrence and structural signals, we adopt a
self-supervised strategy that aligns contextual embeddings with
knowledge graph embeddings built around explicit job-skill
relationships. Unlike Lavi et al. [
          <xref ref-type="bibr" rid="ref46">46</xref>
          ] and Rosenberger et al.
[
          <xref ref-type="bibr" rid="ref22">22</xref>
          ], whose models emphasize resume-job alignment or
broad job matching, our focus is specifically on job title
normalization and similarity estimation, leveraging both
textual and structured graph-based representations.
        </p>
        <p>
          While Decorte et al. [
          <xref ref-type="bibr" rid="ref48">48</xref>
          ] pursue supervised classification
into a predefined taxonomy, our methodology emphasizes
self-supervised learning techniques that aim to capture
fine-grained semantic and structural relationships between job
titles and skills. Similarly, although we share with Kaya and
Bogers [
          <xref ref-type="bibr" rid="ref47">47</xref>
          ] the use of job titles as primary input signals, we
extend this by incorporating richer semantic cues from full
job descriptions. In comparison to Liu et al. [
          <xref ref-type="bibr" rid="ref49">49</xref>
          ], our model
embeds a broader semantic scope by integrating additional
contextual, relational, and graph-based information.
        </p>
        <p>
          Finally, we align with the objective of Zbib et al. [
          <xref ref-type="bibr" rid="ref50">50</xref>
          ]
in leveraging weak supervision for job title similarity, but
diverge in our implementation by combining pre-trained
sentence encoders with a structured knowledge graph of
job-skill relationships. This enables our model to move
beyond raw skill co-occurrence signals and instead produce
semantically aligned, interpretable embeddings that support
robust similarity estimation across diverse job roles.
        </p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Text Vectorization Models</title>
        <p>We evaluate five different text vectorization model
configurations, which are summarized in Table 1.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Implementation Details</title>
        <p>
          The experiments were carried out in the Google Colab
environment [
          <xref ref-type="bibr" rid="ref53">53</xref>
          ] with specifications listed in Table 5 (see
Appendix A). The proposed pipeline requires several key
hyper-parameters which are specified in Table 4 (see
Appendix A). The implementation code was written in Python
[
          <xref ref-type="bibr" rid="ref54">54</xref>
          ] using Visual Studio [
          <xref ref-type="bibr" rid="ref55">55</xref>
          ]. The source code, together
with the input and output files, is available at [
          <xref ref-type="bibr" rid="ref56">56</xref>
          ].
        </p>
        <sec id="sec-3-4-1">
          <title>3.4.1. Mapping Text to KG Space</title>
          <p>Training. We fine-tune the SBERT model using
anchor–sample–score triplets and cosine similarity loss. We learn
a parametric mapping that projects a job title’s SBERT
embedding into the knowledge-graph (KG) embedding space.
We use a lightweight MLP with ℓ2 normalization and MSE
embedding loss.</p>
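        <p>
          The alignment step can be sketched with synthetic data as follows; a single linear layer stands in for the lightweight MLP, and all dimensions, seeds, and targets are illustrative assumptions rather than the published configuration.
        </p>

```python
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    """Row-wise l2 normalization, as applied to both embedding spaces."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = l2_normalize(rng.normal(size=(32, 8)))  # toy "SBERT" text embeddings
W_true = rng.normal(size=(8, 4))
Y = l2_normalize(X @ W_true)                # toy target "KG" embeddings

# Train the linear map with the MSE embedding loss by gradient descent.
W = np.zeros((8, 4))
lr = 0.5
initial_mse = float(np.mean((X @ W - Y) ** 2))
for _ in range(500):
    residual = X @ W - Y
    W -= lr * (2.0 / len(X)) * X.T @ residual   # gradient of mean squared error
final_mse = float(np.mean((X @ W - Y) ** 2))
```

        <p>
          A deeper MLP replaces the single matrix in the actual pipeline, but the training loop and loss are of this shape.
        </p>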
          <p>Inference. At inference time, we encode job titles via
fine-tuned SBERT to obtain text embedding vectors, then
compute graph embedding vectors, and calculate cosine
similarity to estimate the STR score.</p>
        </sec>
        <sec id="sec-3-4-2">
          <title>3.4.2. Skill Selection and Graph Pruning</title>
          <p>To prevent generic skills from dominating the graph and
explanations, we remove skills with a job share &gt; 20%.
Remaining skills are reweighted by a specificity score (the
inverse of degree centrality). This yields a sparser, more
discriminative KG and improves both retrieval quality and explanation
specificity.</p>
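          <p>
            The pruning rule reduces to a few lines; the toy jobs and skills below are invented for illustration, and specificity is computed as one minus a skill’s job share, consistent with the inverse-degree scores quoted later (0.67 for a specific skill, 0.0 for a fully generic one).
          </p>

```python
from collections import Counter

def prune_and_score(job_skills, max_share=0.20):
    """Drop skills whose job share exceeds max_share; score survivors by
    specificity = 1 - job_share (an inverse degree-centrality measure)."""
    n_jobs = len(job_skills)
    degree = Counter(skill for skills in job_skills.values() for skill in set(skills))
    return {skill: 1.0 - count / n_jobs
            for skill, count in degree.items() if count / n_jobs <= max_share}

jobs = {
    "Data Scientist": {"python", "communication"},
    "ML Engineer":    {"python", "communication"},
    "Accountant":     {"excel", "communication"},
    "Auditor":        {"excel", "communication"},
    "NLP Researcher": {"pytorch", "communication"},
}
kept = prune_and_score(jobs)  # "communication" (share 1.0) is pruned
```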
        </sec>
      </sec>
      <sec id="sec-3-5">
        <title>3.5. Stratified Data Sampling</title>
        <p>To better understand and evaluate model behavior across
varying levels of semantic textual relatedness (STR), we
partition the continuous STR range into three semantically
meaningful zones (Figure 1), guided by domain expertise in
HR:
• Low STR (0.0–0.50): Pairs of job titles that are largely
unrelated or noisy, often representing very different
occupations or sectors.
• Medium STR (0.50–0.75): Ambiguous or borderline
cases, which tend to be more challenging due to
partial overlap in semantics.
• High STR (0.75–1.0): Highly related job title pairs,
including near-duplicates, synonyms, or slight
variations of the same role.</p>
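        <p>
          The banding and the per-region error metric used later can be sketched as follows; the gold and predicted scores are illustrative values, not drawn from our dataset.
        </p>

```python
import math

BANDS = ("low", "medium", "high")

def region(score: float) -> str:
    """Map a gold STR score to its band (boundaries follow Section 3.5)."""
    if score < 0.50:
        return "low"
    if score < 0.75:
        return "medium"
    return "high"

def rmse_by_region(gold, pred):
    """Compute RMSE separately within each STR band."""
    buckets = {name: [] for name in BANDS}
    for g, p in zip(gold, pred):
        buckets[region(g)].append((g - p) ** 2)
    return {name: math.sqrt(sum(v) / len(v)) for name, v in buckets.items() if v}

gold = [0.10, 0.40, 0.60, 0.70, 0.80, 0.95]
pred = [0.20, 0.35, 0.55, 0.75, 0.70, 0.90]
per_region = rmse_by_region(gold, pred)
```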
        <p>
          This stratification enables a more nuanced evaluation
of model performance, as global performance metrics (e.g.,
overall RMSE or Pearson correlation) may obscure variation
in model behavior across different semantic regions. We
hypothesize that some models will perform better in specific
STR regions while underperforming in others. For example,
models trained with Cosine Similarity Loss may effectively
distinguish highly similar and dissimilar titles but struggle
with borderline cases.
        </p>
      </sec>
      <sec id="sec-3-6">
        <title>3.6. Data</title>
        <p>
          The raw data are obtained from several open-source
collections: a Kaggle dataset [
          <xref ref-type="bibr" rid="ref57">57</xref>
          ], a list of granular skills and
competencies from ESCO [
          <xref ref-type="bibr" rid="ref58">58</xref>
          ], and a list of broad job functions from the
Indeed job site [
          <xref ref-type="bibr" rid="ref59">59</xref>
          ]. Table 6 (see Appendix A) describes the
data files consumed and produced by the system.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results &amp; Discussion</title>
      <p>Each vectorization model is evaluated on the same validation
dataset where the predicted STR is calculated using the
cosine similarity between the two final embedding vectors. We
calculate the global RMSE as well as RMSE values for every
data region of interest (i.e. Low STR, Medium STR, High
STR), which are presented in Table 2.</p>
      <p>We also perform paired t-tests to reveal how each model’s
performance varies across STR regions (Low, Medium, High)
based on absolute errors (Table 3 and Figure 2).</p>
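      <p>
        For reference, the paired t-statistic over matched absolute-error samples can be computed as below; the error values are illustrative only, and in practice scipy.stats.ttest_rel would typically be used, which also returns a p-value.
      </p>

```python
import math

def paired_t(errors_a, errors_b):
    """Paired t-statistic over per-pair differences of absolute errors."""
    diffs = [a - b for a, b in zip(errors_a, errors_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical absolute errors for one model on matched low- and
# high-STR evaluation pairs (values are illustrative only).
low_err  = [0.30, 0.28, 0.35, 0.31, 0.29]
high_err = [0.12, 0.15, 0.10, 0.14, 0.11]
t_stat = paired_t(low_err, high_err)  # positive: larger errors in low STR
```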
      <table-wrap id="tab2">
        <label>Table 2</label>
        <caption>
          <p>Global and per-region RMSE for each vectorization model</p>
        </caption>
        <table>
          <thead>
            <tr><th>Vectorizer</th><th>Global STR</th><th>Low STR</th><th>Medium STR</th></tr>
          </thead>
          <tbody>
            <tr><td>JOBBERT</td><td>0.28</td><td>0.38</td><td>0.11</td></tr>
            <tr><td>JOBBERT-F</td><td>0.17</td><td>0.16</td><td>0.17</td></tr>
            <tr><td>MPNET</td><td>0.29</td><td>0.14</td><td>0.36</td></tr>
            <tr><td>MPNET-F</td><td>0.16</td><td>0.14</td><td>0.16</td></tr>
            <tr><td>MPNET+RGCN</td><td>0.23</td><td>0.30</td><td>0.14</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <sec id="sec-4-1">
        <title>4.1. Evaluation of Region-Specific Model Behavior</title>
        <p>Our analysis highlights the limitations of aggregate metrics
such as global RMSE, which may mask meaningful
variation in model performance across different STR zones, as
demonstrated in Table 2. Stratified evaluation reveals that
models often exhibit asymmetric performance as evident
from Figure 2. A model may perform reliably in
distinguishing unrelated job titles (low STR), yet be less consistent in
matching similar roles (medium to high STR).</p>
        <p>The paired t-test analysis reveals statistically significant
differences in model performance across STR regions, as
measured by absolute errors.
</p>
        <sec id="sec-4-1-1">
          <title>4.1.1. JOBBERT</title>
          <p>JOBBERT demonstrates positive t-statistics in “Low vs
Medium” and “Low vs High”, which suggests that the model
performs better in the Medium and High STR bands compared
to Low. Conversely, the negative t-statistic in “Medium vs
High” implies superior performance in the Medium band
(Figure 3).</p>
        </sec>
        <sec id="sec-4-1-2">
          <title>4.1.2. JOBBERT-F</title>
          <p>JOBBERT-F exhibits fewer significant differences: the
low–medium and low–high contrasts are significant, but the
medium–high differences are not. This stability in the medium
and high STR regions may reflect the benefits of fine-tuning
in reducing variance for more semantically similar pairs
(Figure 4).</p>
        </sec>
        <sec id="sec-4-1-3">
          <title>4.1.3. MPNET</title>
          <p>MPNET shows strong, significant differences across all
region pairs, with particularly large negative t-values for the
low–medium and low–high comparisons, indicating much
lower errors in the low STR region than in the other regions
(Figure 5).</p>
        </sec>
        <sec id="sec-4-1-4">
          <title>4.1.4. MPNET-F</title>
          <p>MPNET-F shows strong, significant differences across all
region pairs, with particularly large negative t-values for the
low–medium and low–high comparisons. However, it
shows no significant difference between the medium and high
STR regions, suggesting more consistent performance at the higher
end of the similarity spectrum (Figure 6).</p>
        </sec>
        <sec id="sec-4-1-5">
          <title>4.1.5. MPNET+RGCN</title>
          <p>MPNET+RGCN demonstrates significant differences for all
region pairs, with large positive t-values, implying higher
errors in the low STR region and substantially lower errors in the
medium and high STR regions. This suggests that KG integration
particularly benefits the model’s handling of semantically
similar job pairs (Figure 7).</p>
        </sec>
        <p>Overall, these results confirm our hypothesis that models
behave differently across STR regions, and that fine-tuning
or KG integration can improve performance stability in
specific ranges.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.2. Explainability Analysis</title>
        <p>A key motivation of our work is to provide explainable
job-to-job matches through the integration of knowledge
graphs, linking jobs and skills, and providing structured
explanations.</p>
        <p>Figures 8 and 9 present two illustrative cases: one good
match and one poor match. For the high-STR pair “Senior
Performance and Project Analyst” vs. “Director, eCommerce
&amp; Retail”, the explanation highlights a shared set of
highly specific skills (e.g., “supervise brand management” with a
specificity of 0.67). In contrast, the low-quality match “Executive
Office Assistant” vs. “Help Desk Shift Supervisor” is driven
by overly generic skills (e.g., “supervise office workers” with a
specificity of 0.0). Such explanations increase transparency
by showing whether similarity arises from meaningful or
spurious overlaps.</p>
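        <p>
          An explanation of this kind reduces to ranking the skills two jobs share by their specificity scores. The sketch below reuses the two specificity values quoted above; the third skill and the job skill sets are invented for illustration.
        </p>

```python
def explain_match(skills_a, skills_b, specificity):
    """Rank the skills shared by two jobs, most specific first; a
    high-specificity overlap suggests a meaningful match, a generic
    overlap a spurious one."""
    return sorted(skills_a & skills_b,
                  key=lambda s: specificity.get(s, 0.0), reverse=True)

specificity = {"supervise brand management": 0.67,
               "supervise office workers": 0.0,
               "manage budgets": 0.40}
job_a = {"supervise brand management", "manage budgets"}
job_b = {"supervise brand management", "supervise office workers",
         "manage budgets"}
ranked = explain_match(job_a, job_b, specificity)
```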
        <p>These results strengthen the claim that KGs improve
explainability. Good matches can be justified by showing
the specific shared skills that drive similarity, while poor
matches can be diagnosed by revealing over-reliance on
generic skills.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.3. Practical Implications</title>
        <p>Our findings have important practical implications for
downstream tasks such as re-ranking or candidate filtering. When
irrelevant matches have already been discarded, the
primary challenge lies in fine-grained differentiation among
relevant alternatives. In such contexts, a model’s
behavior in the medium to high STR range becomes especially
critical (Figure 7). Conversely, when filtering for dissimilar
job pairs (e.g., in deduplication or anomaly detection tasks),
performance in the low STR range is of greater interest
(Figure 5). This suggests that a general-purpose pre-trained
Language Model can be used in the initial stages of the
recommendation pipeline, with a transition to fine-tuned,
domain-specific models at a later stage when more detailed
distinctions are necessary.</p>
        <p>Stratified evaluation also allows us to observe localized
performance patterns and better characterize where models
succeed or fail. This approach informs both model selection
and training strategies — such as adapting loss functions to
focus on under-performing regions or augmenting training
data with examples from underrepresented STR bands.</p>
        <p>Furthermore, by integrating Knowledge Graphs into the
semantic matching process, we provide a structured
reasoning path behind recommendations. This may improve
transparency, build trust with stakeholders, and help
recruiters justify why a specific job was recommended.</p>
        <p>Although focused on job title and skill matching, our
methodology can be applied to other domains such as
academic paper recommendation, product matching, or legal
case retrieval.</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.4. Future Work</title>
        <p>Although our current approach provides a foundation for
learning semantic representations of job titles using
graph-based and textual signals, there are opportunities for future
improvement and expansion.</p>
        <p>First, our knowledge graph construction is limited to
skill-based relationships. Incorporating additional semantic
dimensions such as industry classifications, job seniority
levels (e.g., “Lead Engineer” vs. “Intern”), and domain-specific
contexts (e.g., “Data Scientist – Healthcare” vs. “Data
Scientist – Finance”) would provide a more comprehensive
representation of the job landscape.</p>
        <p>Second, the scope of our current model evaluation is
constrained. We only explore a limited set of graph embedding
models, focus exclusively on Cosine Similarity Loss, and
implement a single negative sampling strategy. Future research
should embrace a broader range of models, loss functions
(e.g., contrastive loss, triplet loss), and negative sampling
strategies to assess their effect on model performance.</p>
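<p>For example, a triplet loss over job-title embeddings can be sketched in plain Python with cosine distance; the margin value and the two-dimensional vectors below are illustrative only:</p>

```python
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss on cosine distance: the negative sample must sit at least
    `margin` farther from the anchor than the positive sample."""
    d_pos = 1.0 - cosine_similarity(anchor, positive)
    d_neg = 1.0 - cosine_similarity(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)

# A well-separated triplet incurs zero loss; a swapped one is penalized.
ok  = triplet_loss([1.0, 0.0], [0.9, 0.1], [0.0, 1.0])  # -> 0.0
bad = triplet_loss([1.0, 0.0], [0.0, 1.0], [0.9, 0.1])
```

<p>The choice of negative sampling strategy determines which triplets the loss actually sees, which is why evaluating several strategies is a natural next step.</p>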
        <p>Third, our work focuses exclusively on job-to-job
matching. Extending the framework to job-to-resume and
resume-to-job matching tasks could make it more useful in
real-world recruitment systems.</p>
        <p>
          Additionally, our approach currently relies on structured
skill taxonomies such as ESCO [
          <xref ref-type="bibr" rid="ref58">58</xref>
          ] or O*NET [
          <xref ref-type="bibr" rid="ref60">60</xref>
          ], which
may limit generalizability in domains with less formalized
ontologies. Also, we do not address multi-lingual job
descriptions, which presents an important direction for future
development, particularly for global labor markets.
        </p>
        <p>Moreover, our reliance on weak supervision introduces
potential label noise, especially in low-similarity cases,
which raises concerns about general robustness and may
limit the model’s ability to generalize.</p>
        <p>Finally, we train and evaluate on a relatively small dataset,
which may not capture the full variability of job title
semantics across sectors or regions. Scaling the dataset and
introducing data augmentation techniques or semi-supervised
learning methods could mitigate this limitation and improve
the robustness of the model.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>This study sets the direction for addressing a critical
challenge in HR applications: the need for explainable and
transparent recommendations. Although text embedding models
like SBERT capture complex contextual semantics, they
often lack interpretability. To overcome this limitation, we
introduce a hybrid approach that combines Semantic Textual
Relatedness (expressed by fine-tuned SBERT embeddings)
with domain-specific knowledge graphs using Graph
Neural Networks. This combination enables not only enhanced
performance but also the ability to trace reasoning paths
between matched job titles, an essential feature for
auditable and trustworthy decision-making in hiring contexts.
As the use of Artificial Intelligence in recruitment systems
expands, approaches that prioritize both semantic depth and
interpretability will be key to ensuring fairness and user
trust.</p>
      <p>Ultimately, this research contributes to building
intelligent, equitable and explainable recommendation systems
that serve both candidates and employers in a dynamic and
evolving labor market.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This research was partially supported by the Horizon
Europe project GenDAI (Grant Agreement ID: 101182801) and
by the ADAPT Research Centre at Munster Technological
University. ADAPT is funded by Taighde Éireann –
Research Ireland through the Research Centres Programme
and co-funded under the European Regional Development
Fund (ERDF) via Grant 13/RC/2106_P2.</p>
      <p>We would also like to thank the anonymous reviewers
for their valuable feedback and constructive suggestions,
which have helped to improve the quality and clarity of this
work.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>
        In our literature review process, we employed Scite AI
Research Assistant [
        <xref ref-type="bibr" rid="ref61">61</xref>
        ], which allowed a more comprehensive
review of the existing literature, ensuring that the sources
included in this study were relevant and reliable. We
utilized OpenAI Code Assistant [
        <xref ref-type="bibr" rid="ref62">62</xref>
        ] to accelerate code
development and improve productivity during the implementation
phase. The assistant was used to generate boilerplate code,
troubleshoot and debug runtime errors, and explore
alternative design patterns. We also used Grammarly [
        <xref ref-type="bibr" rid="ref63">63</xref>
        ] and
Microsoft Copilot [
        <xref ref-type="bibr" rid="ref64">64</xref>
        ] for paraphrasing and corrections of
grammatical, syntactical, and other writing errors. After
using these tools/services, the authors reviewed and edited
the content as needed and take full responsibility for the
publication’s content. Generative AI tools were not used
for data analysis, experimentation, or the formulation of
hypotheses and conclusions.
      </p>
    </sec>
    <sec id="sec-8">
      <title>6. Appendices</title>
    </sec>
    <sec id="sec-9">
      <title>A. Tables</title>
<table-wrap id="tabA1">
  <caption>
    <p>Configuration parameters.</p>
  </caption>
  <table>
    <thead>
      <tr><th>Parameter</th><th>Description</th></tr>
    </thead>
    <tbody>
      <tr><td>High STR Region</td><td>Data region in which STR is considered 'high'</td></tr>
      <tr><td>Medium STR Region</td><td>Data region in which STR is considered 'medium'</td></tr>
      <tr><td>Low STR Region</td><td>Data region in which STR is considered 'low'</td></tr>
      <tr><td>Number of Skills Per Job</td><td>Maximum number of skills assigned to a job</td></tr>
      <tr><td>Job-Skill STR Threshold</td><td>Minimum STR value at which a skill is considered related to a job</td></tr>
      <tr><td>Skill-Skill Threshold</td><td>Minimum STR value at which a child skill is considered related to a parent skill</td></tr>
      <tr><td>Text Epochs</td><td>Number of epochs for SBERT fine-tuning</td></tr>
      <tr><td>Graph Epochs</td><td>Number of epochs for KG model training</td></tr>
    </tbody>
  </table>
</table-wrap>
<table-wrap id="tabA2">
  <caption>
    <p>Input and output files.</p>
  </caption>
  <table>
    <thead>
      <tr><th>File</th><th>Type</th><th>Description</th></tr>
    </thead>
    <tbody>
      <tr><td>'source_jobs.csv'</td><td>Input</td><td>Derived from a Kaggle dataset [<xref ref-type="bibr" rid="ref57">57</xref>]; contains 14,000 records covering a wide range of professional roles: technical, creative, educational, financial, administrative, and operational.</td></tr>
      <tr><td>'source_skills.csv'</td><td>Input</td><td>Derived from a list of 14,000 skills and competences [<xref ref-type="bibr" rid="ref58">58</xref>] and a list of 50 skill categories [<xref ref-type="bibr" rid="ref59">59</xref>].</td></tr>
      <tr><td>'source_skill_hierarchy.csv'</td><td>Input</td><td>Defines relationships between skill categories; for example, 'Marketing' and 'Sales' are grouped under 'Sales &amp; Marketing'.</td></tr>
      <tr><td>'train_job_title_pairs.csv'</td><td>Output</td><td>Training dataset. Each row includes anchor text (a job title that is the basis for comparison), sample text (a job title), and the STR score between the anchor and the sample.</td></tr>
      <tr><td>'eval_job_title_pairs.csv'</td><td>Output</td><td>Evaluation dataset with the same structure as the training dataset.</td></tr>
    </tbody>
  </table>
</table-wrap>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Chandrasekaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Mago</surname>
          </string-name>
          ,
          <article-title>Evolution of semantic similarity-a survey</article-title>
          ,
          <source>ACM Comput. Surv</source>
          .
          <volume>54</volume>
          (
          <year>2021</year>
). doi:10.1145/3440755.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
,
<article-title>Sensemap: Urban performance visualization and analytics via semantic textual similarity</article-title>
          ,
          <source>IEEE Transactions on Visualization and Computer Graphics</source>
          <volume>30</volume>
          (
          <year>2024</year>
          )
          <fpage>6275</fpage>
          -
          <lpage>6290</lpage>
. doi:10.1109/TVCG.2023.3333356.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Gaur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dwivedi</surname>
          </string-name>
          ,
          <article-title>Knowledge graph-based evaluation metric for conversational ai systems: A step towards quantifying semantic textual similarity</article-title>
          , in: S. Dhar,
          <string-name>
            <given-names>S.</given-names>
            <surname>Goswami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Dinesh Kumar</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Bose</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Dubey</surname>
          </string-name>
, C. Mazumdar (Eds.),
          <source>AGC 2023</source>
          , Springer Nature Switzerland, Cham,
          <year>2024</year>
          , pp.
          <fpage>112</fpage>
          -
          <lpage>124</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Cruse</surname>
          </string-name>
, Lexical semantics, Cambridge University Press,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Webb</surname>
          </string-name>
          ,
          <article-title>Word families and lemmas, not a real dilemma: Investigating lexical units</article-title>
          ,
          <source>Studies in Second Language Acquisition</source>
          <volume>43</volume>
          (
          <year>2021</year>
          )
          <fpage>973</fpage>
          -
          <lpage>984</lpage>
. doi:10.1017/S0272263121000760.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Š.</given-names>
            <surname>Zikánová</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Hajičová</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Vidová-Hladká</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Jínová</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mírovskỳ</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nědolužko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Poláková</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Rysová</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rysová</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Václ</surname>
          </string-name>
          ,
<article-title>Discourse and coherence: from the sentence structure to relations in text</article-title>
,
<source>Ústav formální a aplikované lingvistiky</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chen</surname>
          </string-name>
          , G. Corrado,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dean</surname>
          </string-name>
          ,
<article-title>Efficient estimation of word representations in vector space</article-title>
          ,
          <source>arXiv preprint arXiv:1301.3781</source>
          (
          <year>2013</year>
). URL: https://arxiv.org/pdf/1301.3781.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          ,
          <article-title>BERT: pre-training of deep bidirectional transformers for language understanding</article-title>
, in: J. Burstein, C. Doran, T. Solorio (Eds.),
          <source>Proceedings of the</source>
          <year>2019</year>
          <article-title>Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis</article-title>
          , MN, USA, June 2-7,
          <year>2019</year>
          , Volume
          <volume>1</volume>
          (Long and Short Papers),
          <source>Association for Computational Linguistics</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>4171</fpage>
          -
          <lpage>4186</lpage>
. URL: https://doi.org/10.18653/v1/n19-1423. doi:10.18653/V1/N19-1423.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>N.</given-names>
            <surname>Reimers</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Gurevych</surname>
          </string-name>
          ,
          <article-title>Sentence-bert: Sentence embeddings using siamese bert-networks</article-title>
, arXiv preprint arXiv:1908.10084 (
<year>2019</year>
). doi:10.18653/v1/d19-1410.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Abdalla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Vishnubhotla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Mohammad</surname>
          </string-name>
          ,
<article-title>What makes sentences semantically related: a textual relatedness dataset and empirical study</article-title>
(
<year>2021</year>
). doi:10.48550/arxiv.2110.04845.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G.</given-names>
            <surname>Sloan</surname>
          </string-name>
          ,
          <article-title>Relational ambiguity between sentences</article-title>
          ,
          <source>College Composition and Communication</source>
          <volume>39</volume>
          (
          <year>1988</year>
          )
          <fpage>154</fpage>
          -
          <lpage>165</lpage>
          . URL: http://www.jstor.org/stable/358025.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Miyabe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Takamura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Okumura</surname>
          </string-name>
          ,
          <article-title>Identifying cross-document relations between sentences</article-title>
          ,
          <source>in: Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lombardo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Boiardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Colombo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Schiavone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Tamagnone</surname>
          </string-name>
          ,
          <article-title>Top-rank-focused adaptive vote collection for the evaluation of domainspecific semantic models</article-title>
, in: B. Webber, T. Cohn, Y. He, Y. Liu (Eds.),
          <source>Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)</source>
, Association for Computational Linguistics, Online,
<year>2020</year>
, pp.
<fpage>3081</fpage>
-
<lpage>3093</lpage>
. URL: https://aclanthology.org/2020.emnlp-main.249/. doi:10.18653/v1/2020.emnlp-main.249.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <article-title>Recommender system for e-learning based on semantic relatedness of concepts</article-title>
          ,
          <source>Information</source>
          <volume>6</volume>
          (
          <year>2015</year>
          )
          <fpage>443</fpage>
          -
          <lpage>453</lpage>
. doi:10.3390/info6030443.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Likavec</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Osborne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cena</surname>
          </string-name>
          ,
          <article-title>Property-based semantic similarity and relatedness for improving recommendation accuracy and diversity</article-title>
          ,
          <source>International Journal on Semantic Web and Information Systems (IJSWIS) 11</source>
          (
          <year>2015</year>
          )
          <fpage>1</fpage>
          -
          <lpage>40</lpage>
. doi:10.4018/ijswis.2015100101.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Natarajan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vairavasundaram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kotecha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Indragandhi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Palani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Saini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ravi</surname>
          </string-name>
          ,
          <article-title>Cd-semmf: Crossdomain semantic relatedness based matrix factorization model enabled with linked open data for user cold start issue</article-title>
          ,
          <source>IEEE Access 10</source>
          (
          <year>2022</year>
          )
          <fpage>52955</fpage>
          -
          <lpage>52970</lpage>
. doi:10.1109/ACCESS.2022.3175566.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>I.</given-names>
            <surname>Rahhal</surname>
          </string-name>
          ,
K. M. Carley, I. Kassou, M. Ghogho
          ,
          <article-title>Two stage job title identification system for online job advertisements</article-title>
          ,
          <source>IEEE Access 11</source>
          (
          <year>2023</year>
          )
          <fpage>19073</fpage>
          -
          <lpage>19092</lpage>
. doi:10.1109/ACCESS.2023.3247866.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Tanberk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Helli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Kesim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. N.</given-names>
            <surname>Cavsak</surname>
          </string-name>
          ,
          <article-title>Resume matching framework via ranking and sorting using nlp and deep learning</article-title>
          ,
          <source>in: 2023 8th International Conference on Computer Science and Engineering (UBMK)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>453</fpage>
          -
          <lpage>458</lpage>
. doi:10.1109/UBMK59864.2023.10286605.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Pias</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hossain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Rahman</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. M. Hossain</surname>
          </string-name>
          ,
          <article-title>Enhancing job matching through natural language processing: A bert-based approach</article-title>
          , in: 2024 International Conference on Innovations in Science,
          <source>Engineering and Technology (ICISET)</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
. doi:10.1109/ICISET62123.2024.10939860.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kaya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Bogers</surname>
          </string-name>
          ,
          <article-title>An exploration of sentencepair classification for algorithmic recruiting</article-title>
          ,
          <source>in: Proceedings of the 17th ACM Conference on Recommender Systems</source>
, RecSys '23, Association for Computing Machinery, New York, NY, USA,
<year>2023</year>
, pp.
<fpage>1175</fpage>
-
<lpage>1179</lpage>
. doi:10.1145/3604915.3610657.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Rezaeipourfarsangi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. E.</given-names>
            <surname>Milios</surname>
          </string-name>
          ,
          <article-title>Ai-powered resume-job matching: A document ranking approach using deep neural networks</article-title>
          ,
          <source>in: Proceedings of the ACM Symposium on Document Engineering</source>
          <year>2023</year>
, DocEng '23, Association for Computing Machinery, New York, NY, USA,
<year>2023</year>
. doi:10.1145/3573128.3609347.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>J.</given-names>
            <surname>Rosenberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wolfrum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Weinzierl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kraus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zschech</surname>
          </string-name>
,
<article-title>CareerBERT: Matching resumes to ESCO jobs in a shared embedding space for generic job recommendations</article-title>
,
<source>Expert Systems with Applications</source>
(
<year>2025</year>
)
<fpage>127043</fpage>
. URL: https://www.proquest.com/scholarly-journals/careerbert-matching-resumes-esco-jobs-shared/docview/3213850804/se-2.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
G. L, R. M L, G. H B, K. Mathada, M. B,
<article-title>Intelligent resume scrutiny using named entity recognition with bert</article-title>
          ,
          <source>in: 2023 International Conference on Data Science and Network Security (ICDSNS)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>01</fpage>
          -
          <lpage>08</lpage>
. doi:10.1109/ICDSNS58469.2023.10245304.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24] <string-name><given-names>P.</given-names> <surname>Dhobale</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Bhoir</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Vyavhare</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Yelkar</surname></string-name>, <string-name><given-names>P. A.</given-names> <surname>Dharmadhikari</surname></string-name>, <article-title>Resumatcher: An intelligent resume ranking system</article-title>, <source>in: 2025 3rd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT)</source>, <year>2025</year>, pp. <fpage>1778</fpage>-<lpage>1783</lpage>. doi:10.1109/IDCIOT64235.2025.10915179.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25] <string-name><given-names>J.</given-names> <surname>Tang</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Pan</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>He</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Zhao</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Li</surname></string-name>, <article-title>A person-job matching method based on bm25 and pre-trained language model</article-title>, <source>in: Proceedings of the 2023 6th International Conference on Machine Learning and Natural Language Processing</source>, MLNLP '23, Association for Computing Machinery, New York, NY, USA, <year>2024</year>, pp. <fpage>78</fpage>-<lpage>83</lpage>. URL: https://doi-org.mtu.idm.oclc.org/10.1145/3639479.3639494. doi:10.1145/3639479.3639494.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26] <string-name><given-names>R.</given-names> <surname>Ramyar</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Nagarani</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Natarajan</surname></string-name>, <article-title>Deep learning based approach to streamline resume categorization and ranking</article-title>, <source>in: 2024 International Conference on IoT Based Control Networks and Intelligent Systems (ICICNIS)</source>, <year>2024</year>, pp. <fpage>840</fpage>-<lpage>845</lpage>. doi:10.1109/ICICNIS64247.2024.10823187.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27] <string-name><given-names>M. A.</given-names> <surname>Aleisa</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Beloff</surname></string-name>, <string-name><given-names>M.</given-names> <surname>White</surname></string-name>, <article-title>Implementing airm: a new ai recruiting model for the saudi arabia labour market</article-title>, <source>Journal of Innovation and Entrepreneurship</source> <volume>12</volume> (<year>2023</year>) <fpage>59</fpage>. URL: https://www.proquest.com/scholarly-journals/implementing-airm-new-ai-recruiting-model-saudi/docview/2864704180/se-2.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28] <string-name><given-names>M.</given-names> <surname>Zhao</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Dufter</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Yaghoobzadeh</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Schütze</surname></string-name>, <article-title>Quantifying the contextualization of word representations with semantic class probing</article-title>, <source>Findings of the Association for Computational Linguistics: EMNLP 2020</source> (<year>2020</year>). doi:10.18653/v1/2020.findings-emnlp.109.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29] <string-name><given-names>K.</given-names> <surname>Misra</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Ettinger</surname></string-name>, <string-name><given-names>J. T.</given-names> <surname>Rayz</surname></string-name>, <article-title>Exploring bert's sensitivity to lexical cues using tests from semantic priming</article-title>, <source>Findings of the Association for Computational Linguistics: EMNLP 2020</source> (<year>2020</year>) <fpage>4625</fpage>-<lpage>4635</lpage>. doi:10.18653/v1/2020.findings-emnlp.415.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30] <string-name><given-names>A.</given-names> <surname>Zou</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Zhao</surname></string-name>, <article-title>Eventbert: incorporating event-based semantics for natural language understanding</article-title>, <source>Lecture Notes in Computer Science</source> (<year>2022</year>) <fpage>66</fpage>-<lpage>80</lpage>. doi:10.1007/978-3-031-18315-7_5.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31] <string-name><given-names>A.</given-names> <surname>Hammami</surname></string-name>, <string-name><given-names>S. B.</given-names> <surname>Abdallah Ben Lamine</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Baazaoui</surname></string-name>, <article-title>Bert-based semantic relations extraction from large-scale medical datasets</article-title>, <source>in: 2024 IEEE International Conference on Big Data (BigData)</source>, <year>2024</year>, pp. <fpage>6460</fpage>-<lpage>6468</lpage>. doi:10.1109/BigData62323.2024.10825162.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32] <article-title>Regulation (EU) 2024/1689 of the European Parliament and of the Council of 12 July 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)</article-title>, <source>Official Journal of the European Union, L 168</source>, 12 July 2024, pp. <fpage>1</fpage>-<lpage>157</lpage>, <year>2024</year>. URL: https://artificialintelligenceact.eu/. Applies from 2 February 2025.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33] <string-name><given-names>H.</given-names> <surname>Min</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Yang</surname></string-name>, <string-name><given-names>D. G.</given-names> <surname>Allen</surname></string-name>, <string-name><given-names>A. A.</given-names> <surname>Grandey</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Liu</surname></string-name>, <article-title>Wisdom from the crowd: can recommender systems predict employee turnover and its destinations?</article-title>, <source>Personnel Psychology</source> <volume>77</volume> (<year>2022</year>) <fpage>475</fpage>-<lpage>496</lpage>. doi:10.1111/peps.12551.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34] <string-name><given-names>Y.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Chen</surname></string-name>, <article-title>Explainable recommendation: A survey and new perspectives</article-title>, <source>Foundations and Trends® in Information Retrieval</source> (<year>2020</year>). doi:10.1561/1500000066.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35] <string-name><given-names>G.</given-names> <surname>Bied</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Nathan</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Perennes</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Hofmann</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Caillou</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Crépon</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Gaillac</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Sebag</surname></string-name>, <article-title>Toward Job Recommendation for All</article-title>, <source>in: IJCAI 2023 - The 32nd International Joint Conference on Artificial Intelligence</source>, International Joint Conferences on Artificial Intelligence Organization, Macau, China, <year>2023</year>, pp. <fpage>5906</fpage>-<lpage>5914</lpage>. URL: https://hal.science/hal-04245528. doi:10.24963/ijcai.2023/655.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36] <string-name>L. D. Chao Yang</string-name>, <article-title>Intelligent talent recommendation algorithm for college students for the future job market</article-title>, <source>JES</source> (<year>2024</year>). doi:10.52783/jes.1721.
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37] <string-name><given-names>J.</given-names> <surname>Yao</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Xu</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Gao</surname></string-name>, <article-title>A study of reciprocal job recommendation for college graduates integrating semantic keyword matching and social networking</article-title>, <source>Applied Sciences</source> (<year>2023</year>). doi:10.3390/app132212305.
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38] <string-name><given-names>R.</given-names> <surname>Schellingerhout</surname></string-name>, <article-title>Explainable multi-stakeholder job recommender systems</article-title> (<year>2024</year>). doi:10.1145/3640457.3688014.
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39] <string-name><given-names>R.</given-names> <surname>Schellingerhout</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Barile</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Tintarev</surname></string-name>, <article-title>A co-design study for multi-stakeholder job recommender system explanations</article-title> (<year>2023</year>). doi:10.1007/978-3-031-44067-0_30.
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [40] <string-name><given-names>L.</given-names> <surname>Yao</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Mao</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Luo</surname></string-name>, <article-title>Kg-bert: Bert for knowledge graph completion</article-title>, <year>2019</year>. URL: https://arxiv.org/abs/1909.03193. arXiv:1909.03193.
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [41] <string-name><given-names>X.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Hussain</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Razouk</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Kern</surname></string-name>, <article-title>Effective use of bert in graph embeddings for sparse knowledge graph completion</article-title>, <source>in: Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing</source>, SAC '22, Association for Computing Machinery, New York, NY, USA, <year>2022</year>, pp. <fpage>799</fpage>-<lpage>802</lpage>. URL: https://doi.org/10.1145/3477314.3507031. doi:10.1145/3477314.3507031.
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [42] <string-name><given-names>Y.</given-names> <surname>Xu</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Su</surname></string-name>, <article-title>Boosting bert-based knowledge graph completion with contrastive learning and hard sample training</article-title>, <source>Procedia Computer Science</source> <volume>222</volume> (<year>2023</year>) <fpage>71</fpage>-<lpage>80</lpage>. URL: https://www.sciencedirect.com/science/article/pii/S1877050923009109. doi:10.1016/j.procs.2023.08.145. International Neural Network Society Workshop on Deep Learning Innovations and Applications (INNS DLIA 2023).
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [43] <string-name><given-names>B.</given-names> <surname>Kim</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Hong</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Ko</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Seo</surname></string-name>, <article-title>Multi-task learning for knowledge graph completion with pre-trained language models</article-title>, in: <string-name>D. Scott</string-name>, <string-name>N. Bel</string-name>, <string-name>C. Zong</string-name> (Eds.), <source>Proceedings of the 28th International Conference on Computational Linguistics</source>, International Committee on Computational Linguistics, Barcelona, Spain (Online), <year>2020</year>, pp. <fpage>1737</fpage>-<lpage>1743</lpage>. URL: https://aclanthology.org/2020.coling-main.153/. doi:10.18653/v1/2020.coling-main.153.
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [44] <string-name><given-names>H.</given-names> <surname>Gul</surname></string-name>, <string-name><given-names>A. G.</given-names> <surname>Naim</surname></string-name>, <string-name><given-names>A. A.</given-names> <surname>Bhat</surname></string-name>, <article-title>A contextualized bert model for knowledge graph completion</article-title>, <year>2024</year>. URL: https://arxiv.org/abs/2412.11016. arXiv:2412.11016.
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          [45] <string-name><given-names>D.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Zhu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Xiong</surname></string-name>, <article-title>Job2vec: Job title benchmarking with collective multi-view representation learning</article-title>, <source>in: Proceedings of the 28th ACM International Conference on Information and Knowledge Management</source>, CIKM '19, Association for Computing Machinery, New York, NY, USA, <year>2019</year>, pp. <fpage>2763</fpage>-<lpage>2771</lpage>. URL: https://doi.org/10.1145/3357384.3357825. doi:10.1145/3357384.3357825.
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          [46] <string-name><given-names>D.</given-names> <surname>Lavi</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Medentsiy</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Graus</surname></string-name>, <article-title>Consultantbert: Fine-tuned siamese sentence-bert for matching jobs and job seekers</article-title>, <year>2021</year>. URL: https://arxiv.org/abs/2109.06501. arXiv:2109.06501.
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          [47] <string-name><given-names>M.</given-names> <surname>Kaya</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Bogers</surname></string-name>, <article-title>Effectiveness of job title based embeddings on résumé to job ad recommendation</article-title>, <source>in: CEUR Workshop Proceedings</source>, volume <volume>2967</volume>, CEUR Workshop Proceedings, <year>2021</year>. URL: https://vbn.aau.dk/ws/portalfiles/portal/464971521/Kaya_Bogers.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          [48] <string-name>J.-J. Decorte</string-name>, <string-name>J. Van Hautte</string-name>, <string-name>T. Demeester</string-name>, <string-name>C. Develder</string-name>, <article-title>Jobbert: Understanding job titles through skills</article-title>, <year>2021</year>. URL: https://www.proquest.com/working-papers/jobbert-understanding-job-titles-through-skills/docview/2574960980/se-2.
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          [49] <string-name><given-names>J.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>Y. C.</given-names> <surname>Ng</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Gui</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Singhal</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Blessing</surname></string-name>, <string-name><given-names>K. L.</given-names> <surname>Wood</surname></string-name>, <string-name><given-names>K. H.</given-names> <surname>Lim</surname></string-name>, <article-title>Title2vec: a contextual job title embedding for occupational named entity recognition and other applications</article-title>, <source>Journal of Big Data</source> <volume>9</volume> (<year>2022</year>). doi:10.1186/s40537-022-00649-5.
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          [50] <string-name><given-names>R.</given-names> <surname>Zbib</surname></string-name>, <string-name><given-names>A. L.</given-names> <surname>Lucas</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Retyk</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Poves</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Aizpuru</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Fabregat</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Simkus</surname></string-name>, <string-name><given-names>E.</given-names> <surname>García-Casademont</surname></string-name>, <article-title>Learning job titles similarity from noisy skill labels</article-title>, <year>2023</year>. doi:10.48550/arXiv.2207.00494.
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          [51] <string-name><given-names>M.</given-names> <surname>Lewis</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Goyal</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Ghazvininejad</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Mohamed</surname></string-name>, <string-name><given-names>O.</given-names> <surname>Levy</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Stoyanov</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Zettlemoyer</surname></string-name>, <article-title>BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension</article-title>, in: <string-name>D. Jurafsky</string-name>, <string-name>J. Chai</string-name>, <string-name>N. Schluter</string-name>, <string-name>J. Tetreault</string-name> (Eds.), <source>Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</source>, Association for Computational Linguistics, Online, <year>2020</year>, pp. <fpage>7871</fpage>-<lpage>7880</lpage>. doi:10.18653/v1/2020.acl-main.703.
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          [52] <string-name><given-names>M.</given-names> <surname>Schlichtkrull</surname></string-name>, <string-name><given-names>T. N.</given-names> <surname>Kipf</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Bloem</surname></string-name>, <string-name><given-names>R.</given-names> <surname>van den Berg</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Titov</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Welling</surname></string-name>, <article-title>Modeling relational data with graph convolutional networks</article-title>, <year>2017</year>. URL: https://arxiv.org/abs/1703.06103. arXiv:1703.06103.
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          [53] <string-name>Google</string-name>, Google Colaboratory, https://colab.research.google.com/, <year>2024</year>. Accessed: 2025-08-02.
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          [54]
          <string-name>Python Core Team</string-name>
          ,
          <article-title>Python: A dynamic, open source programming language</article-title>
          ,
          <source>Python Software Foundation</source>
          ,
          <year>2019</year>
          . URL: https://www.python.org/.
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          [55]
          <string-name>Microsoft Corporation</string-name>
          , Visual Studio, https://visualstudio.microsoft.com/,
          <year>2025</year>
          .
          <article-title>Integrated development environment (IDE) by Microsoft</article-title>
          . Accessed: 2025-08-07.
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>
          [56]
          <string-name>
            <given-names>V.</given-names>
            <surname>Zadykian</surname>
          </string-name>
          , Job title relatedness, ????
        </mixed-citation>
      </ref>
      <ref id="ref57">
        <mixed-citation>
          [57]
          <string-name>
            <given-names>A.</given-names>
            <surname>Koneru</surname>
          </string-name>
          ,
          <article-title>LinkedIn job postings (2023-2024)</article-title>
          ,
          <year>2024</year>
          . doi:10.34740/KAGGLE/DSV/9200871.
        </mixed-citation>
      </ref>
      <ref id="ref58">
        <mixed-citation>
          [58]
          <string-name>European Commission, Directorate-General for Employment, Social Affairs and Inclusion</string-name>
          ,
          <article-title>ESCO handbook - European skills, competences, qualifications and occupations</article-title>
          ,
          <source>Publications Office of the European Union</source>
          ,
          <year>2019</year>
          . doi:10.2767/451182.
        </mixed-citation>
      </ref>
      <ref id="ref59">
        <mixed-citation>
          [59]
          <string-name>Indeed Editorial Team</string-name>
          ,
          <article-title>230 job titles in 17 industries to include on your resume</article-title>
          ,
          <year>2025</year>
          . URL: https://www.indeed.com/career-advice/resumes-cover-letters/job-title.
        </mixed-citation>
      </ref>
      <ref id="ref60">
        <mixed-citation>
          [60] U.S. Department of Labor, O*NET OnLine,
          <year>2025</year>
          . URL: https://www.onetonline.org/, accessed: 2025-08-06.
        </mixed-citation>
      </ref>
      <ref id="ref61">
        <mixed-citation>
          [61]
          <string-name>Scite, Inc.</string-name>
          ,
          <source>AI research assistant</source>
          ,
          <year>2024</year>
          . URL: https://scite.ai/assistant.
        </mixed-citation>
      </ref>
      <ref id="ref62">
        <mixed-citation>
          [62]
          <string-name>
            <surname>OpenAI</surname>
          </string-name>
          , OpenAI code assistant,
          <year>2025</year>
          . URL: https://openai.com. Large language model used for code generation and assistance.
        </mixed-citation>
      </ref>
      <ref id="ref63">
        <mixed-citation>
          [63]
          <string-name>
            <surname>Grammarly</surname>
          </string-name>
          ,
          <article-title>Grammarly grammar checker</article-title>
          , n.d. URL: https://www.grammarly.com/grammar-check, accessed: 2025-04-21.
        </mixed-citation>
      </ref>
      <ref id="ref64">
        <mixed-citation>
          [64]
          <string-name>
            <surname>Microsoft</surname>
          </string-name>
          ,
          <article-title>Copilot (GPT-4) [large language model]</article-title>
          , https://copilot.microsoft.com/,
          <year>2025</year>
          . Accessed: 2025-08-07.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>