<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>AI-Driven Resume Analysis and Enhancement Using Semantic Modeling and Large Language Feedback Loops</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Achal Jagadeesh</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chinmayi Ravi Shankar</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sahithya Narayanaswamy Patel</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Levantesi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giovanni Semeraro</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ernesto William De Luca</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>George Eckert Institute</institution>
          ,
          <addr-line>Brunswick</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Leibniz Institute for Educational Media</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Otto-von-Guericke University</institution>
          ,
          <addr-line>Universitätspl. 2, 39106 Magdeburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University Of Bari Aldo Moro</institution>
          ,
          <addr-line>via E. Orabona 4, 70125, Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>2</fpage>
      <lpage>14</lpage>
      <abstract>
<p>Fairness is increasingly elusive in the current landscape of Artificial Intelligence and Large Language Models. These technologies can easily inject fake or inaccurate information into data, often misrepresenting what truly exists. This problem is widespread across many application domains, including those dealing with user profiles. In the job market in particular, it affects both recruiters and job seekers. Resumes are frequently optimized to fit the job call rather than to reflect genuine qualifications, while automated screening tools may overlook authentic but non-standard profiles. This work proposes a resume analysis and enhancement system. It enables iterative improvement through the use of Large Language Models while preserving the original content. This leads to a consistent improvement in similarity and match quality with job applications. Fairness is achieved not by altering who the candidate is, but by ensuring their actual capabilities are accurately and contextually recognized, thus empowering both evaluators and applicants through authentic enhancement.</p>
      </abstract>
      <kwd-group>
<kwd>Resume enhancement</kwd>
        <kwd>ATS</kwd>
        <kwd>NLP</kwd>
        <kwd>semantic similarity</kwd>
        <kwd>ethical AI</kwd>
        <kwd>Sentence Transformers</kwd>
        <kwd>fairness</kwd>
        <kwd>GPT</kwd>
        <kwd>LLaMA</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>AI-driven resume screening systems, currently widely adopted in company recruitment scenarios, have redefined the process of candidate evaluation [1]. While these systems possess scalability and consistency, they often prioritize standardization over content [
        <xref ref-type="bibr" rid="ref13">2, 3, 4</xref>
        ]. As a result, applicants are implicitly encouraged to fit into rigid patterns using standard templates, inflated action verbs, and keyword-dense summaries that align with the parsing logic of Applicant Tracking Systems (ATS). This leads to a recruitment ecosystem where many resumes are optimized to pass automated filters rather than to authentically represent the candidate’s qualifications, context, or potential. Such practices introduce a significant and often unacknowledged issue: fairness. In current automated systems, fairness is equated with the uniform application of algorithms [5]. However, uniformity is not the same as equity. Two candidates who pursue similar competencies may be treated differently based on how closely their resumes reflect the expected linguistic and structural patterns. Those from non-traditional backgrounds, interdisciplinary fields, or regions with different resume norms may be penalized due to the limitations of automated parsing logic rather than a lack of ability. Moreover, candidates may feel compelled to deviate from or artificially restructure their narratives just to be considered by the system [6].</p>
      <p>This work presents a resume analysis and enhancement system designed around the principle of "contextual fairness" [6]. The system avoids modifying or artificially enhancing a candidate’s narrative. Instead, it enhances what is already present, suggesting section-wise improvements that increase clarity, alignment, and structure without distorting meaning. All suggestions are non-prescriptive and leave the candidate full control over their integration. To achieve this, the system employs two complementary AI components. A Sentence Transformer model (i.e., "multi-qa-MiniLM-L6-cos-v1") [7] computes the semantic similarity between resume content and job descriptions, enabling the system to assess how well the candidate’s wording aligns with the job description. Alongside this, an instruction-tuned LLaMA 3.2 model [8] generates fine-grained enhancement suggestions for individual resume sections such as Skills, Experience, and Summary. These suggestions are tailored to the job description’s context but preserve the candidate’s originality, offering ways to surface hidden strengths or clarify vague phrasing. The result is a system that recognizes intent and potential, supporting candidates in expressing their capabilities authentically and enabling recruiters to evaluate resumes on substance rather than style. In a landscape increasingly shaped by automation, this approach represents a shift from optimization toward interpretation and from filtering toward understanding.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
<p>Traditional Applicant Tracking Systems (ATSs) rely on keyword-based filtering [9], which fails to capture the contextual nuances in resumes, leading to biased or inaccurate candidate evaluations. Recent approaches leverage transformer-based models to assess semantic similarity between resumes and job descriptions. Resume2Vec [9] introduced a framework using models like BERT, RoBERTa, and LLaMA to generate embeddings and improve candidate–job alignment through cosine similarity. Their system outperformed conventional ATSs in both ranking accuracy and alignment with human judgment across multiple domains. Unlike keyword-centric methods, Resume2Vec emphasizes context and fairness by preserving the semantic richness of candidate data. This shift toward embedding-based analysis lays the foundation for more equitable and intelligent recruitment systems.</p>
      <p>Lavi et al. (2021) [10] introduced conSultantBERT, a fine-tuned Siamese Sentence-BERT model tailored for resume–job matching, addressing challenges such as data heterogeneity, cross-linguality, and noisy resume formats. By leveraging cosine similarity between multilingual embeddings, their model significantly outperformed both TF-IDF and pre-trained BERT baselines in predicting resume–vacancy matches. Their findings affirm the importance of domain-specific fine-tuning to preserve semantic integrity in candidate profiles while improving matching accuracy. Like our system, conSultantBERT emphasizes contextual matching without resorting to superficial keyword overlap, highlighting the role of semantically grounded embeddings in achieving fair and scalable recruitment solutions. While conSultantBERT focuses on semantic matching between resumes and job descriptions using fine-tuned embeddings, our approach not only evaluates similarity but also provides customized resume enhancements using LLMs.</p>
      <p>Yadav et al. (2025) [
        <xref ref-type="bibr" rid="ref1">11</xref>
        ] developed a rule-based resume analysis system that integrates NLP and ATS scoring to enhance automated screening efficiency. Their system parses structured resume data and ranks candidates using metrics such as word count, skill match, and experience, delivering real-time feedback and improvement suggestions. While effective in increasing screening speed and ATS alignment, the model primarily focuses on formatting and keyword optimization. In contrast, our work emphasizes semantic fairness by maintaining candidate authenticity, going beyond surface-level optimizations to contextualize and enhance genuine qualifications [
        <xref ref-type="bibr" rid="ref2">9, 12</xref>
        ].</p>
      <p>Gan et al. (2024) [
        <xref ref-type="bibr" rid="ref3">13</xref>
        ] proposed a resume screening framework based on large language models (LLMs), utilizing agents such as LLaMA2 and GPT-3.5 to automate resume classification, scoring, and summarization. Their system is designed for high-throughput resume analysis, offering structured outputs that assist recruiters in candidate filtering. Similar to our work, their approach uses instruction-tuned LLMs for interpreting and processing resume content. However, the two systems diverge significantly in purpose and design philosophy. While Gan et al. focus on classification and summarization to streamline hiring pipelines, our system emphasizes "contextual fairness", providing non-intrusive, section-wise suggestions that retain the candidate’s narrative integrity. Instead of generating summaries or altering resume tone, our system enhances clarity and alignment using a hybrid model architecture: Sentence-Transformers multi-qa-MiniLM-L6-cos-v1 [7] for semantic similarity scoring and LLaMA 3.2 [8] for targeted feedback. However, other LLMs (e.g., LLaMantino [
        <xref ref-type="bibr" rid="ref4 ref5">14, 15</xref>
        ]) or embedding strategies [
        <xref ref-type="bibr" rid="ref6">16</xref>
        ] could easily be adopted by changing a few lines of code.</p>
    </sec>
    <sec id="sec-method">
      <title>3. Methodology</title>
      <p>Our framework follows a pipeline with consecutive steps (Figure 1). The pipeline begins by taking in two primary inputs: the resume uploaded by the job seeker and the job description submitted by the recruiter.</p>
      <p>Resume Upload and Processing. We support a resume uploading process for documents in Word (i.e., ".docx" extension) or PDF (i.e., ".pdf" extension) format. The system uses python-docx1 for Word documents and pdfplumber2 for PDFs. These libraries enable accurate extraction of plain text and preserve section structure as well as formatting semantics. Each parsed resume is stored in a document database (Firestore DB and Storage) alongside unique metadata including a resume identifier, user email, timestamp, and a designated resume name for future tracking and analysis purposes.</p>
      <p>Job Description Submission and Structuring. Recruiters provide job descriptions through a structured template by inputting key fields such as job title, required experience, skills, responsibilities, and domain focus areas (e.g., questionnaireFocus)3. These structured fields are flattened into a consolidated textual representation, which makes them compatible with vector-based semantic models and term-frequency-based keyword extraction. To maintain consistency and modularity, the flattened job description is stored in parallel with its structured form within the same database, under a unique job identifier. This dual representation allows the system to dynamically switch between structured access [
        <xref ref-type="bibr" rid="ref7">17</xref>
        ] (e.g., for displaying details or generating questionnaires) and unstructured access (e.g., for semantic similarity and ATS scoring).</p>
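<p>The flattening of structured job-description fields into a single string can be sketched as follows; the field names and the helper itself are illustrative assumptions, not the authors’ actual implementation:</p>
      <p>
```python
def flatten_job_description(jd: dict) -> str:
    """Concatenate structured job-description fields into one string,
    suitable for embedding models and term-frequency keyword extraction.
    Field names here are hypothetical examples."""
    order = ["jobTitle", "requiredExperience", "skills",
             "responsibilities", "questionnaireFocus"]
    parts = []
    for field in order:
        value = jd.get(field, "")
        if isinstance(value, (list, tuple)):
            # Join list-valued fields (e.g., skills) into readable text.
            value = ", ".join(str(v) for v in value)
        if value:
            parts.append(str(value))
    return " ".join(parts)
```
      </p>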
<sec id="sec-2-1">
        <title>Pipeline Overview (Figure 1)</title>
        <p>[Figure 1: flowchart of the full pipeline. A resume (PDF/DOCX) is uploaded and parsed with pdfplumber or python-docx, and the extracted text is stored with its identifiers in the database (Firebase); the recruiter enters the job description (JD), whose fields are likewise stored. Keywords are extracted with KeyBERT and CountVectorizer, matching and missing terms are detected with RapidFuzz, and the preprocessed texts are encoded (all-MiniLM-L6-v2) to compute cosine similarity; the ATS score is the weighted sum of the semantic and keyword scores. LLaMA 3.2 then generates improvement suggestions from the resume text, JD text, matching terms, and scores, which are sent to the user and applied as updates.]</p>
        <p>1https://python-docx.readthedocs.io/en/latest/
2https://pypi.org/project/pdfplumber/
3Currently, such aspects are not automatically extracted from the job position, but we consider doing that as future work.</p>
      </sec>
<sec id="sec-2-2">
        <title>Data Cleansing</title>
        <p>Both the resume and job description texts are normalized by converting them to lowercase and applying regular-expression-based cleaning that retains only alphanumeric characters. This removes extraneous symbols, spacing irregularities, and control characters, ensuring input consistency before model encoding. This initial acquisition and preparation phase ensures that both resumes and job descriptions are available in clean, comparable formats for downstream tasks such as similarity computation, keyword relevance analysis, and improvement suggestion generation.</p>
      </sec>
      <sec id="sec-2-3">
        <title>3.1. Similarity, Keyword Score and ATS Score Calculation</title>
        <p>After resumes and job descriptions have been ingested and preprocessed, the system performs a multi-level alignment assessment through semantic similarity and keyword relevance scores. This step is central to producing a fair and interpretable Applicant Tracking System (ATS) score that reflects both explicit and contextual alignment between candidate profiles and job requirements.</p>
        <p>Semantic Similarity Score. To ensure robustness and fairness in semantic evaluation, the system leverages two independent transformer models from the SentenceTransformers library4: multi-qa-MiniLM-L6-cos-v15 and all-MiniLM-L6-v26. Each model independently encodes the cleaned resume text and job description text into tensor embeddings. Cosine similarity [
        <xref ref-type="bibr" rid="ref8 ref9">18, 19</xref>
        ] is then computed between these vectors to assess semantic alignment. If one model underperforms or introduces bias [
        <xref ref-type="bibr" rid="ref10">20</xref>
        ] in representation (e.g., due to phrasing variance), the other acts as a fallback, promoting score stability and fairness across domains and candidate profiles [10]. The all-MiniLM-L6-v2 model is used for ATS score calculation [
        <xref ref-type="bibr" rid="ref11">21</xref>
        ] due to its balanced ability to capture both semantic meaning and keyword-level relevance, making it ideal for evaluating overall resume compatibility. Meanwhile, multi-qa-MiniLM-L6-cos-v1 is reserved for pure semantic similarity scoring, as its QA-focused fine-tuning excels at understanding contextual alignment between resumes and job descriptions. This separation ensures accurate, fair, and domain-robust evaluations.</p>
        <p>Keyword Relevance Score. Keyword-based scoring complements semantic alignment by focusing on lexical overlap. This scoring process follows the steps described below: (i) Initial Extraction. The job description is vectorized using CountVectorizer from scikit-learn7, extracting keywords ranked by term frequency. (ii) Resume Keyword Extraction. Resume keywords are extracted using KeyBERT8, which identifies the top N significant phrases based on contextual embedding similarity. Keyword extraction is essential and plays a vital role in ensuring fairness during evaluation. As shown in Figure 1, the extracted matching keywords are utilized by the LLM to generate context-aware suggestions, providing targeted improvements that align more closely with the job description. This step enhances both the relevance and fairness of the feedback provided to users.</p>
        <p>Two different keyword extraction approaches are used to account for the inherent differences in data structure and consistency. Job descriptions are entered by users in a structured JSON format and are generally concise and standardized, making them ideal for keyword extraction using CountVectorizer, which captures raw term frequencies. In contrast, resumes are uploaded as binary files (PDF or DOCX) and converted to plain text, often in an unstructured and inconsistent manner; hence, KeyBERT is employed to extract context-aware key phrases using semantic embeddings, ensuring reliable keyword identification despite formatting noise or phrasing variability.</p>
        <p>Matching Score. The set intersection between extracted resume keywords and job description keywords is used to calculate a match ratio:
match_score = |matched_keywords| / |job_keywords| (1)</p>
        <p>Fuzzy Matching. To account for synonyms, spelling variations, and approximate matches, the system incorporates RapidFuzz9, a fast string matching library based on Levenshtein distance. RapidFuzz computes partial similarity ratios between keywords extracted from the job description and the resume, helping detect near-matches even when the exact wording differs. This refinement step enhances keyword score accuracy by capturing relevant but variably phrased skills or experiences.</p>
        <p>Applicant Tracking System (ATS) Score. The final ATS score is computed as a weighted sum of the semantic similarity and keyword relevance scores:
ATS_score = (sem_score · w1) + (keyword_score · w2) (2)
where w1 = semantic weight (default: 0.5) and w2 = keyword weight (default: 0.5). This hybrid scoring formula balances surface-level term relevance with deep contextual alignment. By assigning separate weights, the system allows recruiters to prioritize either direct skill inclusion or holistic candidate–job compatibility.</p>
        <p>The calculated ATS score serves as a crucial factor for both recruiters and job seekers, helping recruiters efficiently shortlist candidates based on relevance while guiding job seekers in optimizing their resumes. Unlike traditional systems that rely solely on keyword matching, this score combines keyword relevance with semantic similarity, capturing not just the presence of required terms but also the contextual alignment between the resume and the job description. This hybrid approach ensures greater fairness, adaptability across domains, and reduced bias, making it more insightful than conventional ATS scores that often overlook phrasing variations or implied competencies.</p>
        <p>To prevent artificial score inflation and preserve candidate authenticity, the system avoids injecting new keywords or altering the resume’s core content. Instead, it focuses on identifying and enhancing existing expressions, both semantically and lexically, ensuring fairness to the job seeker while giving recruiters a transparent, accurate alignment signal.</p>
        <p>5https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1
6https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
7https://scikit-learn.org/stable/modules/feature_extraction.html
8https://pypi.org/project/keybert/</p>
      </sec>
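<p>The scores in Equations (1) and (2) can be sketched in plain Python. Here cosine_similarity operates on generic embedding vectors (the real system obtains them from Sentence Transformer models), and the stdlib difflib stands in for RapidFuzz; all names, thresholds, and defaults are illustrative, not the authors’ actual code:</p>
      <p>
```python
import math
from difflib import SequenceMatcher

def cosine_similarity(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def fuzzy_match(a, b):
    # Stand-in for RapidFuzz's partial similarity ratio (0-100 scale);
    # the actual system uses the rapidfuzz library.
    return 100.0 * SequenceMatcher(None, a.lower(), b.lower()).ratio()

def keyword_match_score(resume_keywords, job_keywords, threshold=80.0):
    # Eq. (1): |matched_keywords| / |job_keywords|,
    # counting fuzzy near-matches above the threshold.
    job = set(job_keywords)
    matched = {kw for kw in job
               if any(fuzzy_match(kw, rk) >= threshold for rk in resume_keywords)}
    return len(matched) / len(job) if job else 0.0

def ats_score(semantic_score, keyword_score, w_sem=0.5, w_kw=0.5):
    # Eq. (2): weighted sum of semantic and keyword relevance scores.
    return semantic_score * w_sem + keyword_score * w_kw
```
      </p>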
<sec id="sec-2-4">
        <title>3.2. Suggestion Generation and Section-Wise Improvements</title>
        <p>9https://rapidfuzz.github.io/RapidFuzz/</p>
        <sec id="sec-2-4-2">
          <title>Section-Wise Suggestion Service</title>
<p>To enhance specific resume sections by rephrasing, clarifying, or restructuring them according to best practices in resume writing, we design an enhancement step grounded on Large Language Models (LLMs) (i.e., generate_improved_sections_with_llm). It focuses on strengthening the candidate’s input by:
• reinforcing the keywords matched in the previous ATS score calculation step;
• improving the clarity and formatting of the resume;
• highlighting quantifiable impacts and action-driven phrasing;
• focusing on sections such as Professional Summary, Experience, Skills, and Education.</p>
          <p>A structured prompt is generated (see Table 1), including the resume text, the job description, and the list of already matched keywords. The LLM is explicitly instructed not to introduce missing or hallucinated terms, ensuring that improvements remain factual and grounded in the candidate’s original input. The model returns suggestions in strict JSON format, each linked to a specific resume section for traceable integration. We provide the LLM (LLaMA 3.2, latest) with the flattened job description and resume text, along with the matchingTerms, similarityScore, and atsScore, giving it fuller context for generating accurate, traceable suggestions.</p>
<p>Table 1: Fields of the structured prompt provided to the LLM.
• Resume Text: extracted, cleaned resume text uploaded by the user.
• JobDescription: flattened string of title, skills, experience, and role.
• MatchedKeywords: key terms found in both the resume and the job description.
• ExplicitInstructions: directs the model to avoid hallucination and ensure factual edits only.
• OutputFormat: JSON array with fields sectionName, suggestion.</p>
          <p>The second service we designed,
generate_ats_score_and_improvements,
operates at a global resume level rather than focusing
on specific sections. It analyzes the entire resume
in the context of the job description and the list of
matched keywords but deliberately avoids altering
content or injecting new, unverified terms. Instead,
it identifies opportunities for structural and stylistic
enhancements that can improve ATS performance
without compromising authenticity.</p>
<p>Its operation includes:
• parsing the resume text and job description;
• evaluating aspects such as formatting consistency (e.g., bullet points, section headers), action verb usage, and sentence clarity;
• referencing the matched keywords to ensure better usage and placement, rather than adding unrelated terms;
• returning the output in a strict JSON structure, which includes: (i) a list of factual, actionable suggestions; (ii) an estimated ATS score; (iii) highlighted areas where improvements can be made to enhance readability and alignment.</p>
        </sec>
      </sec>
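<p>The prompt structure of Table 1 and the strict-JSON contract can be sketched as follows; build_prompt and parse_suggestions are hypothetical helpers for illustration, not the authors’ actual code:</p>
      <p>
```python
import json

def build_prompt(resume_text, job_text, matched_keywords, similarity, ats):
    # Hypothetical prompt builder mirroring the fields of Table 1.
    return (
        "You are a resume-improvement assistant.\n"
        f"Resume:\n{resume_text}\n\n"
        f"Job description:\n{job_text}\n\n"
        f"Matched keywords: {', '.join(matched_keywords)}\n"
        f"Similarity score: {similarity:.2f}  ATS score: {ats:.2f}\n"
        "Do NOT introduce terms or skills that are absent from the resume.\n"
        'Reply ONLY with a JSON array of objects with fields '
        '"sectionName" and "suggestion".'
    )

def parse_suggestions(llm_reply):
    # Enforce the strict-JSON output contract described in Section 3.2.
    data = json.loads(llm_reply)
    if not isinstance(data, list):
        raise ValueError("expected a JSON array of suggestions")
    return [(item["sectionName"], item["suggestion"]) for item in data]
```
      </p>
<p>Rejecting any reply that is not a JSON array keeps each suggestion traceable to a specific resume section before it is shown to the user.</p>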
<sec id="sec-2-5">
        <title>Fairness and Transparency Considerations</title>
        <p>This makes the output easily integrable into the system while keeping the suggestions grounded in the candidate’s original input and safe from hallucinations.</p>
        <p>Both services are governed by strict instruction constraints to:
• prevent hallucination of unverified skills;
• avoid inflating match quality with artificial edits;
• respect the candidate’s identity and experience as originally stated.</p>
        <p>By focusing solely on strengthening existing, verifiable content, this dual-LLM framework ensures that suggestions are ethical, transparent, and aligned with fair AI principles, providing job seekers with meaningful improvement pathways without compromising truthfulness.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Experimental Evaluation</title>
<sec id="sec-3-1">
        <p>To test the proposed approach, we designed and ran two separate experiments to evaluate how fair the process is and how effective it is.</p>
        <sec id="sec-3-1-1">
          <title>4.1. Experiment 1: Fairness-Aware Resume Enhancement</title>
          <p>This experiment evaluates whether resumes can be ethically enhanced to better align with job descriptions, without introducing fabricated content or misleading embellishments. The objective is to test whether a candidate’s original experience and qualifications can be made more contextually relevant while preserving the integrity and authenticity of the resume. A representative set of 10 manually crafted synthetic resumes (refer to Tables 3 and 4 in the appendix; for column name descriptions refer to Table 5) was selected and evaluated against a curated synthetic job description for the role ReactJS Frontend Developer (API Integration &amp; UI Frameworks) using four key metrics: similarity score, ATS score, matching terms, and missing terms. These metrics were computed against target job descriptions which were manually crafted by analyzing real listings for similar job roles. While individual scores may vary, the relative differences (score deltas) remain consistent across resumes. The resumes were then enhanced using our LLM-powered suggestion engine, which provides section-wise recommendations based solely on the candidate’s original content and job relevance. Enhanced resumes were re-evaluated with the same approach previously used, observing: (i) increases in similarity and ATS scores; (ii) growth in contextually valid matching terms; (iii) retention of semantic integrity (i.e., no direct insertion of previously missing terms unless already implied).</p>
          <p>Fairness Criteria. To ensure ethical enhancement, the system followed three key constraints:
• all newly introduced terms had to be contextually consistent with the original resume;
• terms from the initial missing-terms list were disallowed unless semantically implied or rephrased from existing content;
• no artificial keyword stuffing or hallucinated experiences were permitted.</p>
          <p>The improvements were evaluated using changes in the matching terms and missing terms metrics, computed by comparing keyphrases from the job description with the resume text before and after enhancement (refer to Table 2). These metrics served as our primary quantitative evidence, ensuring that enhancements improved alignment without introducing unrelated or fabricated content, as the suggestion engine operated strictly within the resume’s original context.</p>
          <p>Experimental results. Following enhancement using our system, all resumes demonstrated meaningful improvements while preserving fairness and integrity. New matching terms were successfully added in every case, and all additions were contextually aligned with the original resume content. Crucially, none of the original missing terms were directly reused, and no hallucinated or unrelated information was introduced (refer to Tables 3 and 4 in the appendix). The outcomes of Experiment 1, which involved evaluating ten candidate resumes for the ReactJS Frontend Developer position, are summarized in Tables 3 and 4 in the appendix. While both LLaMA 3.2 and GPT-4o raise the overall match counts, the New_Terms_Added_by_LLaMA3.2 column grows only with fair, semantically grounded additions. In contrast, New_Terms_Added_by_GPT-4o reflects GPT-4o’s blind injection of extra keywords, demonstrating how our approach upholds fairness by restricting edits to what the candidate’s own language can support. The experiment confirms that our system provides significant improvements while maintaining fairness, i.e., enhancing the resume without misrepresenting the candidate’s skills or experience.</p>
        </sec>
        <sec id="sec-3-1-2">
          <title>4.2. Experiment 2: Effectiveness Comparison</title>
          <p>This experiment compares the effectiveness of two resume enhancement strategies, both operating under strict non-hallucination constraints. The first method uses our domain-specific LLM-powered suggestion engine to improve resume–job alignment while preserving the candidate’s original intent and language. The second method uses a general-purpose GPT-4o model instructed to rewrite resumes without adding any content not originally present. Each resume was evaluated in three forms: the original version, a system-enhanced version produced by our custom enhancement engine, and a GPT-enhanced (GPT-4o)10 version rewritten by a large language model under strict non-hallucination instructions. All three versions were analyzed using the same backend evaluation pipeline (refer to Figure 1) to compute the similarity score, final ATS score, semantic similarity score, and keyword match score. In Figure 1, once the suggestions from our system are applied, a parallel process generates and applies suggestions using ChatGPT-4o as well. Both updated versions, the one based on our system’s suggestions and the one generated from GPT-4o’s recommendations, are then re-evaluated. A comparison spreadsheet is generated containing the results of both evaluations, highlighting differences in ATS scores, similarity scores, and overall improvements.</p>
          <p>10https://openai.com/index/hello-gpt-4o/</p>
          <p>Experimental results. The system-enhanced resumes consistently outperformed the original versions in all key metrics. A summary of ATS and similarity scores (in %) across resume enhancement systems can be seen in Table 6. On average, similarity scores improved by 18.7% and ATS scores rose by 22.3% following enhancement. When comparing system-enhanced resumes to GPT-enhanced counterparts, our method achieved higher average similarity scores (43.52% vs. 34.13%) and comparable semantic similarity scores (46.77% vs. 47.21%), despite the GPT-enhanced versions showing a higher final ATS score (74.25%). However, a deeper inspection of the results reveals that the elevated ATS scores in GPT-enhanced resumes may be attributed to broader keyword coverage rather than meaningful contextual alignment. The system-enhanced resumes maintained a more focused and candidate-authentic tone while still improving discoverability (refer to Tables 3 and 4 in the appendix). In multiple cases, the system-enhanced versions outperformed GPT in similarity score by margins exceeding 16 percentage points, with the highest observed gain reaching 29.07% (refer to Table 6).</p>
          <p>Figure 2 shows that the augmented similarity scores (system_updated_similarityScore) markedly exceed both the baseline (original_similarityScore) and the GPT-4o-derived scores (chatgpt4o_updated_similarityScore), indicating that our LLaMA 3.2–based methodology, predicated on conservative, in situ enhancement of existing text, yields the most substantial improvements in semantic alignment between resumes and job descriptions. In Figure 3, a bar chart compares ATS scores for ten resumes across three conditions: the original unmodified documents (blue bars), the LLaMA 3.2–based update methodology (red bars), and GPT-driven enhancements (green bars). LLaMA 3.2 updates yield the highest improvements, boosting scores from approximately 18–25 at baseline to 35–55, whereas GPT enhancements produce moderate gains, raising baseline values to roughly 26–48. In every case, the LLaMA 3.2–adjusted resumes outperform both the original and GPT-enhanced versions, with the latter still delivering a substantial uplift relative to unmodified resumes. Although GPT-4o’s outputs show notable improvements over the unmodified baseline, they still fall short of the results achieved by our system. This supports the effectiveness of a fairness-oriented framework that prioritizes refining existing content rather than introducing extraneous terms, and strict adherence to the defined fairness criteria ensures the tool’s suitability for real-world applications where ethical standards are paramount.</p>
          <p>
            Furthermore, the system’s enhancements did not
introduce any hallucinated content and preserved the re- Comparative Efectiveness and Contextual
Alignsume’s original structure and voice (refer Table 3 and Ta- ment. Experiment 2’s comparative analysis between
ble 4 in Appendix). In contrast, GPT-enhanced rewrites, our system and GPT-based enhancements further
rewhile constrained, occasionally drifted toward general- inforces the strengths of our approach. While
GPTized language or tone inconsistencies. These observa- enhanced resumes sometimes achieved higher ATS
tions reinforce the value of targeted, context-aware en- scores—likely due to broader keyword coverage—the
hancement over generalized rewriting approaches. system-enhanced resumes consistently showed superior
or comparable semantic similarity scores, indicating a
5. Considerations and Limitations closer contextual match to the original resumes.
This distinction is important: higher ATS scores alone
The results from our experiments highlight the eficacy do not guarantee a better quality or more truthful resume.
and robustness of the proposed AI-powered resume en- The tendency of GPT-based rewrites to introduce
generhancement system, especially in terms of fairness, con- alized language or tone inconsistencies could dilute the
textual integrity, and practical relevance for applicant candidate’s unique profile, potentially reducing perceived
tracking systems (ATS). authenticity. In contrast, our system’s targeted,
contextaware enhancements retain the original voice and
structure, ofering improvements that are both meaningful
Fairness and Authenticity Preservation. Experi- and aligned with the candidate’s actual background. The
ment 1 demonstrated that our system can meaningfully observed margin of improvement in similarity scores (up
enhance resumes by adding relevant matching terms to 29.07 percentage points over GPT in some cases)
sugwithout compromising fairness or authenticity. The fact gests that our method excels at fine-grained semantic
that none of the original missing terms were reused and enhancement rather than broad-stroke rewriting. This
no hallucinated or unrelated information was introduced focused approach is likely to yield better candidate-job
is particularly encouraging. This shows that the sys- matching outcomes in ATS environments that value
pretem respects the candidate’s true skills and experiences, cise and relevant keyword and phrase usage.
avoiding unethical exaggeration or fabrication—a critical Additional limitations include the need for improved
requirement in AI-assisted recruitment tools. The aver- performance in domain-specific contexts, sensitivity to
age improvements of 18.7% in semantic similarity and input formats, and the lack of multilingual support.
Ethi22.3% in ATS scores indicate that the enhancements not cal concerns around bias, transparency, and resume
overonly preserve but also amplify the relevance of candidate optimization also warrant future exploration. Ensuring
profiles to job descriptions, improving their discoverabil- fairness, explainability, and data privacy in deployment
ity without sacrificing honesty. environments will be crucial to responsible adoption [
            <xref ref-type="bibr" rid="ref12">22</xref>
            ].
          </p>
          <p>This balance between enhancement and fairness is a While the system shows promising results, some areas
key diferentiator compared to many automated systems merit further attention. Current performance is strongest
that risk introducing biases or misrepresentations. The on English-language resumes with consistent formatting;
improving support for varied layouts and multilingual
inputs is a valuable direction. Our evaluation, centered
on synthetic resumes for a specific domain (Frontend
ReactJS), provides a solid foundation but would benefit
from broader validation across job types and real-world
data. Additionally, while basic bias detection is included,
more comprehensive fairness auditing remains an
important avenue for future development. As with all
LLMenhanced systems, results may vary slightly based on the
quality of job description inputs. Addressing these
aspects can help increase the system’s robustness, fairness,
and generalizability.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>6. Conclusion</title>
      <p>This project demonstrates that our AI-powered resume
enhancement system effectively improves resume quality
while upholding fairness and authenticity. By preserving
resume integrity—without adding fabricated keywords or
skills—the system consistently adds contextually relevant
terms, resulting in substantial improvements in
semantic similarity (18.7%) and ATS scores (22.3%). Compared
to GPT-based rewrites, our approach achieves higher or
comparable semantic alignment while maintaining the
candidate’s original voice and structure, avoiding
generalized or inconsistent language. These findings highlight
the advantage of targeted, context-aware enhancement
methods that responsibly boost candidate discoverability
and preserve authenticity. Consequently, our LLM-based
enhancement system offers a practical, ethical, and
superior solution for real-world recruitment pipelines. Future
improvements could include support for multilingual
resumes and enhanced robustness for unstructured or
poorly formatted inputs.</p>
    </sec>
    <sec id="sec-5">
      <title>7. Acknowledgments</title>
      <sec id="sec-5-1">
        <p>This research is partially funded by PNRR project FAIR Future AI Research (PE00000013), Spoke 6 - Symbiotic AI (CUP H97G22000210007) under the NRRP MUR program funded by the NextGenerationEU.</p>
        <p>[Appendix table: for each of the five sample resumes, the keyword lists for the columns Original_Matching_Terms, Original_Missing_Terms, LLaMA3.2_Matching_Terms, New_Terms_Added_by_LLaMA3.2, GPT-4o_Matching_Terms, and New_Terms_Added_by_GPT-4o.]</p>
        <p>Original_Matching_Terms: the set of job-description keywords that already appeared in the candidate's resume before any edits.</p>
        <p>Original_Missing_Terms: keywords required by the job but absent from the unmodified resume.</p>
        <p>LLaMA3.2_Matching_Terms: after our LLaMA 3.2 “in-place” enhancement, this column lists all keywords in the resume that now match the job description, combining the original matches with those preserved by conservative rewriting.</p>
        <p>New_Terms_Added_by_LLaMA3.2: of the matches in the previous column, these are the new terms introduced by LLaMA 3.2. Crucially, each is semantically equivalent to language already used by the candidate.</p>
        <p>GPT-4o_Matching_Terms: the total set of matched keywords after GPT-4o editing, again including both originally present terms and those retained or reordered by GPT.</p>
        <p>New_Terms_Added_by_GPT-4o: the new keywords injected by GPT-4o. Unlike our method, these often include terms that were not semantically aligned with the candidate's original phrasing.</p>
        <p>Rows shown: resume_2_7, resume_2_1, resume_2_2, resume_2_9, resume_2_4.</p>
        <p>Declaration on Generative AI: During the preparation of this work, the author(s) used ChatGPT (OpenAI), Gemini (Google), and Grammarly in order to: paraphrase and reword, improve writing style, and check grammar and spelling. After using these tools/services, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the publication's content.</p>
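The column definitions above amount to set operations over keyword sets. A minimal sketch follows; the function and column names are illustrative rather than the paper's actual code, and "Enhanced" stands in for either the LLaMA 3.2 or the GPT-4o variant:

```python
def term_report(original: set[str], enhanced: set[str], job: set[str]) -> dict[str, set[str]]:
    """Derive the appendix columns for one resume from lowercased keyword sets."""
    return {
        "Original_Matching_Terms": original & job,       # already present before edits
        "Original_Missing_Terms": job - original,        # required but absent
        "Enhanced_Matching_Terms": enhanced & job,       # matches after enhancement
        "New_Terms_Added": (enhanced & job) - original,  # newly introduced matches
    }

# Hypothetical keyword sets for one resume/job pair.
job = {"reactjs", "redux", "typescript", "jest"}
original = {"reactjs", "javascript"}
enhanced = {"reactjs", "javascript", "redux", "jest"}
report = term_report(original, enhanced, job)
```

A fairness check in the spirit of Experiment 1 would then verify that every term in New_Terms_Added is semantically grounded in the candidate's original text.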
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [11]
          <article-title>Resume analysis using nlp and ats algorithm</article-title>
          ,
          <source>International Journal of Latest Technology in Engineering Management and Applied Science</source>
          <volume>14</volume>
          (
          <year>2025</year>
          )
          <fpage>761</fpage>
          -
          <lpage>767</lpage>
          . URL: https://www.ijltemas.in/submission/index.php/online/article/view/1937. doi:10.51583/IJLTEMAS.2025.140400090.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>C.</given-names>
            <surname>Daryani</surname>
          </string-name>
          , G. Chhabra,
          <string-name>
            <given-names>H.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Chhabra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <article-title>An automated resume screening system using natural language processing</article-title>
          and similarity,
          <year>2020</year>
          , pp.
          <fpage>99</fpage>
          -
          <lpage>103</lpage>
          . doi:10.26480/etit.02.2020.99.103.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>C.</given-names>
            <surname>Gan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , T. Mori,
          <article-title>Application of llm agents in recruitment: A novel framework for resume screening</article-title>
          ,
          <year>2024</year>
          . URL: https://arxiv.org/abs/2401.08315. arXiv:2401.08315.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>P.</given-names>
            <surname>Basile</surname>
          </string-name>
          , E. Musacchio,
          <string-name>
            <given-names>M.</given-names>
            <surname>Polignano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Siciliani</surname>
          </string-name>
          , G. Fiameni, G. Semeraro,
          <article-title>Llamantino: Llama 2 models for effective text generation in Italian language</article-title>
          ,
          <source>CoRR abs/2312.09993</source>
          (
          <year>2023</year>
          ). URL: https://doi.org/10.48550/arXiv.2312.09993. doi:10.48550/ARXIV.2312.09993. arXiv:2312.09993.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Polignano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Basile</surname>
          </string-name>
          , G. Semeraro,
          <article-title>Advanced natural-based interaction for the italian language: Llamantino-3-anita</article-title>
          ,
          <source>CoRR abs/2405.07101</source>
          (
          <year>2024</year>
          ). URL: https://doi.org/10.48550/arXiv.2405.07101. doi:10.48550/ARXIV.2405.07101. arXiv:2405.07101.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Polignano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Basile</surname>
          </string-name>
          , M. de Gemmis, G. Semeraro,
          <article-title>A comparison of word-embeddings in emotion detection from text using bilstm, CNN and self-attention</article-title>
          , in: G. A.
          <string-name>
            <surname>Papadopoulos</surname>
            , G. Samaras,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Weibelzahl</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Jannach</surname>
            ,
            <given-names>O. C.</given-names>
          </string-name>
          Santos (Eds.),
          <source>Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization</source>
          ,
          <string-name>
            <surname>UMAP</surname>
          </string-name>
          <year>2019</year>
          , Larnaca, Cyprus, June 09-12,
          <year>2019</year>
          , ACM,
          <year>2019</year>
          , pp.
          <fpage>63</fpage>
          -
          <lpage>68</lpage>
          . URL: https://doi.org/10.1145/3314183.3324983. doi:10.1145/3314183.3324983.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mochol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wache</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Nixon</surname>
          </string-name>
          ,
          <article-title>Improving the accuracy of job search with semantic techniques</article-title>
          ,
          <year>2007</year>
          , pp.
          <fpage>301</fpage>
          -
          <lpage>313</lpage>
          . doi:10.1007/978-3-540-72035-5_23.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>I.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Garg</surname>
          </string-name>
          ,
          <article-title>Resume ranking with tfidf, cosine similarity and named entity recognition</article-title>
          ,
          <source>in: 2024 First International Conference on Data, Computation and Communication (ICDCC)</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>224</fpage>
          -
          <lpage>229</lpage>
          . doi:10.1109/ICDCC62744.2024.10961659.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>C.</given-names>
            <surname>Daryani</surname>
          </string-name>
          , G. Chhabra,
          <string-name>
            <given-names>H.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Chhabra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <article-title>An automated resume screening system using natural language processing</article-title>
          and similarity,
          <year>2020</year>
          , pp.
          <fpage>99</fpage>
          -
          <lpage>103</lpage>
          . doi:10.26480/etit.02.2020.99.103.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [20]
          <string-name>
            <surname>S. D'Amicantonio</surname>
            ,
            <given-names>M. K.</given-names>
          </string-name>
          <string-name>
            <surname>Kulangara</surname>
            ,
            <given-names>H. D.</given-names>
          </string-name>
          <string-name>
            <surname>Mehta</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Pal</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Levantesi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Polignano</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Purificato</surname>
            ,
            <given-names>E. W. D.</given-names>
          </string-name>
          <string-name>
            <surname>Luca</surname>
          </string-name>
          ,
          <article-title>A comprehensive strategy to bias and mitigation in human resource decision systems</article-title>
          , in: M.
          <string-name>
            <surname>Polignano</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Musto</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Pellungrini</surname>
          </string-name>
          , E. Purificato, G. Semeraro, M. Setzu (Eds.),
          <source>Proceedings of the 5th Italian Workshop on Explainable Artificial Intelligence, co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence</source>
          , Bolzano, Italy,
          <source>November 26-27</source>
          ,
          <year>2024</year>
          , volume
          <volume>3839</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>11</fpage>
          -
          <lpage>27</lpage>
          . URL: https://ceur-ws.org/Vol-3839/paper1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>N.</given-names>
            <surname>Reimers</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Gurevych</surname>
          </string-name>
          ,
          <article-title>Sentence-bert: Sentence embeddings using siamese bert-networks</article-title>
          , CoRR abs/1908.10084 (
          <year>2019</year>
          ). URL: http://arxiv.org/abs/1908.10084. arXiv:1908.10084.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Polignano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Musto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Pellungrini</surname>
          </string-name>
          , E. Purificato, G. Semeraro,
          <string-name>
            <given-names>M.</given-names>
            <surname>Setzu</surname>
          </string-name>
          , Xai.it
          <year>2024</year>
          :
          <article-title>An overview on the future of AI in the era of large language models</article-title>
          , in: M.
          <string-name>
            <surname>Polignano</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Musto</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Pellungrini</surname>
          </string-name>
          , E. Purificato, G. Semeraro, M. Setzu (Eds.),
          <source>Proceedings of the 5th Italian Workshop on Explainable Artificial Intelligence, co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence</source>
          , Bolzano, Italy,
          <source>November 26-27</source>
          ,
          <year>2024</year>
          , volume
          <volume>3839</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          . URL: https://ceur-ws.org/Vol-3839/paper0.pdf.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>