<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>IIR</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Fairness of Exposure in Forensic Face Rankings</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andrea Atzori</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gianni Fenu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mirko Marras</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Mathematics and Computer Science, University of Cagliari</institution>
          ,
          <addr-line>V. Ospedale 72, 09124 Cagliari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>13</volume>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>In information forensics, (police) agents are usually presented with a ranking of suspects similar to a certain face probe whose identity should be determined. Deep face models, used to estimate the relevance scores of possible suspects, have been proven to lead to undesirable discriminatory outcomes for certain demographic groups. Although fairness in other non-personalised person rankings is being actively investigated, forensic face rankings still represent an underexplored, yet important and peculiar, domain. In this ongoing project, we propose a framework consisting of six state-of-the-art face models and a public data set to quantify the (disparate) exposure of demographic groups in forensic face rankings. Our results show that biases in this domain are not negligible and urgently call for ad hoc fairness notions and mitigation.</p>
      </abstract>
      <kwd-group>
        <kwd>Identification</kwd>
        <kwd>Bias</kwd>
        <kwd>Biometrics</kwd>
        <kwd>Fairness</kwd>
        <kwd>Equity</kwd>
        <kwd>Exposure</kwd>
        <kwd>Forensics</kwd>
        <kwd>Rankings</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Rankings have become one of the dominant forms in which digital systems present results to
users. The prevalence of rankings ranges from search engines [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and online stores [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], to music
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and news feeds [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. One notable task based on rankings is the identification of suspects based
on their face biometrics [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Under this task, (police) agents are presented with a ranking of
suspects similar to the face probe. Deep face recognition models are supporting the generation
of these rankings, thanks to their impressive performance in terms of accuracy [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ].
      </p>
      <p>
        However, deep models adopted to extract a latent face representation for ranking purposes
have been proven to be susceptible to biases [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ]. For instance, adopting such latent
representations for face authentication has led the system to fail more often for subjects with
darker skin tones [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]. As a consequence, considerable efforts have been made to analyse
discriminatory results for groups created on the basis of protected attributes (e.g., gender and
ethnicity) [
        <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
        ]. Unfortunately, these analyses have focused on pure biometric authentication
and identification [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], without considering undesired impacts from a ranking perspective.
      </p>
      <p>
        Indeed, certain forensic face ranking techniques rely on hand-created sketches from possible
eyewitnesses [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Such partial information is often used in combination with other attributes,
such as textual descriptions [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. To be effective, this type of approach can also be combined
with text-to-image (to generate the query) and image-to-text (to generate search-useful galleries)
techniques [18]. Existing approaches only evaluate whether the offender appears in the first
position of the ranking [19], often ignoring the rest of the ranking. However, analysing the
ranking composition and the (disparate) exposure of demographic groups is fundamental,
as being exposed to (police) agents as suspects might lead to undesired consequences (e.g.,
being wrongly investigated). Current research has assessed and mitigated unfairness in other
non-personalised people rankings [20, 21, 22], but forensic face ranking still represents an
underexplored domain, characterized by key peculiarities (e.g., normative, content, model).
      </p>
      <p>In this ongoing project, we have the ambitious objective of bridging the face biometrics and
information retrieval research communities by analyzing whether deep face recognition models
lead to unfair exposure across demographic groups in forensic ranking systems. Our novel
contribution is twofold. First, we propose an assessment framework with six state-of-the-art face
recognition models and a public face data set labeled with two protected attributes (i.e., gender
and ethnicity). Second, we conduct an exploratory study aimed at quantifying the (disparate)
exposure of demographic groups in the resulting rankings, depending on the demographic
group of the face probe (RQ1), and which demographic groups are most likely to incorrectly
appear in the top positions of the ranking (RQ2). Our results show what biases forensic rankings
are exposed to, emphasizing the importance of devising mitigation methods in this domain.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Method</title>
      <p>
        Using a dataset annotated with two protected attributes, we first trained six deep face recognition
models. Then, we evaluated the utility and fairness of the rankings generated by these models.
Data Preparation. Our experiments were carried out using the DiveFace [23] data set,
consisting of 140,000 images belonging to 24,000 identities. It is annotated with sensitive
information (gender and ethnicity) and balanced in terms of both attributes. Ethnicity labels
include Asian, Black, and Caucasian. Gender labels include Women and Men. There are
therefore six demographic groups represented in the data set: Asian Men, Asian Women, Black Men,
Black Women, Caucasian Men, and Caucasian Women. The original authors split the entire
data set into a training set and a test set, containing 70% and 30% of the identities, respectively.
To the best of our knowledge, this data set is one of the state-of-the-art sources
for fairness analysis in the biometric field. In order to crop and resize the original images, the
DeepFace toolkit [24] was used to detect the bounding box enclosing the face.
Model Preparation. With the face images in the training set, we built and trained a range
of deep face models by combining a collaborative-margin head network and six
convolutional neural networks (CNNs), namely MobileFaceNet, ResNet [25], AttentionNet [26],
ResNeSt [27], RepVGG [28], and HRNet [29]. These neural architectures have been proven to yield
state-of-the-art performance in recent face benchmarks [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. For consistency, our experiments
followed the same training procedures described in [30]. More specifically, each model was
trained with batches of size 64 for a maximum of 80 epochs (early stopping with patience 5). The
optimizer was SGD with momentum 0.9, weight decay 1e-8, and an initial learning rate of 0.1,
decayed at epochs 5, 25, and 68. The loss function was categorical cross-entropy.
      </p>
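As an illustration, the balanced sampling described above (at least 10 images per identity, 32 identities per demographic group, 30%/70% probe–gallery split) can be sketched in Python. Function and variable names here are illustrative assumptions, not the authors' actual code.

```python
import random
from collections import defaultdict

def build_balanced_split(images_by_identity, group_of, min_images=10,
                         ids_per_group=32, probe_ratio=0.3, seed=0):
    """Illustrative sketch (not the authors' code): select `ids_per_group`
    identities per demographic group having at least `min_images` images,
    sample exactly `min_images` images each, and split them into probe
    (30%) and gallery (70%) image sets."""
    rng = random.Random(seed)
    # Group the eligible identities (those with enough images) by demographic group.
    eligible = defaultdict(list)
    for ident, imgs in images_by_identity.items():
        if len(imgs) >= min_images:
            eligible[group_of[ident]].append(ident)
    probes, gallery = {}, {}
    for group, idents in eligible.items():
        # Sample the same number of identities from every group.
        for ident in rng.sample(sorted(idents), ids_per_group):
            imgs = rng.sample(sorted(images_by_identity[ident]), min_images)
            n_probe = int(min_images * probe_ratio)  # 3 probes, 7 gallery images
            probes[ident] = imgs[:n_probe]
            gallery[ident] = imgs[n_probe:]
    return probes, gallery
```

With six groups of 32 identities and 10 images each, this yields the 192 identities and 1,920 images reported above.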
      <p>
        Ranking Generation. With the face images in the test set (disjoint from the training set),
we created a ranking system that, given as a query the latent representation of an individual
(probe), ranks all the identities in the gallery and returns the K = 10 identities most similar
to the query. For this purpose, we considered only individuals with at least 10 face images
in the test set and sampled exactly 10 images for each individual. Due to the uneven number of
images per identity, and in order to represent each demographic group equally in our test set,
we selected 32 identities from each group (since the least represented one, Black Women,
had only 32 identities with at least 10 images), for a total of 1,920 images from 192 identities.
Given the 10 images of an individual, 30% of them were used as face probes (images serving
as queries), and the remaining 70% were included in the gallery. The latent representations of
all the face images of an individual were averaged to obtain a single averaged representation
of that individual. For each probe, we computed the cosine similarity (range: [-1, 1]) between
its latent representation and the latent representations of the identities in the gallery. We
ranked the identities in the gallery by decreasing similarity to the probe and kept only the
K = 10 identities most similar to each query.
      </p>
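The ranking step above (averaged identity representations, cosine similarity, top-K cut-off) can be sketched as follows; this is a minimal illustration with hypothetical names, not the authors' implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity in [-1, 1] between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def average(vectors):
    # Mean of an identity's image embeddings: one vector per identity.
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def rank_gallery(probe_embedding, gallery_embeddings, k=10):
    """Illustrative sketch: return the k gallery identities most similar
    to the probe, by decreasing cosine similarity (ties broken by name)."""
    scored = [(cosine(probe_embedding, emb), ident)
              for ident, emb in gallery_embeddings.items()]
    scored.sort(key=lambda t: (-t[0], t[1]))
    return [ident for _, ident in scored[:k]]
```

In practice the embeddings would come from one of the six trained CNN backbones; the ranking logic itself is model-agnostic.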
    </sec>
    <sec id="sec-3">
      <title>3. Experimental Results</title>
      <p>The models’ accuracy was between 98% and 99%. Our experiments analyzed the exposure of each
demographic group conditioned on the probe’s group (RQ1) and the overall exposure across groups (RQ2).
Exposure for each probe’s demographic group (RQ1). In a first analysis, for each probe’s
group, we computed the averaged exposure of each demographic group across rankings and
models (Fig. 1), adopting the definition of [31]. As expected, for each probe’s group, the
demographic group with the highest exposure corresponds to the probe’s one (between 50% and 90%).
Going beyond this case, probes of Asian men (top left) led to disparate exposure for Caucasian
men (same gender) and Asian women (same ethnicity) compared to the other groups. Similarly,
the rankings emerging from Asian women’s probes (top center) disproportionately represent
Caucasian women (same gender) and Asian men (same ethnicity). Probes from Black men (top
right) make Caucasian men and Asian men more prominent; in both cases, gender seemed
to be the main driving factor. Black women’s probes (bottom left), conversely, led to a more
equal representation across the groups beyond that of the probe. For instance,
Asian women and Caucasian women and men are unfairly more at risk of incorrectly appearing
in the top ranking. Caucasian probes make same ethnicity counterparts more prominent, along
with Asians of the same gender. We can conclude that, based on the probe’s group, certain
demographic groups (beyond that of the probe) are overexposed in the rankings.
Overall disparate exposure of demographic groups (RQ2). In a second analysis, we
investigated whether the observed disparate exposure across demographic groups is even more
evident if we consider the position in which a certain possible suspect appears. Since our
results were consistent across models, Fig. 2 shows the exposure distribution of groups across
rankings, averaged among models. Comparing exposure across groups, it can be observed that
certain groups tend to have a statistically significant disparate exposure compared to others. In
particular, Asian women often tended to be unfairly represented in the top positions, even when
the probe’s group was different. Caucasian men and women were more often and incorrectly
exposed at the top, followed by Asian women, Black men, and Black women. We can conclude
that, in general, women tend to be more often over-exposed in the rankings.</p>
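As a sketch of how the per-group exposure shares discussed above can be computed, the snippet below aggregates position-discounted exposure over a set of rankings. We assume the common logarithmic discount 1 / log2(1 + rank) for illustration; the paper follows the definition of [31], which may differ in the exact discount.

```python
import math
from collections import defaultdict

def group_exposure(rankings, group_of):
    """Illustrative sketch: share of position-discounted exposure received
    by each demographic group across a set of rankings. Higher positions
    (smaller rank) contribute more exposure."""
    exposure = defaultdict(float)
    for ranking in rankings:
        for pos, ident in enumerate(ranking, start=1):
            # Assumed logarithmic discount; [31] may define it differently.
            exposure[group_of[ident]] += 1.0 / math.log2(1 + pos)
    total = sum(exposure.values())
    return {g: e / total for g, e in exposure.items()}
```

Comparing these shares against a uniform baseline (1/6 per group in our balanced setting) makes the disparate exposure in Figs. 1 and 2 directly quantifiable.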
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions and Future Work</title>
      <p>In this paper, we investigated the extent to which state-of-the-art face models adopted for
forensic face ranking are subject to biases across demographic groups. Our results highlighted
that dark-skinned individuals (especially women) have lower exposure under probes belonging
to other demographic groups (especially female ones). Even more noticeably, dark-skinned
female probes produce the least disparate results compared to all other groups. In addition to
this, Asian and Caucasian individuals are overexposed in the rankings, while women appear to be the most
likely to hold prominent positions. In the next steps, we plan to investigate whether, and possibly
to what extent, other factors (e.g., pose, lighting, expression) influence the considered forensic
face rankings. We also plan to devise potential countermeasures regarding the disparities in
treatment we uncovered through our study in this paper.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <article-title>Toward creating a fairer ranking in search engine results</article-title>
          ,
          <source>Information Processing &amp; Management</source>
          <volume>57</volume>
          (
          <year>2020</year>
          )
          <fpage>102138</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Misra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>McAuley</surname>
          </string-name>
          ,
          <article-title>Addressing marketing bias in product recommendations</article-title>
          ,
          <source>in: Proceedings of the 13th international conference on web search and data mining</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>618</fpage>
          -
          <lpage>626</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B. L.</given-names>
            <surname>Pereira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ueda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Penha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ziviani</surname>
          </string-name>
          ,
          <article-title>Online learning to rank for sequential music recommendation</article-title>
          ,
          <source>in: Proceedings of the 13th ACM Conference on Recommender Systems</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>237</fpage>
          -
          <lpage>245</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Sanchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Manotumruksa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Albakour</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Martinez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lipani</surname>
          </string-name>
          ,
          <article-title>Easing legal news monitoring with learning to rank and bert</article-title>
          ,
          <source>in: Advances in Information Retrieval: 42nd European Conference on IR Research</source>
          , ECIR
          <year>2020</year>
          , Lisbon, Portugal,
          <source>April 14-17</source>
          ,
          <year>2020</year>
          , Proceedings,
          <source>Part II 42</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>336</fpage>
          -
          <lpage>343</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jacquet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Champod</surname>
          </string-name>
          ,
          <article-title>Automated face recognition in forensic science: Review and perspectives</article-title>
          ,
          <source>Forensic Science International</source>
          <volume>307</volume>
          (
          <year>2020</year>
          )
          <fpage>110124</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <article-title>Deep face recognition: A survey</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>429</volume>
          (
          <year>2021</year>
          )
          <fpage>215</fpage>
          -
          <lpage>244</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mei</surname>
          </string-name>
          ,
          <article-title>Facex-zoo: A pytorch toolbox for face recognition</article-title>
          ,
          <source>in: Proc. of ACM/MM</source>
          <year>2021</year>
          ,
          <year>2021</year>
          , pp.
          <fpage>3779</fpage>
          -
          <lpage>3782</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gwilliam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hegde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tinubu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hanson</surname>
          </string-name>
          ,
          <article-title>Rethinking common assumptions to mitigate racial bias in face recognition datasets</article-title>
          ,
          <source>in: Proc. of CVPR</source>
          <year>2021</year>
          ,
          <year>2021</year>
          , pp.
          <fpage>4123</fpage>
          -
          <lpage>4132</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>V.</given-names>
            <surname>Albiero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>KS</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Vangara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. W.</given-names>
            <surname>Bowyer</surname>
          </string-name>
          ,
          <article-title>Analysis of gender inequality in face recognition accuracy</article-title>
          ,
          <source>in: Proc. of the IEEE/CVF Winter Conf. on App. of Computer Vision Workshops</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>81</fpage>
          -
          <lpage>89</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V.</given-names>
            <surname>Albiero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. W.</given-names>
            <surname>Bowyer</surname>
          </string-name>
          ,
          <article-title>Is face recognition sexist? no, gendered hairstyles and biology are</article-title>
          ,
          <source>in: Proc. of BMVC</source>
          <year>2020</year>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>N.</given-names>
            <surname>Srinivas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hivner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Gay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Atwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ricanek</surname>
          </string-name>
          ,
          <article-title>Exploring automatic face recognition on match performance and gender bias for children</article-title>
          ,
          <source>in: Proc. of WACVW</source>
          <year>2019</year>
          ,
          <year>2019</year>
          , pp.
          <fpage>107</fpage>
          -
          <lpage>115</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Howard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. B.</given-names>
            <surname>Sirotin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Vemury</surname>
          </string-name>
          ,
          <article-title>The effect of broad and specific demographic homogeneity on the imposter distributions and false match rates in face recognition algorithm performance</article-title>
          ,
          <source>in: Proc. of BTAS</source>
          <year>2019</year>
          , IEEE,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Atzori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Fenu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Marras</surname>
          </string-name>
          ,
          <article-title>The more secure, the less equally usable: Gender and ethnicity (un)fairness of deep face recognition along security thresholds</article-title>
          ,
          <source>Procedia Computer Science</source>
          <volume>210</volume>
          (
          <year>2022</year>
          )
          <fpage>212</fpage>
          -
          <lpage>217</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Atzori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Fenu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Marras</surname>
          </string-name>
          ,
          <article-title>Explaining bias in deep face recognition via image characteristics</article-title>
          ,
          <source>in: Proc. of IJCB</source>
          <year>2022</year>
          , IEEE,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Kleinberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mullainathan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Raghavan</surname>
          </string-name>
          ,
          <article-title>Inherent trade-offs in the fair determination of risk scores</article-title>
          ,
          <source>in: Proc. of ITCS 2017</source>
          , volume
          <volume>67</volume>
          ,
          <year>2017</year>
          , pp.
          <volume>43</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>43</lpage>
          :
          <fpage>23</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>K.</given-names>
            <surname>Ounachad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Oualla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Souhar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sadiq</surname>
          </string-name>
          ,
          <article-title>Face sketch recognition-an overview</article-title>
          ,
          <source>in: Proceedings of the 3rd International Conference on Networking, Information Systems &amp; Security, NISS2020</source>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <article-title>Progressive learning for image retrieval with hybrid-modality queries</article-title>
          ,
          <source>in: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’22, Association for Computing Machinery, New York, NY, USA</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1012</fpage>
          -
          <lpage>1021</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] L. Zhang, M. Yang, C. Li, R. Xu, Image-text retrieval via contrastive learning with auxiliary generative features and support-set regularization, in: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’22, Association for Computing Machinery, New York, NY, USA, 2022, pp. 1938–1943.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] D. K. Sharma, A. S. Jalal, B. Sikander, Suspect face retrieval via multicriteria decision process, in: 2022 9th International Conference on Computing for Sustainable Global Development (INDIACom), 2022, pp. 849–853.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] A. J. Biega, K. P. Gummadi, G. Weikum, Equity of attention: Amortizing individual fairness in rankings, in: The 41st International ACM SIGIR Conference on Research &amp; Development in Information Retrieval, 2018, pp. 405–414.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] A. Singh, T. Joachims, Fairness of exposure in rankings, in: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery &amp; Data Mining, 2018, pp. 2219–2228.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] P. Lahoti, K. P. Gummadi, G. Weikum, Operationalizing individual fairness with pairwise fair representations, arXiv preprint arXiv:1907.01439 (2019).</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] A. Morales, J. Fierrez, R. Vera-Rodriguez, R. Tolosana, Sensitivenets: Learning agnostic representations with application to face images, IEEE Trans. on Pattern Analysis and Machine Intel. 43 (2020) 2158–2164.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] S. I. Serengil, A. Ozpinar, Lightface: A hybrid deep face recognition framework, in: Proc. of the Innovations in Intelligent Systems and Applications Conference (ASYU 2020), IEEE, 2020, pp. 1–5.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proc. of CVPR 2016, 2016, pp. 770–778.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] F. Wang, M. Jiang, C. Qian, S. Yang, C. Li, H. Zhang, X. Wang, X. Tang, Residual attention network for image classification, in: Proc. of CVPR 2017, 2017, pp. 3156–3164.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] H. Zhang, C. Wu, Z. Zhang, Y. Zhu, H. Lin, Z. Zhang, Y. Sun, T. He, J. Mueller, R. Manmatha, M. Li, A. J. Smola, Resnest: Split-attention networks, in: Proc. of CVPR 2022, 2022, pp. 2735–2745.</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>[28] X. Ding, X. Zhang, N. Ma, J. Han, G. Ding, J. Sun, Repvgg: Making vgg-style convnets great again, in: Proc. of CVPR 2021, 2021, pp. 13733–13742.</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>[29] J. Wang, K. Sun, T. Cheng, B. Jiang, C. Deng, Y. Zhao, D. Liu, Y. Mu, M. Tan, X. Wang, et al., Deep high-resolution representation learning for visual recognition, IEEE Trans. on Pattern Analysis and Machine Intel. 43 (2020) 3349–3364.</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>[30] A. Atzori, G. Fenu, M. Marras, Demographic bias in low-resolution deep face recognition in the wild, IEEE Journal of Selected Topics in Signal Processing (2023) 1–13.</mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>[31] E. Gómez, C. Shui Zhang, L. Boratto, M. Salamó, M. Marras, The winner takes it all: geographic imbalance and provider (un)fairness in educational recommender systems, in: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2021, pp. 1808–1812.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>