<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <article-id pub-id-type="doi">10.1111/puar.13602</article-id>
      <title-group>
        <article-title>Dialogue-based XAI for predictive policing: a field study</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Fabian Beer</string-name>
          <email>fabian.beer@uni-bielefeld.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dimitry Mindlin</string-name>
          <email>dimitry.mindlin@uni-bielefeld.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sebastian Kost</string-name>
          <email>Sebastian.Kost@polizei.nrw.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Isabel Krause</string-name>
          <email>Isabel.Krause@polizei.nrw.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Katharina Schwarz</string-name>
          <email>Katharina01.Schwarz@polizei.nrw.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kai Seidensticker</string-name>
          <email>Kai.Seidensticker@polizei.nrw.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Philipp Cimiano</string-name>
          <email>philipp.cimiano@uni-bielefeld.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elena Esposito</string-name>
          <email>elena.esposito9@unibo.it</email>
          <email>elena.esposito@uni-bielefeld.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Bielefeld University</institution>
          ,
          <addr-line>Universitätsstraße 25, Bielefeld, 33615</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>State Office for Criminal Investigations of North Rhine-Westphalia</institution>
          ,
          <addr-line>Düsseldorf, 40221</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Bologna</institution>
          ,
          <addr-line>Via Zamboni 33, Bologna BO, 40126</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>AI systems are increasingly used in predictive policing settings to identify areas at higher risk of residential burglaries or other crimes, and thus to support the groundwork of field police officers who take preventive measures. Explaining the predictions of AI systems is crucial in this setting to increase the confidence of officers in acting upon these predictions. This paper presents our field study examining a research unit of the German police that develops and employs predictive policing techniques. We explore the use of a revised version of a previously developed dialogue-based XAI tool, adapted to allow police analysts to inquire into the reasons for a model's prediction. Based on semi-structured interviews and ethnographic observations, we analyze user responses about the tool's utility and its potential integration into operational workflows. Our preliminary results concern two distinct levels: individual user interactions and broader organizational communication dynamics.</p>
      </abstract>
      <kwd-group>
        <kwd>Predictive Policing</kwd>
        <kwd>Dialogue-based XAI</kwd>
        <kwd>Explainable AI</kwd>
        <kwd>Organization</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Algorithmic predictive systems have long been used in the field of crime prevention, leading to the
development of predictive policing systems that identify potential criminal activities in order to support
interventions to prevent them. Systems such as PredPol, PRECOBS, SKALA, HunchLab or KrimPro have
been used for years by police departments in the US and Europe and are the object of much attention by
public and private observers [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. From an ethical perspective, predictive policing represents a sensitive
application domain, as it potentially threatens civil and privacy rights [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], can lead to discrimination
reinforcement [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and faces challenges related to ensuring legal and moral fairness [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In such a
sensitive application domain, there is a particularly urgent need for reliable and effective XAI tools that
enable a diverse group of stakeholders – including practitioners, regulators, and the broader public – to
evaluate and monitor the operation of predictive policing systems according to their unique explanation
needs, but most importantly also to increase the confidence of those who act upon the predictions of such
systems.
      </p>
      <p>
        While some research has been conducted on understandability within the field of predictive policing [
        <xref ref-type="bibr" rid="ref5 ref6">5,
6, 7, 8</xref>
        ], there is a lack of evidence for the acceptability and usefulness of XAI. To address this gap and
better understand the needs and requirements of police analysts and officers regarding the explainability of
predictions, we carried out a field study with the SKALA team from the Criminological Research Unit of
the State Office for Criminal Investigations of North Rhine-Westphalia. SKALA develops its own ML
models for predicting the probability of residential burglaries in individual residential quarters. At the
beginning of every week, these predictions are generated by SKALA’s ML model and then shared with
regional police agencies through a mutually accessible platform [9]. Every regional police agency employs
designated analysts who receive these predictions, "evaluate" them and eventually forward them to several
operative forces. In the end, the predictions reach frontline officers who are expected to follow their
indications, e.g. by patrolling the risk areas more intensely. At each of these steps of the communication
chain, difficulties in understanding the ML predictions can occur. However, at the current state of the
project, SKALA does not rely on any XAI method to explain its predictions. For this reason, we adapted a
previously developed dialogue-based XAI solution by Mindlin and Cimiano [10] and performed a field
trial with three different police units and four police analysts to understand how they would use the
solution, the main interaction flows, as well as observed limitations and problems. As the analysts are
the first members of the police outside of SKALA who come into contact with the ML predictions, SKALA
suggested testing the XAI tool with them rather than with patrol officers.
      </p>
      <p>We discuss related work (Section 2), the XAI tool (Section 3.1), the design of our study (Section 3.2)
and present parts of our data and preliminary results (Section 3.3).</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Automated or AI-supported decision making often takes place within organizations [11]. Accordingly,
for many XAI application domains like healthcare, the legal field or finance, the problem of explainability
occurs in reference to AI systems employed in organizations. Following Miller’s [12] suggestion to
study explanation as a social and communicative practice, several studies discuss how within these
organizations the problem of explainability occurs and is dealt with practically [13, 14, 15]. As for
the case of Predictive Policing, Waardenburg et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] argue that in the case of the Dutch police the
organization coped with difficulties in understanding opaque ML models through a new social role
the authors term "Knowledge Brokers". This role was occupied by intelligence officers who mediated
between data scientists and users. Eventually, these knowledge brokers replaced the ML predictions
with their own. Egbert et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] discuss two German predictive policing applications, differentiated by
their respective degree of opacity, and analyze their consequences on the decision premises of "decision
programs", "communication channels" and "persons" [16].
      </p>
      <p>Studying the problem of explainability in the field is one thing. Another is the implementation of XAI
tools within organizations. Until now, there has been little evidence of the usefulness and acceptability
of XAI in the daily life of the police. The findings of a study using hand-crafted explanations showed
that police officers tended to accept recommendations that align with their intuition, but were unlikely
to change their minds in case of misalignment, even when provided with explanations [7]. Another
study applied an XAI solution based on LIME [17] to explain the results of a text classifier to police
officers [8]. Their results suggest that domain experts preferred natural language explanations over
visualizations and numerical representations. While insightful, this use case represents a relatively
low-stakes scenario focused on document classification rather than high-risk decision-making like risk
area prediction.</p>
      <p>To combine the research on the problem of explainability in organizations with research on the use
of XAI tools in predictive policing, we conducted a field study in the German police to observe how
the implementation of an XAI tool might shape the social practices of dealing with the problem of
explainability and how, inversely, those already established social practices might shape the usage of
the XAI tool. The data collection process and preliminary results are described in the following section.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Field Study</title>
      <p>In order to prepare our field study with the analysts of three different police units, we ran bi-weekly
meetings with SKALA over a period of six months. We obtained informed consent from the analysts and
a formal approval from the state’s Ministry of the Interior.<sup>1</sup> Due to the limited number of participants,
we decided against using quantitative measures for this pilot study.</p>
      <p>[Figure 1: User interface of the dialogue-based XAI tool, comprising predefined questions, an interactive map, natural-language answers, templates, and the 10 most important attributes.]</p>
      <p>A design choice made in the context of this study was to use a concrete XAI solution that participating
police officers could interact with, in order to avoid obtaining only abstract answers on their requirements
and needs. We hoped that using a specific and concrete tool would make discussions around the suitability
of current solutions and existing needs more specific and tangible. Thus, we used an existing
dialogue-based XAI tool developed by Mindlin and Cimiano [10] that was adapted for the study on the
basis of feedback received from SKALA in the six-month period.<sup>2</sup> This led to changes in the user interface
and result presentation; the updated user interface is shown in Fig. 1.</p>
      <sec id="sec-3-1">
        <title>3.1. Predictive Model and Data</title>
        <p>Due to compliance constraints, we did not have access to the random forest model used by SKALA.
Instead, we trained a surrogate model using historical data and the original model’s labels. As is common
in practice, the classes of interest (risk and high risk) were rare, with the rarest class comprising less than
1% of the data, resulting in a highly imbalanced dataset. Since the highest-risk category is determined
through post-processing rules and appears infrequently, we merged both risk classes into a single
category. This left us with two final classes: no risk and medium risk.</p>
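The merging step amounts to collapsing all non-zero risk labels into one class. A minimal sketch (the label encoding is assumed for illustration; the real SKALA labels are not public):

```python
import numpy as np

# Hypothetical label encoding: 0 = no risk, 1 = risk, 2 = high risk.
labels = np.array([0, 0, 1, 0, 2, 0, 1, 0])

# Collapse the rare "risk" and "high risk" classes into a single
# "medium risk" class, leaving a binary problem: 0 = no risk, 1 = medium risk.
merged = np.where(labels > 0, 1, 0)
print(merged.tolist())  # [0, 0, 1, 0, 1, 0, 1, 0]
```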
        <p>To address the dataset’s high dimensionality (120 features), we applied univariate feature selection
using an ANOVA F-test to retain the top 10 features. While the selected features vary slightly across
locations, they generally reflect: i) concentration of offenses within specific time-radius windows, ii)
offense count in the same windows, iii) variation in activity patterns over time, and iv) minimal or
average time between offenses in those windows.</p>
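A minimal sketch of this selection step with scikit-learn, on synthetic data standing in for the 120-dimensional feature matrix (the real features are not available):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in: 400 quarters, 120 features, binary risk labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 120))
y = rng.integers(0, 2, size=400)
X[:, 5] += 2.0 * y  # make one feature strongly class-dependent

# Univariate selection via ANOVA F-test, keeping the 10 highest-scoring features.
selector = SelectKBest(score_func=f_classif, k=10)
X_reduced = selector.fit_transform(X, y)

print(X_reduced.shape)  # (400, 10)
print(bool(selector.get_support()[5]))  # True: the informative feature is retained
```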
        <p>The system answers questions about feature importance and influence (LIME [17]),
counterfactuals (DICE [18]), minimal sufficient explanations (Anchors [19]), individual feature effects (Ceteris
Paribus [20]), and feature value distributions.</p>
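The tool relies on the dedicated LIME, DICE, Anchors, and Ceteris Paribus libraries. As a self-contained stand-in, the following sketch illustrates the same model-agnostic notion of feature influence using scikit-learn's permutation importance instead (data and model are synthetic, not the methods used in the study):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data: 10 features, binary labels driven mainly by feature 0.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic influence estimate: shuffle one feature at a time and
# measure how much the model's accuracy degrades.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print("most influential feature index:", ranking[0])
```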
        <p>To train the surrogate models, we experimented with oversampling, undersampling, and class
weighting. None of these methods improved F1-scores noticeably. Despite extensive tuning, the models
performed modestly on the minority class. Across regions, we observed a trade-off between precision
and recall: in some cases, the model was more conservative, identifying only a small portion of true
risk areas (recall of 0.25) but with higher precision (around 0.63); in others, it captured over half of the
actual cases (recall of 0.51) but at the cost of more false positives (precision of 0.22). Macro-average
F1-scores ranged between 0.6 and 0.66.</p>
        <p>These outcomes reflect the difficulty of learning from sparse positive labels and limited signal quality
of the original predictive model’s labels. For the purpose of our research—examining the introduction
of an XAI tool for the first time into the workflow of analysts—this level of performance is acceptable.</p>
        <p><sup>1</sup>Two analysts did not consent to be quoted directly.</p>
        <p><sup>2</sup>The dialogue-based interface is publicly available in the original paper without our modifications. The system uses model-agnostic XAI methods and can be applied to different classification problems or extended to regression problems as well.</p>
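A minimal sketch of the class-weighting variant described above, on synthetic imbalanced data (the real SKALA data are not available, so the scores will differ from those reported):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in data (the positive class is rare).
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=2.0, size=2000) > 3.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" upweights the rare class during training.
clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

macro_f1 = f1_score(y_te, clf.predict(X_te), average="macro")
print("macro-F1:", round(macro_f1, 2))
```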
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Field Observations and Interviews with Analysts</title>
        <p>One of the authors visited three police departments in North Rhine-Westphalia in Q1 2025 at the
beginning of a week when the analysts receive the predictions of the SKALA system. In total, four
police analysts (named A1 - A4 in the following) participated in the study. In a first session, one of
the authors with expertise in ethnographic studies [21] observed how the police analysts deal with
the predictions of the model. The analysts commented on what they were doing, while the author
would sit next to them, at times asking questions and taking field notes. After the analysts evaluated
the predictions, we conducted semi-structured expert interviews [22]. The following questions were
discussed:
• Could you describe how you process the predictions provided by SKALA?
• When was the last time you encountered difficulties in understanding a prediction?
• We heard from SKALA that they occasionally received requests about the predictions from
individual police agencies. If you have ever asked such questions, could you describe how you
filed these explanation requests?
• How helpful do you think an application that can answer questions about SKALA’s predictions
would be?<sup>3</sup></p>
        <p>A few weeks later, we conducted a second session in which the analysts used the dialogue-based XAI
solution. The aim of this "technology probe" [23] was to elicit natural responses of the analysts in their
daily working environment. Thus, the analysts were given no instructions other than that they should
interact with the tool in whatever way fits their needs. We provided access to the application via a web
link.</p>
        <p>In these interactions, the interviewer first provided some introductory remarks about the interface,
after which the analysts were free to engage with the tool. They started by scrolling through the map,
zooming in and out of it, and selecting (familiar) risk areas. After settling for a specific quarter, the
analysts would proceed by going through the Model Attributes and Further Information section, reading
the informational texts and commenting on the information provided. After this, the analysts started to
engage with the questions and their respective answers from top to bottom and eventually discussed
the tool in general. These interactions lasted about an hour each.</p>
        <p>We then conducted semi-structured interviews focusing on the interaction with the tool, using the
following guideline:
• Could you describe your general impression of the tool?
• How does this interaction with the tool differ from asking questions to the SKALA team?
• How did this interaction impact your perception of SKALA’s model?
• How can this tool be implemented in your everyday routines?
• How understandable and appropriate were the available questions?
• How understandable and appropriate were the explanations?</p>
        <p>The guideline was not strictly followed, deviating whenever the conversational flow required it [24].
We recorded the evaluation of the predictions, the interviews, as well as the tool interaction. The data
is fully transcribed, but not openly available, as we did not get consent to share it beyond the project.</p>
        <p><sup>3</sup>All interviews concluded by asking the standard question: Is there anything else you would like to mention?</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Presentation of Interview Data</title>
        <p>The analysis of the interviews was performed with ATLAS.ti and Microsoft Excel. For the expert
interviews, we applied the conceptual toolbox of sociological system theory, specifically its theory of
organization [16], to reconstruct the explanatory communication between SKALA as explainer and the
analysts of the police agencies as explainees. We focused on how the communication is conditioned by
typical organizational factors like the hierarchical status of the staff, the chosen medium of distribution,
the existing communication channels, and the degree of formality/informality.</p>
        <p>As for the follow-up interviews regarding the XAI-tool, the answers of the analysts can be summarized
briefly as follows. In general, the police analysts had a positive impression of the tool, but also some
suggestions for improvement. One positive aspect highlighted is that the tool provides information that
was previously inaccessible (e.g. number of crimes, additional socioeconomic information). In terms of
suggested improvements, the analysts mentioned that they would like to influence the criteria that are
displayed (A4, A3), and would like to receive information about why a certain area was not designated
as a prediction area (A1). Two analysts (A3, A4) mentioned that the design should focus on answering
simple comprehension questions.</p>
        <p>The analysts also identified some limitations of the tool. They highlighted that the tool can answer
questions immediately when they arise, but difficulties in understanding still need to be clarified directly
with SKALA (A1, A2, A3). One analyst perceived the tool as simple to use, as answers can be obtained
with one click (A1). This simplicity was challenged, however, by the need to get to know the tool so
that one can understand its explanations (A2). It was also suggested that the tool should include a text
field to send open questions to SKALA.</p>
        <p>Regarding the question how the interaction with the tool impacted their perception of SKALA’s
model, two analysts mentioned that the tool enhanced their understanding of the model, revealing in
particular the factors used for prediction. Although the analysts had some issues with understanding
the tool and its usage, they found that the development was going in the right direction (A1, A2). For
one analyst (A4), the tool confirmed the impression that the model used by SKALA is complex and
difficult to explain in simple, non-scientific and lay terms. Another analyst (A3) could not relate their
conceptualization of how SKALA generates predictions to what they learned from the tool.</p>
        <p>As for the adequacy of the predefined questions, two analysts (A1, A2) considered the questions about
the most and the least important features, as well as the first attribute-related question as relevant. The
query for which group of features predicts the current result with the greatest certainty was deemed
unintelligible and its usefulness was questioned. One analyst (A4) concurred that the questions might
be relevant to their area of responsibility, highlighting the questions querying feature importance (See
Fig. 1). However, the analyst mentioned that this would only be the case if an answer to the question
highlighted factors that are considered to be relevant for the operational forces of the police (e.g.
the current season), which the statistical aggregates of the model attributes were not considered to be.
Other respondents mentioned that some potentially relevant questions regarding the time of occurrence
of the offenses (A3) or the offense code (A4) were not supported by the tool and that open questions are
likely to emerge during everyday work (A1, A2).</p>
        <p>The explanations were generally regarded as accurate, but also as partially difficult to understand.
This difficulty was attributed to the employed "language". Some answers made reference to the difficulty
of understanding aggregate statistical features, as can be seen in the following excerpt in which A4
describes an ideal explanation:</p>
        <p>"A4: Well, if down here it was thrown out (.) the connection to the highway is uninteresting.
Well, if the attributes that are displayed here (.) if I could understand them
A3: Yes (laughing)
I: So, it’s mainly about the attributes?
A4: Yes, exactly. Well, the State Office could probably- ’Yes, okay, I understand, I understand’
but when I- I would have to understand, okay (..) this residential quarter is not important
because people with low purchasing power live there, they can’t afford watches and that’s
why they don’t get burgled. For me that would be (..) well that’s what I could understand."</p>
        <p>Lastly, two analysts (A1, A2) mentioned that they would use the tool selectively to either answer
questions they receive or to familiarize themselves more thoroughly with the model. Two analysts (A3,
A4) stressed that the XAI tool could be helpful for newcomers. That the answers vary from week to week
was seen as an advantage over SKALA’s FAQs. For an actual implementation, however, the questions
would have to be formulated differently, and the answers would need to be more comprehensible (A3,
A4).</p>
        <p>These follow-up interviews were fully paraphrased, subsequently coded using in-vivo labels (e.g.
’scientific’, ’the Practical’), and eventually compared thematically [22]. Here, two recurring themes
emerged: ’scientific language’ and ’relevance’. Both of these themes pointed towards difficulties in
understanding raised by the interaction with the tool itself. Based on these two themes, we searched
our material and selected passages in which difficulties in understanding the XAI tool either arose in
practice or were described in the interviews. In this way, a total of 58 text passages were identified.
Directly comparing these passages allowed us to conclude that there is a lack of alignment between the
conceptualization of the domain by the analysts on the one hand, and the XAI tool and the ML model on
the other. We specifically identified two dimensions of this misalignment: a mismatch of "language" and a
mismatch of relevance structures. First, our preliminary results suggest that what the analysts describe as
"complicated scientific language" points to the difference between the categories the analysts use to
think about the world and the statistical aggregates processed by the model to generate its predictions.
Secondly, the analysts often evaluated the information provided by the XAI tool in terms of relevance
for their work by employing a distinction between questions only of personal interest and questions that
are relevant for their field of tasks. Our preliminary results also suggest that the two mismatches are
interrelated at least insofar as the ’scientific’ would be attributed to the field of tasks of SKALA. Thus,
labeling something as ’too scientific’ might also imply a demarcation in the field of tasks and therefore
in relevance structures.</p>
        <p>By searching for passages of maximum contrast, we identified, for one, that even when the language
barrier was not an issue, some analysts struggled to understand the counterfactual explanations
of our tool. Further, the use of the tool exposed analysts to the inherent uncertainty involved in the
predictions, which was previously "absorbed" by the model. Here, we refer to the sociological concept
of "uncertainty absorption" [16]. As is typical for organizations, decision-making often requires
subdecisions to be made and, since decisions have to be made under conditions of incomplete information,
persons or units have to assume responsibility for them. This "uncertainty absorption" simplifies the
decision-making of others, but it requires that only the decision is forwarded to another unit, not the
incomplete information on which it is based. As argued by Besio et al. [25], ML models in organizations
can take responsibility in this sense of absorbing the uncertainty of a decision. Implementing an
XAI tool, however, enables the analysts to access the features on which the model’s predictions are
based, i.e. the uncertainty previously absorbed by SKALA.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>Our field study relied on hands-on sessions with a dialogue-based XAI solution and semi-structured
interviews. While our study is limited by the small number of participants and the prototypical status
of the tested XAI tool, it could nevertheless provide insights at two levels: the relationship of individual
users with the tool and the communication dynamics within the organization.</p>
      <p>At the individual level, our preliminary results suggest that analysts struggled with the concepts
of machine learning that they could not map straightforwardly into their own world model, labeling
them as overly “scientific”. Analysts argued that explanations constructing a meaningful narrative with
concepts directly mappable to their own conceptualization would be easier to grasp. These preliminary
results are in accordance with a study of police employees that suggests that explanations are often too
abstract and that usability and usefulness are more valued than interpretability [26].</p>
      <p>At the level of the organization, our preliminary results suggest that even if some individual
mismatches were resolved, organizations would have to cope with issues of unabsorbed uncertainty in their
decision-making processes. This hints at the fact that the introduction of XAI solutions has significant
impacts on organizational processes that need to be revisited.</p>
      <p>The validity of these preliminary results depends on the extent to which the surrogate model faithfully
captures the real model. Regarding the difficulty of understanding model attributes, this is the case,
whereas for non-intuitive counterfactual explanations it may not be. We would like to address this
distinction in future work. Additionally, many details of this study are not generalizable, as it only involves
one regional police unit. However, we expect that other law enforcement agencies whose algorithmically
supported decision-making process is based on a division of labor will encounter similar questions that
arise out of general aspects of organizations: e.g., in what existing “ecology of explanations” [14] will
the introduction of an XAI tool intervene, and eventually, how does the tool become a part of this larger
ecology?</p>
      <p>This study is the result of interdisciplinary cooperation between computer science researchers,
sociologists and the Criminological Research Unit of the State Office for Criminal Investigations of
North Rhine-Westphalia. This unique collaboration enabled us to adapt an existing XAI tool to the
particular dataset and requirements of SKALA, to test the system ‘in vivo’, and to collect
feedback. We think that such studies are necessary to better understand the ‘reality’ in which XAI
systems are employed, as this reality check can guide the design and further research of XAI systems.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This research is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation):
TRR 318/1 2021 – 438445824 and by the European Research Council (ERC) under Advanced Research
Project PREDICT no. 833749. The studies with adult participants were approved by the Ethics Committee
of Paderborn University on January 11, 2021.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, ChatGPT-4o and Grammarly were used to check grammar and
spelling. After using these tools/services, the author(s) reviewed and edited the content as needed
and take(s) full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Egbert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Leese</surname>
          </string-name>
          , Criminal Futures:
          <article-title>Predictive policing and everyday police work</article-title>
          , Taylor &amp; Francis, London / New York, NY,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Schlehahn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Aichroth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Schreiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Lang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. D. H.</given-names>
            <surname>Shepherd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. L. W.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <article-title>Benefits and pitfalls of predictive policing</article-title>
          ,
          <source>in: Proc. of the European Intelligence and Security Informatics Conference</source>
          , IEEE,
          <year>2015</year>
          , pp.
          <fpage>145</fpage>
          -
          <lpage>148</lpage>
          . doi:10.1109/EISIC.2015.29.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Robinson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Koepke</surname>
          </string-name>
          ,
          <article-title>Stuck in a Pattern. Early evidence on “predictive policing” and civil rights</article-title>
          ,
          <source>Technical Report, Upturn</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kleinberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mullainathan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Raghavan</surname>
          </string-name>
          ,
          <article-title>Inherent trade-offs in the fair determination of risk scores</article-title>
          ,
          <source>in: Proc. of the 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)</source>
          , volume
          <volume>67</volume>
          of Leibniz International Proceedings in Informatics (LIPIcs),
          <source>Schloss Dagstuhl - Leibniz-Zentrum für Informatik</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>43:1</fpage>
          -
          <lpage>43:23</lpage>
          . doi:10.4230/LIPIcs.ITCS.2017.43.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Waardenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Huysman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sergeeva</surname>
          </string-name>
          ,
          <article-title>In the land of the blind, the one-eyed man is king: Knowledge brokerage in the age of learning algorithms</article-title>
          ,
          <source>Organization Science</source>
          <volume>33</volume>
          (
          <year>2021</year>
          )
          <fpage>59</fpage>
          -
          <lpage>82</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Egbert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Esposito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Heimstädt</surname>
          </string-name>
          ,
          <article-title>Vorhersagen und Entscheiden: Predictive Policing in Polizeiorganisationen</article-title>
          ,
          <source>Soziale Systeme</source>
          <volume>26</volume>
          (
          <year>2021</year>
          )
          <fpage>189</fpage>
          -
          <lpage>216</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>