<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Proceedings EGOV-CeDEM-ePart Conference (CEUR Workshop Proceedings)</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Childcare Benefit Scandal</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maurus Enbergs</string-name>
          <email>m.e.enbergs-1@tudelft.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sem J.J. Nouws</string-name>
          <email>s.j.j.Nouws@tudelft.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roel I.J. Dobbe</string-name>
          <email>r.i.j.dobbe@tudelft.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Delft University of Technology</institution>
          ,
          <addr-line>Jafalaan 5, 2628BX Delft</addr-line>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Proceedings EGOV-CeDEM-ePart conference</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>The introduction of algorithmic decision-making in the social welfare domain has contributed to the emergence and amplification of harm to citizens. In several cases, the use of algorithmic decision-support systems has led to the formation of so-called “digital cages”: administrative exclusion by digital information architectures. To date, we still lack comprehensive theoretical concepts to describe and analyze the systemic hazards introduced by algorithmic systems. Our study illustrates the affordances of system-theoretic concepts and methods, drawing on system safety, to understand and analyze algorithmically induced hazards in public governance. We show, using the example of the Dutch Childcare Benefit Scandal, that the system safety discipline offers powerful concepts and tools to understand, prevent, and address algorithmically induced system hazards in social welfare. By applying system safety concepts, our study contributes to the development of sociotechnical assessment approaches for algorithmic systems in public governance.</p>
      </abstract>
      <kwd-group>
        <kwd>system safety</kwd>
        <kwd>algorithmic decision-making</kwd>
        <kwd>systems theoretic process analysis</kwd>
        <kwd>childcare benefit scandal</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>
        [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Within the Dutch Tax Administration, such a loop was caused by the missing control over the
“toezichtslijst” (monitoring list). The list consisted of citizens who were suspected of fraud or gross
negligence. Once a person was on this list, any signal associated with them would be flagged by one of
several big data-fed risk models and forwarded to caseworkers for personal control. Furthermore, until
2019, data from the list was used to train and test these risk models. This closed-loop process meant
that registration on the list significantly increased scrutiny and the chances of (incorrect) determination
of fraud. The average time spent on the list was 4.8 years [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
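      <p>To make this closed loop concrete, the following minimal sketch (our own illustration; the class names, scores, and threshold are assumptions, not a reconstruction of the Tax Administration’s actual pipeline) shows how reusing list membership as a training label lets every flag feed back into the list, so an entry reinforces itself without any new evidence being added.</p>
      <preformat>
# Minimal, hypothetical sketch of the closed loop between the monitoring
# list ("toezichtslijst") and a risk model trained on that same list.
# Names, scores, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Citizen:
    citizen_id: int
    on_monitoring_list: bool = False
    flag_count: int = 0  # how often a risk model forwarded a signal

def train_risk_model(population):
    """'Train' on list membership: the label is the prior suspicion itself."""
    suspected = {c.citizen_id for c in population if c.on_monitoring_list}
    return lambda c: 1.0 if c.citizen_id in suspected else 0.1

def run_cycle(population, threshold=0.5):
    """One supervision cycle: score every citizen, flag, update the list."""
    score = train_risk_model(population)
    for c in population:
        if score(c) >= threshold:
            c.flag_count += 1
            c.on_monitoring_list = True  # flags feed straight back into the list

population = [Citizen(i) for i in range(10)]
population[3].on_monitoring_list = True  # a single, possibly incorrect entry

for _ in range(5):
    run_cycle(population)

# Citizen 3 is flagged in every cycle and never leaves the list, although
# no independent evidence was ever added: the loop reinforces itself.
print(population[3].flag_count, population[3].on_monitoring_list)
      </preformat>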
      <p>
        3. Mental Model Misalignment: this hazard is a function of the lack of information and feedback exchange between
system components [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Our example concerns the misalignment of assumptions embedded in the mental
models of human controllers. Specifically, this involved individuals who had unknowingly submitted
a faulty tax return. By default, these cases were marked with the “1x1” checkbox prior to manual
assessment. However, caseworkers at the national debt collection center (LIC) had been misinformed
about the implied conventions of the signal. They believed that applications with the “1x1” box ticked
were verified fraud cases, eliminating the right to a personalized payment plan [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In reality, the “1x1”
box did not determine definitive guilt. Rather, it was a mechanism by which caseworkers could indicate
that, for a specific individual, further assessment was needed [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
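      <p>The misalignment can be pictured as two readings of the same signal. The sketch below is a hypothetical illustration of the diverging mental models; the function and field names are our own assumptions and do not correspond to the actual case-handling software.</p>
      <preformat>
# Hypothetical sketch of the "1x1" flag misalignment: the intended semantics
# ("needs further assessment") versus the LIC caseworkers' assumed semantics
# ("verified fraud, no personalized payment plan").
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: int
    box_1x1: bool  # set by default, prior to any manual assessment

def intended_reading(app):
    """Tax Administration convention: the flag only requests a follow-up."""
    return "assess further" if app.box_1x1 else "no action"

def lic_caseworker_reading(app):
    """Misinformed LIC mental model: the flag is treated as proven fraud."""
    return "deny personalized payment plan" if app.box_1x1 else "offer payment plan"

app = Application(applicant_id=42, box_1x1=True)
print(intended_reading(app))        # assess further
print(lic_caseworker_reading(app))  # deny personalized payment plan
# Same signal, two mental models: the feedback needed to align them is missing.
      </preformat>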
      <p>Final Remarks: This research underscores the critical need for safety engineering methods for
algorithmic interventions in social welfare. System safety provides practitioners with a structured
framework to navigate the complexities of public algorithmic systems. By illustrating how
well-understood safety problems manifested in the Childcare Benefit Scandal, we demonstrate STPA’s
effectiveness in identifying and addressing complex system interactions in digital social welfare systems.</p>
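      <p>As an illustration of what one STPA step can look like in this setting, the sketch below enumerates unsafe control actions (UCAs) for a single, assumed control action in the benefit system, using the four standard STPA guide phrases [<xref ref-type="bibr" rid="ref2">2</xref>]; the control action and contexts are illustrative assumptions, not findings from the case documents.</p>
      <preformat>
# Illustrative STPA-style enumeration of unsafe control actions (UCAs) for one
# assumed control action; the wording of the contexts is hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class UnsafeControlAction:
    control_action: str
    guide_phrase: str  # one of the four STPA UCA types
    context: str       # condition under which the action becomes hazardous

CONTROL_ACTION = "mark application as suspected fraud"

ucas = [
    UnsafeControlAction(CONTROL_ACTION, "not provided",
                        "actual fraud goes undetected and harm to the scheme continues"),
    UnsafeControlAction(CONTROL_ACTION, "provided",
                        "the applicant is eligible and benefits are wrongly reclaimed"),
    UnsafeControlAction(CONTROL_ACTION, "provided too early / out of sequence",
                        "the flag is set before any manual assessment has taken place"),
    UnsafeControlAction(CONTROL_ACTION, "applied too long",
                        "the flag persists for years without review or removal"),
]

for uca in ucas:
    print(f"UCA: '{uca.control_action}' {uca.guide_phrase} when {uca.context}")
      </preformat>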
      <p>Declaration on Generative AI</p>
      <p>During the preparation of this work, the author(s) used Grammarly for grammar and spelling checks, paraphrasing, and rewording. Additionally, a RAG pipeline using the OpenAI API (GPT-4) was developed to query relevant documents related to the case study. After using these tools/services, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name><given-names>A.</given-names> <surname>Rizk</surname></string-name>,
          <string-name><given-names>I.</given-names> <surname>Lindgren</surname></string-name>,
          <article-title>Automated Decision-Making in the Public Sector: A Multidisciplinary Literature Review</article-title>,
          in: M. Janssen, J. Crompvoets, J. R. Gil-Garcia, H. Lee, I. Lindgren, A. Nikiforova, G. Viale Pereira (Eds.),
          <source>Electronic Government</source>, Springer Nature Switzerland, Cham,
          <year>2024</year>, pp. <fpage>237</fpage>-<lpage>253</lpage>. doi:10.1007/978-3-031-70274-7_15.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name><given-names>N.</given-names> <surname>Leveson</surname></string-name>,
          <source>Engineering a Safer World: Systems Thinking Applied to Safety</source>, Engineering Systems,
          MIT Press, Cambridge, Mass., <year>2011</year>. OCLC: ocn719429220.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name><given-names>R. I. J.</given-names> <surname>Dobbe</surname></string-name>,
          <source>System Safety and Artificial Intelligence</source> (<year>2022</year>) 15.
          URL: https://academic-oup-com.tudelft.idm.oclc.org/edited-volume/41989/chapter-abstract/377785597?redirectedFrom=fulltext.
          doi:https://doi-org.tudelft.idm.oclc.org/10.1093/oxfordhb/9780197579329.001.0001.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] PwC, Onderzoek effecten FSV Toeslagen - Rapport - Rijksoverheid.nl, <year>2021</year>.
          URL: https://www.rijksoverheid.nl/documenten/rapporten/2021/12/03/onderzoek-pwc-effecten-fsv-toeslagen.
          Publisher: Ministerie van Algemene Zaken.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>