<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>December</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
<article-title>Assessment of International Jurisdiction</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andrea Guillen</string-name>
          <email>andrea.guillen@uab.cat</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Emma Teodoro</string-name>
          <email>emma.teodoro@uab.cat</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Law and Technology, Autonomous University of Barcelona</institution>
          ,
          <addr-line>Bellaterra</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>19</volume>
      <issue>2022</issue>
      <fpage>182</fpage>
      <lpage>188</lpage>
      <abstract>
        <p>This paper explores how to design, develop, and deploy trustworthy AI tools in industrial manufacturing. After a brief overview of existing AI ethical frameworks, the paper focuses on actioning the four AI ethical principles identified by the AI HLEG. Given the context-dependency of AI tools, these AI ethical principles are framed within the manufacturing setting. This ethics-based approach requires the operationalization of such principles to truly design, develop, and deploy trustworthy AI systems. To this end, organizational and technical measures applicable to industrial manufacturing are suggested.</p>
      </abstract>
      <kwd-group>
        <kwd>AI ethics</kwd>
        <kwd>industrial manufacturing</kwd>
        <kwd>trustworthy AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The implementation of AI systems in industrial manufacturing brings numerous benefits, from reduced machine downtime to fewer defects during the production process. Yet considerable legal, ethical, and societal challenges must be addressed for these systems to fulfill their potential.</p>
      <p>In recent years, the public sector, research institutions, and private companies have issued various principles and guidelines for ethical, trustworthy AI. However, as AI systems are context-dependent, these general AI ethical principles need to be adapted to the specific context of application in order to be successfully imbued into AI systems. Thus, not only the technological aspects of industrial AI should be considered, but also other industrial requirements such as value creation, economic growth, human-machine interaction, and legal, ethical, and societal aspects.</p>
      <p>This paper is conceived as a starting point for operationalizing AI ethical principles in AI tools for manufacturing. It explores the most prominent AI ethical frameworks and examines four ethical AI principles contextualized within the industrial manufacturing setting.</p>
    </sec>
    <sec id="sec-2">
      <title>2. AI Ethical Frameworks</title>
      <p>
        The public sector, research institutions, and private companies have issued various ethical frameworks to ensure trustworthy AI. A number of initiatives have aimed to capture this proliferation and map the landscape of such frameworks. For instance, the EU-funded project SHERPA (Shaping the Ethical Dimensions of Smart Information Systems: A European Perspective) found over 70 relevant documents [<xref ref-type="bibr" rid="ref1">1</xref>]. AlgorithmWatch's AI Ethics Guidelines Global Inventory lists more than 160 documents, including industry-related guidelines developed by Google, IBM, and Microsoft [<xref ref-type="bibr" rid="ref2">2</xref>].
      </p>
      <p>
        The IEEE (Institute of Electrical and Electronics Engineers) Global Initiative on Ethics of Autonomous and Intelligent Systems, called ‘Ethically Aligned Design’, was officially launched in April 2016 as a collective program of the IEEE, the world’s largest technical professional organization [<xref ref-type="bibr" rid="ref3">3</xref>]. It identified over one hundred and twenty key issues and eight founding values and principles to be applied to all types of autonomous and intelligent systems operating in real, virtual, contextual, and mixed-reality environments [<xref ref-type="bibr" rid="ref3">3</xref>]: (i) human rights; (ii) well-being; (iii) data agency; (iv) effectiveness; (v) transparency; (vi) accountability; (vii) awareness of misuse; and (viii) competence.
      </p>
      <p>
        The “Ethics Guidelines for Trustworthy AI” published by the High-Level Expert Group on Artificial Intelligence of the European Commission (AI HLEG) [<xref ref-type="bibr" rid="ref4">4</xref>] provides a set of ethical principles and requirements that should be embedded into AI solutions from the design phase for them to be deemed trustworthy. According to the AI HLEG, there are four high-level ethical principles: i) human autonomy; ii) prevention of harms; iii) fairness; and iv) explicability. These principles are turned into specific requirements for their practical implementation: i) human agency and oversight; ii) technical robustness and safety; iii) privacy and data governance; iv) transparency; v) diversity, non-discrimination, and fairness; vi) environmental and societal well-being; and vii) accountability.
      </p>
      <p>
        This ethics-based approach can be used to operationalize AI ethical principles in a specific context of application—AI solutions for industrial manufacturing—which takes into account not only the technological aspects of industrial AI, but also other industrial requirements such as value creation, human-AI interaction, and ethical and regulatory aspects [<xref ref-type="bibr" rid="ref5">5</xref>].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Operationalizing AI Ethical Principles in Manufacturing</title>
      <p>
        This section follows the AI ethical principles established by the High-Level Expert Group on Artificial Intelligence (AI HLEG) [<xref ref-type="bibr" rid="ref4">4</xref>], adapted to the context of industrial manufacturing from an action-guiding perspective. This approach allows us to anticipate the ethical challenges that may arise in this context and to recommend organizational and technical measures.
      </p>
      </p>
      <sec id="sec-3-1">
        <title>3.1. Human Autonomy</title>
        <p>The principle of human autonomy implies that AI-enabled technologies should be designed,
developed, and deployed in a way that respects and protects fundamental rights and ensures
human agency and oversight.</p>
        <p>AI-enabled technologies must ensure human dignity. On the shopfloor, the objectification and dehumanisation of operators should be avoided. Workers should be treated as self-determined subjects whose physical and mental health must be protected. Workers’ dignity might also be undermined by the consequences that the deployment of AI systems in the workplace may have on the de-skilling of the labour force and the meaning of work.</p>
        <p>
          The use of AI systems in factories may also lead to an advanced system of surveillance and monitoring to which operators may be subject [<xref ref-type="bibr" rid="ref6">6</xref>]. Surveillance may cause “chilling effects” on employees and may also negatively impact their freedom, autonomy, and privacy. Therefore, legal, ethical, and social impact assessments must be conducted to strike the right balance between the intended benefits of deploying technology on the shopfloor and the possible negative consequences for employees’ ethical values and fundamental rights [<xref ref-type="bibr" rid="ref7">7</xref>].
        </p>
        <p>
          To ensure human agency, operators should be able to make informed, autonomous decisions regarding the outcomes of AI tools and have the skills to assess and challenge them. Therefore, training sessions are encouraged to ensure that operators have the knowledge to understand how the system works and how to interact with it [<xref ref-type="bibr" rid="ref8">8</xref>].
        </p>
        <p>
          The purpose of human oversight is to prevent or minimise the potential risks of AI-enabled technologies. Meaningful human control can only be achieved if human-centric design principles and appropriate human-machine interfaces are embedded into the technologies. Additional measures should be implemented to ensure that operators have the expertise, necessary competencies, and authority to exercise human control effectively, e.g., training sessions that build understanding of the capabilities and limitations of the deployed technology and raise awareness of automation bias [<xref ref-type="bibr" rid="ref9">9</xref>].
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Prevention of Harms</title>
        <p>
          The principle of prevention of harms means that AI-enabled technologies should neither cause harm nor have detrimental consequences for individuals. This implies that operators’ dignity must be respected and their mental and physical integrity protected. Particular emphasis must be placed on the potential harms that technology can cause or exacerbate for workers, whom the AI HLEG considers vulnerable given the power imbalance and information asymmetries with employers. To minimise the impact of AI-enabled technologies on operators, a participatory approach could be adopted in which workers are involved in the development and deployment of the technology [<xref ref-type="bibr" rid="ref10">10</xref>].
        </p>
        <p>The potential harms that can be caused by AI-enabled technologies also require addressing: i)
the technical robustness and safety of the technology; ii) privacy and data governance concerns;
and iii) societal and environmental well-being.</p>
        <p>
          Firstly, AI-enabled technologies must be robust, resilient, secure, safe, accurate, reliable, and reproducible. Technical robustness and resilience should be ensured to prevent the exploitation of vulnerabilities by third parties and misuse [<xref ref-type="bibr" rid="ref11">11</xref>]. Therefore, potential security risks must be evaluated at the design, development, and deployment phases, and mitigation measures must be implemented in accordance with the magnitude and likelihood of the risks. Security and safety measures should also be put in place to enhance operators’ safety and prevent detrimental consequences. To this end, a fallback plan can serve to ensure safety in case of a system failure. Likewise, AI-enabled technologies must be accurate. Accuracy rates should be particularly high when such systems can directly affect individuals, as is the case with operators whose integrity may be compromised. Accuracy must be monitored on an ongoing basis, and procedures to mitigate and correct potential risks must be implemented. Additionally, operators need to trust the system to use it; reliability and reproducibility are therefore key to ensuring the adoption of the technology among them [<xref ref-type="bibr" rid="ref12">12</xref>].
        </p>
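        <p>The ongoing accuracy monitoring described above can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed mechanism: the rolling-window size and alert threshold are hypothetical values that a real deployment would calibrate to the risk the system poses to operators.</p>

```python
from collections import deque


class AccuracyMonitor:
    """Tracks rolling accuracy of a deployed model and flags degradation.

    `window` and `threshold` are illustrative assumptions; they are not
    values taken from the AI HLEG guidelines.
    """

    def __init__(self, window=500, threshold=0.95):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong
        self.threshold = threshold

    def record(self, prediction, ground_truth):
        # Compare each prediction against later-inspected ground truth.
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def accuracy(self):
        if not self.outcomes:
            return 1.0
        return sum(self.outcomes) / len(self.outcomes)

    def needs_review(self):
        # Trigger the mitigation/correction procedure when rolling
        # accuracy drops under the agreed threshold.
        return self.threshold > self.accuracy()


# Hypothetical usage during a shift: predictions vs. inspected truth.
monitor = AccuracyMonitor(window=4, threshold=0.9)
for predicted, actual in [(1, 1), (0, 0), (1, 0), (1, 1)]:
    monitor.record(predicted, actual)
print(monitor.accuracy(), monitor.needs_review())  # 0.75 True
```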
        <p>
          Secondly, the prevention of harms to privacy and data protection is paramount given the
potential risks that AI-enabled technologies pose to these fundamental rights through the
processing of massive amounts of personal data, including the unintended collection of personal
data. These rights can also be at stake because personal information can be inferred from
non-personal data [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
        </p>
        <p>
          Respect for workers’ rights to privacy and data protection must be ensured by complying with the GDPR and by aligning with existing standards or widely adopted protocols. In IoT environments, it is particularly crucial to clarify data ownership, the roles of data controllers and processors, and access to data [<xref ref-type="bibr" rid="ref14">14</xref>]. Oversight mechanisms must also be put in place to ensure data quality (e.g., representativeness of the dataset) and integrity, minimising the risks of using biased, inaccurate, or compromised datasets. Therefore, processes and datasets must be scrutinised and documented throughout the AI system’s lifecycle.
        </p>
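        <p>As a minimal illustration of such an oversight check, the sketch below compares the share of each group in a dataset against known shopfloor population shares. The group labels, shares, and tolerance are hypothetical values chosen for the example, not parameters defined by the guidelines.</p>

```python
from collections import Counter


def representativeness_gaps(sample_groups, population_shares, tolerance=0.05):
    """Return groups whose share in the dataset deviates from the known
    population share by more than `tolerance` (an illustrative cut-off)."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps


# Hypothetical data: shares of operators per age band on the shopfloor.
population = {"under_30": 0.30, "30_to_50": 0.50, "over_50": 0.20}
sample = ["under_30"] * 45 + ["30_to_50"] * 50 + ["over_50"] * 5
print(representativeness_gaps(sample, population))
# -> {'under_30': 0.15, 'over_50': -0.15}: older operators are
#    under-represented in this dataset and it should be rebalanced.
```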
        <p>
          Lastly, the use of AI-enabled technologies should aim at benefitting society and the environment. AI systems must be designed, developed, and deployed with sustainability and environmental friendliness in mind. Therefore, the ecological impact of the system should be evaluated throughout the system’s lifecycle, and measures to reduce such impact should be encouraged. The social impact of the system should be regularly assessed at both the individual and societal level. For instance, the evaluation of the impact of the technology on operators should cover, among other issues, physical and mental health, non-discrimination, and de-skilling of the workforce. At the societal level, the impact on the job market and its wider consequences should be addressed [<xref ref-type="bibr" rid="ref8">8</xref>].
        </p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Fairness</title>
        <p>The principle of fairness entails equality, diversity, and the prevention of discrimination and stigmatisation of individuals and groups. Equality requires that all persons, by virtue of their humanity and regardless of age, gender, sexuality, disability, ethnicity, or other personal characteristics, deserve equal regard and respect.</p>
        <p>Fairness can be achieved by i) promoting diversity, inclusion and non-discrimination; ii)
fostering societal and environmental well-being while reducing potential harms; and iii) adopting
accountability measures.</p>
        <p>
          Firstly, diversity and non-discrimination can be enhanced with oversight processes that identify, examine, address, and test for biases in the datasets and at the design and development phases [<xref ref-type="bibr" rid="ref15">15</xref>]. From a design perspective, technology should be understandable and accessible to all operators regardless of their age, abilities, or characteristics. In this regard, the participation of relevant stakeholders with diverse backgrounds and viewpoints at the different stages is highly encouraged to ensure that diversity is embedded into the system [<xref ref-type="bibr" rid="ref16">16</xref>].
        </p>
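        <p>One common way to test a system’s outputs for such biases is a disparate-impact check. The sketch below applies the widely used four-fifths rule; the 0.8 cut-off is a convention borrowed from employment-discrimination practice, and the worker identifiers and outcomes are hypothetical.</p>

```python
def disparate_impact_ratio(favourable, group_a, group_b):
    """Ratio of favourable-outcome rates between two groups.

    `favourable` maps each worker id to True/False (e.g., passed an
    AI-driven screening); group_a and group_b are lists of worker ids.
    A ratio near 1.0 means similar treatment of the two groups.
    """
    def rate(group):
        return sum(favourable[w] for w in group) / len(group)

    rate_a, rate_b = rate(group_a), rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)


# Hypothetical screening outcomes for two operator groups.
outcomes = {"w1": True, "w2": True, "w3": False, "w4": True,
            "w5": True, "w6": False, "w7": False, "w8": False}
ratio = disparate_impact_ratio(outcomes,
                               ["w1", "w2", "w3", "w4"],
                               ["w5", "w6", "w7", "w8"])
print(round(ratio, 2))  # -> 0.33
# Under the four-fifths convention, a ratio under 0.8 flags the
# outcome distribution for human review.
```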
        <p>Secondly, AI-enabled technologies should be designed to strive for social and environmental
well-being. Concerning the principle of fairness, the social impact of the system on operators
should be evaluated in terms of causing or exacerbating discrimination, stigmatisation, or
marginalisation.</p>
        <p>
          Lastly, accountability requires the implementation of appropriate technical and organisational measures to report the system’s performance and provide effective remedy and redress to the extent possible. Such measures include the assessment of design processes, the underlying technology, and the datasets used, which allows for the auditability of the system. Auditability involves reporting the negative impacts of the system, identifying appropriate mitigation measures, and feeding them back into the system [<xref ref-type="bibr" rid="ref7">7</xref>]. These negative impacts can be identified and assessed through comprehensive impact assessments conducted on a regular basis [<xref ref-type="bibr" rid="ref16">16</xref>]. Accountability also includes providing explanations of the system’s outcomes and the ability to seek redress.
        </p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Explicability</title>
        <p>
          The principle of explicability requires transparency of the AI system—including the datasets, the inner workings of the system, and the business model—which ultimately enables human oversight [<xref ref-type="bibr" rid="ref8">8</xref>]. For systems to be transparent, traceability measures must be implemented. This implies that the datasets and the technology underlying the system should be documented, e.g., the methods used for designing and developing the system, the methods used to test and validate it, and the outcomes of the system. Given that traceability allows the reasons behind a system’s outcomes to be identified, it enables explainability.
        </p>
        <p>
          Explainability means the ability to explain the system’s outcomes intelligibly [<xref ref-type="bibr" rid="ref11">11</xref>]. To this end, the rationale behind a system’s outcome should be understandable and traceable by humans. Therefore, if a system’s outcomes cause harm to operators, an explanation of how the system arrived at them should be provided to the worker in plain language. In this regard, communication is crucial, since operators must be aware that they are interacting with an AI system in the first place in order to be able to request an explanation. Consequently, operators must be informed in a clear and understandable manner about their interaction with an AI system, how the system works and its purpose, as well as its capabilities and limitations [<xref ref-type="bibr" rid="ref8">8</xref>].
        </p>
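        <p>A minimal sketch of how such a plain-language explanation could be assembled from per-feature contribution scores. The feature names, scores, and wording are hypothetical; a production system would derive the contributions from an attribution method and tailor the text to the shopfloor context.</p>

```python
def explain_outcome(outcome, contributions, top_n=2):
    """Turn per-feature contribution scores into a plain-language
    explanation an operator can read. `contributions` maps feature
    names to signed scores (hypothetical attribution output)."""
    # Rank features by the magnitude of their contribution.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                    reverse=True)[:top_n]
    reasons = ", ".join(
        f"{name} ({'raised' if score > 0 else 'lowered'} the score)"
        for name, score in ranked
    )
    return (f"The AI system flagged this item as '{outcome}'. "
            f"The main factors were: {reasons}. "
            "You can contest this outcome with your supervisor.")


# Hypothetical contributions for a quality-inspection decision.
msg = explain_outcome("defective", {
    "surface_scratch_depth": 0.62,
    "weld_temperature": -0.05,
    "paint_thickness": 0.21,
})
print(msg)
```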
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>AI systems in industrial manufacturing do not only bring positive impacts; they also carry risks of negative effects on the workforce, the environment, the company’s reputation, and broader society. Turning AI ethical principles into practice requires the implementation of technical and organizational measures aimed at ensuring the design and development of trustworthy AI systems. This, in turn, empowers operators, enhances social inclusion, and informs up-skilling training programs. In sum, it supports the adoption of AI technologies on the shopfloor that are beneficial to humans individually, organizationally, and societally.</p>
      <p>This paper is a first attempt to put AI ethical principles into practice in industrial AI. It provides a starting point for the discussion on how the effective operationalization of AI ethical principles can be achieved in manufacturing. Further research should focus on the implementation of the AI ethical requirements corresponding to these principles, namely: i) human agency and oversight; ii) technical robustness and safety; iii) privacy and data governance; iv) transparency; v) diversity, non-discrimination, and fairness; vi) environmental and societal well-being; and vii) accountability. This would provide a higher level of granularity, allowing more specific organizational and technical measures and thereby benefitting the operationalization of the high-level ethical principles and the design and development of trustworthy AI tools. Likewise, further research on the operationalization of AI ethical principles in manufacturing should be tailored to the multiple data-driven technologies used on the shopfloor (for instance, digital twins, cobots, or VR/AR/XR), as they entail different legal, ethical, and societal risks to be accounted for.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This research was funded by the OPTIMAI project as part of the European Union’s Horizon
2020 research and innovation programme, under grant agreement No. 958264.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] P. Brey, B. Lundgren, K. Macnish, M. Ryan, A. Andreou, L. Brooks, T. Jiya, R. Klar, D. Lanzareth, J. Maas, I. Oluoch, B. Stahl, D3.2 Guidelines for the development and the use of SIS, SHERPA (2021). doi:10.21253/DMU.11316833.V3.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] AlgorithmWatch, AI Ethics Guidelines Global Inventory, (2020). URL: https://algorithmwatch.org/en/ai-ethics-guidelines-global-inventory.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] IEEE, IEEE Ethics In Action in Autonomous and Intelligent Systems (n.d.). URL: https://ethicsinaction.ieee.org/.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] AI HLEG, Ethics guidelines for trustworthy AI, (2019). URL: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] M.W. Hofmann, R. Drath, C. Ganz, Proposal for requirements on industrial AI solutions, in: J. Beyerer, A. Maier, O. Niggemann, Eds., Machine Learning for Cyber Physical Systems, Springer, Berlin, Heidelberg, 2021, pp. 63-72. doi:10.1007/978-3-662-62746-4_7.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] I. Ajunwa, K. Crawford, J. Schultz, Limitless Worker Surveillance, California Law Review, 105 (2017) 735-776. doi:10.15779/Z38BR8MF94.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] J. Fjeld, N. Achten, H. Hilligoss, A. Nagy, M. Srikumar, Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI, (2020). URL: http://nrs.harvard.edu/urn-3:HUL.InstRepos:42160420.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Eurofound, Game-changing technologies: Transforming production and employment in Europe, (2020). URL: https://www.eurofound.europa.eu/publications/report/2020/gamechanging-technologies-transforming-production-and-employment-in-europe.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, (2017). URL: https://ec.europa.eu/newsroom/article29/items/612053.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] B. Törpel, A. Voss, M. Hartswood, R. Procter, Participatory Design: Issues and Approaches in Dynamic Constellations of Use, Design, and Research, in: M. Büscher, R. Slack, M. Rouncefield, R. Procter, M. Hartswood, A. Voss, Eds., Configuring User-Designer Relations, Springer London, London, 2009, pp. 13-29. doi:10.1007/978-1-84628-925-5_2.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] M.W. Hofmann, R. Drath, C. Ganz, Proposal for requirements on industrial AI solutions, in: J. Beyerer, A. Maier, O. Niggemann, Eds., Machine Learning for Cyber Physical Systems, Springer Berlin Heidelberg, Berlin, Heidelberg, 2021, pp. 63-72. doi:10.1007/978-3-662-62746-4_7.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] P. Jansen, P. Brey, D4.4: Ethical Analysis of AI and Robotics Technologies, SIENNA, n.d. URL: https://www.sienna-project.eu/digitalAssets/884/c_884668-l_1-k_d4.4_ethical-analysis--aiand-r--with-acknowledgements.pdf.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] S. Wachter, B. Mittelstadt, A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI, Columbia Business Law Review, 2 (2019). doi:10.31228/osf.io/mu2kf.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] S. Wachter, Normative Challenges of Identification in the Internet of Things: Privacy, Profiling, Discrimination, and the GDPR, Computer Law &amp; Security Review, 34 (2018) 436-449. doi:10.1016/j.clsr.2018.02.002.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] J. Beyerer, A. Maier, O. Niggemann, Eds., Machine Learning for Cyber Physical Systems: Selected papers from the International Conference ML4CPS 2020, Springer Vieweg, 2021. doi:10.1007/978-3-662-62746-4.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] Access Now, Human Rights in the Age of Artificial Intelligence, (2018). URL: https://www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>