<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Index of Artificial Intelligence Systems Ethics</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleh Zaritskyi</string-name>
          <email>oleh.zaritskyi@npp.nau.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National Aviation University</institution>
          ,
          <addr-line>av. L.Guzara, 1, Kyiv, 03124</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>169</fpage>
      <lpage>174</lpage>
      <abstract>
<p>The article deals with current issues in classifying the challenges that arise, and the principles that should be applied, during the development and implementation of artificial intelligence systems. Within the subject area of AI ethics, the notion of an AI system ethics index is introduced. The author presents a detailed analysis of ideas and methods in the field of artificial intelligence ethics and proposes a general approach to quantifying the level of ethics of developed systems by classifying the main challenges, evaluating them, and introducing compensatory measures. The approach reflects a general idea that could be detailed by specialists from the respective subject areas. The research is purely theoretical in nature, summarizing existing ideas and principles and, for the first time, putting forward the idea of a quantitative assessment of the ethics of artificial intelligence.</p>
      </abstract>
      <kwd-group>
        <kwd>AI system</kwd>
        <kwd>AI ethics</kwd>
        <kwd>ethics Index</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The intensive development of information technology in the last decade, especially in the field of
artificial intelligence and hardware in the form of neurosynaptic and quantum computers, poses new
challenges to society in its harmonious development in terms of moral and ethical issues, as well as
information security. Numerous AI programs published by the world's leading governments in the past
few years also highlight the urgency of the ethical issues that will arise as these technologies develop.
China publishes its “Next Generation Artificial Intelligence Development Plan” [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. DARPA announces a “$2B+ investment plan to overcome limitations on AI technology” [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The “AI Next” program begins [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The U.S. House Committee on Oversight and Government Reform publishes a white paper on AI and its impact on policy [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
The UK government publishes its “AI Sector Deal”, which invests £950M ($1.2B) to
support research and education and to enhance the UK's data infrastructure [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        Since 2014-2015, public and private companies and educational and research institutions have begun to
publish various regulatory documents and materials related to the ethical issues of the development,
implementation and application of artificial intelligence systems. The importance of moral issues in
information systems and AI is also evidenced by the research on ethical issues highlighted in
separate sections of the systematic AI Index reports [
        <xref ref-type="bibr" rid="ref6 ref7 ref8">6-8</xref>
        ], which identify several general principles that
unite these documents, among them confidentiality, accountability, transparency, and explainability.
      </p>
<p>The very fact that such documents appear shows that society is beginning to pay serious attention to
such a difficult issue as ethics and human rights in the field of artificial intelligence. However, it should be
noted that criticism has arisen from experts in ethics and human rights in connection with the
possible ambiguous or inaccurate use of existing terms in this field.</p>
      <p>The abstract nature of the introduced principles does not allow us to speak about their adequacy
from the point of view of a correct description of the subject area and, accordingly, about the possibility
of their use to control compliance with ethical norms in the field of AI.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>
        Research [
        <xref ref-type="bibr" rid="ref6 ref7 ref8">6-8</xref>
        ], covering more than a hundred papers produced by various organizations on AI
ethics, identifies the 12 most frequently cited challenges to AI ethics (Table 1).
      </p>
      <table-wrap id="tbl1">
        <label>Table 1</label>
        <caption>
          <p>The 12 most frequently cited ethical challenges to AI</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Ethical Challenges</th>
              <th>Definition</th>
            </tr>
          </thead>
          <tbody>
            <tr><td>Accountability</td><td>All stakeholders of AI systems are responsible for the moral implications of their use and misuse</td></tr>
            <tr><td>Safety</td><td>Throughout their operational lifetime, AI systems should not compromise the physical safety or mental integrity of humans</td></tr>
            <tr><td>Human Control</td><td>Assumes control by the developer and the end user in the development and use of AI systems, respectively</td></tr>
            <tr><td>Reliability, Robustness, and Security</td><td>All systems designed and used must be reliable in use, resistant to external influences, and meet information security standards</td></tr>
            <tr><td>Fairness</td><td>The development of AI should refrain from using datasets that contain discriminatory biases</td></tr>
            <tr><td>Diversity and Inclusion</td><td>Understand and respect the interests of all stakeholders impacted by your AI technology</td></tr>
            <tr><td>Sustainability</td><td>AI development must ensure that the sustainability of our planet is preserved for the future</td></tr>
            <tr><td>Transparency</td><td>An AI system should be able to explain its decision-making process in a clear and understandable manner</td></tr>
            <tr><td>Interpretability and Explainability</td><td>Developed AI systems should be understandable in terms of their internal content (construction) and easily explainable in terms of their functionality</td></tr>
            <tr><td>Multi-Stakeholder Engagement</td><td>Involves multiple independent stakeholders in the development and operation of AI systems</td></tr>
            <tr><td>Lawfulness and Compliance</td><td>All the stakeholders in the design of an AI system must always act in accordance with the law and all relevant regulatory regimes</td></tr>
            <tr><td>Data Privacy</td><td>Users must have the right to manage their data, which is used to train and run AI systems</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>This list is not exhaustive, but it shows general trends in AI. Research shows that fairness,
interpretability and explainability, and transparency are the most frequently mentioned principles across all the documents studied.</p>
      <p>
        Research [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] presents core ethical principles (i.e., respecting autonomy, avoiding harm and doing
good, ensuring justice) and the instrumental principles that primarily link to them. With about 100 sets
of principles published to date, it is easy to get lost in these separate but similar documents, so the
“Dynamics of AI Principles” is a tool for keeping track of, and systematizing, the bewildering and growing
number of AI principles. Private companies, governmental agencies, international
organizations, research centers, and professional organizations have published AI principles.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], much attention is paid to the legal aspects of the development of AI systems from the
perspective of the American legislative system. The report addresses issues such as privacy, innovation policy,
civil liability, criminal liability, agency, certification, labor, and taxation.
      </p>
      <p>
        In the research report [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], the authors note that for millennia, waves of technological change have
been perceived as a double-edged sword for the economy and the labor market, increasing output and
wealth but potentially reducing pay and job opportunities for typical workers. Thus, the study
emphasizes the question (SQ11): How has AI affected socioeconomic relationships? We do not see an
unambiguous answer. Perhaps the impact on the economy and the labor market is not as noticeable as
expected because AI has been localized in certain industries and countries and does not have the proper
level of implementation, i.e., it has not reached a critical mass. Thus, an analysis of recent research suggests
several main areas of concern for ethics and human rights scholars: information security (human rights,
adequate historical data for learning samples, etc.) and the impact on human and social well-being.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. AI challenges and classification approaches</title>
      <p>The research methodology involves the study of the key causes of disagreement between the fields
of research in artificial intelligence and ethics, as well as the classification (formalization) of the basic
concepts of the field of study. Obviously, all the disagreements between AI specialists and ethicists in
its classical sense arise from different interpretations of AI terms in terms of ethics. It is necessary to
turn to the definition of ethics and the tasks it addresses in a broad and narrow sense.</p>
      <p>
        Ethics is a philosophical discipline whose subject is morality. Ethics has two main functions,
moral-educational and cognitive-educational, so two areas can be distinguished within ethics: normative
ethics, aimed at teaching about life, and theoretical ethics, which studies morality [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>Theoretical ethics is a scientific discipline that examines morality as a special social phenomenon,
finds out what it is, how morality differs from other social phenomena. Theoretical ethics studies the
origin, historical development, regularities of functioning, social role and other aspects of morality. Its
methodological basis is the knowledge, concepts and ideas concerning the scientific knowledge of
morality. Normative ethics searches for a principle (or principles) that governs human behavior, guides
one's actions, establishes criteria for evaluating the moral good, and a rule that can act as a general
principle for all cases.</p>
      <p>Applied (practical) ethics studies particular problems and the application of moral ideas and
principles articulated in normative ethics to specific situations of moral choice. Applied ethics interacts
closely with the social and political sciences and has a number of sections, e.g. business ethics, medical
ethics, computer ethics, etc. Obviously, AI ethics can be classified as a branch of applied
(normative) ethics, one very close to information (computer) ethics. Let us make a brief excursion
into information ethics and consider its principles.</p>
      <p>
        Broadly speaking, computer ("information" or "cyber ethics") ethics investigates the behavior of
people who use information systems, based on which appropriate moral precepts and norms of behavior
are developed [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Computer ethics covers almost all spheres of human activity and deals with
technical, moral, legal, social, political and philosophical issues. The problems analyzed in it can be
roughly divided into several classes:
1. Problems associated with the development of moral codes for users and developers.
2. Problems of protection of property rights, copyrights and basic human rights (rights to privacy
and freedom of speech, obtaining and using information, the right to work, privacy and personal data,
etc.) as applied to the field of information technology.
      </p>
      <p>3. Cyber security, determination of the status of incidents and crimes, that is, predominantly legal
problems, as a rule, formalized in the form of national legislative acts on information security.</p>
      <p>Principles developed in computer ethics:
1. Privacy – a person's right to autonomy and freedom in private life, the right to be protected
from intrusion by authorities and others.
2. Accuracy – compliance with the norms related to the accurate execution of the
instructions for the operation of systems and information processing, and an honest and socially responsible
attitude to one's duties.</p>
      <p>3. Property – inviolability of private property. Adherence to this principle means observance of the
right of ownership of information and copyright norms.</p>
      <p>4. Accessibility – the right of citizens to information, its accessibility at any time and in any place.</p>
      <p>The principles of information (computer) ethics developed are very similar to the ethical principles
of AI that we reviewed in the literature review, but they relate only to data processing and issues of
security and ownership.</p>
<p>The problem, however, is that not all of these principles have been systematized within a
corresponding standard, and each developer tries to take into account all possible cases of AI impact on
society, which leads to duplication and to differing interpretations of what are sometimes the same principles and
challenges. For the same reason, different analytical categories are also mixed together, for example, direct impact
on society and incorrect historical data.</p>
<p>The main ethical challenges can be divided into several main groups (Figure 1).</p>
    </sec>
    <sec id="sec-4">
      <title>4. Index of AI system ethics</title>
<p>All the challenges and problematic issues that arise in one way or another during the development of AI
systems can be attributed either to issues of data and algorithms, to those that affect the economy
and society, or to conflicts with existing legal norms. Very often the challenges are complex and can
simultaneously affect the economy positively as a whole while causing social contradictions, such
as the paired issues of productivity and employment. In the aspect of data processing, special attention
should be paid to the creation of test (training) data sets, which require strict adherence to the
principles of data neutrality, representativeness, accuracy, reliability, openness and diversity.</p>
<p>Among the many approaches to classifying ethical principles in AI, the author would
distinguish four large global groups (Figure 2). Figure 2 shows the classification of ethical principles into
four main groups, as well as the relationship of these groups to the principles described in papers
[9-11]. The group “Safety and Security” includes all the principles that describe the security and protection of
data, as well as its accuracy and reliability, in order to create a secure information technology. A very
important issue is the adequacy of historical data for creating correct training samples, which affects both
the manageability of AI and social responsibility. The group “Manageability (Controllability)” describes
the principles that must be followed to create AI software that is manageable, efficient,
understandable to the end user, and controllable by the end user. The principles of this group also imply
a cautious attitude toward the prospective capabilities of the AI system being created, which have
not yet been fully clarified by the developer. Development should be conducted at a high
scientific and technical level. The system must be reproducible under different conditions. The
developer must take into account all risks in the operating conditions.</p>
      <p>The group “Social Responsibility and law” includes principles that characterize the AI system to be
developed in terms of compliance with social and legal norms of society. Group “Benefits” describes
the principles for assessing the usefulness of the developed AI system from both a tangible and
intangible point of view for the user and society as a whole.</p>
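<p>As an illustration only, the four top-level groups and the shared data principles described above can be encoded as a simple mapping; the names and the particular member principles listed under each group are assumptions made for this sketch, not a finished standard.</p>

```python
# Illustrative sketch (not a standard): the four top-level groups of AI
# ethics principles from Figure 2, with representative member principles.
PRINCIPLE_GROUPS = {
    "Safety and Security": [
        "data protection", "accuracy", "reliability", "information security",
    ],
    "Manageability (Controllability)": [
        "human control", "efficiency", "understandability to the end user",
        "reproducibility", "operational risk awareness",
    ],
    "Social Responsibility and Law": [
        "lawfulness and compliance", "fairness", "social well-being",
    ],
    "Benefits": [
        "tangible benefit", "intangible benefit",
    ],
}

# Data-related principles common to several groups (bottom of Figure 2).
COMMON_DATA_PRINCIPLES = [
    "data neutrality", "representative data", "accuracy", "reliability",
    "openness", "diversity", "accessibility", "accountability", "auditability",
]


def groups_containing(principle: str) -> list[str]:
    """Return the top-level groups whose list includes the given principle."""
    return [g for g, members in PRINCIPLE_GROUPS.items() if principle in members]


print(groups_containing("fairness"))  # ['Social Responsibility and Law']
```

<p>Such a mapping would let a detailed classification (a final list of principles under each upper-level group) be filled in later without changing the structure.</p>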
      <p>There is a group of principles (bottom of Figure 2): data neutrality, representative data, accuracy,
reliability, openness, diversity, accessibility, accountability, auditability that are common to the three
groups. These principles relate to data, but their implementation lays the foundation for addressing
safety, manageability, and social responsibility. Using the developed classifiers of challenges and
principles, let us introduce an index of AI system ethics (1):</p>
      <p>E<sub>AI</sub> = Σ<sub>i=1</sub><sup>N</sup> (P<sub>i</sub> + P<sub>i</sub><sup>IN</sup>), E<sub>AI</sub> → max, (1)</p>
      <p>where P<sub>i</sub> is an evaluation of the principles that the AI system fulfills, (+) meaning positive impact and (−) negative; P<sub>i</sub><sup>IN</sup> is a principle (initiative, IN) necessary to compensate for negative influences. The limiting condition is that each compensated evaluation P<sub>i</sub> + P<sub>i</sub><sup>IN</sup> tends to its maximum and is not negative.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Summary and Conclusion</title>
<p>The main condition for using formula (1) is that the principles of development must correlate with the
challenges posed by the development and minimize their negative impact on society from an ethical
point of view. For example, automation can lead to increased productivity and, as a consequence, to job
losses. This is a serious challenge with the highest ethical rating. We can rate its impact on the principle of
fundamental rights and freedoms (the right to work) on a maximum scale of, for example, 5 points: P<sub>i</sub> = −5.
At the same time, automation can reduce the length of the working day, improving social conditions (if there is
such a provision in state programs, for example), so we can also assess this compensating initiative on the maximum
scale: P<sub>i</sub><sup>IN</sup> = +5. Thus, we are talking about the implementation of initiatives (recommendations for
legislatures and businesses) that can minimize the impact of negative factors (challenges).</p>
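<p>The automation example above can be checked numerically; the variable names are illustrative, and the scores are the article's example 5-point values:</p>

```python
# Challenge: automation causes job losses, a negative impact on the
# principle of fundamental rights and freedoms (right to work).
p_right_to_work = -5

# Compensating initiative: a shorter working day improving social conditions.
p_in_shorter_day = +5

# The initiative fully offsets the challenge, so this pair contributes
# nothing negative to the overall index E_AI.
print(p_right_to_work + p_in_shorter_day)  # 0
```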
      <p>
        The author's top-level classification of AI ethics principles could be detailed by independent
research and introduced as a standard after agreement with all stakeholders. The index of AI ethics is
considered a general approach to the evaluation of developed AI systems. It requires further
study in terms of a detailed classification of principles in the form of a final list, which will be included
in the upper-level groups proposed by the author. The development of evaluation scales for quantitative
assessment of the ethics of AI systems is also envisaged. The results of research in the field of
ethics of artificial intelligence are closely intertwined with research on and approaches to quantifying the
technological singularity proposed in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. AI ethics in a broad sense can be seen as an applied
branch of general ethics, which examines the behavior of people who develop and use AI systems, as well
as the impact of these systems on society. The result of this study is a set of principles and norms of morality
designed both to solve particular practical problems in the development process and to support the
harmonious development of society with maximum ethical benefit.
      </p>
    </sec>
    <sec id="sec-6">
      <title>6. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] A Next Generation Artificial Intelligence Development Plan, China Copyright and Media, Aug. 2017. [Online]. Available: https://chinacopyrightandmedia.wordpress.com/2017/07/20/a-next-generation-artificial-intelligence-development-plan/.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <article-title>[2] DARPA announces “$2B+ investment plan to overcome limitations on AI technology”</article-title>
          ,
          <source>Defense Advanced Research Projects Agency</source>
          ,
          <year>2018</year>
          . [Online]. Available: https://www.darpa.mil/newsevents/2018-09-07.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] The AI Next program begins, Defense Advanced Research Projects Agency, 2018. [Online]. Available: https://www.darpa.mil/work-with-us/ai-next-campaign.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Rise of the Machines: Artificial Intelligence and its Growing Impact on U.S. Policy. United States Congress, House Committee on Oversight and Government Reform, 2018. [Online]. Available: https://www.hsdl.org/?abstract&amp;did=816362</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] AI Sector Deal. Policy paper. Gov.uk. [Online]. Available: https://www.gov.uk/government/publications/artificial-intelligence-sector-deal/ai-sector-deal.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shoham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Perrault</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Brynjolfsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Manyika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.C.</given-names>
            <surname>Niebles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lyons</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Etchemendy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Grosz</surname>
          </string-name>
          and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Bauer</surname>
          </string-name>
          .
          <source>Artificial intelligence Index</source>
          .
          <source>2018 Annual Report. Steering Committee</source>
          . Stanford University.
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shoham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Brynjolfsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Manyika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.C.</given-names>
            <surname>Niebles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lyons</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Etchemendy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Grosz</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Mishra</surname>
          </string-name>
          .
          <source>Artificial intelligence Index</source>
          .
          <source>2019 Annual Report. Steering Committee</source>
          . Stanford University.
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shoham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Brynjolfsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Manyika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.C.</given-names>
            <surname>Niebles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lyons</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Etchemendy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Grosz</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Mishra</surname>
          </string-name>
          .
          <source>Artificial intelligence Index</source>
          .
          <source>2021 Annual Report. Steering Committee</source>
          . Stanford University.
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <article-title>[9] TOOLBOX: Dynamics of AI Principles, AI ETHICS LAB</article-title>
          . Aiethicslab.com.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] https://aiethicslab.com/big-picture/.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015 Study Panel, Stanford University, 2016. [Online]. Available: https://ai100.stanford.edu/2016-report.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report. Stanford University, 2021. [Online]. Available: https://ai100.stanford.edu/2021-report/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Julia</given-names>
            <surname>Driver</surname>
          </string-name>
          .
          <article-title>Ethics: The Fundamentals</article-title>
          . Wiley-Blackwell, 1st ed.,
          <year>2006</year>
          . 192 p.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T.</given-names>
            <surname>Bynum</surname>
          </string-name>
          . Computer and Information Ethics.
          <source>The Stanford Encyclopedia of Philosophy (Spring 2011 Edition)</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] O. Zaritskyi, O. Ponomarenko. Quantitative assessment of technological singularity, The International Scientific and Technical Journal Problems of Control and Informatics, 2022, № 1, pp. 93-111. DOI: http://doi.org/10.34229/1028-0979-2022-1-9.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>