<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Trustworthy AI Systems from Untrustworthy Components: Development von Neumann&apos;s Paradigm using Principle of Diversity</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Vyacheslav</forename><surname>Kharchenko</surname></persName>
							<email>v.kharchenko@csn.khai.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">National Aerospace University KhAI</orgName>
								<address>
									<addrLine>Vadym Manko str., 17</addrLine>
									<postCode>61070</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleg</forename><surname>Odarushchenko</surname></persName>
							<email>odarushchenko@gmail.com</email>
							<affiliation key="aff1">
								<orgName type="institution">Poltava State Agrarian University</orgName>
								<address>
									<addrLine>Skovorody str., 1/3</addrLine>
									<postCode>36003</postCode>
									<settlement>Poltava</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Trustworthy AI Systems from Untrustworthy Components: Development von Neumann&apos;s Paradigm using Principle of Diversity</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">6C68C0C505398425BDD0D2293C817D44</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:12+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Artificial intelligence</term>
					<term>trustworthiness</term>
					<term>safety</term>
					<term>two-version AI system</term>
					<term>common cause failure</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The article discusses the possibilities of creating trustworthy and explainable artificial intelligence (AI) and AI-based systems (AIS) using the well-known von Neumann paradigm (VNP). Models of AI and AIS quality are analyzed, focusing on the most challenging attributes related to the trustworthiness of AI and the safety and security of AISs. The framework of analysis, VNP formulations, methods of implementation, and stages of VNP evolution (in the context of dependable and resilient systems and infrastructures), including the stage of creating AISs and the particularities of implementing the paradigm for various AI quality attributes, are described. An approach and mathematical models describing the application of diversity principles to build trustworthy AISs out of insufficiently trustworthy AI components (channels) are developed and investigated. The problem of AIS "immortality", the research results and future steps are discussed.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1.">Motivation</head><p>The development and implementation of artificial intelligence (AI) methods, tools and technologies take place in three main directions. The first direction concerns the improvement of various services that raise the quality of life and perform functions providing greater comfort and convenience in everyday life, business and finance <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. The second direction is related to the use of AI for developing algorithms and control tools for industry, transport, power stations and grids, etc. <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b5">6]</ref>.</p><p>The third direction concerns the reliability and security problems of artificial intelligence; by analogy with the well-known term safeware proposed by N. Leveson <ref type="bibr" target="#b6">[7]</ref>, it can be defined as AI safeware (AISaW) or AI secureware (AISeW). This direction is clearly related to the first two, since reliability and safety issues are very important there as well.</p><p>There are many cases when the unpredictable and erroneous behavior of AI means has led to catastrophic consequences for services and systems of the first two directions <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9]</ref>. Their analysis, as well as the forecast of an increase in AI vulnerabilities, threats of cyberattacks and specific failures of intelligent systems, prompted well-known specialists to call for slowing down or even stopping the development and distribution of AI products, the use of the ChatGPT service, etc. <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>.</p><p>Therefore, it is urgent to find solutions that harmonize the first two directions with the third one and ensure the predictable, reliable and safe functioning of AI systems. Such solutions can be based on various types of testing of AI behavior, the application of redundancy, and means of tolerating and protecting against the consequences of anomalous behavior caused by hidden vulnerabilities and faults, non-compliance with requirements, and failures of the software and hardware platforms of intelligent systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2.">State of the art</head><p>In the context of safety and security, artificial intelligence is considered from three positions <ref type="bibr" target="#b11">[12]</ref>: AI as a safety/security object (AI as an asset that must be protected, AIaSO); AI as a means of ensuring safety/security, the so-called AI-powered protection (AI as an asset for protection, AIaSP); and AI as a means of breaching safety/security, the so-called AI-powered attacks (AI as an asset for an attack, AIaSA). The same division is possible from the point of view of any other characteristic X (AIaXO, AIaXP, AIaXA), such as reliability, dependability, resilience, trustworthiness and others. According to <ref type="bibr" target="#b12">[13]</ref>, eight main scenarios can be considered depending on Yes/No cases for the three options, for example, for security and AI.</p><p>This research focuses on the first issue, when it is necessary to ensure the reliability, safety, and specific characteristics of AI and AI systems using different kinds of redundancy. There are many publications related to the direction AIaXO and dedicated to various aspects of assessment, development and implementation of methods and means for providing the required characteristics X of intelligent systems <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15,</ref><ref type="bibr" target="#b15">16]</ref>. However, we chose to base further investigation on the classical work of John von Neumann, "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components", proposed in 1952 <ref type="bibr" target="#b16">[17]</ref>.</p><p>The concept of "wide" dependability as a federative attribute joining reliability, maintainability, availability, safety and integrity <ref type="bibr" target="#b17">[18]</ref> became a logical development of the paradigm of "building reliable computer systems with unreliable components" <ref type="bibr" target="#b18">[19]</ref>. It is important to note that von Neumann had in mind the use of the principles of high availability and redundancy primarily for hardware (HW), which at that time was the main cause of computer system failures. With the development of computer technology and information systems, the share of HW failures gradually decreased compared to failures caused by software (SW) design faults <ref type="bibr" target="#b19">[20]</ref>.</p><p>This led to the development of the VNP by other researchers: in particular, the authors of <ref type="bibr" target="#b20">[21]</ref> and <ref type="bibr" target="#b21">[22]</ref> suggested the methodology of N-version programming and the concept of building dependable systems from undependable components, respectively. The next stage of development was the formulation of the dependability concept for specific classes of systems, such as the concept of creating dependable service-oriented systems from insufficiently dependable web components with uncertain characteristics <ref type="bibr" target="#b21">[22]</ref>. The methodology of building safe systems and infrastructures from insufficiently safe systems is considered in <ref type="bibr" target="#b22">[23]</ref>. 
The development of the VNP was further extended to cloud IT-infrastructures and I&amp;C systems with multipurpose maintenance <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b24">25]</ref>.</p><p>As a preliminary conclusion, it should be noted that, on the one hand, the specific features of AI models and tools are not yet taken into account within the framework of the development of VNP principles; on the other hand, the large number of publications on providing AI trustworthiness, explainability, ethics, etc., almost do not take into account the systemic problems of dependability, safety and so on.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.3.">Objectives and structure</head><p>The aim of the research is to analyze the possibilities of applying von Neumann's paradigm (VNP) and VNP-based solutions to improve the trustworthiness and other characteristics of AI systems. The objectives are the following:</p><p>• to discuss AI quality models <ref type="bibr" target="#b23">[24]</ref>, their characteristics and sub-characteristics, in order to determine which of them, and how, can be improved by a VNP-based approach;</p><p>• to analyze the stages of VNP evolution in order to justify possible options for implementing the paradigm for providing trustworthiness and other AI characteristics;</p><p>• to develop and investigate models of trustworthy and safe AI systems which are based on the application of the diversity principle, or version redundancy (VR), to create redundant channels and thereby implement the VNP. The structure of the paper is as follows: Section 2 analyzes models of AI and AI system quality, focusing on the most challenging attributes related to the trustworthiness of AI and the safety and security of AI systems; Section 3 discusses the structure and stages of development of the VNP (in the context of dependable and resilient systems and infrastructures), including the stage of creating AI systems and the particularities of implementing the paradigm for various AI quality attributes; Section 4 presents an approach, a solution and mathematical models describing the application of diversity principles to build a trustworthy AI system out of insufficiently trustworthy AI components (channels); Section 5 discusses the results of the investigations and considers the problem of AI and AI system immortality; the final Section 6 summarizes the work and describes future research directions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Model of AI systems quality: trustworthiness</head><p>In order to answer the question of how the VNP can be developed and used to improve the specific dependability-related characteristics of intelligent systems, it is necessary to determine the features of the AI characteristics. For this, it is suggested to use the quality model proposed in <ref type="bibr" target="#b25">[26]</ref>. The model of AI system quality consists of two parts, or sub-models. The first one is the quality model of the artificial intelligence itself; the second part is the quality model of the software-hardware platform that implements the functional algorithms of artificial intelligence in accordance with the requirements. Table <ref type="table">1</ref> describes a simplified, so-called basic version of the AI quality model <ref type="bibr" target="#b25">[26]</ref>, which in its full form has a three-level structure and includes 32 characteristics. The basic version provides a two-level model, with five characteristics at the first level and 16 sub-characteristics that form the second level and detail the AI characteristics. The table defines the characteristics of ethics (ETH), lawfulness (LFL), explainability (EXP), responsibility (RSP) and trustworthiness (TST), as well as a list of relevant sub-characteristics and their codification in alphabetical order.</p><p>The definition of characteristics and sub-characteristics was performed on the basis of the analysis of a large number of articles and regulatory documents in accordance with the methodology described in <ref type="bibr" target="#b25">[26]</ref>. This technique was based on semantic analysis, selection, and harmonization of definitions. It should be noted that in the past two years many new normative documents of various levels regarding the characteristics of AI have appeared <ref type="bibr" target="#b26">[27]</ref>. However, in our opinion, this does not fundamentally affect the conclusions of this study.</p><p>The second part of the quality model of AI systems, namely their platforms, consists of two subsets: a subset of more traditional characteristics, namely <ref type="bibr" target="#b25">[26]</ref>: auditability (ADT), availability (AVL), controllability (CNT), effectiveness (EFS), reliability (RLB), maintainability (MNT), sustainability (SST) and usability (USB); and a subset of sub-characteristics (the so-called AIG group) overlapping with the AI trustworthiness sub-characteristics, such as accuracy (ACR), diversity (DVS), resiliency (RSL), robustness (RBS), safety (SFT), security (SCR), and the verifiability (VFB) sub-characteristic of explainability.</p><p>The main differences between the AI quality model and SW quality models are: the presence of specific characteristics, namely ethics, lawfulness, etc.; the definition of trustworthiness, explainability and responsibility as the main characteristics; the subordination of such important, primary characteristics of traditional (critical) systems as safety, security, resilience and others to the key AI characteristic of trustworthiness; and the filling of explainability with a set of known (VFB) and relatively new sub-characteristics such as comprehensibility, interpretability and others.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Evolution of von Neumann's paradigm: a stage of developing trustworthy and explainable AI systems</head><p>The development and enhancement of intelligent systems contributes to the further advancement and expansion of the VNP. Initially, the development of the VNP for AI systems can take place in an understandable way, when such systems are considered as software and hardware implementations of certain functions and the key question is that of their reliable (dependable) functioning.</p><p>The VNP can then be formulated as "a reliable AI system from insufficiently reliable (AI or any other) components" in the simplest option, or this formulation can be detailed in view of the evolution of computer systems as such.</p><p>This approach is quite acceptable if we are talking about the software and hardware platform of the AI system, which is distinguished in the quality model of AI systems whose main component is the AI itself (the corresponding models and algorithms). However, if the specific attributes/characteristics of the quality of AI itself, such as trustworthiness, explainability, ethics and so on, are taken into account, the paradigm should be formulated and developed more carefully. This is due to the fact that, according to its own ideology, AI can have so-called natural properties for some characteristics; in particular, natural resilience, robustness, etc. <ref type="bibr" target="#b27">[28]</ref>.</p><p>Therefore, the formulation of the VNP for AI systems should be based on the most important and quite specific AI characteristics, first of all trustworthiness, which integrates several essential sub-characteristics such as diversity, resilience, etc. Other specific AI characteristics (explainability, ethics, lawfulness) are interesting to analyze from the point of view of the possibility of applying and implementing the VNP for their improvement.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">VNP evolution analysis</head><p>Figure <ref type="figure" target="#fig_0">1</ref> describes the stages and essence of the paradigm evolution in the two-coordinate space "stages (systems/components/component properties) -system properties (reliability, availability, safety, dependability,…)", supplemented by the methods of VNP implementation. The objects of analysis are the evolutionary stages, systems, components, and their properties, which are the material embodiment of the respective stage. Elements of an evolution-based methodology for analyzing the transformation of the VNP were suggested and investigated in <ref type="bibr" target="#b28">[29]</ref>, which describes the evolution stages without AI systems. A formula of the VNP can be presented by the following tuple: VNP = &lt; {Proc}, {CharS}, {Syst}, From, {CharC}, {Comp} &gt;, (1) where {Proc} is a set of processes (synthesis, development, creation,…); {CharS} is a set of system characteristics (reliable, dependable, safe, secure, resilient, trustworthy,…); {Syst} is a set of systems (device, system, infrastructure,…) that are synthesized, developed, created,…; From is a preposition connecting the system Syst and its components Comp (Syst X from Comp Y); {CharC} is a set of characteristics of the components that are part of the system Syst (usually they are antipodes of CharS: unreliable or not reliable enough, unsafe, undependable, unresilient, untrustworthy); {Comp} is a set of components (relay, integrated circuit/chip, hardware, software, system,…) used to build the system Syst.</p><p>It is clear that some of the element sets of the VNP tuple may be empty. Figure <ref type="figure" target="#fig_0">1</ref> describes a fragment of VNP evolution over seven stages (from the 1950s to the 2020s), each of which is presented by a formulation of form (1), beginning with the initial, simplest expression "Synthesis of reliable devices from unreliable (not enough reliable) relays" (1950s, applied method: static structural redundancy) up to the formulation of the 2010s, "Development (deployment) of dependable IT-infrastructures from undependable systems (services with uncertain and varying characteristics)". As for AI systems (the seventh stage, the 2020s), several VNP formulations are also possible depending on which characteristics or sub-characteristics are considered. The most general is "Development of trustworthy (and/or explainable) AI systems from untrustworthy (unexplainable) AI components". The VNP can also be developed and applied to technologies in which AI-based solutions are directly and effectively embedded. This applies, in particular, to IoT/IoE technologies. Known branches of these technologies are integrated with AI, namely the so-called Internet of Artificial Intelligence Things (IoAIT), Internet of Artificial Intelligence (IoAI), Artificial Intelligence of Things (AIoT) and so on.</p></div>
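<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make the tuple (1) concrete, the following minimal Python sketch (an illustration added for this text; the field names and the example instance are assumptions rather than part of the original formulation) represents a VNP formulation as a simple data structure and instantiates the seventh-stage (AI) variant discussed above.</p><p>
# Minimal illustrative sketch of the VNP tuple, formula (1):
# VNP = (Proc, CharS, Syst, From, CharC, Comp).
# Field names and the example instance are assumptions for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class VNPFormulation:
    proc: str    # process: synthesis, development, creation, ...
    char_s: str  # target system characteristic: reliable, dependable, trustworthy, ...
    syst: str    # system: device, system, infrastructure, AI system, ...
    char_c: str  # component characteristic (usually the antipode of char_s)
    comp: str    # component: relay, chip, hardware, software, AI channel, ...

    def formulate(self) -> str:
        # Render the formulation "Proc of CharS Syst from CharC Comp"
        return f"{self.proc} of {self.char_s} {self.syst} from {self.char_c} {self.comp}"

# The seventh (AI) stage of VNP evolution, as formulated in the text:
ai_stage = VNPFormulation(
    proc="Development",
    char_s="trustworthy (explainable)",
    syst="AI systems",
    char_c="untrustworthy (unexplainable)",
    comp="AI components",
)
print(ai_stage.formulate())
</p></div>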
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">VNP for AI systems: options</head><p>AIoT is defined as the combination of artificial intelligence technologies with the IoT infrastructure to achieve more efficient IoT operations, improve human-machine interactions and enhance data management and analytics.</p><p>Hence, the VNP that was formulated for IoT systems as the Dependable Internet of Undependable (not enough Dependable) Things, DIoUDT, and implemented by the application of redundant nodes and communications, can be reformulated as the Trustworthy Artificial Intelligence of Untrustworthy (not enough Trustworthy) Things, TAIoUTT. Developing such systems is more problematic when the characteristics of explainability and ethics are considered.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Application of diversity for creating trustworthy AI systems out of untrustworthy AI components</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Principle of diversity for developing trustworthy and safe AI systems</head><p>In the early stages of its development, the VNP was based exclusively on the use of structural redundancy. Later, when the productivity of electronic components and computers increased and, as a result, some time reserves appeared, redundancy based on the use of such reserves complemented the structural redundancy. Thus, temporal redundancy strengthened the structural one and improved the dynamics of development of some branches of the VNP. However, the next idea, N-version programming, became truly revolutionary for the VNP <ref type="bibr" target="#b20">[21]</ref>. Later, it developed into the principles of multi-version design and multi-version systems <ref type="bibr" target="#b21">[22,</ref><ref type="bibr" target="#b24">25]</ref>. Version redundancy, together with structural and temporal redundancy, formed a specific redundancy base for the VNP and for fault- and intrusion-tolerant systems of a very wide class.</p><p>The application of structural and temporal redundancy cannot protect an AI system, or digital systems in general, from failures caused by design faults and programming errors, which give rise to so-called common cause failures (CCF), since such faults are replicated on backup channels and repeated in additional phases of calculation.</p><p>Version redundancy, which embodies the principle of diversity, whereby the same task is implemented using different programming languages and teams of programmers, developers and verifiers, different environments and development tools, different software and hardware platforms, different life cycle models, etc., significantly reduces the CCF risks <ref type="bibr" target="#b29">[30]</ref>.</p><p>The implementation of the principle of diversity for intelligent systems, in terms of the actual AI models and algorithms, has its own essential specificity. The task of classifying and researching the types of diversity for AI is an independent task.</p><p>Diversification of the development of models and algorithms can be based on: different methods of construction (synthesis) of neural network model solutions; different methods of their training and retraining; diverse datasets, etc.</p></div>
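<div xmlns="http://www.tei-c.org/ns/1.0"><p>The following sketch illustrates, in a purely hypothetical form, how version redundancy can be combined with the "1 out of 2" arrangement considered below: two diversely developed channels process the same input, and disagreement between them is treated as an uncertain state rather than being acted upon. The channel implementations, the agreement rule and all names are assumptions for illustration only, not part of the authors' method.</p><p>
# Hypothetical sketch of a two-version ("1 out of 2") AI arrangement with
# diverse channels; channel internals and the agreement rule are assumptions.
from typing import Callable, Tuple

Channel = Callable[[list], int]  # a channel maps an input vector to a class label

def adjudicate(x: list, ch_a: Channel, ch_b: Channel) -> Tuple[str, int]:
    """Run both diverse channels and compare their outputs.

    Agreement: accept the common result.
    Disagreement: report an uncertain state so the system can fall back
    to a safe action instead of acting on a possible common-cause error.
    """
    ya, yb = ch_a(x), ch_b(x)
    if ya == yb:
        return ("accepted", ya)
    return ("uncertain", -1)  # trigger safe-state handling

# Placeholder diverse channels (in practice: different architectures,
# training methods and datasets, as discussed above).
channel_a: Channel = lambda x: int(sum(x) > 0)    # e.g. version Va
channel_b: Channel = lambda x: int(max(x) > 0.5)  # e.g. version Vb

print(adjudicate([0.2, 0.9], channel_a, channel_b))
</p></div>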
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Models for assessing two-version AI systems</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.1.">Theoretical-set model</head><p>Let us consider a two-channel AI system that works according to the "1 out of 2" principle. Both channels are equipped with embedded testing means that check the up/down states of hardware and software components. If the channels have been implemented using the same AI version Va (for example, identical artificial neural networks), the set of input data of both channels is described by the formula:</p><formula xml:id="formula_0">IDva = IDvao U IDvau U IDvas , (<label>2</label>)</formula><p>where IDvao is a subset of input data (ID) on which both AI channels using one version work correctly; IDvau is a subset of input data on which the work of both AI channels is uncertain; IDvas is a subset of input data on which both AI channels can work unsafely.</p><p>If the channels have been developed using two different versions (with different structures of neural networks, or different techniques and datasets for learning, and so on <ref type="bibr" target="#b30">[31]</ref>), the set of input data can be divided into the following subsets:</p><p>• input data of correct behavior of the AI versions Va (a set of input data IDvao) and Vb (a set of input data IDvbo); • input data (ID) of correct behavior of both AI versions Va and Vb, described by the set IDvabo = IDvao ∩ IDvbo; (3) the remaining subsets, covering behavior of only one version and uncertain and unsafe behavior, are defined by formulas (4)-(11). Combinations of uncertain and unsafe states of the versions are not analyzed, because such cases are identified as unsafe (set IDvabs). Note that the sets IDvao and IDvbo are defined by the datasets that were used for learning and are expected for versions Va and Vb.</p></div>
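<div xmlns="http://www.tei-c.org/ns/1.0"><p>A minimal Python sketch of the theoretical-set model is given below; the concrete input-data identifiers are invented purely to illustrate how the intersections and differences of formulas (2)-(11) can be computed for two versions Va and Vb.</p><p>
# Illustrative computation of the set model, formulas (2)-(11).
# The concrete input-data identifiers below are assumptions for demonstration only.

# Version Va: correct / uncertain / unsafe input-data subsets
id_va_o = {"x1", "x2", "x3", "x4"}                  # IDvao
id_va_u = {"x5", "x6"}                              # IDvau
id_va_s = {"x7"}                                    # IDvas
id_va = id_va_o.union(id_va_u).union(id_va_s)       # formula (2)

# Version Vb, developed with a different structure / training data
id_vb_o = {"x1", "x2", "x5", "x7"}                  # IDvbo
id_vb_u = {"x3", "x6"}                              # IDvbu
id_vb_s = {"x4"}                                    # IDvbs

# Intersections: both versions behave the same way on these inputs
id_vab_o = id_va_o.intersection(id_vb_o)            # formula (3): correct on both
id_vab_u = id_va_u.intersection(id_vb_u)            # formula (6): uncertain on both
id_vab_s = id_va_s.intersection(id_vb_s)            # formula (9): unsafe on both

# Differences: inputs handled correctly by one version only
id_va_o_only = id_va_o.difference(id_vab_o)         # formula (4)
id_vb_o_only = id_vb_o.difference(id_vab_o)         # formula (5)

print("correct on both versions:", id_vab_o)
print("uncertain on both versions:", id_vab_u)
print("unsafe on both versions:", id_vab_s)
</p></div>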
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.2.">Probabilistic models</head><p>Let us develop probabilistic models of one- and two-version two-channel AI systems. The assumptions for these models are the following: a failure of the checking and reconfiguration means is a failure of the AI system; failures of the versions (SW and HW) are independent; switching the channels on/off in case of their failures is carried out instantly.</p><p>The dependence of the probability of trustworthy operation of a two-channel (duplicated) one-version AI system on the probabilities of its states can be calculated using the following formula:</p><formula xml:id="formula_2">PAI1 = [P0 + (1 -P0) PD ] (1 -PU1 -PS1) PR1, (<label>12</label>)</formula><p>where: P0 is the probability of the channel up-state; PU1 is the probability that the inputs will receive data from the set IDvau, which will lead to the transition of the channels and the AI system to an uncertain state; PS1 is the probability that the inputs will receive data from the set IDvas, which will lead to the transition of the channels and the AI system to an unsafe state; PR1 is the probability of the up-state of the checking and reconfiguration means for the one-version two-channel AI system. This indicator for the two-channel, two-version AI system is determined as follows:</p><formula xml:id="formula_3">PAI2 = [P0 + (1 -P0) PD ] (1 -PU2 -PS2) PR2,<label>(13)</label></formula><p>where: PU2 is the probability that the inputs will receive data from the set IDvabu, which will lead to the transition of the channels and the AI system to an uncertain state; PS2 is the probability that the inputs will receive data from the set IDvabs, which will lead to the transition of the channels and the AI system to an unsafe state; PR2 is the probability of the up-state of the checking and reconfiguration means for the two-version two-channel AI system. Clearly, PU2 &lt; PU1 , PS2 &lt; PS1 , PR1 &gt; PR2. Let us calculate</p><formula xml:id="formula_4">δPAI2/AI1 = PAI2 / PAI1 = [(1 -PU2 -PS2) / (1 -PU1 -PS1)] PR2 / PR1 ≈ [(1 -βU2 -βS2) / (1 -βU1 -βS1)] PR2 / PR1,<label>(14)</label></formula><p>where: βU1 = Card IDvau / Card IDva , βS1 = Card IDvas / Card IDva are coefficients (metrics) evaluating the relative shares of input data which lead to the transition of the channels and the one-version AI system to uncertain and unsafe states, respectively; βU2 = Card IDvabu / Card IDva, βS2 = Card IDvabs / Card IDva are coefficients (metrics) evaluating the relative shares of input data which lead to the transition of the channels and the two-version AI system to uncertain and unsafe states, respectively.</p><p>If we assume that PR1 ≈ PR2, formula (14) becomes the following:</p><formula xml:id="formula_5">δPAI2/AI1 ≈ (1 -βU2 -βS2) / (1 -βU1 -βS1), (<label>15</label>)</formula><formula xml:id="formula_6">δQAI1/AI2 = (1 -PAI1) / (1 -PAI2) ≈ (βU1 + βS1) / (βU2 + βS2). (16)</formula><p>If the share of uncertain and unsafe input data IDvau and IDvas for the version of the one-version AI system equals 0.1 and the share of uncertain and unsafe input data IDvabu and IDvabs for the versions of the two-version AI system equals 0.02, the risk (probability) of unsafe or potentially unsafe states is decreased by a factor of 5.</p><p>It should be noted that the presented analytical models for calculating the relevant indicators do not take into account other types of diversity and the corresponding faults that can lead to system failures. 
AI systems are SW-HW solutions, and therefore, like any system with software or programmable hardware means, they are subject to design faults caused by developer errors, imperfections of the technical specifications, and so on.</p><p>To tolerate their consequences, the principle of diversity is applied, but it refers purely to the use of different programming languages and technologies, hardware and software platforms, etc. <ref type="bibr" target="#b30">[31,</ref><ref type="bibr" target="#b31">32]</ref>. Note that such diversity does not tolerate the specific problems of using AI models, which were discussed above. This is confirmed by the experience of using driverless automotive systems <ref type="bibr" target="#b32">[33]</ref>, where diversity is actually used to protect against software (design) faults and certain HW (physical) faults, which in its absence can cause common cause failures (CCFs) of redundant structures. However, such HW-SW diversity does not protect AI systems against vulnerabilities and complex kinds of CCFs caused by the uncertainty of model behavior, and does not guarantee trustworthy and safe functioning.</p><p>Let us analyze models for one- and two-model-version AI systems with one- and two-version SW (systems AI1-1, AI1-2, AI2-1, AI2-2, where the first digit denotes the number of model versions and the second one the number of SW versions), considering the reliability of the SW. The following formulas (17)-(20) describe the probabilities of the up-states of these systems. Hence, taking into account expressions (12)-(15) and (17)-(20) and the insignificant difference in the probabilities of the up-state of the checking and reconfiguration means for the systems, formulas (21)-(23) for calculating their relative differences can be given. These expressions allow specifying the impact of diversity at the two levels and formulating requirements for the AI versions. Formula (<ref type="formula" target="#formula_8">22</ref>) describes a simple linear dependence of the benefit of the AI2-2 system in comparison with AI1-2. Formula (<ref type="formula">23</ref>) is traditional for evaluating the increase in safety due to using diversity in duplicated systems.</p><formula xml:id="formula_7">PAI1-1 = [PHW + (1 -PHW) PD ] (1 -PU1 -PS1) PSWPR1,<label>(17)</label></formula><formula xml:id="formula_8">δPAI1-2/AI1-1 = PSWr [PHWPSWr + (1 -PHWPSWr) PD ] / [PHW + (1 -PHW) PD ], (21) δPAI2-2/AI1-2 = (1 -PU2 -PS2) / (1 -PU1 -PS1) ≈ (1 -βU2 -βS2) / (1 -βU1 -βS1),<label>(22)</label></formula></div>
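<div xmlns="http://www.tei-c.org/ns/1.0"><p>The following numerical sketch evaluates formulas (17)-(20) and the relative indicators; all parameter values are assumed purely for illustration (they are not measured or recommended values), so that the effect of model-level and software-level diversity can be compared directly.</p><p>
# Illustrative evaluation of formulas (17)-(20) and the relative indicators;
# all parameter values below are assumptions chosen only to demonstrate the
# calculation, not measured or recommended values.

p_hw   = 0.999    # PHW : probability of hardware up-state
p_sw_r = 0.995    # PSWr: SW up-state probability with respect to "relative" faults
p_sw_a = 0.998    # PSWa: SW up-state probability with respect to "absolute" faults
p_sw   = p_sw_r * p_sw_a   # PSW = PSWr * PSWa
p_d    = 0.99     # PD  : probability of detecting a channel failure
p_r1   = 0.999    # PR1 : up-state of checking/reconfiguration means, one model version
p_r2   = 0.998    # PR2 : the same for two model versions

# Shares of uncertain/unsafe input data (PU/PS approximated by the beta metrics)
b_u1, b_s1 = 0.06, 0.04     # one model version  (betaU1 + betaS1 = 0.1)
b_u2, b_s2 = 0.012, 0.008   # two model versions (betaU2 + betaS2 = 0.02)

def p_ai(p_platform, b_u, b_s, p_sw_factor, p_r):
    # Common structure of formulas (17)-(20)
    return (p_platform + (1 - p_platform) * p_d) * (1 - b_u - b_s) * p_sw_factor * p_r

p_ai_1_1 = p_ai(p_hw,          b_u1, b_s1, p_sw,   p_r1)   # formula (17)
p_ai_1_2 = p_ai(p_hw * p_sw_r, b_u1, b_s1, p_sw_a, p_r1)   # formula (18)
p_ai_2_1 = p_ai(p_hw,          b_u2, b_s2, p_sw,   p_r2)   # formula (19)
p_ai_2_2 = p_ai(p_hw * p_sw_r, b_u2, b_s2, p_sw_a, p_r2)   # formula (20)

delta_sw    = p_ai_1_2 / p_ai_1_1            # SW-level diversity effect (cf. (21))
delta_model = p_ai_2_2 / p_ai_1_2            # model-level diversity effect, approx. (22)
delta_q     = (b_u1 + b_s1) / (b_u2 + b_s2)  # formula (16): risk reduced ~5 times here

print(p_ai_1_1, p_ai_1_2, p_ai_2_1, p_ai_2_2)
print(delta_sw, delta_model, delta_q)
</p></div>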
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Discussion</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">VNP for AI systems: way to immortality</head><p>The VNP began its path of implementation on simple relay and then electronic devices. Now, after 60 years, we have come (or are approaching) to the synthesis of reliable (trustworthy…) AI from insufficiently reliable (trustworthy) components. John von Neumann wrote about reliable organisms, not about conventional technical systems. He tried to expand the scope of research and consider bio-technical systems as certain heterogeneous formations. Perhaps his next step would have been related to purely biological systems, and the ideas of redundancy and reconfiguration would have been extended to them.</p><p>Considering the presence of a large number of AI quality characteristics, it is necessary to consider the possibilities of building "better" systems from "worse" components separately for each of the characteristics. Thus, within the framework of this article, we have come closer to artificial formations ("a little bit of organisms", since AI is a step in this direction), and it then remains to take the next step to reliable "organisms".</p><p>That is, we can conclude that, firstly, the VNP circle closes, so to speak, in the sense of "organisms", and the introduction of artificial intelligence, with consideration of its dual nature as an object and a means of ensuring reliability, is a way to create and research such reliable organisms. The transfer of AI to a new technological base, such as creating a bio-technical system, can be exemplified by the development of the Australian startup Cortical Labs <ref type="bibr" target="#b34">[34]</ref>. They are working on a new type of artificial intelligence that combines lab-grown human brain cells with computer chips. This approach further bridges the gap between AI and humans, potentially increasing the level of the various threats.</p><p>Secondly, a reliable organism made of insufficiently reliable components is a step towards immortality! The path to it can be made both by reserving biological components and by replacing them with artificial means. As noted in <ref type="bibr" target="#b35">[35]</ref>, there are two threats related to the problem of AI immortality. On the one hand, there is the possible loss of renewal, which is a consequence of death; this can create the risk of weakening future generations and possible conflicts between them. On the other hand, AI immortality could create an "artificial intelligence-human" relationship similar to a "god-mortal" relationship. In the context of this study, however, immortality is understood in view of the various types of failures of AI systems and the possible embedding of components to continue functioning.</p><p>Thirdly, since it is about how to build an organism with a specified value of reliability, assessed by the probability of up-state for a required time, a person can get a tool to check and control this level. Therefore, this person, who may also be a means of artificial intelligence, will have multiple strategies for ensuring reliability through proactive repair, redundancy and reconfiguration, providing a way to immortality.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Features and limitations of applying diversity for implementing the VNP to ensure AI trustworthiness</head><p>Despite the fact that such specific characteristics of AI as trustworthiness, explainability and ethics are the top ones for intelligent systems, the characteristics of reliability, security and resilience are more understandable and familiar for developers and customers. This article did not set out to delve deeply into the safety and security problems of AI, but they should always be considered alongside the problems of evaluating and ensuring the necessary level of the specific characteristics of AI, first of all trustworthiness and explainability. The issue of determining qualitative, and especially quantitative, requirements for these characteristics is quite complex.</p><p>The principle of diversity can be quite effective from the point of view of both safety and trustworthiness. Regarding the well-known sub-characteristics of security, namely integrity, confidentiality and accessibility, the situation is somewhat more complicated, since the application of diversity increases integrity and accessibility measures but can create risks for confidentiality, considering the "weakest link" rule. Therefore, for a more thorough analysis and evaluation, it is necessary to consider one more, third, level in addition to the characteristics and sub-characteristics.</p><p>Complex and contradictory is the question of the expediency of using the diversity principle for improving ethics and lawfulness indicators. Table <ref type="table" target="#tab_1">2</ref> provides a conclusion about the inappropriateness of using the VNP to improve ethics indicators and notes that VR can be applied if the versions provide different reactions to so-called situations with ethically unacceptable alternatives (SEUA). It is theoretically possible to build versions in which such situations are diversified so as to reduce the risk of SEUA occurring for a common reason. The practical implementation of such a principle, for example for driverless cars, requires the careful specification of the list of SEUAs and the development of several AI versions using diversified techniques. Such an opportunity and the ways of its implementation are quite complex and interesting for future research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion and future work</head><p>The main contribution of this study is a framework for the formal presentation of the VNP, its application to intelligent systems, and methods of its implementation to ensure trustworthiness and other specific characteristics of AI. The importance of the proposed models and methods is that they can be detailed and developed to evaluate the feasibility and ways of using the VNP methodology in creating trustworthy and safe AI systems.</p><p>However, in our opinion, AI safe/secureware engineering has to be separated out as an independent branch of intelligent systems engineering. This is fully justified in view of the uncertainty, threats and risks associated with the use of AI systems in critical domains and their impact on the consequences caused by failed or unpredictable behavior.</p><p>Diversity is a really important and promising principle that can be used to provide the key trustworthiness, safety and security characteristics of AI systems. This applies to all elements of the triad AIaXO-AIaXP-AIaXA and the scenarios of its implementation.</p><p>Future investigations could be connected with the development of detailed models, methods and tools for assessing and providing specific characteristics and sub-characteristics of AI and AI systems. These steps should be complemented by enhancing and developing regulatory requirements and justifying quantitative values for them.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Diagram of VNP evolution including stage of AI systems development</figDesc><graphic coords="5,89.83,63.55,424.87,298.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Input data subsets for two AI versions, formulas (4)-(11)</head><label></label><figDesc>• ID of correct behavior of the AI version Va or Vb only, described by the two subsets IDvao\ = IDvao \ IDvabo, (4) and IDvbo\ = IDvbo \ IDvabo; (5) • ID of uncertain behavior of AI versions Va (a set of input data IDvau) and Vb (a set of input data IDvbu); • ID of uncertain behavior of both AI versions Va and Vb, described by the set IDvabu = IDvau ∩ IDvbu; (6) • ID of uncertain behavior of the AI version Va or Vb only, described by the two subsets IDvau\ = IDvau \ IDvabu, (7) and IDvbu\ = IDvbu \ IDvabu; (8) • ID of unsafe behavior of AI versions Va (set of ID, IDvas) and Vb (set of ID, IDvbs); • ID of unsafe behavior of both AI versions Va and Vb, described by the set IDvabs = IDvas ∩ IDvbs; (9) • ID of unsafe behavior of the AI version Va or Vb only, described by the two sets IDvas\ = IDvas \ IDvabs, (10) and IDvbs\ = IDvbs \ IDvabs. (11)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>PAI1-2 = [PHWPSWr + (1 -PHWPSWr) PD ] (1 -PU1 -PS1) PSWaPR1, (18) PAI2-1 = [PHW + (1 -PHW) PD ] (1 -PU2 -PS2) PSWPR2, (19) PAI2-2 = [PHWPSWr + (1 -PHWPSWr) PD ] (1 -PU2 -PS2) PSWaPR2 , (20) where (assuming independence of failures of the HW and SW components of the channels): P0 = PHW PSW; PHW and PSW are the probabilities of the HW and SW up-states; PSW = PSWr PSWa, where PSWr and PSWa are the probabilities of the SW up-state considering relative and absolute faults.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>δPAI2-2/AI2-1 = PSWr [PHWPSWr + (1 -PHWPSWr) PD ] / [PHW + (1 -PHW) PD ]. (23)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 Model of AI quality simplified according to [26]</head><label></label><figDesc></figDesc><table><row><cell>Characteristics</cell><cell>Definition</cell><cell>Sub-characteristics</cell></row><row><cell>Ethics, ETH</cell><cell>The ability of AI to meet current standards of morality in the results of its functioning</cell><cell>Fairness, FRN; Graspability, GRS; Human agency, HMA; Redress, RDR</cell></row><row><cell>Lawfulness, LFL</cell><cell>Ability of AI to comply with laws and regulations</cell><cell>No</cell></row><row><cell>Explainability, EXP</cell><cell>The ability of AI to be understood and predictable in terms of purpose and behavior</cell><cell>Completeness, CMT; Comprehensibility, CMH; Interpretability, INP; Interactivity, INR; Transparency, TRP; Verifiability, VFB</cell></row><row><cell>Responsibility, RSP</cell><cell>Ability of AI to function considering the expectations of the client (user) in accordance with ethical norms and legal regulations, as well as to inform the client in case of possible violation</cell><cell>No</cell></row><row><cell>Trustworthiness, TST</cell><cell>Ability of AI, characterized by the degree of confidence of the stakeholders (developers, auditors, etc.), to meet requirements and perform its functions in a predictable manner</cell><cell>Accuracy, ACR; Diversity, DVS; Resilience, RSL; Robustness, RBS; Safety, SFT; Security, SCR</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>describes possibilities of VNP implementation for AI systems taking into account various characteristics and sub-characteristics.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2 Analysis of methods for VNP implementation for AI characteristics</head><label>2</label><figDesc></figDesc><table><row><cell>AI characteristics</cell><cell>AI sub-characteristics</cell><cell>VNP application (Y/N)</cell><cell>Methods applied for VNP implementation</cell><cell>Notes</cell></row><row><cell>TST</cell><cell>In general</cell><cell>Yes</cell><cell>Methods applied for sub-characteristics</cell><cell>Compatibility of these methods and means should be taken into account</cell></row><row><cell></cell><cell>DVS</cell><cell>Yes</cell><cell>Version redundancy</cell><cell>Problem is developing and choosing versions with maximal diversity metrics</cell></row><row><cell></cell><cell>RSL</cell><cell>Yes</cell><cell>Proactivity, version and structural redundancy, dynamical reconfiguration</cell><cell>The same; besides, problems lie in the development and implementation of means providing proactivity and dynamical reconfiguration</cell></row><row><cell></cell><cell>RBS</cell><cell>Yes</cell><cell>Version redundancy</cell><cell>Problem is developing and choosing versions with maximal diversity from the point of view of input data</cell></row><row><cell></cell><cell>SFT</cell><cell>Yes</cell><cell>Version and structural redundancy</cell><cell>Main criterion of implemented methods and means is minimizing risks of CCF</cell></row><row><cell></cell><cell>SCR</cell><cell>Yes/No</cell><cell>VR for integrity and accessibility</cell><cell>Use of VR must be defined considering impact on the different security attributes</cell></row><row><cell></cell><cell>ACR</cell><cell>Yes</cell><cell>Version, time, structural redundancy</cell><cell>Main criterion is decreasing total errors</cell></row><row><cell>EXP</cell><cell>In general</cell><cell>No</cell><cell>-</cell><cell>Redundancy can increase the level of unexplainability</cell></row><row><cell>RSP</cell><cell>In general</cell><cell>Yes</cell><cell>Version, structural redundancy, dynamical reconfiguration</cell><cell>RSP is dependent on credibility, explainability, etc.; this should be taken into account when choosing the methods</cell></row><row><cell>ETH</cell><cell>In general</cell><cell>No</cell><cell>-</cell><cell>VR can be applied if the versions provide different reactions to situations with ethically unacceptable alternatives</cell></row><row><cell>LFL</cell><cell>In general</cell><cell>No</cell><cell>-</cell><cell>VR can be applied if the versions provide different reactions to so-called situations with unacceptable alternatives from the point of view of lawfulness</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Applications of Artificial Intelligence in the Economy, Including Applications</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Rahmani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Rezazadeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Haghparast</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">C</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">G</forename><surname>Ting</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2023.3300036</idno>
	</analytic>
	<monogr>
		<title level="m">in Stock Trading, Market Analysis, and Risk Management</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="80769" to="80793" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A Comprehensive Survey on Artificial Intelligence for Unmanned Aerial Vehicles</title>
		<author>
			<persName><forename type="first">S</forename><surname>Sai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Garg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Jhawar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Chamola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Sikdar</surname></persName>
		</author>
		<idno type="DOI">10.1109/ojvt.2023.3316181</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Open J. Veh. Technol</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="1" to="26" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Principle and method of deception systems synthesizing for malware and computer attacks detection</title>
		<author>
			<persName><surname>Kashtalian</surname></persName>
		</author>
		<author>
			<persName><surname>Lysenko</surname></persName>
		</author>
		<author>
			<persName><surname>Savenko</surname></persName>
		</author>
		<author>
			<persName><surname>Sochor</surname></persName>
		</author>
		<author>
			<persName><surname>Kysil</surname></persName>
		</author>
		<idno type="DOI">10.32620/reks.2023.4.10</idno>
	</analytic>
	<monogr>
		<title level="j">Radioelectron. Comput. Syst</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="112" to="151" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Application of AI in Maritime Transportation</title>
		<author>
			<persName><forename type="first">X</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">W</forename><surname>Liu</surname></persName>
		</author>
		<idno type="DOI">10.3390/jmse12030439</idno>
	</analytic>
	<monogr>
		<title level="j">J. Mar. Sci. Eng</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="1" to="4" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">The Application of Artificial Intelligence Technology in Shipping: A Bibliometric Review</title>
		<author>
			<persName><forename type="first">G</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Jiang</surname></persName>
		</author>
		<idno type="DOI">10.3390/jmse12040624</idno>
	</analytic>
	<monogr>
		<title level="j">J. Mar. Sci. Eng</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="1" to="21" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Advances in Artificial Intelligence Methods Applications in Industrial Control Systems</title>
		<author>
			<persName><forename type="first">E</forename><surname>Carpanzano</surname></persName>
		</author>
		<idno type="DOI">10.3390/app13010016</idno>
	</analytic>
	<monogr>
		<title level="j">Appl. Sci</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">16</biblScope>
			<biblScope unit="page" from="1" to="24" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note>Editorial of the Special Issue</note>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">N</forename><surname>Leveson</surname></persName>
		</author>
		<title level="m">Safeware: System Safety and Computers</title>
				<meeting><address><addrLine>Boston</addrLine></address></meeting>
		<imprint>
			<publisher>Addison-Wesley</publisher>
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Understanding and Avoiding AI Failures: A Practical Guide</title>
		<author>
			<persName><forename type="first">R</forename><surname>Williams</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Yampolskiy</surname></persName>
		</author>
		<idno type="DOI">10.3390/philosophies6030053</idno>
	</analytic>
	<monogr>
		<title level="j">Philosophies</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="1" to="25" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Sources of Risk of AI Systems</title>
		<author>
			<persName><forename type="first">A</forename><surname>Steimers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schneider</surname></persName>
		</author>
		<idno type="DOI">10.3390/ijerph19063641</idno>
	</analytic>
	<monogr>
		<title level="j">Int. J. Environ. Res. Public Health</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="1" to="26" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">W</forename><surname>Knight</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dave</surname></persName>
		</author>
		<ptr target="https://www.wired.com/story/chatgpt-pause-ai-experiments-open-letter/" />
		<title level="m">Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT, Business</title>
				<imprint>
			<date type="published" when="2023-03-29">March 29, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<ptr target="https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023" />
		<title level="m">The Bletchley Declaration by Countries Attending the AI Safety Summit</title>
				<imprint>
			<date type="published" when="2023-11-02">1-2 November 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Security-Informed Safety Analysis of Autonomous Transport Systems Considering AI-Powered Cyberattacks and Protection</title>
		<author>
			<persName><forename type="first">O</forename><surname>Illiashenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kharchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Babeshko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Fesenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">Di</forename><surname>Giandomenico</surname></persName>
		</author>
		<idno type="DOI">10.3390/e25081123</idno>
	</analytic>
	<monogr>
		<title level="j">Entropy</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="1" to="35" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">AI powered attacks against AI powered protection: classification, scenarios and risk analysis</title>
		<author>
			<persName><forename type="first">O</forename><surname>Veprytska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kharchenko</surname></persName>
		</author>
		<idno type="DOI">10.1109/dessert58054.2022.10018770</idno>
	</analytic>
	<monogr>
		<title level="m">12th International Conference on Dependable Systems, Services and Technologies (DESSERT)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1" to="7" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Explainability of AI methods, applications and challenges: A comprehensive survey</title>
		<author>
			<persName><forename type="first">W</forename><surname>Ding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Abdel-Basset</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hawash</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Ali</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.ins.2022.10.013</idno>
	</analytic>
	<monogr>
		<title level="j">Inf. Sci</title>
		<imprint>
			<biblScope unit="volume">615</biblScope>
			<biblScope unit="page" from="238" to="292" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">From Trustworthy Principles to a Trustworthy Development Process: The Need and Elements of Trusted Development of AI Systems</title>
		<author>
			<persName><forename type="first">E</forename><surname>Hohma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lütge</surname></persName>
		</author>
		<idno type="DOI">10.3390/ai4040046</idno>
	</analytic>
	<monogr>
		<title level="j">AI</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="904" to="925" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">The effect of transparency and trust on intelligent system acceptance: Evidence from a user-based study</title>
		<author>
			<persName><forename type="first">J</forename><surname>Wanner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L.-V</forename><surname>Herm</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Heinrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Janiesch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electron. Mark</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="2079" to="2102" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components</title>
		<author>
			<persName><forename type="first">J</forename><surname>von Neumann</surname></persName>
		</author>
		<idno type="DOI">10.1515/9781400882618-003</idno>
		<editor>C. E. Shannon, J. McCarthy</editor>
		<imprint>
			<date type="published" when="1956">1956</date>
			<publisher>Princeton University Press</publisher>
			<biblScope unit="page" from="43" to="98" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Basic concepts and taxonomy of dependable and secure computing</title>
		<author>
			<persName><forename type="first">A</forename><surname>Avizienis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-C</forename><surname>Laprie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Randell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Landwehr</surname></persName>
		</author>
		<idno type="DOI">10.1109/TDSC.2004.2</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Dependable Secur. Comput</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="11" to="33" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Building reliable embedded systems with unreliable components</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Peng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICSES 2010 Intern. Conference on Signals and Electronic Circuits</title>
				<meeting><address><addrLine>Gliwice, Poland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">An empirical investigation of fault types in space mission system software</title>
		<author>
			<persName><forename type="first">M</forename><surname>Grottke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Nikora</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Trivedi</surname></persName>
		</author>
		<idno type="DOI">10.1109/DSN.2010.5544284</idno>
	</analytic>
	<monogr>
		<title level="m">Intern. Conf. on Dependable Systems &amp; Networks</title>
				<meeting><address><addrLine>Chicago, IL, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="447" to="456" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">The N-Version Approach to Fault-Tolerant Software</title>
		<author>
			<persName><forename type="first">A</forename><surname>Avizienis</surname></persName>
		</author>
		<idno type="DOI">10.1109/TSE.1985.231893</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Software Engineering</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="1491" to="1501" />
			<date type="published" when="1985">1985</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">On composing Dependable Web Services using independent web components</title>
		<author>
			<persName><forename type="first">A</forename><surname>Gorbenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kharchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Romanovsky</surname></persName>
		</author>
		<idno type="DOI">10.1504/IJSPM.2007.014714</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Simulation and Process Modeling</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">1/2</biblScope>
			<biblScope unit="page" from="45" to="54" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Multilevel Fuzzy Logic-Based Approach for Critical Energy Infrastructure&apos;s Cyber Resilience Assessment</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Brezhniev</surname></persName>
		</author>
		<idno type="DOI">10.1109/DESSERT.2019.8770034</idno>
	</analytic>
	<monogr>
		<title level="m">10th International Conference on Dependable Systems, Services and Technologies (DESSERT)</title>
				<meeting><address><addrLine>Leeds, UK</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019. 2019</date>
			<biblScope unit="page" from="213" to="217" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">NPP-Smart Grid Mutual Safety and Cyber Security Assurance</title>
		<author>
			<persName><forename type="first">E</forename><surname>Brezhniev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Ivanchenko</surname></persName>
		</author>
		<idno type="DOI">10.4018/978-1-6684-3666-0.ch047</idno>
	</analytic>
	<monogr>
		<title level="m">Research Anthology on Smart Grid and Microgrid Development</title>
				<meeting><address><addrLine>USA</addrLine></address></meeting>
		<imprint>
			<publisher>IGI-Global</publisher>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Dependability Assurance Methodology of Information and Control Systems</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Ponochovnyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kharchenko</surname></persName>
		</author>
		<idno type="DOI">10.32620/reks.2020.3.05</idno>
	</analytic>
	<monogr>
		<title level="j">Radioelectron. Comput. Syst</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="43" to="58" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Quality Models for Artificial Intelligence Systems: Characteristic-Based Approach</title>
		<author>
			<persName><forename type="first">V</forename><surname>Kharchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Fesenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Illiashenko</surname></persName>
		</author>
		<idno type="DOI">10.3390/s22134865</idno>
	</analytic>
	<monogr>
		<title level="j">Development and Application</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="1" to="32" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note>Sensors</note>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<idno>ISO/IEC TR 24030:2024</idno>
		<title level="m">Information technology. Artificial Intelligence. Use cases</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Resilience and Resilient Systems of Artificial Intelligence: Taxonomy, Models and Methods</title>
		<author>
			<persName><forename type="first">V</forename><surname>Moskalenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kharchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Moskalenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Kuzikov</surname></persName>
		</author>
		<idno type="DOI">10.3390/a16030165</idno>
	</analytic>
	<monogr>
		<title level="j">Algorithms</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="1" to="34" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Evolution of von Neumann&apos;s paradigm: Dependable and green computing</title>
		<author>
			<persName><forename type="first">V</forename><surname>Kharchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gorbenko</surname></persName>
		</author>
		<idno type="DOI">10.1109/EWDTS.2013.6673090</idno>
	</analytic>
	<monogr>
		<title level="m">East-West Design &amp; Test Symposium (EWDTS 2013)</title>
				<meeting><address><addrLine>Rostov on Don, Russia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<monogr>
		<idno>CR- 7007</idno>
		<title level="m">Diversity Strategies for Nuclear Power Plant Instrumentation and Control Systems</title>
				<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
		<respStmt>
			<orgName>Office of Nuclear Regulatory Research</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">NUREG/</note>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Advanced Artificial Intelligence Models and Applications</title>
		<idno type="DOI">10.3390/books978-3-0365-9133-9</idno>
	</analytic>
	<monogr>
		<title level="j">Mathe-matics</title>
		<editor>Tao Zhou</editor>
		<imprint>
			<biblScope unit="page">182</biblScope>
			<date type="published" when="2023-10">October 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Overview of AI-Models and Tools in Embedded IIoT Applications</title>
		<author>
			<persName><forename type="first">P</forename><surname>Dini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Diana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Elhanashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Saponara</surname></persName>
		</author>
		<idno type="DOI">10.3390/electronics13122322</idno>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="1" to="27" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<title/>
		<author>
			<persName><forename type="first">Tim</forename><surname>Julitz</surname></persName>
		</author>
		<author>
			<persName><surname>Maurice</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Antoine</forename><surname>Tordeux</surname></persName>
		</author>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Computer-Aided Design of faulttolerant Hardware Architectures for Autonomous Driving Systems</title>
		<author>
			<persName><forename type="first">Manuel</forename><surname>Löwer</surname></persName>
		</author>
		<idno type="DOI">10.1017/pds.2023.105</idno>
	</analytic>
	<monogr>
		<title level="m">International Conference on Engineering Design</title>
				<meeting><address><addrLine>ICED23, Bordeaux, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023-07-28">24-28 July 2023</date>
			<biblScope unit="page" from="1047" to="1056" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">The Australian startup building computers out of human brains</title>
		<author>
			<persName><forename type="first">Daniel</forename><surname>Van Boom</surname></persName>
		</author>
		<ptr target="https://www.capitalbrief.com/article/the-australian-startup-building-computers-out-of-human-brains-a30d6821-cf2d-47db-b0cf-85a6fe294cb8/preview/" />
	</analytic>
	<monogr>
		<title level="j">Capital Brief</title>
		<imprint>
			<date type="published" when="2024-01-18">18 January 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<monogr>
		<title level="m" type="main">Risks and Dangers of Artificial Intelligence</title>
		<author>
			<persName><forename type="first">Mike</forename><surname>Thomas</surname></persName>
		</author>
		<ptr target="https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence/" />
		<imprint>
			<date type="published" when="2024-07-25">July 25, 2024</date>
			<pubPlace>Built In</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
