<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main"></title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Vitaliy</forename><surname>Tsyganok</surname></persName>
							<email>vitaliy.tsyganok@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Institute for Information Recording</orgName>
								<orgName type="institution">National Academy of Sciences of Ukraine</orgName>
								<address>
									<addrLine>M. Shpaka str. 2</addrLine>
									<postCode>03113</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>Volodymyrs&apos;ka str. 64/13</addrLine>
									<postCode>01601</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="institution">National Technical University of Ukraine &quot;Igor Sikorsky Kyiv Polytechnic Institute&quot;</orgName>
								<address>
									<addrLine>Beresteysky ave. 37</addrLine>
									<postCode>03056</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Viktor</forename><surname>Holota</surname></persName>
							<email>v.holota@ukr.net</email>
							<affiliation key="aff3">
								<orgName type="institution">Yevheniy Berezniak Military Academy</orgName>
								<address>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleksandr</forename><surname>Hryhorenko</surname></persName>
							<email>gregorenko@ukr.net</email>
							<affiliation key="aff3">
								<orgName type="institution">Yevheniy Berezniak Military Academy</orgName>
								<address>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Serhiy</forename><surname>Burukin</surname></persName>
							<email>s.burukin@i.ua</email>
							<affiliation key="aff4">
								<orgName type="institution">Customertimes Corp. in New York</orgName>
								<address>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">BC5C7999CDD7D8F19879C39FB6EF9945</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:58+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>artificial intelligence tools</term>
					<term>cybersecurity</term>
					<term>expert threat assessment</term>
					<term>collective expertise</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Recently, the international community has been actively seeking consensus on the need to take collective, coordinated measures to identify, monitor, measure, and minimize risks from the development of artificial intelligence (AI) at both the global and national levels. This study is based on an analysis of the potential threats posed by the intensive development of AI, conducted by a team of experts using the Consensus-2 software system for distributed collection of expert information. As a result of the group expert assessment, a list of the most influential threats was compiled and their relative importance was determined.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>With the recent rapid development and deployment of artificial intelligence (AI) systems such as generative language models, the task of ensuring the cybersecurity of such systems has become urgent. The growing relevance of security tasks in the field of AI, which differ in certain respects from traditional cybersecurity and information-protection tasks, is evidenced by a number of publications in this area <ref type="bibr" target="#b0">[1]</ref><ref type="bibr" target="#b1">[2]</ref><ref type="bibr" target="#b2">[3]</ref>. These and a number of other works cover research related to the security of AI systems, their robustness, and related topics.</p><p>Recently, the international community has been actively seeking consensus on the need to take collective, coordinated measures to identify, monitor, measure, and minimize risks from AI development at both the global and national levels.</p><p>In October 2023, the UN Artificial Intelligence Advisory Body was established to develop general recommendations for assessing risks, opportunities, and mechanisms for international governance of AI technologies. These recommendations were to be made public on the eve of the Summit of the Future (September 2024) and to form the basis of the Global Digital Compact <ref type="bibr" target="#b3">[4]</ref>.</p><p>In November 2023, the first-ever AI Safety Summit was held at Bletchley Park (UK). The event was attended by high-ranking officials from the governments of the UK, the USA, China, Japan, France, Germany, Canada, Italy, Spain, India, Israel, South Korea, Singapore, Switzerland, Ukraine, Turkey, the United Arab Emirates, Saudi Arabia, Kenya, Rwanda, and Nigeria, as well as heads of leading AI companies, including OpenAI, Google, Meta, Microsoft, and others <ref type="bibr" target="#b3">[4]</ref>.</p><p>Analysis of the final communiqué and the comments of the summit participants shows that the international community is genuinely concerned about the potential threat of AI getting out of human control. Other recognized risks include the formation of biased attitudes of AI towards certain things, and the abuse of AI capabilities in the areas of cybersecurity, biotechnology, and disinformation.</p><p>The participants signed a final communiqué stating that joint efforts should be focused on the following key areas: identification of common risks to AI safety, with their scientific understanding and justification based on the collected evidence base; and development of appropriate policies by the signatory countries to reduce these risks, recognizing possible differences in approaches and the need to involve development companies in solving this problem. The main issues are the development of risk assessment methods and tools for testing the safety of AI technologies <ref type="bibr" target="#b4">[5]</ref>. The Deputy Head of the Ministry of Digital Transformation of Ukraine, H. Dubinsky, took part in the summit on behalf of Ukraine <ref type="bibr">[6]</ref>.</p><p>It is noteworthy that the summit participants, primarily the heads of development companies, agreed on the need to delegate to governments the authority to control the safe development of AI. One of the main problems that, according to the summit participants, needs to be addressed immediately is the development of standards for identifying, monitoring, and measuring AI-related risks.</p><p>During the summit, British Prime Minister R. Sunak announced the creation of the AI Safety Institute, which will test and authorize the use of new AI services before they enter the market. A similar institution is being created in the United States. According to R. Sunak, work on new breakthrough AI technologies is expected to be completed next year. That is why two international AI safety summits were scheduled for 2024: in the Republic of Korea and in France <ref type="bibr" target="#b5">[7]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Research</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Identifying threats in AI development</head><p>The first stage of this process should be a comprehensive understanding of the problem of potential threats that may arise as a result of the rapid development of artificial intelligence. In order to identify and formulate a list of possible threats caused by the development of AI, a collective examination was conducted with the involvement of nine experts from the Institute for Information Recording of the National Academy of Sciences of Ukraine, the Ministry of Defense of Ukraine, the Yevheniy Berezniak Military Academy, and Infozahyst, one of the leaders of the domestic market.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Collaborative expert analysis using software tools</head><p>The examination was carried out remotely using the System of Distributed Collection and Processing of Expert Information for Decision Support Systems "Consensus-2" <ref type="bibr" target="#b6">[8]</ref>. This study focused more on potential threats from uncontrolled human use of AI tools, which are currently developing rapidly, and less on threats from the consequences of AI's possible release from human control and independent decision-making.</p><p>In accordance with the group decomposition technology implemented in the Consensus-2 system, the collective examination included several stages. These stages essentially combined the process of decomposition (separation of components) of the problem "The threats posed by the development of AI tools" with the subsequent assessment of the relative importance of these threats.</p><p>At the initial stage, after the formation of the expert group, each expert, operating remotely through the system's web interface, had to personally compile a list of threats that, in his or her opinion, are the most influential in the problem situation under consideration. It is important for the experts involved in the examination to understand what is meant by the term "most influential threats." Within the framework of the subject area modeling concept, the influential factors that are introduced as components of the model are those that have a relative impact of at least 10% of the total value of all influences on a particular current situation. 
In this case, the total number of the most influential threats will not exceed 10 (or 7±2, according to classical recommendations <ref type="bibr" target="#b7">[9]</ref>), which makes it possible to obtain a sufficiently adequate model of the problem situation and to operate confidently with the set of threats when expertly assessing their impact.</p><p>Based on the results of this stage, the experts formulated a total of 46 threats which, in the opinion of their authors, are the major threats arising from the rapid development of AI (each expert independently identified 5-7 threats).</p></div>
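The 10% influence threshold described above can be sketched in a few lines of code. This is an illustrative sketch only, not part of the Consensus-2 system; the threat names and raw scores are hypothetical. It normalizes raw influence scores into relative shares and keeps only the factors whose share reaches the threshold:

```python
# Hypothetical sketch: keep only the "most influential" factors, i.e. those
# contributing at least 10% of the total influence on the situation.

def influential_factors(scores, threshold=0.10):
    """Return {name: share} for factors whose normalized share >= threshold."""
    total = sum(scores.values())
    shares = {name: s / total for name, s in scores.items()}
    kept = {n: round(sh, 4) for n, sh in shares.items() if sh >= threshold}
    # Sort by descending share, largest influence first.
    return dict(sorted(kept.items(), key=lambda kv: -kv[1]))

raw = {
    "loss of creativity": 42,
    "employment decline": 38,
    "privacy violation": 36,
    "minor factor": 5,   # below 10% of total influence, so it is dropped
}

model = influential_factors(raw)
```

With these hypothetical scores, `minor factor` falls below the threshold and is excluded, leaving a compact set of factors consistent with the 7±2 recommendation.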
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Prioritizing and rating major AI threats</head><p>The next step was to group formulations that are identical in content, since during autonomous work there is a high probability that different experts will formulate the same threat in different ways. It should be noted that generative artificial intelligence tools <ref type="bibr" target="#b8">[10]</ref><ref type="bibr" target="#b9">[11]</ref><ref type="bibr" target="#b10">[12]</ref><ref type="bibr" target="#b11">[13]</ref> can be, and were, used to combine formulations into groups of identical content under the control of the organizer of the examination (a knowledge engineer). Thus, the grouping of formulations with the same content was carried out in an automated mode using AI tools. For the effective use of these tools, it is advisable to specify in the prompt the desired number of groups of identical (similar) formulations, no more than 15, so that after the group assessment, with insufficiently important threats rejected, 7±2 threats remain that are the most important from the point of view not of an individual expert but of the expert group as a whole.</p><p>In order to integrate AI tools into the program code of an expert information gathering and processing system such as Consensus-2, it is advisable to use an application programming interface (API), which allows control over a number of AI tool parameters that are not available through the web interface. Comparing the options available through the ChatGPT API with the web interface, several key points are worth noting: 1. Tone or style of response: through the API, the style or tone of a response can be customized in more detail, e.g., a certain format (business or more informal) can be requested, or text restrictions can be set. 2. Saving and managing context: the API gives more control over context preservation; in the web interface, saving the context of a conversation is automatic, whereas through the API one can define which context to save or delete. 3. Customized models or model variants: through the API, specific GPT models can be selected that are available only through the API, or specific parameters can be defined that are not supported in the web interface. 4. Context length: through the API, the parameters related to the limit on the number of tokens in the request and response can be managed, which is not always available through the web interface. 5. Additional formatting or response structure options: the API gives more control over the formatting of the output, such as structuring responses in JSON or other formats. 6. Real-time queries: using the API, ChatGPT can be integrated into more complex applications that perform real-time queries with dynamic data processing, which is not possible through the web interface.</p><p>That is, the API allows for more detailed customization of the model's behavior and of its interaction with other systems. This makes it possible to obtain a better result when grouping formulations by meaning and to significantly increase the level of automation of this process. Sometimes it is even possible to make the process of selecting groups of formulations with the same content fully automatic, or to require only minor intervention by the organizer of the examination, the knowledge engineer.</p><p>It should be noted that an important process of combining individual expert knowledge and transforming it into collective knowledge is used here: subjective expert knowledge is combined, objectified, and transformed into group knowledge ("collective intelligence"), which is more reliable than the original individual knowledge. 
The final result of the group work, a graph obtained by decomposing the concept "The threats posed by the development of AI tools" into its component threats, is shown in Fig. <ref type="figure">1</ref>.</p><p>Figure <ref type="figure">1</ref>: Image of the graph of decomposition of the concept "The threats posed by the development of AI tools" within the web interface of the "Consensus-2" system. At this stage, the anonymized formulations (given without indication of their authorship) were grouped by similar content around the following issues: loss of human creativity; decline in human employment; violation of the right to privacy; conducting dangerous experiments; (un)intentional manipulation of data; granting AI the right to make decisions to take human life; and AI making false decisions to take human life.</p><p>Subsequently, the experts who provided the formulations of the threats voted for the best wording in each group of similar formulations. An example of an expert voting for the best wording among those of the same content is shown on the screen form corresponding to an episode of the expert's work (Fig. <ref type="figure" target="#fig_0">2</ref>). The decision was made by majority vote, with all experts considered equally competent (the relative competence of each expert in the group on the issues under consideration was not taken into account). This rather significant simplification of expert procedures was allowed because of restrictions on the duration and cost of the examination, but it is recommended that in the future such procedures be carried out using methods of pairwise comparisons <ref type="bibr" target="#b12">[14,</ref><ref type="bibr" target="#b13">15]</ref>, taking into account the relative competence of each expert in the matter under consideration <ref type="bibr" target="#b14">[16]</ref>. 
Since the relative competence of the experts in the group was not taken into account (in this case, expert opinions were not multiplied by the competence coefficients of the experts who provided them when calculating the resulting rating), parity may often arise when determining the best wording among those of the same content. Several wordings may then receive the same highest rating, and the best one is chosen among them at random. This disadvantage is eliminated (minimized) by taking the relative competence of the experts in the group into account. It should be noted that recent experimental studies <ref type="bibr" target="#b15">[17]</ref> have shown the importance of taking the relative competence of experts into account in group evaluation. By simulating expert assessments, the experiment showed the need to account for competence in so-called small groups, in which the number of experts does not exceed two to three dozen. Given the high cost of expert labor, conventional expert evaluations are usually conducted with the involvement of small groups, and therefore taking competence into account is necessary.</p><p>Determining the relative competence of experts in a group is the subject of another study <ref type="bibr" target="#b16">[18]</ref>, the fundamental principle of which is to determine the relative weight of an expert within an expert group. Moreover, the weight of an expert can be determined only in relation to the specific issue currently under consideration. When assessing competence, it is proposed to take into account three components: self-assessment, mutual assessment, and the objective component of the expert's assessment.</p><p>Determining the relative competence of an expert for further use in the examination is also a laborious process that requires considerable time and is a costly procedure. 
For this reason, this procedure was not performed during this stage of the examination related to the generalization of expert knowledge.</p><p>At this stage, when voting for the best phrasing among a group of similar options, participants had the option to select "none of the listed" as well as any specific wording (see Fig. <ref type="figure" target="#fig_0">2</ref>). If the majority of experts selected this option, none of the threat formulations from that group were included in the final list of threats. In this case, the expert team concluded that the threat either lacked significant impact compared to other formulations or did not qualify as a threat at all.</p><p>In essence, such group voting, by selecting the best wording or declining to select any, defines the existing links/influences between concepts in the model of the subject area formed by the group of experts, and identifies the essential concepts that have a significant impact. From this point of view, group modeling, in which a team of experts participates, is a process of selecting and including in the model only important elements that have a significant impact on the functioning of the system whose model is being built. Moreover, the materiality of the impact is determined by the group of experts at the stage of building the model structure and, subsequently, is specified in the course of impact assessment at the next stage. Thus, as a result of this stage of the examination, eight formulations of threats caused by the development of AI tools were selected (see Table <ref type="table" target="#tab_0">1</ref>).</p><p>The final step was to determine the relative importance of the threats from the list. 
Given that the list of threats resulting from the previous stage of the assessment includes only the threats that are most important from the point of view of the group of experts, this list should include those threats whose relative importance is not less than 0.1 (10% of the impacts of all threats). This caveat should be taken into account by each expert when formulating threats and identifying impacts in the group threat model.</p><p>Figure <ref type="figure" target="#fig_1">3</ref> shows the interface of the expert's workstation with the proposal "Form a list of the most significant factors (goals) that affect &gt; 10% of the achievement of the goal "The threats posed by the development of AI tools"". The determination of a numerical rating, that is, the relative importance of the most significant influence factors, is usually carried out using a whole arsenal of expert evaluation methods. The most effective methods, which give the most reliable results, include methods of obtaining and processing expert information based on pairwise comparisons <ref type="bibr" target="#b12">[14]</ref>. In order to avoid pressure on the expert during the evaluation, it is necessary to give the expert the opportunity not to perform a particular pairwise comparison, for example, because of a conflict of interest, a lack of information on the subject matter of the expertise, a lack of competence, etc. 
In other words, the comparison of each pair of alternatives is not mandatory, and preference in expert evaluation is given to methods that use and process incomplete pairwise comparisons <ref type="bibr" target="#b14">[16,</ref><ref type="bibr" target="#b17">19,</ref><ref type="bibr" target="#b18">20]</ref>.</p><p>The aggregation of expert estimates to find the vector of priority weights is carried out with obligatory verification of consistency <ref type="bibr" target="#b19">[21,</ref><ref type="bibr" target="#b20">22]</ref>, because aggregating inconsistent estimates can produce an unreliable result, much like the average body temperature of patients in a hospital: a value that is not informative. It is important to determine the level of consistency that is sufficient for further aggregation (the so-called consistency threshold) for a particular assessment and for the required level of confidence in its results <ref type="bibr" target="#b21">[23]</ref>. This level serves as an indicator that the aggregation of estimates is legitimate and that the resulting aggregated estimate will be reliable. In the case of insufficient consistency, when it is below the consistency threshold, feedback to the expert is required: the expert is contacted again with a proposal to revise his or her previous assessment and raise the level of consistency above the threshold <ref type="bibr" target="#b19">[21]</ref>.</p><p>In the study under consideration, given the limited expert resources, especially the time available for conducting the examination, the use of pairwise comparison and feedback methods was considered laborious and inappropriate. Therefore, in order to simplify and speed up the group examination, it was proposed to conduct it on a point scale and to consider the expert assessments consistent without verification. 
To some extent, such concessions reduce the reliability of the examination results, but for this study they are justified. Figure <ref type="figure">4</ref>: An example of an expert's interface in the Consensus-2 system for determining the importance of threats. Therefore, in this study, it was decided to conduct the group expert assessment on a 7-point scale. An example of an expert's work on assessing the importance of threats is shown in Fig. <ref type="figure">4</ref>.</p><p>The result of the group examination was a numerical threat rating (Fig. <ref type="figure" target="#fig_2">5</ref>), formed by summing the points given to each wording by every expert in the group. Again, the competence of the experts was not taken into account. It should be noted that point-scale assessment has a number of disadvantages that can open the door to manipulation in group expert assessment. If the maximum number of evaluation points (the size of the rating scale) is set incorrectly, a situation may arise in which the rating of one expert distorts the general judgment of the entire group. For example, if five experts evaluate a certain alternative on a 100-point scale and four of them give equally low scores, the fifth expert can still significantly raise the rating of this alternative relative to the others by giving it the maximum score. This example shows that the size of the rating scale should be consistent with the number of experts in the group.</p><p>In addition, manipulation is possible when experts collude and give a certain alternative an inflated score while simultaneously lowering the scores of the remaining alternatives. Other manipulations are also possible when evaluating on point scales. In view of the above, it is proposed to use relative scores in further evaluations; this is, in fact, an alternative to the use of point scales. 
In addition, it is advisable to give each expert the opportunity to skip any particular assessment.</p><p>In addition to the total number of points in the third column, Table <ref type="table" target="#tab_0">1</ref> also shows a numerical threat rating: relative values whose sum equals one, so that each value represents a share of the total threat from the development of AI.</p><p>In future studies, it is proposed to abandon the use of point scales when conducting expert evaluation, since such evaluations are highly dependent on the dimension of the survey scale. If the scale dimension is chosen incorrectly, one expert may offset the assessments of the other members of the expert group, because a single expert's score can strongly affect the average on a scale with a large number of points. It is also desirable to take into account the competence of the group members <ref type="bibr" target="#b15">[17]</ref>, which increases the reliability of generalized expert assessments, thereby objectifying a result obtained from subjective data.</p><p>As already mentioned, AI tools, namely ChatGPT 3.5, were involved in this process as part of the experiment. This generative language model can be useful in formulating a query for the task of grouping identical wordings. That is why we also used this chatbot to answer a question about the threats that could arise from the development of artificial intelligence. 
To this question, ChatGPT 3.5 generated an answer about potential threats in the field of AI, namely (the content generated by the chatbot is reproduced without edits): loss of jobs: the development of automation and autonomous systems may lead to the loss of jobs in industries where people perform repetitive tasks; data privacy and security: the increased ability of AI to analyze and process information can lead to data privacy and security issues if appropriate protection measures are not in place; AI weapons: the use of artificial intelligence in the field of military technology may raise ethical issues and increase the risk of uncontrolled hostilities; ethical considerations: the development and use of AI also raises a number of ethical issues, including determining responsibility for AI actions, transparency of systems, and issues of equitable and multicultural use; dependence on technology: as AI develops, society may become more dependent on these technologies, which may have negative social and economic consequences; failure to control: AI can develop so rapidly that society may not be able to adequately control its expansion and application.</p></div>
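The group rating procedure described in this section (each expert scores each threat on a point scale, points are summed per threat without competence weighting, and the totals are normalized into shares that sum to one, as in Table 1) can be sketched as follows. This is an illustrative sketch, not the Consensus-2 implementation; the threat labels and scores are hypothetical:

```python
# Hypothetical sketch of the group rating from Section 2.3: sum each expert's
# points per threat (expert competence deliberately ignored, as in the study)
# and normalize the totals into shares summing to one, as in Table 1.

def threat_rating(scores_by_expert):
    """scores_by_expert: list of {threat: points} dicts, one per expert.
    Returns (rating sorted by descending share, raw point totals)."""
    totals = {}
    for expert in scores_by_expert:
        for threat, pts in expert.items():
            totals[threat] = totals.get(threat, 0) + pts
    grand = sum(totals.values())
    rating = {t: p / grand for t, p in totals.items()}
    return dict(sorted(rating.items(), key=lambda kv: -kv[1])), totals

experts = [
    {"A": 7, "B": 5, "C": 3},
    {"A": 6, "B": 6, "C": 2},
    {"A": 7, "B": 4, "C": 4},
]
rating, totals = threat_rating(experts)
```

Each value in `rating` is the threat's share of the total, which is exactly how the "Numerical rating" column of Table 1 relates to the "Points" column.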
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Future directions for ethical AI development and control</head><p>In addition to the described expertise on identifying threats caused by the development of AI tools, the expert community raised the question of the conditions under which AI might get out of human control. Among the many opinions expressed was the original view that AI may cease to be subject to humans, and surpass them in mental abilities, once it is able to plan and conduct experiments independently, i.e., to acquire knowledge on its own. It is also important whether AI is able to test this knowledge in practice, because the philosophical principle "practice is the criterion of truth" is more relevant than ever.</p><p>To prevent AI from getting out of human control, it seems important to control the access of AI systems to expert knowledge, to sensitive behavioral information about a person, and to any sources of knowledge. In addition, the expert opinion was expressed that it is necessary to limit the ability of AI tools to verify their generated results in practice.</p><p>Additional commentary from ChatGPT 3.5: "It is important to emphasize that many of these issues can be addressed with proper regulation, the development of ethical standards, and the inclusion of different stakeholder groups in the decision-making process. AI has great potential to develop and improve lives, but its development also requires attention to possible negative consequences."</p></div>
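Section 2.3 recommends pairwise-comparison methods with an obligatory consistency check for future assessments. Below is a minimal sketch of one classical variant, row-geometric-mean prioritization with Saaty's consistency ratio, under the assumption of a complete pairwise comparison matrix; the matrix values are hypothetical, and this is not the Consensus-2 implementation:

```python
import math

# Hypothetical sketch: derive a priority-weight vector from a complete
# pairwise comparison matrix (PCM) via the row geometric mean, and compute
# Saaty's consistency ratio (CR) as the consistency check. A CR above the
# chosen threshold (commonly 0.1) would trigger feedback to the expert.

# Saaty's random consistency index for matrices of size n.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def priorities(pcm):
    """Normalized row-geometric-mean priority weights."""
    n = len(pcm)
    gm = [math.prod(row) ** (1.0 / n) for row in pcm]
    s = sum(gm)
    return [g / s for g in gm]

def consistency_ratio(pcm):
    n = len(pcm)
    w = priorities(pcm)
    # Approximate the principal eigenvalue as the mean of (A w)_i / w_i.
    lam = sum(sum(pcm[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / RI[n] if RI[n] else 0.0

pcm = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
w = priorities(pcm)
cr = consistency_ratio(pcm)
```

If `cr` exceeded the consistency threshold, the feedback loop described in Section 2.3 would ask the expert to revise the offending comparisons before aggregation.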
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusions</head><p>In the course of the collective examination, the potential threats from the intensive development of artificial intelligence were analyzed using the Consensus-2 system of distributed collection and processing of expert information for decision support systems. As a result of the group expert evaluation, a list of the most influential threats was compiled and their relative importance was determined.</p><p>The study showed the existence of potential threats from the intensive development of AI, such as loss of human creativity; reduction of employment; violation of the right to privacy; conducting dangerous experiments; (un)intentional manipulation of data; granting AI the right to make decisions to take human life; and AI making false decisions to take human life.</p><p>Researching and responding to potential threats from AI requires intensified efforts by both the state and civil society. In order to address this problem in a more systematic and qualitative way, it is necessary to: − actively participate in key international events on the above-mentioned issues, such as the AI safety summits (conferences) initiated by leading countries and the UN; − raise the level of digital literacy of the population: it is important to implement the Roadmap for AI regulation in Ukraine, which should, among other things, help ordinary citizens learn how to protect themselves from AI risks; − institutionalize research on this issue by creating new government institutions (such as the AI Safety Institutes already established in the United States and the United Kingdom) with the involvement of private sector experts who will deal with AI security at the national level; − develop probable scenarios of potential threats from the development of AI, with appropriate indicators and measures to minimize the identified threats. 
Further research in this area is planned to be continued with the involvement of a wider range of experts and using appropriate intelligent technology to generate probable scenarios for the realization of potential threats from the development of AI.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Form for voting for the best wording of the interface of the automated workstation of the expert in the Consensus-2 system</figDesc><graphic coords="4,71.76,477.12,451.44,210.72" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Interface of the expert's work in the Consensus-2 system to form a list of the most significant threats caused by the development of AI tools</figDesc><graphic coords="6,71.76,71.76,448.56,66.96" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Interface of the automated workstation of the expert organizer at the stage of group expert assessment of the importance of threats</figDesc><graphic coords="7,71.76,162.48,451.44,214.08" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Numerical threat rating generated as a result of the examination</figDesc><table><row><cell>Ordinal rating</cell><cell>Threat</cell><cell>Points</cell><cell>Numerical rating</cell></row><row><cell>1</cell><cell>Granting AI the right (rule) to make decisions on taking human life (both military and civilian spheres)</cell><cell>47</cell><cell>0.1526</cell></row><row><cell>2</cell><cell>Disinformation of people through fake news and fake sources of information</cell><cell>45</cell><cell>0.1461</cell></row><row><cell>3</cell><cell>Over-reliance on artificial intelligence technology can lead to a loss of human creativity</cell><cell>42</cell><cell>0.1364</cell></row><row><cell>4</cell><cell>Problems with employment</cell><cell>38</cell><cell>0.1234</cell></row><row><cell>5</cell><cell>Increased threats to human freedom due to simplified identification</cell><cell>36</cell><cell>0.1169</cell></row><row><cell>6</cell><cell>With the help of AI, anyone will be able to conduct dangerous experiments (in biochemistry, genetics, cognitive psychology, nuclear physics, etc.)</cell><cell>35</cell><cell>0.1136</cell></row><row><cell>7</cell><cell>The introduction of AI in the PDRS may lead to errors in forecasting and, accordingly, in the decision-making of the expert</cell><cell>34</cell><cell>0.1104</cell></row><row><cell>8</cell><cell>Shift in military and industrial leadership between countries that can train advanced AI (e.g., to detect targets in complex environments) and those that cannot due to lack of hardware or insufficient training data</cell><cell>31</cell><cell>0.1006</cell></row></table></figure>
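The "Numerical rating" column of Table 1 appears to be each threat's point total divided by the sum of all points awarded (308). A minimal Python sketch reproducing the column under that assumption (threat labels are abbreviated here for readability):

```python
# Sketch (assumption): the numerical rating in Table 1 is each threat's
# point total normalized by the sum of all points across the eight threats.
points = {
    "Granting AI the right to decide on taking human life": 47,
    "Disinformation through fake news and fake sources": 45,
    "Loss of human creativity through over-reliance on AI": 42,
    "Problems with employment": 38,
    "Threats to human freedom due to simplified identification": 36,
    "Dangerous experiments enabled by AI": 35,
    "Errors in forecasting and expert decision-making": 34,
    "Shift in military and industrial leadership": 31,
}

total = sum(points.values())  # 308 points awarded in the group evaluation
ratings = {threat: round(score / total, 4) for threat, score in points.items()}

# Print the threats in descending order of relative importance
for threat, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{rating:.4f}  {threat}")
```

Because the ratings are a simple normalization, they sum to 1 (up to rounding), which matches the values 0.1526 … 0.1006 reported in the table.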
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Security-Informed Safety Analysis of Autonomous Transport Systems Considering AI-Powered Cyberattacks and Protection</title>
		<author>
			<persName><forename type="first">O</forename><surname>Illiashenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kharchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Babeshko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Fesenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Di Giandomenico</surname></persName>
		</author>
		<idno type="DOI">10.3390/e25081123</idno>
	</analytic>
	<monogr>
		<title level="j">Entropy</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page">1123</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Quality Models for Artificial Intelligence Systems: Characteristic-Based Approach, Development and Application</title>
		<author>
			<persName><forename type="first">V</forename><surname>Kharchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Fesenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Illiashenko</surname></persName>
		</author>
		<idno type="DOI">10.3390/s22134865</idno>
		<ptr target="https://doi.org/10.3390/s22134865" />
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">13</biblScope>
			<biblScope unit="page">4865</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Resilience and Resilient Systems of Artificial Intelligence: Taxonomy, Models and Methods</title>
		<author>
			<persName><forename type="first">V</forename><surname>Moskalenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kharchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Moskalenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Kuzikov</surname></persName>
		</author>
		<idno type="DOI">10.3390/a16030165</idno>
		<ptr target="https://doi.org/10.3390/a16030165" />
	</analytic>
	<monogr>
		<title level="j">Algorithms</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page">165</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<ptr target="https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023" />
		<title level="m">The Bletchley Declaration by Countries Attending the AI Safety Summit</title>
				<imprint>
			<date type="published" when="2023">November 1-2, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<ptr target="https://news.un.org/en/story/2023/10/1142867" />
		<title level="m">New UN Advisory Body aims to harness AI for the common good</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<ptr target="https://www.gov.uk/government/speeches/prime-ministers-speech-at-the-ai-safety-summit-2-november-2023" />
		<title level="m">Prime Minister&apos;s speech at the AI Safety Summit</title>
				<imprint>
			<date type="published" when="2023-11-02">November 2, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<ptr target="https://dss-lab.org.ua/applications/consensus-2" />
		<title level="m">Software web-system for distributed collection and processing of expert information for decision support systems</title>
				<imprint>
			<date type="published" when="2017-11-17">November 17, 2017</date>
		</imprint>
	</monogr>
	<note>Certificate of copyright registration for the work No. 75023 dated 11/17/2017</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The magical number seven, plus or minus two: some limits on our capacity for processing information</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Miller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Review</title>
		<imprint>
			<biblScope unit="volume">63</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="81" to="97" />
			<date type="published" when="1956">1956</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m">ChatGPT</title>
		<ptr target="https://chat.openai.com" />
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<ptr target="https://galactica.org" />
		<title level="m">Galactica</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Embracing ChatGPT: Implications of Emergent Language Models for Academia and Libraries</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">B</forename><surname>Houston</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Corrado</surname></persName>
		</author>
		<idno type="DOI">10.1080/07317131.2023.2187110</idno>
		<ptr target="https://doi.org/10.1080/07317131.2023.2187110" />
	</analytic>
	<monogr>
		<title level="j">Technical Services Quarterly</title>
		<imprint>
			<biblScope unit="volume">40</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="76" to="91" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">ChatGPT for Higher Education and Professional Development: A Guide to Conversational AI</title>
		<author>
			<persName><forename type="first">S</forename><surname>Atlas</surname></persName>
		</author>
		<ptr target="https://digitalcommons.uri.edu/cba_facpubs/548" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Method of paired comparisons using feedback with expert</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">G</forename><surname>Totsenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Tsyganok</surname></persName>
		</author>
		<idno type="DOI">10.1615/JAutomatInfScien.v31.i7-9.480</idno>
		<ptr target="https://doi.org/10.1615/JAutomatInfScien.v31.i7-9.480" />
	</analytic>
	<monogr>
		<title level="j">Journal of Automation and Information Sciences</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="86" to="97" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">An Approach to Reducing the Number of Pair-Wise Alternative Comparisons During Individual and Group Decision-Making</title>
		<author>
			<persName><forename type="first">Vitaliy</forename><surname>Tsyganok</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Oleh</forename><surname>Andriichuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sergii</forename><surname>Kadenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yaroslava</forename><surname>Porplenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Oksana</forename><surname>Vlasenko</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-94910-5_9</idno>
		<ptr target="https://doi.org/10.1007/978-3-030-94910-5_9" />
	</analytic>
	<monogr>
		<title level="m">System Analysis &amp; Intelligent Computing: Theory and Applications</title>
				<editor>
			<persName><forename type="first">Michael</forename><surname>Zgurovsky</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Nataliya</forename><surname>Pankratova</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<biblScope unit="volume">1022</biblScope>
			<biblScope unit="page" from="163" to="183" />
			<date type="published" when="2022">2022</date>
		</imprint>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Group Incomplete Paired Comparisons with Account of Expert Competence</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">Z</forename><surname>Zgurovsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">G</forename><surname>Totsenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Tsyganok</surname></persName>
		</author>
		<idno type="DOI">10.1016/S0895-7177(04)90511-0</idno>
		<ptr target="https://doi.org/10.1016/S0895-7177(04)90511-0" />
	</analytic>
	<monogr>
		<title level="j">Mathematical and Computer Modelling</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="page" from="349" to="361" />
			<date type="published" when="2004-02">February 2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Significance of Expert Competence Consideration in Group Decision Making using AHP</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Tsyganok</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Kadenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">V</forename><surname>Andriichuk</surname></persName>
		</author>
		<idno type="DOI">10.1080/00207543.2012.657967</idno>
		<ptr target="https://doi.org/10.1080/00207543.2012.657967" />
	</analytic>
	<monogr>
		<title level="j">International Journal of Production Research</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="issue">17</biblScope>
			<biblScope unit="page" from="4785" to="4792" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Methods and Systems for Decision-Making Support: Algorithmic Aspect</title>
		<author>
			<persName><forename type="first">V</forename><surname>Totsenko</surname></persName>
		</author>
		<imprint>
			<publisher>Naukova Dumka</publisher>
			<pubPlace>Kyiv</pubPlace>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">The (logarithmic) least squares optimality of the arithmetic (geometric) mean of weight vectors calculated from all spanning trees for incomplete additive (multiplicative) pairwise comparison matrices</title>
		<author>
			<persName><forename type="first">Sándor</forename><surname>Bozóki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vitaliy</forename><surname>Tsyganok</surname></persName>
		</author>
		<idno type="DOI">10.1080/03081079.2019.1585432</idno>
		<ptr target="https://doi.org/10.1080/03081079.2019.1585432" />
	</analytic>
	<monogr>
		<title level="j">International Journal of General Systems</title>
		<imprint>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="362" to="381" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Incomplete pairwise comparison matrices based on graphs with average degree approximately 3</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Szádoczki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bozóki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Juhász</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kadenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Tsyganok</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10479-022-04819-9</idno>
		<ptr target="https://doi.org/10.1007/s10479-022-04819-9" />
	</analytic>
	<monogr>
		<title level="j">Annals of Operations Research</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">A method for improving the consistency of expert assessments in the course of a dialog</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">D</forename><surname>Roik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Tsyganok</surname></persName>
		</author>
		<idno type="DOI">10.35681/1560-9189.2018.20.2.142915</idno>
		<ptr target="https://doi.org/10.35681/1560-9189.2018.20.2.142915" />
	</analytic>
	<monogr>
		<title level="j">Data Recording, Storage &amp; Processing</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="85" to="95" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Method for determining and improving the consistency of expert estimates in supporting group decision-making</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Tsyganok</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">D</forename><surname>Roik</surname></persName>
		</author>
		<idno type="DOI">10.20535/SRIT.2308-8893.2018.3.10</idno>
		<ptr target="https://doi.org/10.20535/SRIT.2308-8893.2018.3.10" />
	</analytic>
	<monogr>
		<title level="j">System Research and Information Technologies</title>
		<imprint>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="110" to="121" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Determining adequate consistency levels for aggregation of expert estimates</title>
		<author>
			<persName><forename type="first">Vitaliy</forename><surname>Tsyganok</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andriy</forename><surname>Olenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pavlo</forename><surname>Roik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Oksana</forename><surname>Vlasenko</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2410.03012</idno>
		<idno type="arXiv">arXiv:2410.03012v1 [stat.ME]</idno>
		<ptr target="https://doi.org/10.48550/arXiv.2410.03012" />
	</analytic>
	<monogr>
		<title level="m">arXiv</title>
		<imprint>
			<date type="published" when="2024-10">October 2024</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
