<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Secure Software-Defined Storage</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sergei A. Petrenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Diana E. Vorobieva</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexei S. Petrenko</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Innopolis University</institution>
          ,
          <addr-line>Universitetskaya St, 1, Innopolis, Republic of Tatarstan, 420500</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Saint-Petersburg Electrotechnical University «LETI»</institution>
          ,
          <addr-line>ul. Professora Popova, 5, St Petersburg, 197022</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <abstract>
        <p>The relevance of developing special software-defined data storage stems from the need to ensure the required security and stability of digital platforms, and from the imperfection of known models, methods and tools of server virtualization and distributed storage under conditions of growing security threats. The paper presents the main results of solving this problem on the basis of the software-defined approach (Software-Defined Storage), as well as the authors' models and methods of similarity for cloud computing, developed within the framework of the federal project "Information Security" of the national program "Digital Economy of the Russian Federation". It is important to note that this made it possible to develop and offer a special hypervisor for dynamic control of the semantics of digital platform functioning based on similarity invariants. To tune an optimal behavior algorithm for the software-defined storage of similarity and dimensional invariants, well-known methods of machine learning and deep learning are applied.</p>
      </abstract>
      <kwd-group>
        <kwd>Software-Defined Storage</kwd>
        <kwd>cybersecurity</kwd>
        <kwd>artificial intelligence</kwd>
        <kwd>artificial neural network</kwd>
        <kwd>machine learning</kwd>
        <kwd>deep learning</kwd>
        <kwd>big data</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The rapid growth of big data (Big Data), the increasing volume of cloud computing, the implementation of the object model of data storage
and, of course, the rapid growth of security threats have greatly influenced the development of storage.
One of the first solutions to meet the new storage requirements was Amazon Web Services (AWS), a cloud
service based on the public cloud computing platform of the same name, with Amazon Simple Storage
Service (Amazon S3) as its object store. This first solution was followed by a number of similar ones,
including those from Microsoft, Google, IBM and others. In 2015 IBM purchased the Cleversafe startup with its data
storage object model, and then released a corresponding solution, IBM Cloud Object Storage, on the
storage market. Other well-known solutions include Hitachi Content Platform (HDS), Elastic Cloud
Storage (Dell EMC), Nautilus (Dell EMC) and others. For example, the Nautilus (Dell EMC) solution
was one of the first to work with streaming Internet of Things data (IoT/IIoT).</p>
      <sec id="sec-1-2">
        <p>
          According to experts, these solutions are best suited to working with poorly structured and unstructured data [
          <xref ref-type="bibr" rid="ref6 ref7 ref8 ref9">6-9, 19-24</xref>
          ]. According to the estimates of the analytical companies Gartner and IDC, the three leaders in
SDS systems include the solutions of Dell EMC, IBM and VMware.
        </p>
        <p>
          Also, according to analysts, the market for SDS solutions will evolve towards improving the three
main models of access and data storage, namely, file, block and object. Their average annual growth
rates for the period from 2017 to 2020 were 10.5%, 7.5% and 16.2% respectively [
          <xref ref-type="bibr" rid="ref23 ref24 ref25">23-25</xref>
          ]. At the same
time, hyper-converged SDS solutions were in greater demand; these are solutions based on hyper-converged
infrastructure (HCI) - a highly integrated platform that accumulates all
the necessary structures, resources and tools (computing, network and data storage proper) to solve the
tasks at hand. The high performance of hyper-converged SDS solutions is ensured by
the use of flash arrays, the implementation of a hybrid storage model, and integration with cloud computing
orchestration systems.
        </p>
      </sec>
      <sec id="sec-1-3">
        <p>Well-known HCI solutions include: Nutanix, SimpliVity (part of HPE), ScaleIO from Dell EMC,
VMware (vSphere for server virtualization, vSAN for creating high-performance hyper-converged
storage for virtual machines on flash arrays, and vCenter for managing vSphere environments), NetApp
and Cisco (FlexPod for creating hyper-converged storage on Cisco equipment and NetApp SolidFire
flash arrays) and others. Let us briefly consider the features of the above-mentioned HCI solutions:</p>
        <p>VMware vSphere is a platform for virtualizing the information infrastructure of a typical digital
enterprise (previously VMware Infrastructure). The solution implies the simultaneous use of ESXi hosts
(x86) and vCenter Server for their centralized management. The features of the solution include: a high
initial cost (expensive licenses), limited support for guest operating systems, dependence on external
storage in fault-tolerance scenarios, an expensive implementation of distributed storage (VMware vSAN),
etc.</p>
        <p>Nutanix is a hyper-converged platform that supports the VMware storage-integration API (VAAI). The
features of this solution include a high initial cost, a limited set of server options
and others.</p>
      </sec>
      <sec id="sec-1-4">
        <p>SimpliVity is a platform based on x86 servers, PCIe cards and proprietary FPGA hardware.
Devices of this platform are delivered under the OmniCube™ brand and include computing tools,
data storage and Ethernet hardware with the VMware ESXi hypervisor. Features of the solution include a high
dependence on proprietary FPGA hardware, a limited set of supported server options and others.</p>
        <p>Rosplatform is one of the first domestic hyper-converged products; it allows building appropriate
platforms based on conventional (commodity) and relatively inexpensive servers with drives, greatly
increasing equipment utilization and the manageability of the platform as a
whole. The features of this solution include high performance and scalability of the distributed storage,
support for virtualization in system containers, compatibility with OpenStack, compatibility with the
hardware (x86) of well-known manufacturers, and a wide range of supported guest operating systems.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Self-healing SDS solutions</title>
      <p>
        To solve this problem, it was required to transform the observed data models into a special kind of model that allows
controlling the semantics of digital platforms under real operating conditions. For this purpose, the authors' models and methods of similarity and dimensions were used
[
        <xref ref-type="bibr" rid="ref42 ref43 ref44">25-32, 42-44</xref>
        ]. This allowed us to propose and implement the following prospective concept for storing
similarity and dimensional invariants ("three-in-one"):
• placement of data processing and storage models, in terms of similarity and dimensional
invariants, on the same server nodes in Linux system containers;
      </p>
      <p>• use of hypervisor virtual machines for dynamic control of the semantics of digital platform
functioning based on similarity and dimensional invariants;</p>
      <p>• accumulation and use of reference instances of similarity and dimensional invariants for prompt
self-recovery of computations and prevention of transitions of digital platforms to irreversible
catastrophic states under conditions of heterogeneous mass cyber-attacks by cybercriminals, including
those previously unknown.</p>
      <p>Here the main idea is to build a system of relationships between the dimensions of processed and
stored data as follows.</p>
      <p>Let each operator of some digital platform be represented as a sum of functions φ, as follows.</p>
      <p>
        In this case, the provisions of the theory of dimensions and similarity [
        <xref ref-type="bibr" rid="ref30 ref31 ref32 ref33 ref34 ref35 ref36 ref37 ref38 ref39">30-39</xref>
        ] allow creating a system
of requirements to the dimensions of the x_j, resulting from the following considerations (the notation [X]
stands for "dimension of X"). Let
      </p>
      <p>f_u(x_1, x_2, …, x_n) = 0,  u = 1, 2, …, r,  (1)</p>
      <p>where</p>
      <p>f_u(x_1, x_2, …, x_n) = Σ_{s=1}^{q} φ_{us}(x_1, x_2, …, x_n)  (2)</p>
      <p>and</p>
      <p>φ_{us}(x_1, x_2, …, x_n) = Π_{j=1}^{n} x_j^{α_{jus}}.  (3)</p>
      <p>Equating the dimensions of the summands of each f_u gives</p>
      <p>[φ_{us}(x_1, x_2, …, x_n)] = [φ_{uq}(x_1, x_2, …, x_n)],  (4)</p>
      <p>[Π_{j=1}^{n} x_j^{α_{jus}}] = [Π_{j=1}^{n} x_j^{α_{juq}}],  (5)</p>
      <p>Π_{j=1}^{n} [x_j]^{α_{jus}} = Π_{j=1}^{n} [x_j]^{α_{juq}},  (6)</p>
      <p>Π_{j=1}^{n} [x_j]^{α_{jus} − α_{juq}} = 1,  (7)</p>
      <p>and, after logarithmization,</p>
      <p>Σ_{j=1}^{n} (α_{jus} − α_{juq}) · ln[x_j] = 0,  u = 1, 2, …, r;  s = 1, 2, …, (q−1).  (8)</p>
      <p>
        Then a necessary criterion of the semantic correctness of the observed digital platform is the existence
of a solution in which none of the variables ln[x_j] vanishes. Here, to solve this problem, one
can use trivial equivalent transformations of the system written in matrix form [
        <xref ref-type="bibr" rid="ref42 ref43 ref44">42-44</xref>
        ].
      </p>
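      <p>This criterion can be checked mechanically. The sketch below is illustrative only (the function names are ours, and the exponents are assumed rational): it builds a null-space basis of the coefficient matrix with entries α_{jus} − α_{juq} from system (8) and tests whether the null space admits a solution with no vanishing component.</p>
      <preformat>
```python
from fractions import Fraction

def null_space(matrix):
    """Basis of the null space of a rational matrix (Gauss-Jordan elimination)."""
    rows = [[Fraction(x) for x in row] for row in matrix]
    n = len(rows[0])
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[free] = Fraction(1)
        for pr, pc in enumerate(pivots):
            v[pc] = -rows[pr][free]
        basis.append(v)
    return basis

def semantically_correct(alpha_diff):
    """Necessary criterion: the system admits a solution where no ln[x_j] is zero.
    This holds iff no coordinate vanishes on the whole null space, i.e. every j
    has some basis vector with a nonzero j-th component (a finite union of
    proper subspaces cannot cover the null space over the rationals)."""
    basis = null_space(alpha_diff)
    n = len(alpha_diff[0])
    return bool(basis) and all(any(v[j] != 0 for v in basis) for j in range(n))

# x1*x2 - x3 = 0 is dimensionally homogeneous: exponent differences (1, 1, -1)
assert semantically_correct([[1, 1, -1]]) is True
# here the first row forces ln[x1] = 0, so the criterion fails
assert semantically_correct([[1, 0, 0], [0, 1, -1]]) is False
```
      </preformat>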
      <p>Let us now perform a critical analysis of the possible variants of constructing the required SDS system
and propose a number of architectural solutions suitable for the task of storing similarity and dimensional
invariants.</p>
      <p>When access to data archives in the form of similarity and dimensional invariants for the purpose of
dynamic control of the semantics of digital platform functioning is rare, these data can be stored on file servers,
for example, in autonomous dual-controller storage systems or on local disks of distributed storage
systems with multiple redundancy. However, this is not enough for working with the mentioned data in real
time. That requires large-capacity "active data warehouses" with high performance and continuous retrieval
and storage of the reference similarity and dimensional invariants. Indeed, single-controller
solutions carry downtime risks, and dedicated hardware solutions (based on traditional NAS and
SAN) require significant support and maintenance resources. In addition, distributed storage
systems introduce long delays and increase overhead due to the need to place
multiple copies of data on network nodes.</p>
      <p>In practice, the organization of similarity storage and dimensional invariants for dynamic control of
the semantics of digital platforms has been compared with the tasks of organizing storage of virtual
machines with a high transactional load (Online Transaction Processing, OLTP) and cloud hosting, as
well as the organization of high-performance computing (HPC) and streaming video processing (Media
&amp; Entertainment, M&amp;E). Here it was necessary to provide active traffic to the reference and observed
similarity and dimensional invariants, as well as hundreds of terabytes of disk space for
storing the "passports" of the functioning of the observed digital platforms and the corresponding "memory
snapshots". The I/O load differed greatly each time: in the volume of transmitted data, the type of access
(random/streaming), read/write proportions, transfer protocols, etc. Accordingly, it was
necessary to have a sufficiently flexible organization of the storage system for similarity and dimensional
invariants, differing both in media sets and in RAID algorithms and I/O interfaces.</p>
      <p>
        It should be noted that solving the task head-on, by selecting special hardware data storage
(based on traditional NAS and SAN) that meets the requirements of performance, reliability and fault
tolerance, would cost quite a lot (thousands and even millions of dollars). Therefore, it was decided to
implement a suitable software model of data management [
        <xref ref-type="bibr" rid="ref40 ref41">40, 41</xref>
        ], the cost of which is an order of
magnitude lower than that of traditional storage. At the same time, it became possible to make a free
choice of data carriers, as well as of ways to access them and of storage scaling scenarios. In addition, it is
possible to flexibly adjust performance and fault-tolerance parameters, select services, and provide
the required level of security and stability. For example, a suitable alternative to a hardware
dual-controller storage of similarity and dimensional invariants is a cluster of two storage servers with shared
access to a single disk space. In this case, the container with disks (an enclosure, in effect a JBOD) can be
connected via SAS HBA to the management servers over a block direct-access protocol (low latency, high
bandwidth). The server software is then responsible for working with logical data volumes, their
backup, information recovery in case of disk failures, switching between cluster nodes and related
services.
      </p>
      <p>Let us consider in detail possible variants of organizing software-defined storages of similarity and
dimensional invariants for dynamic control of semantics of digital platforms functioning in conditions
of growing security threats.</p>
      <sec id="sec-2-1">
        <title>Windows Server 2016 (2012) solution</title>
        <p>The peculiarities of such a solution include:
- RAID - the Storage Spaces mirroring policies (2-way or 3-way mirror) provide performance at
the level of hardware RAID 10;</p>
        <p>- Spaces - virtual disks assembled from SSD/HDD logical pools provide high-capacity HDD for "cold"
data and high-performance SSD for "hot" data. Dynamic capacity allocation is supported;
- Automatic Tiering - in a two-level SSD/HDD storage scheme, the file system tracks access to blocks of data
in the background and, on a set schedule (for example, once a day), moves popular blocks to
the fast layer (SSD), with a granularity of 1 MB;</p>
        <p>- Write-back cache - smooths write peaks to the virtual disk using SSDs from the pool, increasing
IOPS performance;</p>
        <p>- SMB 3.0 - a network protocol that provides applications with access to data on remote servers:
shared files are presented to all nodes of the Scale-Out File Server (SOFS) cluster, and in case of
failures the client application is automatically serviced by the remaining working nodes (Microsoft recommends
using RDMA network adapters for direct memory access to offload server processors and reduce data-access
delays);</p>
        <p>- SOFS - provides data availability and continuity of file services: a cluster of servers accesses data
in shared containers (Shared SAS JBOD);</p>
        <p>- Shared SAS JBOD - shared storage on SSD/HDD disks is used for the server cluster. In this case,
capacity is increased by adding ordinary NL-SAS disks to the JBOD, as well as new JBODs together with their
disks (relatively inexpensive SAS switches can be used); in dedicated industrial storage, even
the disks themselves cost more: HDDs several times more, SSDs by an order of magnitude.</p>
        <p>Windows Server 2016 has the functionality of synchronous replication and distributed storage on the
local disks of the Storage Spaces Direct server cluster.</p>
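        <p>The Automatic Tiering mechanics described above can be sketched as follows (a toy model, not Microsoft's implementation; the class name, slot counts and access pattern are invented for illustration):</p>
        <preformat>
```python
from collections import Counter

GRANULARITY = 1024 * 1024  # data is moved between tiers in 1 MB "slabs"

class TieredStore:
    """Toy two-tier (SSD/HDD) store: a scheduled job promotes the most
    frequently accessed slabs to the fast tier."""

    def __init__(self, ssd_slots):
        self.ssd_slots = ssd_slots   # fast-tier capacity, in slabs
        self.access = Counter()      # background access statistics
        self.ssd = set()             # slab ids currently on the SSD tier

    def read(self, offset):
        slab = offset // GRANULARITY
        self.access[slab] += 1       # tracked in the background
        return "ssd" if slab in self.ssd else "hdd"

    def run_tiering_job(self):
        """The scheduled (e.g. once-a-day) promotion of popular slabs."""
        hottest = [slab for slab, _ in self.access.most_common(self.ssd_slots)]
        self.ssd = set(hottest)

store = TieredStore(ssd_slots=2)
for _ in range(10):
    store.read(0)                    # slab 0 is hot
for _ in range(5):
    store.read(3 * GRANULARITY)      # slab 3 is warm
store.read(7 * GRANULARITY)          # slab 7 is cold
store.run_tiering_job()
assert store.read(0) == "ssd"        # hot data now served from the fast tier
assert store.read(7 * GRANULARITY) == "hdd"
```
        </preformat>
        <p>The same bookkeeping, run per 1 MB slab on a daily schedule, is what lets "cold" data sit on capacious HDDs while "hot" data migrates to SSDs.</p>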
      </sec>
      <sec id="sec-2-2">
        <title>JovianDSS-based solution</title>
        <p>The solution for storing similarity and dimensional invariants based on JovianDSS runs on Linux
and the ZFS file system. Here, the file system, with built-in support for hybrid RAM/SSD/HDD pools,
provides high performance and scalability of the solution. At the same time, repositories of similarity
and dimensional invariants are embedded into NAS and SAN environments and provide volume-related
data services: dynamic capacity allocation, snapshots, compression and deduplication.</p>
        <p>In the minimum configuration, two servers on Intel Xeon E5-26xx processors and a shared-access JBOD
are required to build a high-availability data cluster with NFS and iSCSI connectivity. The features
of such a solution include:</p>
        <p>Scalability - the 128-bit ZFS file system does not limit storage capacity, with volumes up to a zettabyte
on any number of disks (in JBOD storage clusters with a large number of high-capacity disks, 6-12 Gbit/s
SAS is connected to the management servers);</p>
        <p>Data security - RAID arrays (activated remotely via the command line) survive failures of up to three
disks at a time; an unlimited number of snapshots is supported, which is useful for disaster
recovery;</p>
        <p>Multi-layer caching - caching algorithms are inherited along with the file system, and popular files
are assigned to one of the categories "frequently used" or "recently accessed" - separate caching areas in
the RAM of the server nodes and on the SSDs;</p>
        <p>Hybrid storage pools - utilize SSD I/O performance and high HDD capacity in a single management
logic;</p>
        <p>On-the-fly data compression and deduplication - a way to save disk space and reduce storage
overhead (the deduplication ratio can reach 3:1 - for example, 1 TB of
physical disk space then suffices to record 3 TB of data);</p>
        <p>Thin provisioning - virtual allocation of disk space allows you to increase storage capacity without
reformatting, eliminates overspending of disks (they can be put into operation as needed);</p>
        <p>Environmental optimization - servers can easily be adapted to the external load and set of services:
choice of processors, RAM capacity, SSD pools and network interfaces. 10-40 Gbit/s Ethernet copes
with the "heaviest" requests and provides broadband access to the similarity and dimensional invariants
with minimal delays.</p>
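        <p>The quoted deduplication arithmetic is easy to reproduce (an illustrative sketch with a fixed chunk size; ZFS actually deduplicates at the block level, keyed by block checksums):</p>
        <preformat>
```python
import hashlib

CHUNK = 4096  # assumed fixed chunk size for the illustration

def dedup_ratio(data: bytes) -> float:
    """Logical bytes written divided by physical bytes stored, when every
    duplicate chunk is kept only once (keyed by its SHA-256 digest)."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    unique = {hashlib.sha256(c).hexdigest() for c in chunks}
    return len(chunks) / len(unique)

# ten distinct chunks, written three times over: 1 TB of physical space
# would hold 3 TB of such data - the 3:1 ratio mentioned in the text
payload = b"".join(bytes([i]) * CHUNK for i in range(10))
assert dedup_ratio(payload * 3) == 3.0
```
        </preformat>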
        <p>RAIDIX OS based solutions</p>
        <p>Horizontally scalable data storage of the NetApp FAS or EMC Isilon class could be used to solve the
task. NetApp's internal file system with write-anywhere layout (Write Anywhere File Layout,
WAFL) is characterized by high performance, both for file and for block access (SAN). This file
system is deeply integrated with the RAID manager. For example, RAID-DP writes data in full stripes
(turning "random" writes into "sequential" ones), which provides a "fast" RAID in striping mode with double parity
(protection against the simultaneous failure of two disks, as in RAID 6). And with Flash Pool and Flash
Cache technologies, an optimal balance of performance and capacity is achieved in hybrid systems with
an SSD layer above the main capacitive HDD array. However, test results showed that when an
array fills up and data in the form of similarity and dimensional invariants becomes highly fragmented, there
is some WAFL performance loss. Despite the operation of the background defragmenter ("garbage
collector") under the OS, 10-30% of the space had to be left free for predictable performance under intensive
writing. It was found that if reads and writes have a similar organization, the performance drop is
not noticeable, but with heterogeneous data placement there were problems in streaming reads.</p>
        <p>Therefore, we decided to use the native RAIDIX operating system to organize a software-defined
storage of similarity and dimensional invariants. The mentioned OS was created on the basis of the
classic RAID (read-modify-write) approach and is characterized by high-speed data storage algorithms
and acceptable performance. For example, it preserves the performance of a RAID group under the
simultaneous failure of up to three disks (RAID 7.3) and even of up to 32 (RAID N+M), without
hardware RAID controllers. The RAIDIX operating system demonstrates a high speed of checksum
calculation, high reliability and efficient use of usable disk space. At the same time, it allows storing
and processing similarity and dimensional invariants on standard server hardware, using well-known
block (FC, iSCSI, SAS, InfiniBand) and file (SMB, NFS, AFP) access protocols. To increase the
performance of transactional operations, SSD caching is provided.</p>
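        <p>The parity arithmetic underlying such RAID schemes can be shown with a single-parity (RAID 5-style) sketch; RAID 7.3 and RAID N+M use additional independent checksums, but the recovery principle is the same (illustrative code, not the RAIDIX algorithms):</p>
        <preformat>
```python
def xor_parity(blocks):
    """Byte-wise XOR of the data blocks of one stripe."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(surviving, parity):
    """A lost block is the XOR of the parity with all surviving blocks."""
    return xor_parity(surviving + [parity])

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # data blocks of one stripe
p = xor_parity([d0, d1, d2])             # parity written to the parity disk
assert reconstruct([d0, d2], p) == d1    # disk 1 failed: its block is recovered
```
        </preformat>
        <p>Tolerating three or more simultaneous failures requires several independent checksum equations per stripe, which is exactly where fast checksum calculation pays off.</p>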
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Self-healing SDS clusters</title>
      <p>Since the volume of similarity-invariant images and "passports" of the behavior semantics of
digital platforms can reach hundreds of terabytes (that is, dozens of HDDs), we used enterprise-grade SATA drives
(or the related NL-SAS). At the same time, the disks storing the similarity invariants
were moved to an external JBOD - a dense container with duplicated I/O, power and ventilation
modules. Here, the multi-channel connection of the JBOD to the head server ("controller") via SAS 6-12 Gbit/s
guarantees minimal delays and wide bandwidth of access to the similarity and dimensional invariants stored on the
disks.</p>
      <p>In the presented variant of the SDS solution, continuity of operation is ensured by RAIDIX Failover
Cluster (FC or 40 GbE) - a high-performance platform with high data availability (without a single point
of failure). The "dual-controller" software-defined storage of similarity and dimensional invariants
consists of two servers to which a shared-access JBOD is connected. Each controller can serve a
different RAID group. In the Active-Active cluster, the nodes are connected by a low-latency
interface - FC, SAS 12 Gbit/s or InfiniBand (the caches of both controllers are always synchronized and in a
coherent state). If one of the controllers is lost, restoring the SDS system takes a few seconds.</p>
      <p>The JBOD has two independent I/O modules with duplicated expanders. Due to the dual connection of
the NL-SAS drives, data on them remains available when either I/O module is lost (as opposed to SATA on the same
platform). In addition, NL-SAS drives support a greater queue depth than SATA, which gives the array a
performance gain with the same hard-drive mechanics (in terms of cost, NL-SAS drives practically
do not differ from SATA of the same capacity). The SAS protocol also includes T10 CRC integrity control
along the entire path of extraction of the reference similarity and dimensional invariants, from disk to the
control unit (comparison and response to security incidents).</p>
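      <p>The idea of such end-to-end integrity control can be illustrated as follows: a guard tag computed over the block on the disk side is re-verified at the control unit, so corruption anywhere along the path is detected. The sketch uses the CRC-16/T10-DIF parameters (polynomial 0x8BB7, zero initial value); it is an illustration, not the SAS firmware:</p>
      <preformat>
```python
def crc16_t10dif(data: bytes) -> int:
    """Bitwise CRC-16 with the T10-DIF polynomial 0x8BB7 (init 0, no reflection)."""
    crc = 0
    for byte in data:
        crc ^= byte * 256              # bring the byte into the top of the register
        for _ in range(8):
            if crc >= 0x8000:          # top bit set: shift and apply the polynomial
                crc = ((crc * 2) ^ 0x8BB7) % 0x10000
            else:
                crc = (crc * 2) % 0x10000
    return crc

block = b"reference similarity invariant"
tag = crc16_t10dif(block)              # guard tag stored with the block on disk
assert crc16_t10dif(block) == tag      # intact data verifies at the control unit
assert crc16_t10dif(b"xeference similarity invariant") != tag  # corruption detected
```
      </preformat>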
      <p>Thus, the FC 8-16 Gbit/s infrastructure is responsible for the delivery and extraction of hybrid multithreaded
similarity and dimensional invariants at a consistently high speed (without failures). Embedding the FC
Storage Cluster RAIDIX in the existing environment significantly increased the storage volume of
similarity and dimensional invariants and improved their overall processing performance.
Dual-channel FC HBAs of 8 or 16 Gbit/s were supplied to the cluster nodes; the array was introduced into the SAN as a block access device
(LUN) and automatically configured to solve the task of dynamic control
of the semantics of digital platforms based on similarity and dimensional invariants. The metadata controller
allowed assigning access rights to administrator groups of the considered solution.</p>
      <p>A variant of the data storage solution for similarity and dimensional invariants based on a NAS
cluster is shown in Figure 2. This solution used relatively inexpensive computing and network devices
of 10-40 Gbit/s (with the prospect of replacement by devices of up to 100 Gbit/s). It differs from the previous
FC-cluster storage solution in its external interfaces (10-40 Gbit/s
Ethernet NICs are installed) and file exchange protocols (SMB, NFS, AFP). The server nodes of the two solutions
considered are identical: a Shared SAS JBOD is connected to the cluster nodes (Table 1).</p>
      <p>Performance test results for the two designed and built clusters (FC and NAS) are shown in Figure 3 (the AJA
System Test 2.1 and IOmeter 2008.06.18 RC2 tests were used to simulate single- and multithreaded load).
The second group of tests measured performance with two initiators and 512K/1M/8M block sizes.</p>
      <sec id="sec-3-1">
        <title>A solution based on clusters of several nodes</title>
        <p>The functionality of traditional file systems was not enough to accomplish this task. The
known limitations of classic file systems are:
- metadata and data are stored in the same partitions;
- files are "smeared" across partitions, causing access delays;
- there is no mechanism to prevent fragmentation;
- lack of scalability in size, performance, number of files, folder nesting depth, etc.;
- "non-native" cross-platform support.</p>
        <p>It was necessary to use cluster file systems, including HyperFS from Scale Logic, which provided
high scalability and simultaneous access to data from different operating systems (in particular, through
file gateways). As a result, a technical solution (Figure 3) was designed and implemented for storing
similarity and dimensional invariants based on RAIDIX software and the HyperFS cluster file system,
which allowed organizing a single address space for block and file access.</p>
        <p>The advantages of the obtained solution are as follows:
- up to 4 billion files in one directory;
- up to 4096 partitions, which can be combined into one FS;
- lack of a single point of failure;
- dynamic file system extension in terms of capacity and performance without downtime;
- support for the latest versions of popular operating systems - Mac/Windows/Linux.</p>
        <p>It is important to note that HyperFS for SAN has made it possible to transform multiple file systems or iSCSI
disk arrays into a storage cluster that supports simultaneous editing and playback of data from multiple
client machines and provides high performance and shared access within a single namespace. The system
has an optional metadata controller (MDC) with a redundant structure and a fully redundant SAN structure
with metadata mirroring, and supports multipath configuration in Fibre Channel and iSCSI
environments. At the same time, it has no single point of failure and provides high stability of the
similarity and dimensional invariants storage.</p>
        <p>The use of Scale-Out NAS systems for dynamic control of digital platform semantics (Figure 4)
allowed consolidating up to 64 nodes in a cluster with simultaneous access via different
protocols (SMB v2/v3, NFS v3/v4, FTP/FTPS, HTTP/HTTPS/WebDAV), load balancing between
nodes (Round-Robin, Connection Count, Node Load), as well as support for Active Directory.</p>
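        <p>The listed balancing policies can be sketched like this (a toy front-end with invented names; the actual balancer of a Scale-Out NAS is vendor-specific):</p>
        <preformat>
```python
import itertools

class Balancer:
    """Toy front-end choosing a NAS node for each new client connection."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._rr = itertools.cycle(self.nodes)
        self.connections = {n: 0 for n in self.nodes}  # open connections
        self.load = {n: 0.0 for n in self.nodes}       # reported node load

    def pick(self, policy):
        if policy == "round-robin":
            node = next(self._rr)                      # strict rotation
        elif policy == "connection-count":
            node = min(self.nodes, key=self.connections.get)
        elif policy == "node-load":
            node = min(self.nodes, key=self.load.get)
        else:
            raise ValueError(policy)
        self.connections[node] += 1
        return node

b = Balancer(["nas1", "nas2", "nas3"])
assert [b.pick("round-robin") for _ in range(4)] == ["nas1", "nas2", "nas3", "nas1"]
b.load.update({"nas1": 0.9, "nas2": 0.2, "nas3": 0.5})
assert b.pick("node-load") == "nas2"   # least-loaded node wins
```
        </preformat>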
        <p>Essentially, it became possible to extend the functions of the SDS-system, namely, to offer the
following additional services:
- optimization of the system for large and small files;
- support for user and folder quotas;
- SNMP monitoring for SONG and MDC;
- LDAP/Active Directory support - the ability to use the local user base or to integrate with Active
Directory;
- the possibility to use ACLs on all supported operating systems.</p>
      </sec>
      <sec id="sec-3-2">
        <p>Thus, the solution based on RAIDIX and HyperFS is characterized by high performance, a single
address space, simultaneous access via different protocols, low latency, high extensibility, and file and block
access to the similarity invariants.</p>
        <p>The proposed approach uses multiple storage nodes, dynamically allocating
information between them and balancing the load; the architecture makes it possible to add new storage nodes to the system
on demand, without the need to transfer data or change the system configuration. A clear
advantage of this solution is the ability to simultaneously handle data stored on one or more storage
devices from a large number of workstations, at the block level and with high performance, which
is impossible in a classic SAN architecture. On the whole, the RAIDIX software solution in combination
with the HyperFS file system meets the requirements for speed and fault tolerance, and provides
simultaneous parallel operation with hybrid similarity and dimensional invariants. The solution also
minimizes hardware upgrade costs when creating storage clusters, expanding the existing
infrastructure horizontally without downtime or performance degradation.</p>
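        <p>The "new nodes on demand" property rests on deterministic data placement. A consistent-hashing sketch illustrates the general idea (our illustration of the technique, not the vendor's placement algorithm): adding a node remaps only a fraction of the keys instead of forcing a full reshuffle.</p>
        <preformat>
```python
import bisect
import hashlib

class Ring:
    """Consistent-hash ring: adding a node relocates only a small share of keys."""

    def __init__(self, nodes, vnodes=64):
        self._ring = []                      # sorted (hash, node) points
        for n in nodes:
            self.add_node(n, vnodes)

    @staticmethod
    def _h(key):
        return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

    def add_node(self, node, vnodes=64):
        for v in range(vnodes):              # virtual points smooth the load
            bisect.insort(self._ring, (self._h(f"{node}#{v}"), node))

    def node_for(self, key):
        i = bisect.bisect(self._ring, (self._h(key), ""))
        return self._ring[i % len(self._ring)][1]

ring = Ring(["store1", "store2", "store3"])
before = {k: ring.node_for(k) for k in (f"invariant-{i}" for i in range(1000))}
ring.add_node("store4")                      # scale out on demand
moved = sum(1 for k, v in before.items() if ring.node_for(k) != v)
# only the keys claimed by the new node relocate (roughly a quarter here)
```
        </preformat>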
      </sec>
      <sec id="sec-3-3">
        <title>Solution based on virtualization cluster on VMware</title>
        <p>Today server virtualization is one of the most effective ways to deploy most private and public
clouds, development and testing environments, and enterprise applications. It reduces the cost of
ownership of the system by saving on power and occupied space, eliminates dependence on specific
branded hardware and increases uptime.</p>
        <p>Let us list the following features of the proposed solution (Figure 5):
- Various connection protocols are used to connect VMware ESXi and data storage: FCP, iSCSI, NFS.</p>
        <p>Virtual machines (VM) can use the corresponding files (configuration and vDISKs). VMware
functions related to data storage (VMotion, VMware DRS, VMware HA and VMware Storage
VMotion) can be used;
- Achieved performance depends on the server used for data storage (RAID controller and disk
functions). The maximum possible hardware bandwidth is supported. Elastic scalability is
achieved without loss of speed as the number of virtual machines and of parallel highly loaded data
streams increases;
- Good compatibility: the virtualization platforms VMware ESX 5.0/5.1/5.5/6.0 and higher are supported, as well as
KVM (Kernel-based Virtual Machine), RHEV (Red Hat Enterprise Virtualization), Microsoft Hyper-V Server and XenServer.</p>
      </sec>
      <sec id="sec-3-5">
        <p>Here the solution's hardware infrastructure includes 10 Supermicro servers with Broadcom HBA cards
and Mellanox InfiniBand adapters. iSCSI over InfiniBand was chosen as the fastest way to synchronize in
this configuration; iSCSI over Ethernet was used for automatic provisioning.</p>
        <p>The proposed solution uses three RAID 6i partitions and, on average, three LUNs per partition on each
server. All servers run VMware ESXi 5.1 and vCenter 5.1 with virtual machines (VMs). The VMs serve
as data storage for special applications, backup servers, file servers and more.</p>
        <p>The selected configuration ensures efficient processing of random data and high reliability. In
general, the solution is characterized by:
• fail-safe storage of reference similarity and dimensional invariants;
• flexible virtualization of the existing information infrastructure;
• high performance of transactional applications;
• high availability of data - "three nines" (P = 0.999).</p>
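        <p>The "three nines" figure can be made concrete: availability P = 0.999 bounds the expected downtime at 0.1% of the year, which is a short worked calculation.</p>

```python
# Worked example: maximum downtime implied by "three nines" (P = 0.999).

HOURS_PER_YEAR = 365.25 * 24   # ~8766 hours

def max_downtime_hours(availability: float) -> float:
    """Unavailable fraction of the year, expressed in hours."""
    return (1.0 - availability) * HOURS_PER_YEAR

print(round(max_downtime_hours(0.999), 2))   # ~8.77 hours of downtime per year
```

        <p>For comparison, "four nines" (P = 0.9999) would allow only about 53 minutes of downtime per year.</p>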
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>The development of a software-defined data warehouse was carried out under the federal project
"Information Security" of the national program "Digital Economy of the Russian Federation". In the
course of the work, possible variants of SDS solutions for storing similarity and dimensional
invariants were designed and implemented in order to introduce dynamic control of the semantics of
the functioning of typical digital platforms of the digital economy of the Russian Federation. The
proposed SDS solutions flexibly and efficiently use servers of different types in the following main
modes: hyper-convergence, computing virtualization and data storage.</p>
      <p>Hyper-convergence. The servers simultaneously host the components of computing virtualization,
storage, local disks and others. Servers are assembled into local clusters with the ability to access the
cloud. A special client accesses the storage of similarity and dimensional invariants using internal
protocols, eliminating the need to create classic iSCSI targets.</p>
      <p>Computing Virtualization. Diskless servers deliver their computing power using the cloud as a virtual
machine environment. This scheme maintains the required level of computing power and, if necessary,
adds storage capacity for similarity and dimensional invariants.</p>
      <p>Storage of data. Local hard drives are used to increase the total cloud storage capacity. This scheme is
useful when storage capacity must be increased using relatively inexpensive low-power servers filled
with physical disks.</p>
      <p>It is important that this approach, in contrast to other well-known approaches to organizing
software-defined data storage, ensures the required security and stability of the information
infrastructure of modern digital enterprises under growing security threats. It introduces an organizing
layer above the computers, network equipment, storage network and the means of cybersecurity and
fault tolerance: all of these devices and means become software-defined components. Such a software
management configuration (based on the methods of Machine Learning and Deep Learning) itself
decides on which nodes to physically place the software-defined components, monitors the "health" of
the components of the information infrastructure under heterogeneous mass cyber attacks
(including previously unknown ones), decommissions unusable components and connects new ones. At
the same time, security administrators only set the basic configuration parameters, and
the system independently determines on which physical nodes to place the necessary resources
(computing, network and data storage) and how to manage them automatically.</p>
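      <p>A minimal sketch of this kind of automated placement decision is given below: the controller, not the administrator, picks the physical node for a software-defined component. The node names, the scoring rule and the health threshold are all hypothetical illustrations; in the approach described above the scoring would be learned by ML/DL models rather than hard-coded.</p>

```python
# Hypothetical sketch: choose a physical node for a software-defined
# component based on capacity and monitored "health". All values and
# the scoring rule are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpu: float       # fraction of CPU still free
    free_disk_tb: float   # free storage capacity, TB
    health: float         # 0..1, from infrastructure monitoring

def place(component_disk_tb: float, nodes: list[Node]) -> Node:
    """Pick the best node that fits; nodes below the health threshold
    are excluded, mirroring the decommissioning step in the text."""
    usable = [n for n in nodes
              if n.health >= 0.9 and n.free_disk_tb >= component_disk_tb]
    if not usable:
        raise RuntimeError("no healthy node with enough capacity")
    # Prefer spare CPU, then spare disk (a stand-in for a learned score).
    return max(usable, key=lambda n: (n.free_cpu, n.free_disk_tb))

nodes = [
    Node("node-a", 0.50, 12.0, 0.99),
    Node("node-b", 0.70, 6.0, 0.95),
    Node("node-c", 0.90, 20.0, 0.40),  # failing health checks -> excluded
]
print(place(4.0, nodes).name)   # node-b
```

      <p>The administrator supplies only the thresholds and component sizes; the loop of monitoring, excluding and re-placing components is what the text describes as software-defined management.</p>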
      <p>Further research should include:
• Development of trusted SMART hypervisors (Storage Hypervisors), which can be run and
fine-tuned using the methods of Machine Learning (ML) and Deep Learning (DL), to solve the task in a
controlled critical infrastructure on servers, virtual machines, within classic hypervisors and in the
storage network;</p>
      <p>• Creation of special open-source system software, Storage Virtual Software, eliminating
dependence on specific manufacturers and providing open, secure and scalable data management to
ensure the required security and stability;</p>
      <p>• Development of application software, Control Planes, responsible for the creation, configuration and
maintenance of storage policies and for translating them to lower levels of resources and services, to
solve the problem of dynamic control of the semantics of typical digital platforms of the Digital
Economy of the Russian Federation;</p>
      <p>• Creation of additional services of safe and efficient use of similarity and dimensional invariants,
Data Services, to ensure the required level of information security and cyber resilience.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Acknowledgements</title>
      <p>The publication was carried out with the financial support of Russian Foundation for Basic Research
(RFBR) in the framework of the scientific project No. 20-04-60080.</p>
    </sec>
    <sec id="sec-6">
      <title>6. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <article-title>Data center storage: cost-effective strategies implementation and management</article-title>
          ,
          <source>Auerbach Publications</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Reinsel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gantz</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Rydning</surname>
          </string-name>
          ,
          <article-title>"Data age 2025: The evolution of data to life-critical. Don't focus on big data; focus on the data that's big"</article-title>
          ,
          <source>International Data Corporation (IDC) White Paper</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Macedo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Paulo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pereira</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Bessani</surname>
          </string-name>
          ,
          <article-title>"A Survey and Classification of Software-Defined Storage Systems"</article-title>
          ,
          <source>ACM Computing Surveys</source>
          , May
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Verbitski</surname>
          </string-name>
          et al.,
          <article-title>"Amazon Aurora: Design Considerations for High Throughput Cloud-Native Relational Databases"</article-title>
          ,
          <source>ACM SIGMOD</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>I.</given-names>
            <surname>Canadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dong</surname>
          </string-name>
          et al.,
          <source>RocksDB-Cloud: A Key-Value Store for Cloud Applications</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K.</given-names>
            <surname>Belgaied</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Paulsen</surname>
          </string-name>
          ,
          <article-title>"Improving Cassandra Latency and Resiliency with NVMe over Fabrics"</article-title>
          ,
          <source>NVMe Developer Days</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S. W.</given-names>
            <surname>Fong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Neumann</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.-S. P.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <article-title>"Phase-Change Memory - Towards a Storage-Class Memory"</article-title>
          ,
          <source>IEEE Tran. Electron Devices</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Al-Badarneh</surname>
          </string-name>
          et al.,
          <article-title>"Software Defined Storage for cooperative Mobile Edge Computing systems"</article-title>
          ,
          <source>Proc. IEEE SDS</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>P. X.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Narayan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Karandikar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Carreira</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <article-title>"Network Requirements for Resource Disaggregation"</article-title>
          ,
          <source>Proc. OSDI</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>"LegoOS: A Disseminated Distributed OS for Hardware Resource Disaggregation"</article-title>
          ,
          <source>OSDI</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hilmi</surname>
          </string-name>
          et al.,
          <article-title>"Analysis of Network Capacity Effect on Ceph Based Cloud Storage Performance"</article-title>
          ,
          <source>IEEE TSSA</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>V.</given-names>
            <surname>Shankar</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <article-title>"Performance Study of Ceph Storage with Intel Cache Acceleration Software: Decoupling Hadoop MapReduce and HDFS over Ceph Storage"</article-title>
          ,
          <source>IEEE CSCloud</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>I.</given-names>
            <surname>Adams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Keys</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Mesnier</surname>
          </string-name>
          ,
          <article-title>"Respecting the block interface - computational storage using virtual objects"</article-title>
          ,
          <source>USENIX HotStorage</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>D.</given-names>
            <surname>Patterson</surname>
          </string-name>
          and
          <string-name>
            <given-names>I.</given-names>
            <surname>Stoica</surname>
          </string-name>
          ,
          <article-title>"Cloud Programming Simplified: A Berkeley View on Serverless Computing"</article-title>
          ,
          <source>Berkeley Tech. Report EECS-2019-3.</source>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Just</surname>
          </string-name>
          ,
          <article-title>"Crimson: A New Ceph OSD for the Age of Persistent Memory and Fast NVMe Storage"</article-title>
          ,
          <source>USENIX Vault</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>L.</given-names>
            <surname>Teng</surname>
          </string-name>
          ,
          <article-title>"Erasure Coding in Object Stores: Challenges and Opportunities"</article-title>
          ,
          <source>ACM PODC</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Raghunath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chagam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sen</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Gohad</surname>
          </string-name>
          ,
          <article-title>"Rethinking Software Defined Cloud Storage for Disaggregation"</article-title>
          ,
          <source>Proc. IEEE Service and Cloud Computing</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>B.</given-names>
            <surname>Schroeder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Lagisetty</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Merchant</surname>
          </string-name>
          ,
          <article-title>"Flash Reliability in Production: The Expected and the Unexpected"</article-title>
          ,
          <source>USENIX FAST</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A.</given-names>
            <surname>Klimovic</surname>
          </string-name>
          et al.,
          <article-title>"Pocket: Elastic ephemeral storage for serverless analytics"</article-title>
          ,
          <source>USENIX OSDI</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>J.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kim</surname>
          </string-name>
          et al.,
          <article-title>"Finding and Fixing Performance Pathologies in Persistent Memory Software Stacks"</article-title>
          ,
          <source>ASPLOS</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Fedorova</surname>
          </string-name>
          ,
          <article-title>"Getting storage engines ready for fast storage devices"</article-title>
          ,
          <source>MongoDB Engineering Journal</source>
          ,
          <year>March 2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>R.</given-names>
            <surname>Meredith</surname>
          </string-name>
          ,
          <article-title>"All-NVMe Performance Deep Dive Into Ceph"</article-title>
          ,
          <source>SNIA Flash Memory Summit</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Zhang</surname>
          </string-name>
          et al.,
          <article-title>"High Performance Ceph All Flash Array Software Defined Storage Solutions with NVM Technologies"</article-title>
          ,
          <source>Flash Memory Summit</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>S.</given-names>
            <surname>Sen</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>"Distributed Block Storage using NVMe-oF"</article-title>
          ,
          <source>SNIA Storage Developer Conference</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>L.</given-names>
            <surname>Baumann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. B.</given-names>
            <surname>Abraxas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Militano</surname>
          </string-name>
          and
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Bohnert</surname>
          </string-name>
          ,
          <article-title>"Monitoring Resilience in a Rookmanaged Containerized Cloud Storage System"</article-title>
          ,
          <source>IEEE European Conf. on Networks and Comm. (EuCNC)</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>X.</given-names>
            <surname>Espinal</surname>
          </string-name>
          et al.,
          <article-title>"CERN data services for LHC computing"</article-title>
          ,
          <source>Journal of Physics: Conference Series</source>
          , vol.
          <volume>898</volume>
          , no.
          <issue>062028</issue>
          , pp.
          <fpage>8</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>G.</given-names>
            <surname>Bitzes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Sindrilaru</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Peters</surname>
          </string-name>
          ,
          <article-title>"Scaling the EOS namespace - new developments and performance optimization"</article-title>
          ,
          <source>EPJ Web Conferences</source>
          , vol.
          <volume>214</volume>
          , no.
          <issue>04019</issue>
          , pp.
          <fpage>8</fpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>H.</given-names>
            <surname>Gonzalez Labrador</surname>
          </string-name>
          et al.,
          <article-title>"CERNBox: the CERN cloud storage hub"</article-title>
          ,
          <source>EPJ Web of Conferences</source>
          , vol.
          <volume>214</volume>
          , no.
          <issue>04038</issue>
          , pp.
          <fpage>8</fpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>R. P.</given-names>
            <surname>Taylor</surname>
          </string-name>
          et al.,
          <article-title>"Consolidation of cloud computing in ATLAS"</article-title>
          ,
          <source>Journal of Physics: Conferences Series</source>
          , vol.
          <volume>898</volume>
          , no.
          <issue>052008</issue>
          , pp.
          <fpage>8</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>L.</given-names>
            <surname>Bauerdick</surname>
          </string-name>
          et al.,
          <article-title>"Experience in using commercial clouds in CMS"</article-title>
          ,
          <source>IOP Conf. Series: Journal of Physics: Conferences Series</source>
          , vol.
          <volume>898</volume>
          , no.
          <issue>052019</issue>
          , pp.
          <fpage>8</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>O.</given-names>
            <surname>Gutsche</surname>
          </string-name>
          et al.,
          <article-title>"CMS Analysis and Data Reduction with Apache Spark"</article-title>
          ,
          <source>Journal of Physics: Conference Series</source>
          , vol.
          <volume>1085</volume>
          , no.
          <issue>042030</issue>
          , pp.
          <fpage>6</fpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <surname>Markov</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barabanov</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tsirlov</surname>
            <given-names>V</given-names>
          </string-name>
          .
          <article-title>Models for Testing Modifiable Systems</article-title>
          . In Book:
          <article-title>Probabilistic Modeling in System Engineering</article-title>
          , by ed.
          <source>A.Kostogryzov. IntechOpen</source>
          ,
          <year>2018</year>
          , Chapter
          <issue>7</issue>
          , pp.
          <fpage>147</fpage>
          -
          <lpage>168</lpage>
          . DOI: 10.5772/intechopen.75126.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <surname>Markov</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barabanov</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tsirlov</surname>
            <given-names>V</given-names>
          </string-name>
          .
          <article-title>Periodic Monitoring and Recovery of Resources in Information Systems</article-title>
          . In Book:
          <article-title>Probabilistic Modeling in System Engineering</article-title>
          , by ed.
          <source>A.Kostogryzov. IntechOpen</source>
          ,
          <year>2018</year>
          , Chapter
          <issue>10</issue>
          , pp.
          <fpage>213</fpage>
          -
          <lpage>231</lpage>
          . DOI: 10.5772/intechopen.75232.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>M.</given-names>
            <surname>Martinez Pedreira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Grigoras</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Yurchenko</surname>
          </string-name>
          ,
          <article-title>"JAliEn: the new ALICE high-performance and high-scalability Grid framework"</article-title>
          ,
          <source>EPJ Web of Conferences</source>
          , vol.
          <volume>214</volume>
          , no.
          <issue>03037</issue>
          , pp.
          <fpage>8</fpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>S.</given-names>
            <surname>Al-Kiswany</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Ripeanu</surname>
          </string-name>
          ,
          <article-title>"A Software-Defined Storage for Workflow Applications"</article-title>
          ,
          <source>2016 IEEE International Conference on Cluster Computing (CLUSTER)</source>
          , pp.
          <fpage>350</fpage>
          -
          <lpage>353</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>R.</given-names>
            <surname>Gracia-Tinedo</surname>
          </string-name>
          et al.,
          <article-title>"IOStack: Software-Defined Object Storage"</article-title>
          ,
          <source>IEEE Internet Computing</source>
          , vol.
          <volume>20</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>10</fpage>
          -
          <lpage>18</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>R.</given-names>
            <surname>Gracia-Tinedo</surname>
          </string-name>
          et al.,
          <article-title>"Crystal: Software-Defined Storage for Multi-tenant Object Stores"</article-title>
          ,
          <source>Proceedings of the 15th USENIX Conference on File and Storage Technologies (FAST '17)</source>
          , pp.
          <fpage>243</fpage>
          -
          <lpage>256</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>R.</given-names>
            <surname>Macedo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Paulo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pereira</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Bessani</surname>
          </string-name>
          ,
          <article-title>"A Survey and Classification of Software-Defined Storage Systems"</article-title>
          , ACM Computing Surveys, vol.
          <volume>53</volume>
          , no.
          <issue>48</issue>
          , pp.
          <fpage>38</fpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>H.</given-names>
            <surname>Rousseau</surname>
          </string-name>
          et al.,
          <article-title>"Providing large-scale disk storage at CERN"</article-title>
          ,
          <source>European Physics Journal Conferences</source>
          , vol.
          <volume>214</volume>
          , no.
          <issue>04033</issue>
          , pp.
          <fpage>7</fpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [40]
          <string-name>
            <surname>Barabanov</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Markov</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tsirlov</surname>
            <given-names>V</given-names>
          </string-name>
          .
          <article-title>Procedure for Substantiated Development of Measures to Design Secure Software for Automated Process Control Systems</article-title>
          .
          <source>In Proceedings of the 12th International Siberian Conference on Control and Communications</source>
          (Moscow, Russia, May 12-14,
          <year>2016</year>
          ).
          <source>SIBCON 2016</source>
          . IEEE, pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          . DOI: 10.1109/SIBCON.2016.7491660.
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [41]
          <string-name>
            <surname>Barabanov</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grishin</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Markov</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tsirlov</surname>
            <given-names>V.</given-names>
          </string-name>
          .
          <article-title>Current Taxonomy of Information Security Threats in Software Development Life Cycle</article-title>
          .
          <source>In: 2018 IEEE 12th International Conference on Application of Information and Communication Technologies (AICT)</source>
          (17-19 Oct 2018, Almaty, Kazakhstan). IEEE,
          <year>2018</year>
          , pp.
          <fpage>356</fpage>
          -
          <lpage>361</lpage>
          . DOI: 10.1109/ICAICT.2018.8747065.
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>Sergei</given-names>
            <surname>Petrenko</surname>
          </string-name>
          ,
          <article-title>Developing a Cybersecurity Immune System for Industry 4.0</article-title>
          , © 2020 River Publishers, River Publishers Series in Security and Digital Forensics. ISBN:
          <isbn>9788770221887</isbn>
          , eISBN:
          <isbn>9788770221870</isbn>
          , 386 p., https://www.riverpublishers.com/book_details.php?book_id=764 (Scopus).
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>Sergei</given-names>
            <surname>Petrenko</surname>
          </string-name>
          . Cyber Resilience, ISBN:
          <isbn>978-87-7022-11-60</isbn>
          (Hardback) and
          <isbn>877-022-11-62</isbn>
          (Ebook), © 2019 River Publishers, River Publishers Series in Security and Digital Forensics, 1st ed.
          <year>2019</year>
          , 492 p., 207 illus.
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [44]
          <string-name>
            <surname>Petrenko</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>Cyber resilient platform for internet of things (IIoT/IoT)-based systems: survey of architecture patterns</article-title>
          .
          <source>Voprosy kiberbezopasnosti</source>
          .
          <year>2021</year>
          . N
          <volume>2</volume>
          (
          <issue>42</issue>
          ). P.
          <fpage>81</fpage>
          -
          <lpage>91</lpage>
          . DOI: 10.21681/2311-3456-2021-2-81-91.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>