<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Deep Blockchain to Enable Scalable Web Applications</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yajna Pandith</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Bengaluru</institution>
          ,
          <addr-line>Karnataka</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The work delves into the exploration of deep blockchain architecture involving the introduction of higher-layer blockchains, which summarize their blocks through anchor transactions integrated into the blocks of lower-layer blockchains. The architecture is structured as follows: (I) Layer 1 - MainNet: This layer serves as the repository for registered Layer 3 blockchain roots and Layer 2 Block Merkle roots. (II) Layer 2 - Plasma Cash chain: This layer facilitates the storage of Plasma tokens, which can be redeemed for bandwidth, along with Layer 3 Block hashes. (III) Layer 3 - Multiple blockchains: These blockchains leverage the storage and bandwidth capabilities provided by Layer 2, enabling seamless packaging of NoSQL/SQL transactions and similar database operations. Sparse Merkle Trees are employed extensively, demonstrating their efficacy in delivering provable data storage through the use of Deep Merkle Proofs. Our objective is to present results highlighting remarkable throughput and low latency for Layer 3 blockchains built upon economically secure Layer 2 Plasma Cash blockchains. Collectively, these advancements lay a solid foundation for the development of scalable web applications. Our research paves the way for innovative solutions in various industries that can scale modern web applications successfully, ensuring unwavering data integrity, enhanced security, and optimized efficiency.</p>
      </abstract>
      <kwd-group>
<kwd>deep blockchain</kwd>
        <kwd>data storage</kwd>
        <kwd>web applications</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>Layer 1 blockchains such as Ethereum and Bitcoin, on their own, cannot support the latency and throughput needs of modern web applications [1]. Attempting to support higher throughput or lower latency with naive solutions (e.g. larger blocks, lower security consensus algorithms, etc.) [2] sacrifices the core benefits of layer 1 blockchains. It is unnecessary to make these sacrifices in the name of scalability for blockchains: when one blockchain is capable of storing and retrieving state, then another blockchain’s summary state variables may be stored there. This can be done in layers, where Layer i+1 blockchain’s state is stored in Layer i blockchains and each blockchain uses a well-motivated consensus engine to achieve Byzantine fault tolerance [3]. Using this layered approach, the key elements of a deep blockchain architecture can be specified. The blockchain paradigm [4] that forms the backbone of all decentralized consensus-based transaction systems to date is as follows. A valid state transition for a blockchain of Layer i is one which comes about through a transaction T:</p>
      <p>σ_{t+1} = Υ_i(σ_t, T) (1)</p>
      <p>where Υ_i is the Layer i blockchain state transition function, while σ enables components to retain arbitrary state between transactions. Transactions are organized into blocks, which are interlinked through a parent hash within each block to reference the preceding block. Together, these blocks serve as a ledger, with block hashes employed to identify the final state:</p>
      <p>σ_{b+1} ≡ Π_i(σ_b, B) (2)</p>
      <p>B ≡ (..., (T_0, T_1, ...)) (3)</p>
      <p>Π_i(σ, B) ≡ Ω_i(B, Υ_i(Υ_i(σ, T_0), T_1, ...)) (4)</p>
      <p>where Ω_i is the block finalization state transition function for layer i, B_b is the bth block of layer i (which collates transactions and other components), and Π_i is the block-level state transition function for layer i.</p>
      <p>In a deep blockchain system, a blockchain layer i is said to be connected to layer i+1 if:</p>
      <p>1. there exists a transaction mapping function Λ_i^{i+1} mapping blocks at layer i+1 into transactions T at layer i for all layer i+1 blocks B^{i+1}:</p>
      <p>T ≡ Λ_i^{i+1}(B^{i+1}) (5)</p>
      <p>2. there exists a mapping function Ξ_i(σ) retrieving from blockchain layer i a mapping σ(B^{i+1}) of the block state of layer i+1 for all blocks B^{i+1}:</p>
      <p>Ξ_i(σ) ≡ σ(B^{i+1}) (6)</p>
      <p>A natural choice for the transaction mapping Λ_i^{i+1}(B^{i+1}) may be to include a block hash H^{i+1} of the block B^{i+1} as a transaction T [5], and for the lower layer to provide the block hash back (see Figure 1 (left)). This paper demonstrates a deep blockchain system for provable storage, situating a “Plasma Cash” design [6] in a Layer 2 blockchain and NoSQL/SQL/File storage for any number of Layer 3 blockchains (see Figure 2).</p>
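The Λ/Ξ connection above can be sketched in Go (the implementation language used later in the paper). This is a toy model under stated assumptions, not the production code: the Block and Chain types, the Anchor and Lookup names, and SHA-256 as the block hash are all illustrative choices.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Block is a simplified block for any layer: a parent hash linking it to the
// previous block plus an opaque transaction payload.
type Block struct {
	Parent [32]byte
	Txs    [][]byte
}

// Hash returns a SHA-256 digest over the parent hash and transactions.
func (b Block) Hash() [32]byte {
	h := sha256.New()
	h.Write(b.Parent[:])
	for _, tx := range b.Txs {
		h.Write(tx)
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

// Chain is a toy layer-i blockchain; anchors maps a layer-(i+1) block hash to
// the index of the layer-i block that recorded it (the Ξ mapping).
type Chain struct {
	blocks  []Block
	anchors map[[32]byte]int
}

// Anchor implements the transaction mapping Λ: the hash of an upper-layer
// block becomes an ordinary transaction in the next lower-layer block.
func (c *Chain) Anchor(upper Block) {
	h := upper.Hash()
	var parent [32]byte
	if n := len(c.blocks); n > 0 {
		parent = c.blocks[n-1].Hash()
	}
	c.blocks = append(c.blocks, Block{Parent: parent, Txs: [][]byte{h[:]}})
	if c.anchors == nil {
		c.anchors = map[[32]byte]int{}
	}
	c.anchors[h] = len(c.blocks) - 1
}

// Lookup implements Ξ: retrieve where an upper-layer block was anchored.
func (c *Chain) Lookup(h [32]byte) (int, bool) {
	i, ok := c.anchors[h]
	return i, ok
}

func main() {
	var layer1 Chain
	l2block := Block{Txs: [][]byte{[]byte("transfer token 7")}}
	layer1.Anchor(l2block)
	i, ok := layer1.Lookup(l2block.Hash())
	fmt.Println(ok, i) // the layer 2 block is provably anchored in a layer 1 block
}
```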
<p>Historically, the low throughput and high latency of Layer 1 blockchains resulted in immediate pressure to drive activities off-chain [7], but only a few “off-chain” attempts can be considered deep blockchains, because they lack connected blockchains. Layer i+1 and layer i may be explicitly connected in a deep blockchain system for many different reasons:</p>
      <p>1. Higher throughput services at layer i+1 may be paid for using the value held in layer i currency</p>
      <p>2. Storing a limited set of information about layer i+1 in layer i may support the security and provenance of layer i+1</p>
      <p>3. Proof of fraud at layer i+1 can be used for economic consequences at layer i</p>
      <p>The nascent label “Layer 2” encompasses many newly developing notions, ranging from state channels to almost any approach that may help Layer 1 scale (e.g. bigger blocks). The term “deep blockchain” is not used for all Layer 2 notions but specifically for any situation where one or more blockchains are connected in the above way.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Layer 2: Plasma Cash Blockchain</title>
      <p>Seminal insights on multi-layer blockchains were put forth by [8], which have inspired many “Plasma” designs, and specifically motivated our implementation of what has been termed “Plasma Cash” for tracking storage and bandwidth balances. The Layer 2 Plasma Cash blockchain is connected to Layer 1 using the following trust primitives:</p>
      <p>• User Deposit: When Alice wishes to use the services enabled by the Layer 2 blockchain, Alice deposits some Layer 1 currency (e.g. 0.01 ETH or 1 WLK) in a Layer 1 contract function (createBlockchain); the deposit event results in Alice owning a Layer 2 token t through a Layer 2 Deposit transaction included in a Layer 2 block.</p>
      <p>• User Transfer: When Alice wishes to transfer her Layer 2 token t to another user Bob or the Plasma operator Paul, Alice signs a Layer 2 token transfer transaction specifying the recipient and the previous block. This Layer 2 transaction is included on the Layer 2 blockchain by Paul.</p>
      <p>• Layer 2 Block Connection: The operators of the Layer 2 blockchain mint new Layer 2 blocks B_2 with a collation of Layer 2 transactions T_2 (with a consensus protocol such as Quorum RAFT and POA in permissioned networks or Ethereum Casper for permissionless networks) from the User Deposit and User Transfer transactions. Each Layer 2 block B_2 has its Merkle root R_2 submitted to Layer 1 with a transaction t_2 = submitBlock(R_2, sig) recorded in a Layer 1 block B_1. The recipient Bob of a token transfer must receive the full history of all transactions from Alice and verify it against these Merkle roots R_2 stored in Layer 1, all the way to the original deposit. If any transaction in the history cannot be verified by Bob, Bob cannot accept Alice’s token as payment.</p>
      <p>• User Exit: When Alice wishes to withdraw her token t for Layer 1 cryptocurrency, she calls the startExit function with the last 2 transactions (two transactions are indicative, but not conclusive concerning Alice’s ownership, which is why a user challenge process is required), which can be verified against Merkle proofs that must match the stored Merkle roots to be a valid exit; if no one challenges the exit, Alice receives the outstanding token balance within a short time period when exits are finalized.</p>
      <p>• User Challenges: If the operator or another user Charlie notices that Alice’s exit attempt is invalid, they submit a Merkle proof and are rewarded when a valid challenge indicates an invalid exit.</p>
      <p>Remarkably, users of the Layer 2 blockchain can conduct their business securely even when the Layer 2 operator has 100% control over the Layer 2 blockchain! At any sign of a malicious operator Paul, the Layer 2 users can exit, and all Layer 2 token values remain secure. How can practitioners reconcile instincts to pursue this objective:</p>
      <p>Blockchain 1.0 Objective: Maximize decentralization.</p>
      <p>with an obviously centralized operator? The answer is to pursue a more nuanced objective of</p>
      <p>Blockchain 2.0 Objective: Maximize the cost of
successful attacks.</p>
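The transfer-history check in the Layer 2 Block Connection above can be sketched as follows. This is a toy model: it assumes single-transaction “blocks” so that a block’s committed root is just the hash of its one transfer (real proofs carry Merkle branches against the submitted roots), and the Transfer type, txHash, and verifyHistory are hypothetical names.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Transfer is one historical spend of a Plasma Cash token.
type Transfer struct {
	Block uint64
	From  string
	To    string
}

// txHash stands in for the transaction's leaf hash in the block's Merkle tree.
func txHash(t Transfer) [32]byte {
	return sha256.Sum256([]byte(fmt.Sprintf("%d:%s->%s", t.Block, t.From, t.To)))
}

// verifyHistory replays every historical transfer: each must match the root
// committed on Layer 1 for its block, and ownership must chain from the
// original depositor to the final recipient.
func verifyHistory(history []Transfer, roots map[uint64][32]byte, depositor string) (string, bool) {
	owner := depositor
	for _, t := range history {
		if roots[t.Block] != txHash(t) { // degenerate "root" = single-tx block
			return "", false
		}
		if t.From != owner { // ownership must chain without gaps
			return "", false
		}
		owner = t.To
	}
	return owner, true
}

func main() {
	h := []Transfer{{2, "alice", "bob"}, {5, "bob", "carol"}}
	roots := map[uint64][32]byte{2: txHash(h[0]), 5: txHash(h[1])}
	owner, ok := verifyHistory(h, roots, "alice")
	fmt.Println(owner, ok) // carol true
}
```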
      <sec id="sec-3-1">
        <p>With the Plasma Cash construct, the Blockchain 2.0 Objective is achieved with:</p>
      </sec>
      <sec id="sec-3-2">
        <p>1. Layer 1 Smart Contracts supporting a Layer 2 Connection to Layer 1 storage that collectively
make the cost of attacking the Layer 2 blockchain the same as the cost of attacking the Layer 1 blockchain – for Ethereum and Bitcoin Layer 1 blockchains, this is the famous “51% attack”; for others it might be whatever is required to control the state of that Layer 1 blockchain.
2. Layer 1 Cryptocurrency being used for value transfer of services between users of the Layer 2 blockchain and the Layer 2 operator, mediated through deposits, token transfers and exits implemented by Layer 1 constructs.
With the Layer 2 Block Connection and trust primitives in place, Layer 2 can operate at much higher throughput than Layer 1 because of its reduced consensus, while continuing to inherit Layer 1’s cost of attack and achieving the more fundamental objective. Therefore practitioners of deep blockchain engineering must develop different instincts, incorporating different software trust primitives between different constructed layers to achieve the same objective, depending on the structure between the layers and the value unlocked in each.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. Deep Blockchains for Provable Data Storage</title>
      <sec id="sec-5-1">
        <p>The specific deep blockchain system that we have developed extends the Blockchain 2.0 Objective up one more layer
by incorporating trust primitives (block transactions, Sparse Merkle Trees) in provable NoSQL, SQL and Storage services, shown in Figure 2: Layer 3 NoSQL, SQL and Storage blockchains rest on the storage and bandwidth services of Layer 2, which supervene on the decentralized computation and payment services of Layer 1. Our work follows Ethereum SWARM’s foundational work on storage and bandwidth [9], which outlines the following ideas situated in multiple layers:</p>
        <p>• A chunk of bytes v is stored in Cloudstore using the 256-bit hash h = H(v) as the key to retrieve v. Nodes that request a chunk by key h can verify correctness of the value v returned from Cloudstore simply by checking that h = H(v).
• Insurers of chunks can earn Layer 1 currency with valid Merkle proofs; failure to provide valid proofs results in severe insurance payouts.
• Bandwidth consumed by a node, when it hits the node’s threshold, must result in signed payments.
Layer 1 blockchains were initially developed without concern for storage models being competitive with cloud computing platforms, or even a passing concern for bandwidth; the birth of Bitcoin and Ethereum Layer 1 focused on trustless payments and trustless computation mediated by a peer-to-peer network, rather than on nodes providing decentralized storage [10]. In contrast, decentralized storage networks, as manifested in Ethereum SWARM and many other systems, promise a large peer-to-peer network of nodes sharing the responsibility to keep a portion of the world’s data, compensated proportionately for the commodity storage and bandwidth they provide. In these networks, a distributed hash table (typically, with Kademlia routing layers) is used for logarithmic lookups of chunks, but in practice, O(log2(n)) retrieval times are just not competitive with modern UI expectations or typical developer expectations. Nevertheless, decentralized storage networks have a critical role to play in providing censorship-resistance. Rather than have layer 3 rest solely on a decentralized storage network (which is slow but resilient and censorship-resistant), layer 3 can rest on both decentralized storage networks and mature modern cloud computing platforms. Again, the Blockchain 1.0 Objective must be displaced in favor of the Blockchain 2.0 Objective: in this sense, more storage variety increases the cost of attack.</p>
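The content-addressed chunk idea above (h = H(v), verifiable by any reader) can be sketched as a minimal in-memory stand-in for Cloudstore. The StoreChunk/RetrieveChunk names echo the Layer 2 API described below, but the map-backed store and the verify-on-read behavior are illustrative assumptions, not the actual service.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Cloudstore sketch: chunks keyed by the 256-bit hash of their content, so
// any reader can verify a returned chunk without trusting the store.
type Cloudstore map[[32]byte][]byte

// StoreChunk writes v under its content hash and returns the key k = H(v).
func (cs Cloudstore) StoreChunk(v []byte) [32]byte {
	k := sha256.Sum256(v)
	cs[k] = v
	return k
}

// RetrieveChunk returns the value and whether it verifies against its key.
func (cs Cloudstore) RetrieveChunk(k [32]byte) ([]byte, bool) {
	v, ok := cs[k]
	if !ok {
		return nil, false
	}
	return v, sha256.Sum256(v) == k // the k = H(v) check
}

func main() {
	cs := Cloudstore{}
	k := cs.StoreChunk([]byte(`{"doc":1}`))
	v, ok := cs.RetrieveChunk(k)
	fmt.Println(ok, string(v))
}
```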
        <p>Putting the elements together in a deep blockchain system for provable storage:</p>
        <p>• Layer 1 blockchain: When a developer wishes to have a Layer 3 blockchain for NoSQL/SQL/Storage, they send Layer 1 currency into createBlockchain(blockchainName string) on MainNet; this can be refunded with a dropBlockchain(blockchainName string) operation (taking the place of startExit). When storage is used in blockchainName through the activities of Layer 3 blockchains (as recorded by the Layer 2 blockchain below), this balance goes down. Balances can be added to and withdrawn by the owner of the blockchain.</p>
        <p>• Layer 2 Plasma Cash blockchain: The storage and retrieval of chunks in Cloudstore are exposed to Layer 3 blockchains with the following 2 APIs (see Appendix A):
– storeChunk(k, v, t, sig) - stores a key-value pair mapping (k, v) in Cloudstore, backed by Layer 2 token t (signed with sig) received from the Layer 1 transaction.
– retrieveChunk(k, t, sig) - retrieves a key-value pair mapping (k, v) from Cloudstore, backed by Layer 2 token t (again, signed with sig), and returning the balance of t used so far.</p>
      </sec>
      <sec id="sec-5-2">
        <p>The Layer 2 operator will store via Cloudstore in as many regions and cloud providers as
necessary to insure the chunk, as follows: a new type of Layer 2 block transaction insures a set of chunks recorded through storeChunk calls. These chunks originate from any Layer 3 blockchain needing storage and bandwidth, where bandwidth is used in retrieveChunk calls. When a Layer 3 blockchain mints Layer 3 blocks, the Layer 3 blocks themselves contain a Cloudstore key that references a list of chunks written in the Layer 3 block. The block itself is stored in Cloudstore with another storeChunk call, signed by the Layer 3 blockchain owner, and the block hash h_3 is submitted by the Layer 3 blockchain to the Layer 2 blockchain via a submitBlock(h_3, sig) block transaction. This enables the Layer 2 blockchain to meter the cumulative storage of blockchainName and deduct from the balance originally deposited in the createBlockchain(blockchainName string) operation (approximately every 24 hours), passing on Cloudstore costs to Layer 3 blockchains. Notably, Layer 3 blocks themselves are recorded with storeChunk(k, v, t, sig) to store the Layer 3 block in Cloudstore, which then results in a call to submitBlock(h_3, sig):</p>
        <p>t_3 ≡ submitBlock(h_3, sig) (7)</p>
        <p>Because the block storage is signed and because the block transactions are signed, Layer 2 operators collect storage payments with the Layer 3 blockchain operator’s consent, forming a kind of “state channel” within the deep blockchain. Taken together, this is the Layer 3 Block Connection, as seen in Figure 1. The Layer 2 block consists of:
– the transaction root TransactionRoot_2 that utilizes the SMT structure to represent just the tokens t_1, t_2, . . . spent in the block</p>
        <p>TransactionRoot_2 ≡ SMT((t_1, tx_1), (t_2, tx_2), ...) (8)</p>
        <p>– the token root TokenRoot_2 for all tokens t_1, t_2, ...</p>
        <p>TokenRoot_2 ≡ SMT((t_1, tx_1), (t_2, tx_2), ...) (9)</p>
        <p>– an array of token transactions T_2
– an array of block transactions T̃_2 from all Layer 3 blockchain operators using Layer 2 services
– an account root, using an SMT to store an account’s “balance” and a list of tokens held by that account.</p>
        <p>• Layer 3 blockchains: Any number of Layer 3 blockchains that utilize storage and bandwidth can be layered on top of the Layer 2 blockchain, regularly submitting lists of chunks based on the structure of the Layer 3 blockchain.</p>
        <p>– For NoSQL + File Storage, there is a key for each row of NoSQL or File, and a value for the row (a JSON record) or raw file contents; any new database content results in new chunks, where each chunk is referenced by the hash of its content.
– For SQL, there is a root hash for each database, where the root hash changes when any table is added/removed or when any table schema is updated, and where each table has a root hash that changes when any record of the table is changed; any new database content results in new chunks, where each chunk is referenced by the hash of its content.</p>
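The root-hash cascade described in these bullets — a record change propagating up through its table root to the database root — can be sketched as follows. The tableRoot/databaseRoot helpers are illustrative assumptions that hash sorted key-value pairs; the actual chunked SMT representation is described in Section 4.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// tableRoot commits to all records of one table, in sorted key order so the
// root is independent of insertion order.
func tableRoot(records map[string][]byte) [32]byte {
	keys := make([]string, 0, len(records))
	for k := range records {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write(records[k])
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

// databaseRoot commits to all table roots, so any record change anywhere
// propagates up to a new database root.
func databaseRoot(tables map[string]map[string][]byte) [32]byte {
	names := make([]string, 0, len(tables))
	for n := range tables {
		names = append(names, n)
	}
	sort.Strings(names)
	h := sha256.New()
	for _, n := range names {
		r := tableRoot(tables[n])
		h.Write([]byte(n))
		h.Write(r[:])
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	db := map[string]map[string][]byte{
		"person": {"1": []byte(`{"name":"alice"}`)},
	}
	before := databaseRoot(db)
	db["person"]["1"] = []byte(`{"name":"bob"}`) // change one record...
	fmt.Println(before != databaseRoot(db))      // ...and the database root changes
}
```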
      </sec>
      <sec id="sec-5-3">
        <p>Both the NoSQL and SQL blockchains are described in Section 5. Two of the elemental SMT operations used throughout are:</p>
        <p>• delete(k) - deletes the key k by inserting the null value for k into the SMT
• get(k) - gets the value from the SMT through node / chunk traversal</p>
        <p>Just as with Layer 1 blockchain nodes, running Layer 2 and Layer 3 blockchains consists of running a node within the framework of a decentralized system, retrieving and relaying messages about new transactions and new blocks. Wolk’s blockchain implementations of Layer 2 and Layer 3 originated from Ethereum’s go-ethereum and JPMorgan’s Quorum RAFT code bases, written in Golang. RAFT is used for both the Layer 2 and Layer 3 implementations due to its simple model of finality. For each blockchain, a Golang package is created containing each of the interfaces specified in Appendix A, with the Quorum RAFT code adapted to conform to these interfaces. There is no explicit assumption that permissioned consensus algorithms be used, however. The choice of RAFT was made purely out of simplicity, its maturity as a code base, and its capacity for high throughput – any consensus protocol that achieves finality can fit within this deep blockchain architecture. For both the Layer 2 and Layer 3 blockchains, a Sparse Merkle Tree is used to support provable data storage.</p>
        <p>Typically, block proposals with SMTs as a core data structure involve bulk combinations of the above operations, with many inserts and deletes mutating the content of many chunks, and the Merkle root only being computed as a final step. Sparse Merkle Trees are better suited as a core primitive than the more familiar Binary Merkle Trees (BMTs) because:</p>
        <p>• when an id (a tokenID, a document key in NoSQL, a URL in File storage, a table root in SQL) is mapped to a value, you can guarantee that the id has exactly one position in the tree, which you don’t get with BMTs.
• when an id is NOT present in the SMT, you can also prove it with the same mechanism. This approach proves beneficial in situations where Bloom filters produce erroneous matches.
• A Merkle proof for the id mapped to a specific value is straightforward, and because of sparseness the number of bytes required is much less than the depth of the tree.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>4. Sparse Merkle Trees and Provenance</title>
      <sec id="sec-7-1">
        <p>The key concept behind SMTs is the efficient representation of included IDs using k hashes at k out of 2^q leaves. The Sparse Merkle Tree (SMT) is a persistent data structure that maps fixed q-bit keys to 256-bit values in an abstract tree of height q with 2^q leaves for any set I:</p>
        <p>I = {(k_0 ∈ B_q, v_0 ∈ B_256), (k_1 ∈ B_q, v_1 ∈ B_256), . . .} (10)</p>
        <p>The function of the SMT is to provide a Merkle root hash that uniquely identifies a given set of key-value pairs I, a set containing pairs of byte sequences. Each key stored in the SMT defines a Merkle branch down to one of the 2^q leaves, and the leaf holds only one possible value for that key in I. The bits of the q-bit key define the path to be traversed, with the most significant bit at height q − 1 and the least significant bit at height 0. Following [11] and [12], to compute the Merkle root of any SMT in practice and allow for the ideal computation of the k Merkle branches, it is useful to pre-compute a set of default hashes D(h) for all heights h from 0 . . . q − 1 (shown in Figure 3):</p>
        <p>• At level 0, D(0) ≡ H(0)
• At level h, D(h) ≡ H(D(h − 1), D(h − 1))</p>
        <p>Logarithmic insertion, deletion and retrieval operations on the SMT are defined with elemental operations:</p>
        <p>• insert(k, v) - inserts the key by traversing chunks using the bytes of k</p>
        <p>Each ID, represented as a q-bit number, is associated with either a null value or its corresponding hash at a leaf node. Instead of using a Merkle proof consisting of 64 32-byte hashes from the leaf to the root, a compact representation can be achieved using proofBits, a q-bit value (e.g., uint64). Each bit in proofBits indicates whether the siblings on the path to the root use default hashes (0) or non-default hashes stored in proofBytes (1). The proofBytes array exclusively consists of non-default hashes, while the value stored at the leaf level is the 32-byte RLP hash.</p>
        <p>For the Layer 2 Block Connection, a call to the checkMembership(bytes32 leaf, bytes32 root, uint64 tokenID, uint64 proofBits, bytes proofBytes) helper function in Ethereum MainNet can take proofBits and proofBytes and prove that an exit or challenge is valid if it matches the Merkle roots provided by the Plasma operator in a call to submitBlock(bytes32 root).</p>
        <p>Similarly, when a user receives a token from another user, they must obtain the tokenID, along with the raw transaction bytes and Merkle proofs for each block in the token’s history. Each Merkle proof corresponds to a specific block and verifies the token spend. It’s important to note that a non-spend can also be proven, where the leaf is represented by H(0).</p>
        <p>In the optimal scenario, an SMT representing a single key-value mapping (k = 1) reduces the proof size significantly. Instead of a 64 × 32 byte proof, the entire path from level 0 to level 63 consists of default hashes, and proofBits is a 64-bit value filled with zeros (0x0000000000000000). In this case, proofBytes is empty, and the uint64 value is 0, resulting in the most compact proof size possible: 8 bytes.</p>
        <p>In the next favorable scenario, considering 2 ids (for example, 0x01234... and 0x89abc...), the proof of spend for each token would include a single non-default hash at the topmost level 63, and proofBits would consist of the value 1 followed by 63 zeros (0x8000000000000000). The resulting proof size would be 40 bytes.</p>
        <p>In typical scenarios, SMTs exhibit high node density in the upper levels, ranging from level q − 1 down to approximately level log2(k). To illustrate this, consider a situation where you have 10MM Layer 2 tokens, and each token undergoes 500 transactions per token per year. This results in a total of 5B transactions for the 10MM tokens annually. Assuming a Layer 2 block frequency of 15s/block, these 5B transactions would be distributed across 2.1MM blocks per year, with an average of 2,378 transactions per Layer 2 block (500 × 10⁷ × 15 / (86400 × 365)). When incorporating these 2,378 transactions into an SMT, given that log2(2378) = 11.2, you will have a densely populated set of nodes, mostly consisting of non-default hashes, from level 63 down to approximately level 53. Below that, you will have only one tokenID extending all the way to level 0. The proof size would amount to 32 bytes per level, resulting in a total of 320 bytes for 10 levels.</p>
        <p>q = 64 is chosen instead of q = 256 because:</p>
        <p>• collisions are still unlikely at q = 64 ... until around 4B keys
• the proofBits are 24 bytes smaller (uint64 instead of uint256)
• less gas is spent in checkMembership on all 0 bits in proofBits
• a smaller 64-element array of default hashes is computed instead of 256 hashes</p>
        <p>Reducing the frequency of hashing leads to decreased gas consumption and increased user satisfaction, particularly in the Layer 2 block connection. In this context, it ensures that collisions between circulating tokenIDs can be definitively ruled out during deposit events. Moreover, you can combine the fixed-length proofBits and variable-length proofBytes into a single proof bytes input for exits, i.e. startExit(uint64 tokenID, bytes txBytes1, bytes txBytes2, bytes proof1, bytes proof2, int blk1, int blk2). The analogous challenge interfaces will then have fewer argument inputs in the same way.</p>
        <p>Figure 3: Sparse Merkle Tree Illustration: Merkle branches for 2 64-bit keys k1 = 001..00 and k2 = 101.. hold H(v1) and H(v2) in a unique SMT root r for the 2-key set I = {(k1, v1), (k2, v2)}. Since there are only 2 keys in this tree, the default hashes D(h) (outlined in red) appear starting at level 62, so the branches b1, b2 (shown in blue circles) have Sparse Merkle proofs using default hashes from level 0 to level 62, which can be specified in a proofBits parameter. This makes for very tiny proofs and lower gas costs on MainNet.</p>
        <p>The sparseness of the SMT derives from the observation that keys will extremely rarely share paths at increasingly lower heights and naturally will share paths at increasingly higher heights. This lends itself to a representation where the SMT is chunked by key byte, where traversing the SMT from a root chunk (representing a range of keys from 0 to 2^64 − 1) down to an intermediate chunk with just one leaf involves processing one additional byte, with each chunk of data storage having up to 256 child chunks, each child possessing a range that is 256 times smaller. Just as with a radix tree, the SMT is traversed from root to leaf, with an additional byte of the key causing a read of a chunk that represents up to 256 branches and the hashes of all the branches, utilizing default hashes. A Golang “smt” package is implemented along with a “cloud” package to map SMT operations into Cloudstore.</p>
        <sec id="sec-7-1-0">
          <title>5. Layer 3 Blockchains</title>
          <p>With the foundations of Layer 2 providing storage and bandwidth paid for with Layer 2 tokens, any number of Layer 3 blockchains may be constructed. The construction of a NoSQL and SQL blockchain is detailed here. At a high level, Layer 3 blockchains collate SQL and NoSQL transactions in Layer 3 blocks and submit block transactions to Layer 2, and Layer 2 collates token and block transactions, with Merkle roots of the token root and blocks submitted and included in transactions to the Layer 1 blockchain. It then becomes possible to aggregate multiple proofs of inclusion at the highest layers all the way to MainNet with Deep Merkle Proofs, as illustrated here.</p>
        </sec>
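As a cross-check of the sizing example in Section 4 (2,378 transactions per Layer 2 block, SMT density down to about level 53), the arithmetic can be reproduced directly:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	const tokens = 10e6          // 10MM Layer 2 tokens
	const txPerTokenYear = 500.0 // transactions per token per year
	const blockTime = 15.0       // seconds per Layer 2 block

	blocksPerYear := 365.0 * 86400.0 / blockTime          // ≈ 2.1MM blocks/year
	txPerBlock := tokens * txPerTokenYear / blocksPerYear // ≈ 2,378 tx/block
	denseLevels := math.Log2(txPerBlock)                  // ≈ 11.2 dense levels

	fmt.Printf("%.0f blocks/yr, %.0f tx/block, dense down to about level %.0f\n",
		blocksPerYear, txPerBlock, 64-denseLevels)
}
```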
        <sec id="sec-7-1-1">
          <title>5.1. Layer 3 NoSQL Blockchain and Deep Merkle Proofs</title>
          <p>To support Layer 3 NoSQL transactions in a NoSQL
blockchain, the Layer 3 blockchain has a layer 3 block
structure defined as collating a set of NoSQL records
along with a Layer3KeyRoot of a Sparse Merkle Tree
managing a set of key-value pairs of “documents”.</p>
          <p>All NoSQL records are encrypted using counter mode (CTR) encryption, defining operations E(v, K) and D(v, K) and utilizing a database encryption key K known only to the layer 3 blockchain user. Three operations are defined, each of which maps into the SMT data structure:
• SetKey(k, v) - stores an arbitrary key-value pair k, v through a storeChunk Layer 2 operation and a Layer 3 SMT operation insert(H(k), H(E(v, K)))
• GetKey(k) - retrieves the value v stored in the SetKey(k, v) operation, through the Layer 3 operation get(H(k)), which returns h followed by D(retrieveChunk(h), K)
• DeleteKey(k) - removes k from the NoSQL database, by storing (H(k), 0) in the SMT; subsequent calls to GetKey(k) will not return a value.</p>
          <p>The minting of a new Layer 3 NoSQL block consists of taking each of the Layer 3 transactions (SetKey, DeleteKey) and executing storeChunk Layer 2 API calls for its users. Unless two transactions operate over the same key k, all transactions can be executed in parallel. If multiple transactions operate over the same key, only the last received transaction will have its mutation succeed. For example (shown in Figure 4):</p>
          <p>• In Layer 3 Block 302, the user wishes to store document ID 1 with key k1 mapped to encrypted value v1 and document ID 2 mapped to encrypted value v2. The user can submit 2 Layer 3 NoSQL transactions: [SetKey(k1 = 0b001...00, …</p>
          <p>A Deep Merkle Proof is formed through the aggregation of each proof of inclusion across each layer of blockchain connections in a deep blockchain down to Layer 1. In our 3-layer deep blockchain with the Layer 3 NoSQL blockchain layered on the Layer 2 Storage / Plasma Cash blockchain, there is a Layer 2-3 connection and a Layer 1-2 connection. So a full Deep Merkle Proof that a NoSQL document k1, v1 is included in the deep blockchain all the way up to MainNet consists of:</p>
          <p>1. Layer 3 proof of inclusion of (H(k1), H(v1)) in the Layer 3 block Layer3KeyRoot – in our example, this would be that the value H(v1) hashes up to SMT root 302 = 0x83fc...</p>
          <p>• Finally, a Layer 1 Block (e.g. 10,000,002) will be proposed by some MainNet miner including the above Layer 2 submitBlock transaction and eventually be finalized by the Layer 1 consensus protocol.</p>
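The SetKey/GetKey flow can be sketched in Go as a toy stand-in: plain maps replace the SMT and Cloudstore, AES-CTR provides E and D, and a fixed all-zero IV is used purely for brevity (a real system would derive a fresh counter per record). The NoSQL type and its fields are illustrative names, not the Wolk implementation.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/sha256"
	"fmt"
)

// NoSQL is a toy Layer 3 state: smt stands in for the SMT mapping
// H(k) -> H(E(v, K)), and chunks stands in for the Cloudstore ciphertexts.
type NoSQL struct {
	smt    map[[32]byte][32]byte
	chunks map[[32]byte][]byte
	key    []byte // database encryption key K, known only to the user
	iv     []byte
}

// ctr applies AES-CTR; in CTR mode, E and D are the same XOR operation.
func (n *NoSQL) ctr(v []byte) []byte {
	block, _ := aes.NewCipher(n.key)
	out := make([]byte, len(v))
	cipher.NewCTR(block, n.iv).XORKeyStream(out, v)
	return out
}

// SetKey stores E(v, K) as a chunk and records insert(H(k), H(E(v, K))).
func (n *NoSQL) SetKey(k, v []byte) {
	enc := n.ctr(v)
	h := sha256.Sum256(enc)
	n.chunks[h] = enc
	n.smt[sha256.Sum256(k)] = h
}

// GetKey looks up H(k) in the SMT, fetches the chunk, and decrypts it.
func (n *NoSQL) GetKey(k []byte) ([]byte, bool) {
	h, ok := n.smt[sha256.Sum256(k)]
	if !ok {
		return nil, false
	}
	return n.ctr(n.chunks[h]), true
}

func main() {
	db := &NoSQL{
		smt:    map[[32]byte][32]byte{},
		chunks: map[[32]byte][]byte{},
		key:    []byte("0123456789abcdef"), // 16-byte AES key (illustrative)
		iv:     make([]byte, aes.BlockSize),
	}
	db.SetKey([]byte("doc1"), []byte(`{"name":"alice"}`))
	v, ok := db.GetKey([]byte("doc1"))
	fmt.Println(ok, string(v))
}
```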
        </sec>
      </sec>
      <sec id="sec-7-2">
        <p>2. Layer 2 proof of inclusion of H(concat(blockchainName, k)) in the Layer 2 block root – in our example, this would be that the Layer 3 block hash 0b101..11 hashes up to SMT root 2002 = 0x4d69..</p>
        <p>3. Layer 1 proof of inclusion of the Layer 2 block hash in the blockHash array of the Layer 1 Smart Contract – in our example, this is that blockHash(2002) = 0xe8db</p>
        <p>In our implementation, deep Merkle proofs are provided in response to GetKey(k) to the layer 3 blockchain users via an optional deep boolean parameter; when true, the response returns the full combination of:</p>
        <p>• the Layer 3 Block, which includes the Layer3KeyRoot
• proofBits and proofBytes for the Layer3KeyRoot, which are shown to match H(k), H(E(v, K))
• the Layer 2 Block, which includes the BlockRoot
• proofBits and proofBytes for the BlockRoot, which are shown to match the Layer 2 Block Hash
• the Layer 1 blockHash record of the Layer 2 block number</p>
      </sec>
      <sec id="sec-7-3">
        <p>The concept of a deep Merkle Proof is not limited to 3
layer deep blockchains, nor is the concept only applicable
to NoSQL blockchains – the concept applies to multiple
layers of proof of inclusion enabled through the general
layering processes of deep blockchain systems generally.
The minting of a layer 3 block consists of the leader compiling each SQL transaction into a set of instructions to be executed by a “SQL Virtual Machine” (SVM) based on the widely used SQLite virtual machine. In this model, the virtual machine has a program counter that increments or jumps to another line after the execution of each opcode instruction. (Our current implementation supports single-table operations in full thus far, with relational database operations approachable with the same dynamics.) For example, a SQL statement "Select * from person" received by a node is mapped into an interpretable set of opcodes like this:
{"n":0,"opcode":"Init","p2":8,"p4":"select * from person"}
{"n":1,"opcode":"OpenRead","p2":2,"p4":"2"}
{"n":2,"opcode":"Rewind","p2":7}
{"n":3,"opcode":"Column","p3":1}
{"n":4,"opcode":"Column","p2":1,"p3":2}
{"n":5,"opcode":"ResultRow","p1":1,"p2":2}
{"n":6,"opcode":"Next","p2":3,"p5":1}
{"n":7,"opcode":"Halt"}
{"n":8,"opcode":"Transaction","p3":3,"p4":"0","p5":1}
{"n":9,"opcode":"Goto","p2":1}
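This opcode stream can be interpreted by a small program-counter loop. The following sketch is a simplified stand-in for the SVM, not its actual code: rows come from an in-memory list rather than chunks, and only the opcodes in the listing above are handled.

```python
# Minimal program-counter dispatch loop in the style of SQLite's VDBE.
# Handlers and table storage are illustrative simplifications.

def run_svm(program, tables):
    """Execute opcodes; the program counter increments or jumps after
    each instruction, exactly as described for the SVM model."""
    pc = 0
    cursor_rows, row_idx = [], 0
    registers, results = {}, []
    while True:
        op = program[pc]
        name = op["opcode"]
        if name == "Init":
            pc = op["p2"]                # jump to the Transaction opcode
            continue
        elif name == "Goto":
            pc = op["p2"]
            continue
        elif name == "OpenRead":
            cursor_rows = tables[op["p4"]]
        elif name == "Rewind":
            row_idx = 0
            if not cursor_rows:          # empty table: jump past the loop
                pc = op["p2"]
                continue
        elif name == "Column":
            registers[op["p3"]] = cursor_rows[row_idx][op.get("p2", 0)]
        elif name == "ResultRow":
            lo = op["p1"]
            results.append([registers[r] for r in range(lo, lo + op["p2"])])
        elif name == "Next":
            row_idx += 1
            if row_idx != len(cursor_rows):
                pc = op["p2"]            # loop back for the next row
                continue
        elif name == "Halt":
            return results
        pc += 1                          # Transaction and others fall through
```

Running the listed program against a two-row table returns both rows, one result row per iteration of the Rewind/Next loop.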
The database state is organized into chunks as follows:
• Database Schema chunk: represents up to 32 tables belonging to the “blockchainName”. Each table is identified by name (up to 32 bytes) and has a table chunk;
• Table chunk: represents up to 32 columns belonging to a specific “table”. Each column is identified by name (up to 27 bytes) and additional information: its column type (integer, string, float, etc.), whether it is a primary key, and any index information; a 32-byte chunk ID points to a potential index chunk, if the column is indexed. A table must have at least one primary key.
• Index chunk: a B+ tree, composed of intermediate “X” chunks and data “D” chunks. Each X chunk has 32-byte pointers to additional X chunks or D chunks. D chunks form an ordered doubly linked list, and contain pointers to record chunks.
• Record chunk: a 4K chunk of data that holds a JSON record for a keyed value.</p>
        <p>• When the owner of a Layer 3 blockchain creates
a new database, a database chunk is created and
the owner chunk is updated with the new database chunk
information. If this is the first database created by the
owner, the root hash of the owner is set for the
first time, as is the root hash of the database.
• When the owner of a Layer 3 blockchain creates
a new table, the database chunk is updated and
table chunk is created and the database chunk
is updated with the new table information. This
also causes the owner chunk to be updated with
the new database chunk information. The root
hash of the table is set for the first time in the
child chain.
• When the owner of a Layer 3 blockchain creates
or updates a table, this creates or changes the
database schema chunk. The database chunk is
then updated with the new schema information,
which in turn causes the owner chunk to be
updated with the new database chunk information.
• When an owner creates a new record in a table
with a SQL statement such as</p>
        <p>insert into account (id, v)")
values (42, "minnie@ethmail.com")
the index chunks (X chunks and D chunks) are
updated with new primary key information and
a record chunk is created in JSON form
Because the index chunk changes, the table chunk
changes. The root hash of the table is set for the
first time in the child chain. When an owner
updates a record in a table with a SQL statement
like</p>
        <p>update account set v =
the record has a new chunkID because of the new
JSON content
and so one or more index chunks are updated
with a new chunkID.
• When an owner drops a database, the owner
chunk is updated globally. Additionally, any
tables associated with the database at the time of
deletion should have their root hashes updated.
• When an owner deletes a table, the root hash of
the table is updated, the schema chunk is updated,
and the database chunk is updated with the new
schema chunk info and removing the table name.</p>
        <p>The owner chunk is then updated with the new
database chunk info.</p>
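        <p>The create-only chunk discipline described in the bullets above can be sketched as a content-addressed store: an "update" writes a new chunk and re-links the parent chunks, so each record write produces a new root hash for the table, database, and owner. The names and linking scheme below are illustrative assumptions, not the paper's storage API.

```python
import hashlib, json

class ChunkStore:
    """Content-addressed store: chunk ID is the hash of the chunk bytes,
    so chunks are only ever created, never updated."""
    def __init__(self):
        self.chunks = {}

    def store_chunk(self, payload: dict) -> str:
        data = json.dumps(payload, sort_keys=True).encode()
        chunk_id = hashlib.sha256(data).hexdigest()
        self.chunks[chunk_id] = data
        return chunk_id

def put_record(store, owner, db, table, record):
    """Write a record chunk, then re-link table, database, owner chunks;
    returns the new owner root hash and the re-linked chunks."""
    record_id = store.store_chunk(record)
    table = dict(table, records=table.get("records", []) + [record_id])
    table_id = store.store_chunk(table)
    db = dict(db, tables={**db.get("tables", {}), table["name"]: table_id})
    db_id = store.store_chunk(db)
    owner = dict(owner, databases={**owner.get("databases", {}), db["name"]: db_id})
    owner_id = store.store_chunk(owner)   # new owner root hash
    return owner_id, owner, db, table
```

Because every link is a hash of content, two writes yield two distinct owner roots while all earlier chunks remain retrievable.
        </p>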
        <p>When the leader node of a Layer 3 SQL blockchain mints
a Layer 3 block, it must include in its Layer 3 block:
• the SQL transactions – where for each table
referenced in the SQL, the leader must retrieve the
previous root hash of the table in the SMT and
execute the SVM operations for that table against
that SMT’s data.
• the Chunks newly written through the execution
of the SQL transactions, where chunks are only
created, and never “updated”.
• a new Layer3KeyRoot transaction and a call to
submitBlock(3, ): for all tables updated by
the SQL transactions, each table has a new root
hash. Using the Layer3KeyRoot, any layer 3
node can respond to a SQL SELECT query by
retrieving the previous root hash of any table from
the SMT. This proceeds just as in the NoSQL blockchain, with
the analogously structured Deep Merkle Proof. Where in
the NoSQL chain each NoSQL document / row updated
resulted in an updated leaf in the SMT for the newly
updated document, with the SQL chain each SQL
statement produces a new table root hash recorded in an
updated leaf in the SMT.</p>
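        <p>The three requirements above can be sketched as the leader's block-assembly step. Field names and the key-root computation are simplifying assumptions for illustration – a hash over the sorted table roots stands in for the real SMT construction.

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def mint_layer3_block(sql_txs, new_chunk_ids, new_table_roots, parent_hash):
    """Assemble a Layer 3 block from the SQL transactions, the chunks
    they wrote, and the new root hash of every updated table."""
    key_root = sha(b"".join(sorted(new_table_roots.values())))
    block = {
        "sql_txs": list(sql_txs),          # the SQL transactions
        "chunks": sorted(new_chunk_ids),   # chunks are created, never updated
        "layer3_key_root": key_root,
        "parent": parent_hash,
    }
    block["hash"] = sha(parent_hash + key_root)
    # submitBlock(...) then anchors block["hash"] in a Layer 2 block
    return block
```

Minting is deterministic: the same transactions and table roots yield the same block hash, while any changed table root changes the Layer3KeyRoot and hence the block hash.
        </p>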
      </sec>
    </sec>
    <sec id="sec-9">
      <title>6. Paying for Storage and Bandwidth</title>
      <p>
        The Layer 3 blockchain users who store NoSQL/SQL/File
data with storeChunk operations give the Layer 2
operator permission to charge for bandwidth and storage
in two different ways:
1. Bandwidth is paid for through (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ) users
signing retrieveChunk(, ,  ) calls to retrieve
data and obtain recent balances, where
each call uses up a tiny amount of bandwidth
paid for with a token  originated by the
createBlockchain call; (2) users signing a new
updateBalance(, ,  ) request originated by the
operator, agreeing to make a payment for
incurred bandwidth usage and confirming the latest token
owner balance. An updateBalance response by
users is mapped into a layer 2 transaction, where
the incurred bandwidth cost is deducted from the token
owner's balance ( ) and added to the operator's
allowance ( ).
2. Storage is paid for through Block Transactions
submitBlock(3, ) signed by the Layer 3
blockchain operator - because chunks are
identified directly inside Layer 3 blocks, a tally of the
number of bytes used in each new layer 3 block
is added to the SMT. The Layer 1 contract then
exposes a storageCharge interface to the Layer
2 operator where a recently signed Layer 2 block
transaction (containing a tally of the number of
bytes, signed by the Layer 3 blockchain operator)
is used to deduct the layer 3 operator’s balance
since the last time it was called. This is detailed
below.
In this way Layer 3 blockchains pay for the services of the
Layer 2 blockchain. The lifecycle of a short-lived Layer 3
blockchain is shown in Figure 5, which is expounded in
the next section.
      </p>
      <sec id="sec-9-1">
        <title>Submitting a Layer 3 Block to Layer 2</title>
        <p>With a newly minted Layer 3 block, the Layer 3 SQL
blockchain can submit a layer 2 block transaction for
the Layer 3 block:
submitBlock(3, )</p>
        <sec id="sec-9-1-2">
          <title>6.1. Layer 2 Plasma Tokens for Bandwidth Payments</title>
          <p>This section explains how Layer 2 Tokens can
form a unidirectional payment channel, where each signed
retrieveChunk call is not a transaction to be included
in a Layer 2 block (and has no nonce to increment) but
simply indicative of "permission to return some data and
decrement my token balance"; where the Layer 2 operator
can check the signature against its record of the current
owner as a condition of looking up the chunk.</p>
          <p>The Layer 2 operator must have a tally aggregation
capability that can aggregate numerous signed calls together
and compute that a token  has some new balance ( ).</p>
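          <p>The channel mechanics above can be sketched as follows. This is illustrative only: the signature check is a stand-in equality test against the recorded token owner, and the pricing and field names are assumptions, not the paper's protocol.

```python
class BandwidthChannel:
    """Unidirectional payment channel: signed retrieveChunk calls are not
    Layer 2 transactions (no nonce), just permission to decrement the
    token balance; the operator tallies them off-chain."""
    def __init__(self, token_owner, starting_balance, price_per_call=1):
        self.owner = token_owner
        self.balance = starting_balance
        self.price = price_per_call
        self.pending = 0   # charges tallied since the last updateBalance

    def retrieve_chunk(self, caller, chunk_id):
        # operator checks the caller against its record of the current owner
        if caller != self.owner:
            raise PermissionError("not the current token owner")
        self.pending += self.price
        return "chunk-bytes-for-" + chunk_id   # stand-in chunk lookup

    def update_balance(self):
        """Aggregate the tallied calls into a single balance update for the
        owner to sign; the response maps to one Layer 2 transaction."""
        charged = min(self.pending, self.balance)
        self.balance -= charged
        self.pending = 0
        return {"owner": self.owner, "new_balance": self.balance,
                "operator_allowance": charged}
```

Many tiny retrievals thus collapse into one on-chain balance update, which is what makes per-call payment economical.
          </p>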
        </sec>
        <sec id="sec-9-1-3">
          <title>6.2. Layer 1 Storage Insurance</title>
          <p>Because every single write of a Layer 3 blockchain is
included in sequentially ordered layer 3 blocks (each of
which identify a set of Chunk IDs) the layer 3 blockchain
forms an itemized list of signed insurance requests that
form a Layer 1 unidirectional state channel initiated by
the deposit into createBlockchain. Assuming no
challenges exist, if the Layer 2 operator that receives a Layer
3 block identifying a set of chunks can provide a recent
• Storage Challenge-Response: CRASH proofs. If
at any time, the Layer 3 blockchain wishes
to challenge Layer 2’s inept storage (due to a
missing block or missing chunk included in the
block), it may do so by demanding a CRASH
proof of a specific layer 3 block, revealing

(which must match the  in txbytes) by calling:
negligible. In regular conditions, the Layer 3 blockchain
can see its storage fees through storageCharge; when
the balance approaches zero, the Layer 3 blockchain must
deposit additional Layer 1 currency to its blockchain
balance at Layer 1. Finally, a call to dropBlockchain
must permit the layer 2 operator the opportunity to claim
a final</p>
          <p>storageCharge and close out the bandwidth
balance of  before finalizing exits (see Figure 5). Since
there are two sources of demand (storage charges and
bandwidth charges), the layer 2 blockchain must check
that the sum of both sources equals the available balance
for the layer 3 blockchain.</p>
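          <p>The closing check described above can be sketched as a single arithmetic guard. The function name and return shape are assumptions for illustration, not the contract's interface.

```python
# Illustrative closing check for dropBlockchain: before exits finalize,
# the layer 2 chain verifies that the final storage charge plus the
# outstanding bandwidth charges are covered by the deposited balance.

def can_close_out(deposit_balance, storage_charge, bandwidth_charge):
    total_due = storage_charge + bandwidth_charge
    if total_due > deposit_balance:
        return (False, 0)                 # underfunded: exits cannot finalize
    return (True, deposit_balance - total_due)   # remainder exits to Layer 1
```
          </p>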
        </sec>
      </sec>
    </sec>
    <sec id="sec-10">
      <title>7. Discussion</title>
      <p>There have been many approaches to scaling blockchain
architecture to support higher throughput and lower
latency:
• Changing the security
model of Layer 1
blockchains (c.f. NEO, EOS’s approach)
• Incremental improvements to Layer 1 or Layer
0 that don’t change security model (c.f. larger
blocks)
• Having many separate chains, using sharding
• State Channels
• Layer 2 Plasma solutions</p>
      <sec id="sec-10-1">
        <title>Summary</title>
        <p>This paper focused on the last approach, and described how the core ideas behind Layer 2 Plasma Cash can be extended to a deep blockchain system, forming the basis for provable data storage behind widely used NoSQL + SQL developer interfaces. The concept of a Deep Merkle Proof for a 3-layer deep blockchain system is illustrated here and shown to be conceptually viable, borrowing state channel concepts for Layer 3 NoSQL and SQL blockchains to pay for storage and bandwidth.</p>
      </sec>
      <sec id="sec-10-2">
        <title>Analogy to Deep Learning</title>
        <p>Deep learning architectures have advanced numerous high-scale applications in every industry in a way that is not about one specific deep learning algorithm – it is instead about an approach that could not be achieved through dogmatic faith in single-layer “neural” networks. In an analogous way, deep blockchain architectures could have the potential to enable a wide range of high-scale applications in a way that might not be achieved through dogmatic faith in Layer 1 scaling innovations alone.</p>
      </sec>
      <sec id="sec-10-3">
        <title>Maximizing Cost of Attack</title>
        <p>Blockchain practitioner instincts are to be wary of centralized consensus protocols and centralized storage. However, our use of non-local storage can be rationalized, not by demanding that every component be dogmatically decentralized, but by considering how attack vectors are reduced through judicious use of some not-so-decentralized components. The attacks on storage are limited in nature due to:
• verifiability of chunks, where all key/value pairs retrieved from non-local storage are verifiable either due to (a) the key being verified against the hash of the value returned or (b) the value being directly included and signed by a trusted party – in this sense the attack vector is limited to the private key;
• the use of Ethereum SWARM (currently in POC3) as a censorship-resistant cloud storage provider. In the event that the Layer 2 blockchain provider loses access to its Cloud Storage backend, higher layer backends can simply request chunks using the Kademlia-based DHT of Ethereum SWARM. Generally this censorship resistance comes at the cost of higher latency responses;
• cryptoeconomic incentives, wherein a data restorer can prove (with a Merkle branch) that a piece of data included on chain through a valid Merkle branch can no longer be accessed.</p>
        <p>It is believed the combination of decentralized storage and cloud computing storage increases the cost of attack, and that the Blockchain 1.0 objective of Maximize decentralization must be altered in favor of the more nuanced Blockchain 2.0 objective Maximize cost of attack, which ultimately will lead to more secure and reliable blockchain systems. One gets the best of both worlds: from centralized storage one gets low-latency, high-throughput infrastructure, and from decentralized storage one gets resilience and censorship resistance.</p>
        <p>Concerning the use of a single centralized Layer 2 operator, it is highlighted that in all cases where Layer 1 currency is deposited (in createBlockchain), because of the use of the “Plasma Cash” design pattern, the owner of the tokens may withdraw its balance on the Layer 1 blockchain. This is a surprising result: checks and balances on token ownership are possible through the use of the Layer 1 blockchain despite the Plasma operator being in 100% control. If users discover that the Layer 2 blockchain operators are malicious, they can be certain they can get the value of their tokens back, and if the data is kept in resilient Ethereum SWARM (or if they have kept their data locally), they can move to another Layer 2 operator using the same protocol.</p>
        <p>This shows a deep blockchain that has a higher cost of attack than the deep blockchain illustrated in Figure 2, utilizing 2 Layer 2 blockchains (each with their own Cloudstore) and 1 Layer 1 blockchain, each receiving the same Layer 3 submitBlock and Layer 2 submitBlock transactions respectively. Because the retrieval of layer n+1 data from layer n can be verified by layer n+1 (checking block data: does the block hash match the block content? is it signed? does it have a parent hash? etc.; checking chunks: does the hash of the chunk data equal the chunk key?), each lower layer node devolves into a dumb storage layer with some failure or attack probability (for layer 2 and layer 1 blockchains respectively) – depending on this probabilistic model, the cost of attack may be divined. However, it seems most likely that motivated parties would attack the centralized control behind each layer (c.f. EIP999, the mining pool concentration tracked at arewedecentralizedyet.com, governments asking the Cloudstore providers to block a Layer 2 operator's accounts) – in this sense, probabilistic independence under concentrated efforts to attack layers 1 and 2 would be highly suspect. For this reason, our true faith relies on Ethereum SWARM's resource updates [9], where chunks may be keyed not by the hash of their content but by a resource key, which can be used for the block data without an index mechanism; all resource updates are signed so the reader can authenticate them. Ethereum SWARM, because of its use of a Kademlia-like protocol, is not naturally as fast as the other components in Cloudstore, but kicks in when all Layer 2 Cloudstores fail or when Layer 1 itself is attacked (via 51% attacks, or unknown POS failures). If other decentralized storage services provided provable storage similar to Ethereum SWARM's resource updates, then so long as the Layer 3 blockchain does not go Byzantine, only one answer can surface, making for unstoppable layer 3 blockchains.</p>
        <p>The Layer 3 NoSQL and SQL blockchains developed in this paper operated under the assumption that the NoSQL + SQL transactions should be private data secured by an encryption key known only to the operators of the Layer 3 blockchain. This protects the Layer 3 blockchain from the operators of the Layer 2 blockchain and any Cloudstore. However, the same problem as with standard databases (MySQL, MongoDB, DynamoDB, etc.) exists with our current implementation of NoSQL/SQL Layer 3 blockchains: once someone gets access to a Layer 3 blockchain node holding the database encryption key or private key, the entire database is compromised. Therefore, it is the provenance and immutability of the NoSQL/SQL database state changes, as manifested in Deep Merkle Proofs, that differentiate a Layer 3 blockchain from standard databases. The small latency incurred with permissioned protocols (RAFT, POA) and the negligible cost should be welcomed when provenance and immutability are of paramount concern.</p>
        <p>Many other Layer 3 blockchains can be constructed using the Layer 2 storage and bandwidth infrastructure: a chain that represents the evolving state of ERC721 tokens, a chain that represents a cryptocurrency exchange where your money can never be stolen, and so forth. The state of the Layer 3 blockchain is not stored locally but instead kept in Cloudstore, with storage and bandwidth costs properly accounted for using the Layer 2 tokens, themselves based on Layer 1. Layer 3 and Layer 2 nodes are therefore “light nodes” in that they can quickly catch up to the latest state by asking the layer 2 and layer 1 blockchains for the most recent finalized block. This is not possible to do for the Layer 1 blockchain itself; however, it is possible, and interesting, to adapt a Layer 1 blockchain such as Ethereum and make it a Layer 3 blockchain. Computation (Ethereum gas costs) can consume Layer 2 token balances in state channels along with bandwidth; contract storage can use SMTs mapped to Cloudstore (instead of Patricia Merkle Tries kept in local store) submitted in blocks to the Layer 2 blockchain; and the consensus machinery can be put in a modern sharded Proof-of-Stake framework to achieve the high-throughput, low-latency ambitions of Ethereum 2.0 across all layer 3 nodes. The expectation would be that a Layer 3 Ethereum blockchain would have massively lower costs due to rational models of storage and bandwidth. Other deep blockchain systems can be developed with computational primitives different from the EVM, such as Amazon's Lambda or Apache Hadoop.</p>
        <p>It is believed that there can be many deep blockchain systems developed with higher layers resting on many Layer 1 blockchains, even to the point where multiple Layer 1 systems are dropped and many more added to provide more or less Layer 2 security. The same can be said for any layer with respect to the benefit of higher layers. If the blockchain at layer n changes its consensus algorithm from Quorum RAFT to pBFT or Casper Proof-of-Stake, layer n+1 benefits; higher layer blockchains are supervenient on Layer 1, so innovations on Layer 1 are inherited by all deep blockchain systems. It is hoped that many deep blockchain systems can explore high-throughput, low-latency scale through some of the design patterns explored here.</p>
        <p>References
[2] Y. Meshcheryakov, A. Melman, O. Evsutin, V. Morozov, Y. Koucheryavy, On performance of pbft blockchain consensus algorithm for iot applications with constrained devices, IEEE Access 9 (2021) 80559–80570.
[3] J. Yoo, Y. Jung, D. Shin, M. Bae, E. Jee, Formal modeling and verification of a federated byzantine agreement algorithm for blockchain platforms, in: 2019 IEEE International Workshop on Blockchain Oriented Software Engineering (IWBOSE), IEEE, 2019, pp. 11–21.
[4] G. Wood, et al., Ethereum: A secure decentralised generalised transaction ledger, 2014.
[5] U. Rahardja, A. N. Hidayanto, N. Lutfiani, D. A. Febiani, Q. Aini, Immutability of distributed hash model on blockchain node storage, Sci. J. Informatics 8 (2021) 137–143.
[6] K. Floersch, Plasma cash simple spec, https://karl.tech/plasma-cash-simple-spec, 2018.
[7] E. Gaetani, L. Aniello, R. Baldoni, F. Lombardi, A. Margheri, V. Sassone, Blockchain-based database to ensure data integrity in cloud computing environments (2017).
[8] J. Poon, V. Buterin, Plasma: Scalable autonomous smart contracts, http://plasma.io/plasma.pdf, 2017.
[9] V. Trón, A. Fischer, D. A. Nagy, Swarm: a decentralised peer-to-peer network for messaging and storage (2018). Forthcoming.
[10] S. K. Panda, A. A. Elngar, V. E. Balas, M. Kayed, Bitcoin and blockchain: history and current applications, CRC Press, 2020.
[11] B. Laurie, E. Kasper, Revocation transparency, https://www.links.org/files/RevocationTransparency.pdf, 2017.
[12] R. Dahlberg, T. Pulls, R. Peeters, Efficient sparse merkle trees: Caching strategies and secure (non-)membership proofs, in: Secure IT Systems: 21st Nordic Conference, NordSec 2016, Oulu, Finland, November 2-4, 2016. Proceedings 21, Springer, 2016, pp. 199–215.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gracy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Jeyavadhanam</surname>
          </string-name>
          ,
          <article-title>A systematic review of blockchain-based system: Transaction throughput latency and challenges</article-title>
          , in: 2021
          <source>International Conference on Computational Intelligence and Computing Applications (ICCICA)</source>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>