<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Predicting pseudo-random number generator output with sequential analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dmytro Proskurin</string-name>
          <email>proskurin.d@stud.nau.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maksim Iavich</string-name>
          <email>miavich@cu.edu.ge</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tetiana Okhrimenko</string-name>
          <email>t.okhrimenko@npp.nau.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Okoro Chukwukaelonma</string-name>
          <email>kaelo@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tetiana Hryniuk</string-name>
          <email>t.hryniuk@ukr.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CSDP-2024: Cyber Security and Data Protection</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Caucasus University</institution>
          ,
          <addr-line>1 Paata Saakadze str., 0102 Tbilisi</addr-line>
          ,
          <country country="GE">Georgia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>National Aviation University</institution>
          ,
          <addr-line>1 Liubomyra Huzara ave., 03058 Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>42</fpage>
      <lpage>57</lpage>
      <abstract>
        <p>This study delves into the predictive capabilities of neural network models, specifically focusing on Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks as well as the combination of both in a hybrid architecture, for forecasting the outputs of various pseudo-random number generators (PRNGs). The investigation extends across a diverse set of PRNG algorithms, including Linear Congruential Generator (LCG), Mersenne Twister (MT), Xorshift, and Middle Square. Through meticulous analysis, the study evaluates the accuracy of these models in predicting single and continuous outputs generated from the mentioned PRNGs. The research findings illuminate the superior predictive performance of hybrid models, attributed to their adeptness at capturing long-term dependencies, a crucial factor in decoding the complexities of PRNG sequences. Additionally, the impact of model optimization techniques, including dropout and L2 regularization, on enhancing predictive accuracy is thoroughly explored. This comprehensive examination not only underscores the potential of neural networks in identifying deterministic patterns within PRNG outputs but also offers valuable insights into optimal model selection and configuration. The implications of this work are significant, paving new avenues in cryptography and securing random number generation by highlighting the predictability of PRNGs under advanced neural network models.</p>
      </abstract>
      <kwd-group>
        <kwd>random numbers</kwd>
        <kwd>RNN</kwd>
        <kwd>CNN</kwd>
        <kwd>LSTM</kwd>
        <kwd>GRU</kwd>
        <kwd>hybrid model</kwd>
        <kwd>PRNG</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In the ever-evolving landscape of machine learning, the
ability to accurately predict future events based on
sequential data stands as a cornerstone of numerous
technological advancements and applications. From
forecasting stock market trends to decoding human
language, the significance of effective sequence prediction
cannot be overstated. Central to this domain are Recurrent
Neural Networks (RNNs) and Long Short-Term Memory
(LSTM) networks, which have emerged as powerful tools in
the machine learning arsenal for handling sequential data.</p>
      <p>
        RNNs, known for their unique architecture that allows
information to persist, have been instrumental in modeling
time-dependent data [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. However, their application is often
marred by challenges such as the vanishing gradient
problem, which hinders the learning of long-range
dependencies [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Enter LSTMs, a special kind of RNN
designed specifically to overcome these limitations. With
their sophisticated internal mechanisms, LSTMs have set
new benchmarks in sequence prediction tasks,
demonstrating remarkable success where traditional RNNs
falter [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ].
      </p>
      <p>This paper embarks on a comprehensive exploration of
RNNs and LSTMs in the context of sequence prediction. We
delve into the architectural intricacies of these models, their
strengths and weaknesses, and their performance across
various sequence prediction scenarios.</p>
      <p>Our study is particularly focused on datasets generated
by different Pseudo-Random Number Generators (PRNGs),
offering a unique lens through which the capabilities of
these models can be examined and understood.</p>
      <p>Through rigorous experimentation and analysis, we aim
to shed light on the nuances of sequence prediction and
provide insights that could guide future applications and
research in this fascinating area of machine learning.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background and related work</title>
      <p>Recent advancements in sequence prediction have been
significantly influenced by the development and refinement
of Recurrent Neural Networks (RNNs) and Long Short-Term
Memory (LSTM) networks. These models have shown
remarkable proficiency in handling sequential data,
particularly in domains where understanding temporal
dynamics is crucial.</p>
      <p>
1. LSTM for Time Series Prediction: Studies have
demonstrated the effectiveness of LSTM models in time
series forecasting, a domain traditionally dominated by
statistical methods like ARIMA. Unlike these methods,
LSTMs can capture complex nonlinear relationships in time
series data [
        <xref ref-type="bibr" rid="ref2 ref4">2, 4</xref>
        ]. Researchers have successfully applied
LSTM models to forecast stock prices, energy demand, and
weather patterns, achieving higher accuracy than
traditional models, especially in scenarios with long-term
dependencies and high volatility.
      </p>
      <p>
        2. RNNs in Natural Language Processing (NLP): RNNs
have been pivotal in advancing NLP. Their ability to process
sequential text data has led to breakthroughs in machine
translation, text generation, and sentiment analysis [
        <xref ref-type="bibr" rid="ref1 ref2 ref5">1, 2, 5</xref>
        ].
The sequential processing capability of RNNs allows them
to maintain context in text, a critical factor in understanding
human language [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. However, vanilla RNNs often struggle
with long-term dependencies [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], leading to the adoption of
LSTMs and GRUs (Gated Recurrent Units) in more complex
NLP tasks.
      </p>
      <p>
        3. Sequence-to-Sequence Learning: The
sequence-to-sequence learning framework, often implemented using
LSTMs, has revolutionized tasks like machine translation.
This approach involves training models on pairs of input
and output sequences, enabling the model to learn
mappings from one sequence to another. This framework
has been crucial in developing models that can translate
entire sentences with context, rather than translating on a
word-by-word basis [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        4. Challenges and Limitations: Despite their successes,
RNNs and LSTMs are not without challenges. The vanishing
gradient problem in RNNs, where the model loses its ability
to learn long-range dependencies, has been partially
addressed by LSTMs but still poses limitations [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
Additionally, the training of these models can be
computationally intensive, requiring significant resources
for large datasets.
      </p>
      <p>
        5. Future Directions: Ongoing research is exploring
more efficient and effective variants of RNNs and LSTMs,
such as attention mechanisms and Transformer models [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
These developments aim to address existing limitations
while enhancing the models’ ability to process longer
sequences and maintain context over extended periods.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Model architecture overview</title>
      <p>
        Neural networks are artificial intelligence models that
mimic human brain function [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. A neural network connects
processing units, similar to neurons, rather than
manipulating zeros and ones like a digital model does [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
The result depends on how the connections are organized
and weighted. Neural networks are algorithms modeled
after the human brain that recognize patterns. Sensory data
is interpreted using machine perception, which labels or
clusters raw information. They recognize numerical
patterns in vectors, into which real-world data such as
images, sounds, text, and time series must be converted [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
Artificial Neural Networks (ANNs) are computing systems
modeled after biological neural systems, including the
human brain [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        Convolutional Neural Networks (CNNs) are similar to
standard artificial neural networks (ANNs) in that they use
neurons to improve themselves through learning [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. CNNs
have achieved remarkable results and are now widely used in
deep learning. Convolutional neural networks have
revolutionized computer vision, enabling previously
unthinkable feats such as facial recognition, driverless
automobiles, self-service supermarkets, and intelligent
medical treatment. CNNs also differ from typical ANNs by
focusing on image pattern recognition. This allows
image-specific properties to be encoded into the
architecture, making the network better suited for
image-focused tasks while also lowering the number of
parameters needed to set up the model [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ].
      </p>
      <p>
        Hybrid Neural Networks (HNNs), which integrate the
strengths of many neural networks, are becoming
increasingly popular in computer vision applications
including picture captioning and action identification.
However, there has been limited research on the effective
use of hybrid architectures for time series data, particularly
for trend forecasting purposes [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. HNNs use their internal
structure to limit the interactions between process variables
to align with physical models. Compared to regular neural
networks, coupled models are more accurate, dependable,
and generalizable [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        Recurrent Neural Networks (RNNs) represent a
paradigm shift in neural networks, specifically designed to
recognize patterns in sequences of data [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Unlike
traditional feedforward neural networks, RNNs possess a
unique feature: the output from the previous step is fed back
as an input to the current step. This looping mechanism
allows RNNs to maintain an internal state that captures
information about the sequence they have processed so far,
making them ideal for tasks like speech recognition,
language modeling, and time series forecasting [
        <xref ref-type="bibr" rid="ref12 ref2">2, 12</xref>
        ].
      </p>
      <p>
        The core architecture of an RNN involves a hidden layer
where the activation at a given time step is a function of the
input at the same step and the activation of the hidden
layer at the previous step [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. This recurrent nature allows
the network to maintain a form of memory [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. However,
RNNs are often challenged by long-term dependencies due
to issues like vanishing and exploding gradients during
backpropagation [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ], where the network becomes unable
to learn and retain information from earlier time steps in the
sequence [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Long Short-Term Memory Networks, a special kind of
RNN, were developed to overcome the limitations of
traditional RNNs. LSTMs are adept at learning long-term
dependencies, thanks to their unique internal structure [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
Unlike standard RNNs, LSTMs have a complex architecture
with a series of gates: the forget gate, input gate, and
output gate [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. These gates regulate the flow of
information into and out of the cell, deciding what to keep
in memory and what to discard, thereby addressing the
vanishing gradient problem [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Forget Gate: Determines what information is discarded
from the cell state [
        <xref ref-type="bibr" rid="ref13 ref4">4, 13</xref>
        ].
      </p>
      <p>
        Input Gate: Updates the cell state with new
information from the current input [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        Output Gate: Determines the next hidden state and
output based on the current input and the updated cell
state [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        This architecture allows LSTMs to make more precise
decisions about what information to store, modify, and
output. As a result, LSTMs have been successfully applied
in various complex sequence modeling tasks, including
machine translation, speech synthesis, and even generative
models for music composition [
        <xref ref-type="bibr" rid="ref13 ref3 ref4">3, 4, 13</xref>
        ].
      </p>
      <p>
        While both RNNs and LSTMs are designed for sequence
processing, the key difference lies in their ability to handle
long-term dependencies [
        <xref ref-type="bibr" rid="ref14 ref4">4, 14</xref>
        ]. Standard RNNs, while
simpler and computationally less intensive, struggle with
retaining information over longer sequences. LSTMs, with
their intricate gating mechanism, excel in scenarios where
understanding long-range contextual information is
crucial [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        The choice between RNNs and LSTMs often boils down
to the specific requirements of the task at hand, the
complexity of the sequences involved, and the
computational resources available. LSTMs are generally
preferred for more complex tasks with longer sequences [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
while RNNs might suffice for simpler tasks with shorter
temporal dependencies [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <p>
        There are a large number of pseudorandom generators that
differ in their characteristics, construction methods, and
areas of possible application [
        <xref ref-type="bibr" rid="ref15 ref16 ref17 ref18 ref19">15–19</xref>
        ]. In our study, we
employed datasets generated by four distinct PRNG
algorithms, each offering unique challenges and
characteristics for sequence prediction using RNN and
LSTM models. These datasets serve as a testing ground to
evaluate and compare the performance of different neural
network architectures in sequence prediction tasks.
      </p>
      <sec id="sec-4-1">
        <title>4.1. Linear congruential generator dataset</title>
        <p>
          Description: The LCG is one of the oldest and simplest
PRNG algorithms [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. It generates random numbers using
a linear equation [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. The simplicity of its algorithm makes
it a good baseline for evaluating the predictive capabilities
of RNN and LSTM models.
        </p>
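        <p>To make the recurrence concrete, a minimal Python sketch of an LCG is shown below; the multiplier, increment, and modulus are common illustrative constants, not necessarily the parameters used in our experiments.</p>
        <preformat>
# Minimal LCG sketch: x_{k+1} = (a * x_k + c) mod m.
# The constants a, c, m below are illustrative, not the study's exact parameters.
def lcg(seed, a=1103515245, c=12345, m=2**31, n=10):
    x = seed
    values = []
    for _ in range(n):
        x = (a * x + c) % m
        values.append(x)
    return values

print(lcg(8956482))
</preformat>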
        <p>
          Characteristics: The sequence generated by an LCG can
exhibit patterns due to its linear nature. These patterns,
while not immediately apparent, can be learned over time,
making it an interesting case for sequence prediction
models [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. Despite their potential statistical issues, LCGs
have the advantage of offering all the auxiliary qualities,
such as seekability, numerous streams, and k-dimensional
equidistribution [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Mersenne twister dataset</title>
        <p>
          Description: The Mersenne Twister, specifically the
MT19937 variant, is known for its long period and
high-quality outputs. It’s widely used in various applications due
to its reliability and speed [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ].
        </p>
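        <p>As a point of reference, Python’s built-in random module is itself backed by MT19937, so an equivalent way to obtain a Mersenne Twister output stream (assumed here for illustration, not necessarily the exact generation code used in the study) is:</p>
        <preformat>
import random

# Python's random module uses the MT19937 Mersenne Twister internally.
random.seed(8956482)          # common seed value from Section 5.1
mt_outputs = [random.getrandbits(32) for _ in range(10000)]
print(mt_outputs[:5])
</preformat>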
        <p>
          Characteristics: MT generates sequences that are far
more complex and less predictable than LCG [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. This
complexity provides a challenging scenario for RNNs and
LSTMs, testing their ability to model and predict more
intricate and seemingly random sequences. In addition to its
inability to produce the all-zero state, the Mersenne Twister
also finds it difficult to act randomly in its nearly all-zero
state [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Xorshift dataset</title>
        <p>
          Description: Xorshift is a class of PRNGs that operates using
XOR (exclusive or) and bit-shifting operations [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. It’s
known for its simplicity and speed, often used in scenarios
where the speed of random number generation is critical [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
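        <p>A compact 32-bit xorshift sketch (using Marsaglia’s 13/17/5 shift triple, which may differ from the exact variant behind our dataset) illustrates the XOR-and-shift structure; shifts are written as multiplications and floor divisions by powers of two.</p>
        <preformat>
# 32-bit xorshift sketch with Marsaglia's (13, 17, 5) shift triple.
M32 = 2**32

def xorshift32(seed, n=10):
    x = seed % M32
    values = []
    for _ in range(n):
        x = (x ^ (x * 2**13)) % M32   # left shift by 13, truncated to 32 bits
        x = x ^ (x // 2**17)          # right shift by 17
        x = (x ^ (x * 2**5)) % M32    # left shift by 5, truncated to 32 bits
        values.append(x)
    return values

print(xorshift32(8956482))
</preformat>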
        <p>
          Characteristics: Despite its simplicity, Xorshift can
produce high-quality random sequences [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. The
nonlinear nature of its operations makes it an interesting case
for studying how well neural network models can adapt to
and predict outputs from non-linear algorithms [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. A
bitwise xor operation is a type of permutation that involves
flipping certain bits in the target. It can be performed again
to reverse the effects [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. The conventional understanding
of Xorshift would advise us to concentrate on lengthening
the bits’ period [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Middle square method dataset</title>
        <p>
          Description: The Middle Square method is an older PRNG
technique that generates random numbers by squaring the
number and extracting the middle digits of the result [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ].
It’s less commonly used today due to certain limitations.
        </p>
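        <p>The classic digit-based procedure can be sketched as follows; the example uses 4-digit decimal values for readability, whereas the dataset in this study uses its own word-size configuration (see Section 5.1).</p>
        <preformat>
# Middle-square sketch: square the value and keep its middle digits.
def middle_square(seed, digits=4, n=10):
    half = digits // 2
    x = seed % 10**digits
    values = []
    for _ in range(n):
        squared = x * x
        x = (squared // 10**half) % 10**digits   # middle `digits` digits
        values.append(x)
    return values

print(middle_square(8956))
</preformat>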
        <p>
          Characteristics: This method is prone to quickly
converging to repetitive cycles or zeros, especially with
certain seed values [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. The predictability and potential
repetition in the sequences makes it a unique dataset to test
the models’ ability to detect and adapt to less complex and
potentially degenerative patterns [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. The field of
computer science began with the invention of the middle
square [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. It is possible to develop a viable version with a
sufficiently long period (2<sup>64</sup> for each stream) thanks to
modern 64-bit computing architecture. Its processing speed is
comparable to that of the fastest RNGs [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. This generator works
well for parallel processing because of its simple stream
capability. Because a square is nonlinear, it provides this
generator with an edge over linearly-based generators in
terms of data quality [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Dataset preparation</title>
      <p>In our study on “Predicting PRNG Output with Sequential
Analysis”, we meticulously prepared a dataset to analyze
the predictability of various Pseudorandom Number
Generators (PRNGs), focusing on four widely recognized
algorithms: Linear Congruential Generator (LCG) (Fig. 2),
MiddleSquare (Fig. 4), Xorshift (Fig. 3), and Mersenne
Twister (MT) (Fig. 1). Each of these PRNGs was chosen for
its unique approach to generating sequences of
pseudorandom numbers, providing a diverse test bed for our
predictive models.</p>
      <sec id="sec-5-1">
        <title>5.1. Data generation parameters</title>
        <p>The dataset was generated using the following
parameters to ensure consistency across all PRNGs
(a short generation sketch follows the list):</p>
        <list list-type="bullet">
          <list-item>
            <p>Sample Size: Each PRNG was used to generate a sequence
of 10,000 numbers, with n = 10000, to create a
substantial dataset for training and evaluation.</p>
          </list-item>
          <list-item>
            <p>Seed Value: A common seed value of 8956482 was applied to
initialize each PRNG, ensuring that the starting point of
the pseudorandom sequence was consistent across
different generators.</p>
          </list-item>
          <list-item>
            <p>Word Size: For PRNGs where applicable, such as
MiddleSquare, a word size of 8 bits was selected, balancing
the need for computational efficiency with the desire for
sequence complexity.</p>
          </list-item>
          <list-item>
            <p>Sequence Length: The output was segmented into
sequences of length 10, which were then used as
individual data points for the subsequent analysis. This
sequence length was chosen to provide enough data for
recognizing patterns without overwhelming the
analytical models.</p>
          </list-item>
        </list>
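        <p>A small sketch of this setup is shown below; the sliding-window segmentation and the helper name make_sequences are illustrative assumptions rather than the exact preprocessing code used in the study.</p>
        <preformat>
import random
import numpy as np

random.seed(8956482)                                 # common seed from Section 5.1
stream = [random.random() for _ in range(10000)]     # 10,000 MT-backed outputs

def make_sequences(stream, seq_len=10):
    """Pair each length-10 window with the value that follows it."""
    xs, ys = [], []
    for i in range(len(stream) - seq_len):
        xs.append(stream[i:i + seq_len])
        ys.append(stream[i + seq_len])
    return np.array(xs, dtype=np.float32), np.array(ys, dtype=np.float32)

X, y = make_sequences(stream)
print(X.shape, y.shape)   # (9990, 10) (9990,)
</preformat>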
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Dataset splitting</title>
        <p>Once generated, the dataset was divided into three distinct
sets to facilitate the training, testing, and validation of our
predictive models:</p>
        <p>Training Set: Used to train the models, allowing them to
learn and adapt to the patterns inherent in the
pseudorandom sequences generated by each PRNG.</p>
        <p>Testing Set: Employed to assess the performance of the
models on unseen data, providing an unbiased evaluation of
their predictive capabilities.</p>
        <p>Validation Set: Utilized during the model tuning phase
to fine-tune parameters and prevent overfitting, ensuring
that the models generalize well to new data.</p>
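        <p>A minimal splitting sketch is given below; the 70/15/15 proportions are an assumed example, as the exact ratios are not restated here.</p>
        <preformat>
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(9990, 10).astype(np.float32)   # stand-in for the segmented sequences
y = np.random.rand(9990).astype(np.float32)

# Illustrative 70/15/15 split into training, validation, and testing sets.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, shuffle=False)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, shuffle=False)
print(X_train.shape, X_val.shape, X_test.shape)
</preformat>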
        <p>This careful preparation and partitioning of the dataset
were critical in establishing a robust foundation for our
investigation into the predictability of PRNG outputs
through sequential analysis. By standardizing the
generation parameters and thoughtfully splitting the data,
we aimed to create a fair and consistent testing environment
for each of the predictive models applied in our study.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Model configuration</title>
      <p>In our exploration of “Predicting PRNG Output with
Sequential Analysis”, we employed a comprehensive
approach by leveraging various neural network
architectures. Each model was selected based on its ability
to process sequential data, a core characteristic of PRNG
outputs. Our analysis incorporated Convolutional Neural
Networks (CNNs), Recurrent Neural Networks (RNNs),
Long Short-Term Memory networks (LSTMs), and a custom
Hybrid model, each designed to handle the intricacies of
PRNG-generated data in distinct ways.</p>
      <sec id="sec-6-1">
        <title>6.1. Convolutional neural networks</title>
        <p>Application: Primarily used for single-value output
prediction, CNNs are adept at identifying patterns within a
fixed-size window of the sequence. This model excels at
capturing local dependencies and spatial hierarchies in data,
making it suitable for analyzing individual segments of the
PRNG output.</p>
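        <p>A minimal Keras sketch of such a single-value CNN predictor is shown below; the layer sizes are illustrative placeholders, not the tuned configurations reported later.</p>
        <preformat>
from tensorflow.keras import layers, models

# Illustrative 1-D CNN over length-10 input windows, predicting one next value.
cnn = models.Sequential([
    layers.Conv1D(filters=16, kernel_size=3, activation="relu", input_shape=(10, 1)),
    layers.Flatten(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),
])
cnn.compile(optimizer="adam", loss="mse")
cnn.summary()
</preformat>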
      </sec>
      <sec id="sec-6-2">
        <title>6.2. Recurrent neural networks</title>
        <p>Application: RNNs were employed for both single-value and
continuous-value output predictions. Unlike CNNs, RNNs
have a memory mechanism that allows them to process
entire sequences of data, making them ideal for
understanding the temporal dynamics and dependencies
within PRNG outputs.</p>
      </sec>
      <sec id="sec-6-3">
        <title>6.3. Long short-term memory networks</title>
        <p>Application: Like RNNs, LSTMs were utilized for both
single-value and continuous-value output predictions.
LSTMs are a special kind of RNN capable of learning
longterm dependencies. They are particularly effective in
avoiding the vanishing gradient problem, enabling them to
capture patterns over longer sequences of PRNG outputs.</p>
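        <p>A corresponding LSTM sketch, again with illustrative sizes, could look as follows; a SimpleRNN layer in place of the LSTM layer gives the plain RNN counterpart.</p>
        <preformat>
from tensorflow.keras import layers, models

# Illustrative LSTM predictor over length-10 input windows.
lstm_model = models.Sequential([
    layers.LSTM(32, activation="tanh", input_shape=(10, 1)),
    layers.Dense(1),   # widen this layer for continuous-output variants
])
lstm_model.compile(optimizer="adam", loss="mse")
lstm_model.summary()
</preformat>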
      </sec>
      <sec id="sec-6-4">
        <title>6.4. Hybrid model</title>
        <p>Configuration: The Hybrid model represents an innovative
approach, integrating the strengths of CNNs and LSTMs
into a singular architecture. It comprises:</p>
        <list list-type="bullet">
          <list-item>
            <p>CNN Layer: For extracting local features within
the subsequence of the PRNG output.</p>
          </list-item>
          <list-item>
            <p>LSTM Layer: To capture long-term dependencies
and temporal patterns in the data, building upon
the features extracted by the CNN layer.</p>
          </list-item>
          <list-item>
            <p>Dense Layer: Serving as the output layer, it
synthesizes the information processed by the CNN
and LSTM layers to make predictions.</p>
          </list-item>
        </list>
          <p>Application: Designed for versatility, the Hybrid model
is equipped to handle both single-value and
continuous-value outputs, offering a robust solution for predicting
PRNG outputs by leveraging the complementary strengths
of convolutional and recurrent layers.</p>
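        <p>A compact sketch of this CNN + LSTM + Dense stacking is given below; the filter counts, units, and output width are illustrative rather than the tuned values reported in the results.</p>
        <preformat>
from tensorflow.keras import layers, models

# Illustrative hybrid model: Conv1D feature extraction, LSTM for temporal
# dependencies, Dense output layer (widen Dense for continuous outputs).
hybrid = models.Sequential([
    layers.Conv1D(filters=32, kernel_size=3, activation="relu", input_shape=(10, 1)),
    layers.LSTM(16, activation="tanh"),
    layers.Dense(1),
])
hybrid.compile(optimizer="adam", loss="mse")
hybrid.summary()
</preformat>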
          <p>The strategic selection and configuration of these
models underpin our analytical methodology. By employing
a diverse array of architectures, each with its unique
advantages, our study aims to comprehensively evaluate the
predictability of PRNG outputs. The Hybrid model
underscores our commitment to innovation, integrating
multiple neural network paradigms to enhance predictive
accuracy and insight into the sequential nature of
PRNG-generated data.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Evaluation metrics</title>
      <p>To rigorously assess the effectiveness of our models in
predicting PRNG outputs, we employed a set of
comprehensive evaluation metrics. These metrics are
crucial for quantifying the accuracy of our predictions and
facilitating a direct comparison between the different neural
network architectures utilized in our study. Our evaluation
framework is centered around the Mean Squared Error
(MSE) and a specially devised Model Performance Score.</p>
      <sec id="sec-7-1">
        <title>7.1. Mean squared error</title>
        <p>MSE serves as the cornerstone of our evaluation strategy. It
calculates the average squared difference between the
actual and predicted values, offering a precise measure of
the prediction error’s magnitude. By squaring the errors,
MSE gives more weight to larger errors, making it
particularly sensitive to outliers and significant prediction
inaccuracies.</p>
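        <p>For reference, with n test points, true values y<sub>i</sub>, and predictions ŷ<sub>i</sub>, the metric is the standard</p>
        <disp-formula>
          <tex-math><![CDATA[\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2.]]></tex-math>
        </disp-formula>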
        <p>In the context of predicting PRNG outputs, MSE
provides a clear and direct measure of how closely the
model’s predictions align with the actual sequence of
numbers generated by the PRNGs. A lower MSE indicates
higher prediction accuracy, reflecting a model’s ability to
effectively capture and replicate the underlying patterns of
the PRNG sequence.</p>
      </sec>
      <sec id="sec-7-2">
        <title>7.2. Model performance score</title>
        <p>Recognizing the need for a standardized metric that allows
for an intuitive understanding of model performance, we
introduced the Model Performance Score. This metric
normalizes the MSE to a scale ranging from 0 to 1, where 0
represents the poorest performance (highest MSE) and 1
denotes perfect prediction accuracy (zero MSE).</p>
        <p>The Model Performance Score is calculated by inversely
scaling the MSE against a predetermined maximum error
threshold. This approach ensures that the performance
score is adjusted for the scale of the data and the expected
variation in prediction accuracy, allowing for a fair
comparison across different models and datasets.</p>
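        <p>A minimal sketch of this normalization, assuming a simple linear inverse scaling against a chosen maximum-error threshold (the exact scaling used in the study may differ), is:</p>
        <preformat>
# Hypothetical performance-score sketch: rescale MSE linearly into [0, 1],
# where the chosen max_error maps to 0 and a zero MSE maps to 1.
def performance_score(mse, max_error):
    score = 1.0 - mse / max_error
    return min(1.0, max(0.0, score))   # clamp to the [0, 1] range

print(performance_score(mse=0.002, max_error=0.1))   # 0.98
</preformat>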
        <p>This normalized score simplifies the interpretation of
our results, providing a straightforward metric to gauge
model effectiveness. It allows stakeholders to quickly assess
the relative performance of each model in predicting PRNG
outputs without delving into the complexities of raw MSE
values.</p>
        <p>Together, these evaluation metrics form the foundation
of our analytical approach, enabling a nuanced analysis of
model performance. MSE offers a detailed view of the
prediction accuracy, while the Model Performance Score
provides a high-level, comparative perspective. By
incorporating both metrics, our study ensures a balanced and
comprehensive evaluation of how well each neural network
architecture can predict the seemingly unpredictable: output
of pseudorandom number generators.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>8. Experiment variables and observations</title>
      <p>We conducted an extensive series of experiments to
evaluate the predictive capabilities of various neural
network configurations. These experiments were
meticulously designed to explore the impact of different
model parameters on the accuracy of PRNG output
predictions. Below, we detail the variables involved in these
experiments and highlight some critical observations
related to model performance.</p>
      <sec id="sec-8-1">
        <title>8.1. Experiment variables</title>
        <p>To systematically assess the effects of various
hyperparameters on model performance, we tested a
wide array of combinations, encompassing the following
(an illustrative sketch of the resulting grid is given after the list):</p>
        <list list-type="bullet">
          <list-item>
            <p>Activation Functions: We experimented with two popular
activation functions, ReLU (Rectified Linear Unit) and tanh
(Hyperbolic Tangent). These functions were chosen for their
distinct characteristics in handling nonlinearities
in the data.</p>
          </list-item>
          <list-item>
            <p>Number of Neurons: The neuron counts tested
were 8, 16, and 32. This range allowed us to
explore the models’ capacity to learn and
generalize from the data, balancing complexity
with computational efficiency.</p>
          </list-item>
          <list-item>
            <p>Epochs: All models were trained for 1,000 epochs,
providing ample opportunity for learning and
convergence.</p>
          </list-item>
          <list-item>
            <p>Model Layers: We varied the depth of the models
by testing configurations with 1, 2, and 4 layers. This
variation aimed to understand how model depth
influences learning and prediction accuracy.</p>
          </list-item>
          <list-item>
            <p>Output Lengths: For continuous-value prediction,
output lengths of 1 to 4 were tested. This range was
selected to assess the models’ ability to forecast multiple
steps in the PRNG sequence.</p>
          </list-item>
        </list>
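        <p>The resulting search space can be enumerated as in the illustrative sketch below (epochs are held fixed at 1,000 and are therefore not part of the grid).</p>
        <preformat>
from itertools import product

# Illustrative enumeration of the hyperparameter grid described above.
activations = ["relu", "tanh"]
neurons = [8, 16, 32]
layer_counts = [1, 2, 4]
output_lengths = [1, 2, 3, 4]

grid = list(product(activations, neurons, layer_counts, output_lengths))
print(len(grid), "configurations, e.g.", grid[0])
</preformat>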
      </sec>
      <sec id="sec-8-2">
        <title>8.2. Impact of dropout and L2 regularization</title>
        <p>One of the most notable findings from our experiments was
the impact of dropout and L2 regularization techniques on
model learning capabilities. Contrary to common practice in
machine learning, where these techniques are employed to
enhance model generalization and prevent overfitting, our
experiments revealed that:</p>
        <p>Models without dropout and L2 regularization
demonstrated superior performance in learning and
predicting PRNG outputs. The introduction of these
regularization techniques led to models that were unable to
adequately learn from the training data and, consequently,
failed to predict accurately.</p>
        <p>This observation suggests a unique aspect of predicting
PRNG outputs: the data generated by PRNGs, while
seemingly random, follows deterministic algorithms. The
addition of regularization techniques, which are designed to
introduce randomness and constraint to the learning
process, may interfere with the model’s ability to capture
the underlying deterministic patterns of PRNG sequences.</p>
        <p>The results of these experiments provide valuable
insights into the design and optimization of neural network
models for predicting PRNG outputs. Specifically, they
underscore the importance of tailoring model
configurations to the specific characteristics of the data and
the task at hand. In the context of PRNG prediction,
minimizing external sources of randomness and constraint
(e.g., through dropout and L2 regularization) appears to be
crucial for enabling models to learn and replicate the
deterministic patterns that govern PRNG behavior.</p>
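        <p>For illustration, the two kinds of configurations compared above can be sketched as follows; the dropout rate and L2 coefficient are placeholder values.</p>
        <preformat>
from tensorflow.keras import layers, models, regularizers

# Plain variant (no dropout, no L2), which learned the PRNG patterns better
# in our experiments, versus a regularized variant with illustrative settings.
plain = models.Sequential([
    layers.LSTM(32, input_shape=(10, 1)),
    layers.Dense(1),
])

regularized = models.Sequential([
    layers.LSTM(32, input_shape=(10, 1),
                kernel_regularizer=regularizers.l2(0.01)),
    layers.Dropout(0.2),
    layers.Dense(1),
])

for m in (plain, regularized):
    m.compile(optimizer="adam", loss="mse")
</preformat>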
      </sec>
    </sec>
    <sec id="sec-9">
      <title>9. Experiment results analysis</title>
      <p>Our exhaustive investigation into predicting PRNG output
through sequential analysis yielded compelling findings,
elucidated through the analysis of the top-performing
models for each PRNG. Here, we detail the significant
outcomes for both single-output and continuous-output
scenarios across different PRNGs: Xorshift, MT (Mersenne
Twister), LCG (Linear Congruential Generator), and
MiddleSquare.</p>
      <sec id="sec-9-1">
        <title>9.1. Single-output scenario analysis</title>
        <p>For single-output predictions, our experiments yielded the
following results.</p>
        <p>Xorshift: The RNN model with 32 neurons, 5 layers,
and the ReLU activation function emerged as the top
performer, achieving a mean score of 0.9898 (Table 1).
However, both the Hybrid and CNN models came close to the
same success rate, suggesting that the characteristics of the
Xorshift sequence are not particularly difficult to capture.
50% of all models were able to reach the 90% success
threshold.</p>
        <p>Nevertheless, further improvement could push this number
higher.</p>
        <p>MT: The CNN model with 8 neurons, 3 layers, and the ReLU
activation function led the pack with a mean score of 0.9832
(Table 2), indicating that the CNN’s feature extraction
capabilities are effective at decoding the MT’s output
patterns. 28% of all models were able to reach the 90% success
threshold (Fig. 6).</p>
        <p>LCG: The best result was achieved by a Hybrid model,
reaching a mean score of 0.9831 (Table 3). This underscores
the Hybrid model’s robustness in capturing both local and
long-range dependencies in LCG sequences. 60% of all models
were able to reach the 90% success threshold (Fig. 7).</p>
        <p>MiddleSquare: The top-performing model proved capable of
navigating the complex, squared calculations intrinsic to the
MiddleSquare algorithm. 64% of all models were able to reach
the 90% success threshold (Fig. 8).</p>
        <p>These results underscore the nuanced relationship between
PRNG algorithms and neural network architectures,
suggesting that no single model architecture is universally
superior. Instead, the optimal choice depends on the specific
characteristics and mechanisms of the PRNG being
predicted. While the models have been fine-tuned to achieve
high predictive accuracy, the graphical analysis indicates
that there is an inherent limitation to the exactness of these
predictions.</p>
        <p>A further performance plot illustrates the correlation
between the predicted and actual values of the PRNG
sequence. The near-perfect linear alignment along the
45-degree line suggests that the model’s predictions are highly
correlated with the actual PRNG outputs. The tight clustering
of the points around this line demonstrates the model’s
effectiveness in capturing the underlying pattern of the PRNG
sequence. However, the slight deviation of points from the
line implies that while the model can predict the general
trend and distribution of the PRNG outputs, it cannot
replicate the sequence with absolute precision.</p>
        <p>The scatter plot showing predictions versus actual
values (Fig. 10) for the Hybrid model using tanh activation,
16 neurons, and 3 layers reveals a close correspondence
between predicted and actual values. However, the
dispersion of points away from the line of perfect agreement
(where predicted values equal actual values) suggests that
while the model can approximate the PRNG’s output with
high fidelity, it cannot achieve complete accuracy. The
variance from the line of perfect prediction could be
attributed to the deterministic yet complex nature of
PRNGs, which inherently limits the predictability even with
sophisticated models.</p>
      </sec>
      <sec id="sec-9-2">
        <title>9.2. Continuous-output scenario analysis</title>
        <p>The continuous-output models demonstrated even higher
predictive accuracy, with the Hybrid model configured for
continuous predictions (Hybrid-C) achieving remarkable
success.</p>
        <p>For the MiddleSquare PRNG, the Hybrid-C model with tanh
activation, 16 neurons, 3 layers, and an output length of 3
achieved a near-perfect mean score of 0.9955. Only 29% of
all models were able to break the 90% success milestone
(Fig. 11).</p>
        <p>For the LCG PRNG, some models were also able to break the
90% success threshold (Fig. 12).</p>
        <p>For the Xorshift PRNG, the Hybrid-C model with ReLU
activation, 16 neurons, 2 layers, and an output length of 2
achieved a near-perfect mean score of 0.987906 (Table 7).
15% of all models were able to break the 90% success
threshold (Fig. 13).</p>
        <sec id="sec-9-2-1">
          <title>For the MT PRNG, the Hybrid-C model with relu activation, 32 neurons, 2 layers, and an output length of 2 achieved a near-perfect mean score of 0.985006 (Table 8). 53</title>
          <p>12% of all models were able to break the 90% success
threshold (Fig. 14).</p>
          <p>The examination of continuous-output models reveals a
notable enhancement in predictive performance compared
to single-output models. This is particularly evident in the
context of predicting sequences generated by the
MiddleSquare PRNG.</p>
          <p>The performance plot illustrating the correlation
between predicted and actual values (Fig. 15) for the
continuous-output model shows an even tighter linear
alignment than the single-output model. This near-perfect
correlation, along with a high success score of 0.9955,
reflects the model’s exceptional predictive accuracy. The
dense clustering of points along the diagonal suggests that
the model can reliably predict the MiddleSquare PRNG’s
output with high confidence, and such precision is
indicative of the model’s ability to capture both the
immediate and contextual dependencies within the PRNG’s
sequence.</p>
          <p>The scatter plot for the continuous-output Hybrid model,
which integrates CNN and LSTM architectures (Hybrid-C),
showcases a substantial concentration of points closely
aligned with the line of perfect prediction (Fig. 16). The
model, employing tanh activation with 16 neurons across 3
layers, exhibits a remarkable ability to track the actual
values throughout the sequence. This tight clustering
indicates a substantial reduction in prediction errors and a
strong alignment with the true PRNG sequence, suggesting
a deeper understanding of the underlying patterns by the
model.</p>
          <p>The continuous-output model’s superior performance, as
evidenced by the closer proximity of predicted to actual
values and the higher success score, highlights the benefit
of utilizing sequential context in PRNG output prediction.
The ability to forecast the sequence with a success score
reaching 0.9955 marks a significant milestone, suggesting
that models incorporating sequence history can more
effectively decode the deterministic yet complex structure
of PRNG outputs.</p>
          <p>This analysis implies that continuous-output models
hold great promise for applications where forecasting
accuracy over sequences is critical. The insights gleaned
from this research can inform the development of more
secure PRNGs, capable of withstanding sophisticated
sequential analysis. Future work will likely explore the
expansion of this approach to more complex and
higher-dimensional sequences, potentially integrating additional
layers of complexity and exploring the impact on model
performance.</p>
      </sec>
      <sec id="sec-9-3">
        <title>9.3. Model performance across PRNGs</title>
        <p>Our study’s findings highlight the nuanced nature of PRNG
output prediction, with different models excelling for
specific generators. This variation underscores the
importance of model selection tailored to the characteristics
of the PRNG being analyzed. For instance, the
best-performing model for the Xorshift generator might leverage
its unique XOR and shift operations, whereas the optimal
model for the Mersenne Twister (MT) would need to
account for its complex bit manipulation and tempering
techniques.</p>
        <p>Remarkably, the single-output models consistently
achieved a 98% success rate across various PRNGs,
demonstrating a high level of accuracy in predicting the
next output value based solely on a single preceding value.</p>
        <p>This success rate is indicative of the models’ ability to
decipher the underlying deterministic patterns that govern
PRNG outputs.</p>
        <p>Even more impressive, the continuous-output model,
which utilizes sequences of values to predict subsequent
outputs, reached a 99% success rate. This improvement
suggests that incorporating more context in the form of
continuous output sequences enables the models to better
capture the PRNGs’ inherent algorithms, leading to more
accurate predictions.</p>
      </sec>
      <sec id="sec-9-4">
        <title>9.4. Implications for PRNG Analysis and</title>
      </sec>
      <sec id="sec-9-5">
        <title>Security</title>
        <p>The success of our models in predicting PRNG outputs with
such high accuracy has profound implications for the fields
of cryptography and random number generation. While
PRNGs are designed to produce sequences that are difficult
to predict, our results suggest that advanced neural network
models can uncover and exploit hidden patterns within
these sequences. This finding calls for ongoing efforts to
enhance the unpredictability and security of PRNGs,
ensuring they remain robust against sophisticated
analytical techniques.</p>
      </sec>
    </sec>
    <sec id="sec-10">
      <title>10. Conclusions</title>
      <p>This research delves into the predictability of PRNGs using
advanced neural network models. Our study demonstrates
that the tested architectures possess a remarkable ability to
predict the outputs of various PRNGs, with enhanced
accuracy observed in continuous-output prediction
scenarios. The models showed superior performance in capturing
long-term dependencies within PRNG sequences, affirming
their suitability for complex sequence prediction tasks.</p>
      <p>Our findings illuminate the nuanced dynamics of PRNG
predictability and the potential vulnerabilities inherent
in commonly used generators. By leveraging neural
networks, we not only uncover the deterministic patterns
masked as randomness but also push the boundaries of
understanding in cryptographic security and random
number generation.</p>
        <p>Future research should explore the integration of more
complex neural architectures and the application of these
findings in real-world scenarios, such as secure
communications and cryptographic key generation. The
implications of our work suggest a pivotal shift towards
more secure and unpredictable PRNG designs, bolstering
the defenses against adversarial predictions and enhancing
the integrity of cryptographic systems.</p>
    </sec>
    <sec id="sec-11">
      <title>11. Future research directions</title>
      <p>These findings have significantly advanced our
understanding of the capabilities and limitations of current
PRNG technologies when subjected to advanced neural
network-based predictive models. The high success rates
achieved by these models, particularly the 99% success rate
with continuous-output models, not only demonstrate the
feasibility of predicting PRNG outputs but also underscore
the intricate patterns that deterministic algorithms
generate, patterns that sophisticated models can uncover.</p>
        <p>This study opens several avenues for future research,
aimed at both improving PRNG designs and developing
more advanced predictive models:</p>
        <p>Advanced PRNG Algorithms: There is a clear need for
the development of new PRNG algorithms that incorporate
mechanisms specifically designed to counteract the
capabilities of neural network-based predictive models.
Future research should focus on exploring algorithmic
complexities that can more effectively obscure deterministic
patterns.</p>
        <p>Neural Network Enhancements: Our research has
shown that certain neural network architectures are more
adept at predicting PRNG outputs than others. Investigating
the development of novel neural network models or hybrid
architectures that can more efficiently process and predict
complex sequences is an exciting frontier. This includes
exploring deeper networks, attention mechanisms, and
other advanced features that could further improve
prediction accuracy.</p>
        <p>Cross-Disciplinary Approaches: Combining insights
from cryptography, machine learning, and complexity
theory could yield innovative approaches to both PRNG
design and predictive modeling. Interdisciplinary research
might uncover new principles for creating sequences that
are inherently more difficult to predict, as well as models
that are more adept at understanding complex patterns.</p>
        <p>Real-World Application Scenarios: Applying our
findings to real-world scenarios, where PRNGs are used
under various constraints and for different purposes, will be
essential. This includes testing PRNGs in environments with
high-security requirements, such as in blockchain
technologies, secure communications, and digital
signatures.</p>
        <p>Ethical Considerations and Security Implications: As
research progresses in predicting PRNG outputs, it is
imperative to consider the ethical implications and potential
security risks associated with disseminating advanced
predictive models. Developing guidelines and best practices
for responsible research and application in this area is
crucial.</p>
        <p>Enhancing PRNG security: The ability of neural
networks to predict PRNG outputs with such accuracy
highlights an urgent need for the cryptographic community
to re-evaluate and enhance the design and implementation
of PRNGs. Ensuring that PRNGs can withstand analysis by
advanced predictive models is crucial for maintaining the
security and integrity of cryptographic systems, which rely
heavily on the unpredictability of these generators.</p>
      </sec>
    <sec id="sec-10">
      <title>Acknowledgment</title>
      <p>This work was supported by the Shota Rustaveli National
Foundation of Georgia (SRNSFG) [NFR-22-14060] as well as
the Ministry of Education and Science of Ukraine (grant
№0122U002361 “Intelligent system of secure packet data
transmission based on reconnaissance UAV”).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Cho</surname>
          </string-name>
          , et al.,
          <article-title>Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation</article-title>
          , arXiv:1406.1078 (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>I.</given-names>
            <surname>Sutskever</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Vinyals</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <article-title>Sequence to Sequence Learning with Neural Networks</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>27</volume>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hochreiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schmidhuber</surname>
          </string-name>
          ,
          <article-title>Long Short-Term Memory</article-title>
          ,
          <source>Neural Computation</source>
          <volume>9</volume>
          (
          <issue>8</issue>
          ) (
          <year>1997</year>
          )
          <fpage>1735</fpage>
          -
          <lpage>1780</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>F.</given-names>
            <surname>Gers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schmidhuber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cummins</surname>
          </string-name>
          , Learning to Forget:
          <article-title>Continual Prediction with LSTM</article-title>
          .
          <source>Neural Computation</source>
          <volume>12</volume>
          (
          <issue>10</issue>
          ) (
          <year>2000</year>
          )
          <fpage>2451</fpage>
          -
          <lpage>2471</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Graves</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.-R.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          , G. Hinton,
          <article-title>Speech Recognition with Deep Recurrent Neural Networks</article-title>
          ,
          <source>IEEE International Conference on Acoustics, Speech and Signal Processing</source>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Karpathy</surname>
          </string-name>
          ,
          <source>The Unreasonable Effectiveness of Recurrent Neural Networks</source>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Islam</surname>
          </string-name>
          , G. Chen,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <article-title>An Overview of Neural Network, American J</article-title>
          .
          <source>Neural Netw. Appl</source>
          .
          <volume>5</volume>
          (
          <issue>1</issue>
          ) (
          <year>2019</year>
          )
          <fpage>7</fpage>
          -
          <lpage>11</lpage>
          . doi: 10.11648/j.ajnna.20190501.12.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>K.</given-names>
            <surname>O'Shea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Nash</surname>
          </string-name>
          ,
          <article-title>An Introduction to Convolutional Neural Networks</article-title>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          , et al.,
          <article-title>A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects</article-title>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>T.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Aberer</surname>
          </string-name>
          ,
          <article-title>Hybrid Neural Networks for Learning the Trend in Time Series</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Psichogios</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ungar</surname>
          </string-name>
          ,
          <article-title>A Hybrid Neural Network-First Principles Approach to Process Modeling</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          , et al.,
          <article-title>Attention Is All You Need</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>30</volume>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Brownlee</surname>
          </string-name>
          ,
          <article-title>Deep Learning for Time Series Forecasting: Predict the Future with MLPs, CNNs and LSTMs in Python</article-title>
          ,
          <source>Machine Learning Mastery</source>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>V.</given-names>
            <surname>Desai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Patil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rao</surname>
          </string-name>
          ,
          <article-title>Using Layer Recurrent Neural Network to Generate Pseudo Random Number Sequences</article-title>
          ,
          <source>Int. J. Comput. Sci.</source>
          <volume>9</volume>
          (
          <year>2012</year>
          )
          <fpage>324</fpage>
          -
          <lpage>334</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>V.</given-names>
            <surname>Maksymovych</surname>
          </string-name>
          , et al.,
          <article-title>Hardware Modified Additive Fibonacci Generators Using Prime Numbers</article-title>
          ,
          <source>Advances in Computer Science for Engineering and Education VI, LNDECT</source>
          <volume>181</volume>
          (
          <year>2023</year>
          ). doi:10.1007/978-3-031-36118-0_44.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>V.</given-names>
            <surname>Maksymovych</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Harasymchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Shabatura</surname>
          </string-name>
          ,
          <article-title>Modified Generators of Poisson Pulse Sequences Based on Linear Feedback Shift Registers</article-title>
          ,
          <source>Advances in Intelligent Systems and Computing, AISC</source>
          <volume>1247</volume>
          (
          <year>2021</year>
          )
          <fpage>317</fpage>
          -
          <lpage>326</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>V.</given-names>
            <surname>Maksymovych</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Harasymchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Opirskyy</surname>
          </string-name>
          ,
          <article-title>The Designing and Research of Generators of Poisson Pulse Sequences on Base of Fibonacci Modified Additive Generator</article-title>
          ,
          <source>International Conference on Theory and Applications of Fuzzy Systems and Soft Computing, ICCSEEA 2018: Advances in Intelligent Systems and Computing</source>
          <volume>754</volume>
          (
          <year>2019</year>
          )
          <fpage>43</fpage>
          -
          <lpage>53</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>R.</given-names>
            <surname>Hamza</surname>
          </string-name>
          ,
          <article-title>A Novel Pseudo Random Sequence Generator for Image-Cryptographic Applications</article-title>
          ,
          <source>J. Info. Secur. Appl.</source>
          <volume>35</volume>
          (
          <year>2017</year>
          )
          <fpage>119</fpage>
          -
          <lpage>127</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>O.</given-names>
            <surname>Harasymchuk</surname>
          </string-name>
          ,
          <article-title>Generator of Pseudorandom Bit Sequence with Increased Cryptographic Security</article-title>
          ,
          <source>Metallurgical and Mining Industry: Sci. Tech. J</source>
          .
          <volume>6</volume>
          (
          <issue>5</issue>
          ) (
          <year>2014</year>
          )
          <fpage>24</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>O'Neill</surname>
          </string-name>
          ,
          <article-title>PCG: A Family of Simple Fast Space-Efficient Statistically Good Algorithms for Random Number Generation</article-title>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>M.</given-names>
            <surname>Matsumoto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Nishimura</surname>
          </string-name>
          ,
          <article-title>Dynamic Creation of Pseudorandom Number Generators</article-title>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>B.</given-names>
            <surname>Widynski</surname>
          </string-name>
          ,
          <article-title>Middle-Square Weyl Sequence RNG</article-title>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>K.</given-names>
            <surname>Okada</surname>
          </string-name>
          , et al.,
          <article-title>Learned Pseudo-Random Number Generator: WGAN-GP for Generating Statistically Robust Random Numbers</article-title>
          ,
          <source>PLoS One</source>
          <volume>18</volume>
          (
          <issue>6</issue>
          ) (
          <year>2023</year>
          ). doi:10.1371/journal.pone.0287025
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>