<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Detection of Hidden Information in Graphic Files using Machine Learning</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>IntexSoft LLC</institution>
          ,
          <addr-line>Grodno</addr-line>
          ,
          <country country="BY">Belarus</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Yanka Kupala State University of Grodno</institution>
          ,
          <addr-line>Grodno</addr-line>
          ,
          <country country="BY">Belarus</country>
        </aff>
      </contrib-group>
      <fpage>0000</fpage>
      <lpage>0003</lpage>
      <abstract>
        <p>A method for detecting the presence of hidden information in graphic files based on machine learning is considered. Detection is carried out in the absence of any data on the original algorithm used to embed the hidden information; in steganalysis, methods that solve problems of this type are usually called blind. Methods of forming datasets for training machine learning models using wavelet decomposition are described, together with the results of testing the trained models on these datasets.</p>
      </abstract>
      <kwd-group>
        <kwd>hidden information</kwd>
        <kwd>steganography</kwd>
        <kwd>steganalysis</kwd>
        <kwd>stegocontainer</kwd>
        <kwd>graphical stegocontainer</kwd>
        <kwd>blind steganalysis method</kwd>
        <kwd>machine learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Throughout the history of public relations, there has been a need to hide information or
share it unnoticed by others. The combination of methods developed for these purposes
has formed a scientific direction known as steganography.</p>
      <p>Modern steganography methods widely use computer technology to embed hidden
digital information (stego information) into other digital data called “container files” or
“stegocontainers”, such as digital images, audio or video data, text, or even network
packets.</p>
      <p>In contrast to cryptography, which hides the meaning of the transmitted message,
steganography hides the very fact of message transmission, which is in some ways an
advantage, since no unnecessary attention is attracted. The interest in
steganographic methods that has been growing in recent years is largely due to the fact that, in
contrast to cryptography, the use of steganography is practically unregulated by law.</p>
      <p>
        One of the key requirements for steganographic algorithms is that embedding
information in a stego container should not noticeably change the size of the
file or the quality of the container, such as an image or a sound recording. Therefore, steganographic
algorithms often exploit the limitations of human perception. For
example, if information is hidden in images, stego algorithms change the intensity
of colors so that, on the one hand, these changes encode the stego information
and, on the other hand, the changes are not perceived by the human
eye. Algorithms for working with sound are based on the same principle: the
embedded information changes the high frequencies of the audio signal, which is usually
not noticeable when listening [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>Steganographic systems protect information primarily from the point of view of
behavioral security: by hiding the existence of information and of the communication itself,
they help ensure the security of important information. Because of this powerful ability to
hide information, concealment systems play an important role in protecting privacy
and security in cyberspace.</p>
      <p>
        There are various storage media that can be used to hide information, including
images [
        <xref ref-type="bibr" rid="ref2 ref3">2,3</xref>
        ], audio [
        <xref ref-type="bibr" rid="ref4 ref5">4,5</xref>
        ], text [
        <xref ref-type="bibr" rid="ref6 ref7 ref8">6-8</xref>
        ], etc. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Among them, images have a large
information capacity, which in recent years has made them a widely
studied and used steganographic medium. However, besides protecting information security,
these concealment systems can also be used by cybercriminals to transmit
malicious information, which creates potential risks for cyberspace security [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
Therefore, the study and development of effective methods of steganalysis is becoming an
increasingly important and difficult task.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Tools and Scheme of Information Hiding</title>
      <p>One of the most common types of files in which messages are embedded is the digital image.
Digital images are typically presented in the *.bmp, *.jpeg, *.png or *.gif
(without animation) formats. The message can be any type of digital
information, for example a file of a certain type or a line of text.</p>
      <p>To write a hidden message to a file, special programs that implement stego
algorithms are used. Once the embedding algorithm is known, one can write software
that analyzes the scanned images and determines which of them contain hidden information and
which do not.</p>
      <p>Algorithmically, steganography consists of two phases: hiding information
and extracting it. The hiding process embeds the message (for example, a line
entered in the terminal, or another file) in the media file (the container). As a
result, we get a container object with an embedded message. In the extraction process,
conversely, the original message is recovered from the stego container. To guard against
the case where the presence of a hidden message is nevertheless detected, most steganographic
programs encrypt the message before embedding it. The basic model of
steganography is shown in Fig. 1.</p>
    </sec>
    <sec id="sec-3">
      <title>Images Containing Hidden Data</title>
      <p>
        Steganography algorithms work differently and, accordingly, produce different types
of distortion of the original information. Because of this, it is hardly possible to
write a clear deterministic algorithm for detecting the presence of a steganographic
insertion; it is precisely in such situations that machine learning methods are used.
Over the past decade, many steganographic algorithms have been proposed for hiding
data within a stego container. Such embedding schemes can work in the spatial domain,
for example MiPOD [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], STABYLO [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], S-UNIWARD [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], HILL [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ],
WOW [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] or HUGO [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], as well as in the frequency domain of the image.
      </p>
      <p>The vast majority of approaches to image steganalysis are two-stage. At the first
stage, useful information about the image content is generated by calculating a set of
attributes; at the second stage, this information is used to train a machine learning model that
can distinguish empty stego containers from containers with hidden information.</p>
      <p>
        For the first step, various Rich Models (RM) for the spatial domain (SRM) [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] and
JPEG [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] were proposed, while for the second step, the most common choice is
Ensemble Classifier (EC) [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. This RM + EC combination is used in many modern image
steganalysis tools. For example, in [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], stego images obtained with the HUGO steganographic
algorithm were detected with error rates of 13% and 37% for embedding
payloads of 0.4 and 0.1 bpp, respectively. These errors were slightly reduced (to 12% and 36%) in [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], where
a similar model was applied to stego images obtained using the J-UNIWARD
steganographic scheme.
      </p>
      <p>Since we are dealing with the task of blindly detecting the presence of
stegoinformation, we should train the machine learning model using examples created using
different algorithms.</p>
      <p>
        In this work, the following programs were used to create training and test data sets:
─ Steganography Software F5 (algorithm f5) [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ];
─ StegHide (steganography based on graph theory) [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ];
─ OpenStego (RandomLSB, a modified least significant bit algorithm) [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ].
The graphic files used as containers in this work were taken from open collections on
the Internet, as well as from photographers' blogs, without copyright
infringement. A total of 750 images were selected.
      </p>
      <p>Most of these images were originally in high definition. However, machine
learning models are trained more effectively on small, previously prepared examples.
Therefore, the images were downscaled to sizes between 640 x 480 and
1147 x 768.</p>
    </sec>
    <sec id="sec-4">
      <title>Generating Characteristic Files</title>
      <p>To use machine learning models, it is necessary to form a dataset of attributes based on
the original graphic images that reflects the characteristic features of "clean"
images and of images containing stego information. The dataset is presented as a CSV
file. Each dataset record contains 84 features, plus an 85th classification feature:
"0" for a "clean" image and "1" otherwise. As a result, the dataset contains 3000
records: 750 records for "clean" images, and 750 records each for images with
stego information embedded by Steganography Software F5, StegHide, and OpenStego,
respectively.</p>
      <p>The features characterizing a graphic image are generated using discrete wavelet
transforms and statistical moments of orders 1 through 4.</p>
      <p>
        By the time this work was completed, there had already been attempts to detect the
presence of steganographic content in graphic files using the discrete Fourier transform [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]
or wavelet transforms [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] in combination with machine learning methods.
      </p>
      <p>
        The wavelet transform is an integral transform computed as the convolution of a wavelet
function with a signal. It translates the signal from the time
domain into the time-frequency domain [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>Since we are dealing with digital images, it is worth considering and applying
discrete wavelet transforms. The decomposition will be performed using the
Haar, db2, bior1.3, and rbio1.3 wavelet functions (Fig. 2).</p>
      <p>Since each color channel in an image is represented by a rectangular matrix, the
discrete wavelet transform must be two-dimensional. This can be done using the
wavedec2 function from the PyWavelets module for Python.
Usually, when analyzing images with a two-dimensional wavelet transform,
decomposition no higher than the third level is used; higher levels usually do
not provide valuable additional information.</p>
      <p>The wavedec2 function returns a structure of the form [cAn, (cHn, cVn, cDn), ...,
(cH1, cV1, cD1)], where:
─ cAn is the approximation coefficient array of the nth decomposition level;
─ cHn is the horizontal detail coefficient array of the nth level;
─ cVn is the vertical detail coefficient array of the nth level;
─ cDn is the diagonal detail coefficient array of the nth level.</p>
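      <p>The structure returned by wavedec2 can be sketched as follows (the image dimensions here are illustrative; with the Haar wavelet each level halves both dimensions exactly):</p>

```python
import numpy as np
import pywt

# Three-level 2-D decomposition of one color channel with the Haar wavelet.
channel = np.random.rand(480, 640)
coeffs = pywt.wavedec2(channel, wavelet="haar", level=3)

cA3 = coeffs[0]               # approximation coefficients, level 3
(cH3, cV3, cD3) = coeffs[1]   # detail coefficients, level 3
(cH1, cV1, cD1) = coeffs[-1]  # detail coefficients, level 1

print(cA3.shape, cH3.shape, cH1.shape)  # (60, 80) (60, 80) (240, 320)
```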
      <p>Each of these coefficient arrays is multidimensional. To apply statistical methods to
it, we first make it one-dimensional using the flatten() function from the NumPy module
for Python. Before creating a frequency dictionary, we additionally round all the
coefficients to integers. This avoids having many coefficients in the
frequency dictionary that differ only slightly from each other and each have a frequency of 1.
Next, we calculate the statistical moments from the first to the fourth order over this
frequency distribution using the describe method from the scipy.stats module. The
describe method returns an object from which we can get: stat.mean, the average value;
stat.variance, the variance; stat.skewness, the skewness coefficient; and stat.kurtosis, the
kurtosis coefficient.</p>
      <p>Datasets in CSV format for the other wavelet functions can be
obtained in the same way.</p>
    </sec>
    <sec id="sec-5">
      <title>Machine Learning Methods in Steganalysis</title>
      <p>Steganalysis can be treated as a two-class classification problem in machine
learning, for which a whole range of methods can be used: for example,
K-nearest neighbors, decision trees, random forests, support vector machines, and
neural network technologies.</p>
      <p>Traditional modern techniques of image steganalysis typically consist of a classifier
trained on the features provided by rich models. Since the stages of feature extraction and
classification map naturally onto deep learning architectures and
convolutional neural networks (CNN), various studies have tried to develop a
CNN-based stego analyzer.</p>
      <p>
        Deep learning [
        <xref ref-type="bibr" rid="ref25 ref26">25-26</xref>
        ] led to breakthrough improvements in various complex tasks
in the field of computer vision, becoming the state of the art for many of them. A key reason for
this success is the current availability of powerful computing platforms, in particular
GPU-accelerated ones. Among the various network architectures that belong to this
family of machine learning methods, convolutional neural networks (CNNs) [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ] are
very effective for solving image classification problems. For example, in the MNIST
problem, which consists of the automatic recognition of handwritten numbers [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ] or
the tasks of the CIFAR benchmark test [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ]. Since steganalysis is a similar problem, in which
the goal is to classify the input image as either a cover or a stego image, the development of
CNN-based stego analyzers has attracted increasing attention in the past few years.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Application of Machine Learning Methods</title>
      <p>The task of blindly detecting the presence of stego information in a graphic image,
considered in this paper, is a binary classification problem and can be solved
within the framework of supervised machine learning.</p>
      <p>An experiment was conducted on applying the following methods to this
problem, using their implementations from the scikit-learn library for Python:
─ K-nearest neighbors algorithm;
─ naive Bayes classifier;
─ decision tree;
─ logistic regression;
─ support vector machine;
─ feedforward neural network.</p>
      <p>Records of the prepared dataset were evenly mixed and divided into training and test
sets in a ratio of 8:2.</p>
      <p>For more robust model training, a cross-validation mechanism was used, in which
the training sample is divided into n equal parts and the
training process is repeated n times. On the kth repetition, the model is trained on all parts
except the kth one, and that kth part of the training data is used to assess the
quality of training.</p>
      <p>The experiment was carried out using two methods of scaling (normalizing) the
dataset: MinMaxScaler and StandardScaler. With min-max normalization, the
values of each attribute across all records of the dataset are rescaled to fit into a
fixed range (from 0 to 1). With standard normalization, each attribute is transformed to have a mean of 0 and
a standard deviation of 1.</p>
      <p>After the normalization phase, the model training phase was carried out.
Training was performed on a pair of feature sets, training and test, generated
using the selected wavelet function.</p>
      <p>Machine learning algorithms have, depending on their type, parameters and/or
hyperparameters. For convenience, where possible, the enumeration of parameter variants
was automated using the GridSearchCV tool from the scikit-learn library.</p>
      <p>Some learning outcomes are presented in Table 1.</p>
      <p>As can be seen from the table, the best results were obtained with the support vector
machine and with a feedforward neural network with the "multilayer perceptron"
architecture with two hidden layers, each containing several tens of neurons.</p>
      <p>In the case of using the db2 and bior1.3 wavelet functions, the results were worse
than when using the Haar wavelet function.</p>
      <p>They are also worse when using min-max normalization instead of the standard one.</p>
      <p>The features associated with the moments of the third and fourth orders had the most
significant influence on the result. Features associated with
moments of the first and second orders did not significantly affect the result, nor did the
features calculated on the horizontal decomposition coefficients.</p>
    </sec>
    <sec id="sec-7">
      <title>Conclusions</title>
      <p>Based on the approaches described in this work, a console application was developed
using the scikit-learn library. It applies classical machine
learning methods and a trained neural network to search for signs of hidden
information in graphic files.</p>
      <p>As a result of the experiment on the blind detection of stego information in graphic
files using machine learning methods, it was found that the best results were obtained by
the support vector machine with parameters C = 1000, gamma = 0.001, and kernel =
RBF, and by a multilayer perceptron with two hidden layers (90 and 20 neurons).</p>
      <p>Effective features are the skewness and kurtosis coefficients calculated over the
frequency distributions of the approximation, vertical, and diagonal coefficients of a
two-dimensional three-level wavelet transform using the Haar wavelet function. Using the
trained models, it is possible to predict with a probability close to 0.7 whether a graphic
image contains a hidden message or not.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Wheeler</surname>
            <given-names>D.</given-names>
          </string-name>
          <article-title>Audio Steganography Using High Frequency Noise Introduction</article-title>
          (
          <year>2012</year>
          ). Available at: https://pdfs.semanticscholar.org/d547/3318c5c9171fe38abc550b89a15022d559cb.pdf
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Fridrich</surname>
            <given-names>J.</given-names>
          </string-name>
          <article-title>Steganography in digital media: principles, algorithms, and applications</article-title>
          . Cambridge University Press,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Chen</surname>
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            <given-names>W.</given-names>
          </string-name>
          , Zhang W., and
          <string-name>
            <surname>Yu</surname>
            <given-names>N.</given-names>
          </string-name>
          <article-title>Defining cost functions for adaptive jpeg steganography at the microscale</article-title>
          .
          <source>IEEE Transactions on Information Forensics and Security</source>
          , vol.
          <volume>14</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>1052</fpage>
          -
          <lpage>1066</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Yang</surname>
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peng</surname>
            <given-names>X.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Huang</surname>
            <given-names>Y.</given-names>
          </string-name>
          <article-title>A sudoku matrix-based method of pitch period steganography in low-rate speech coding</article-title>
          .
          <source>In International Conference on Security and Privacy in Communication Systems</source>
          . Springer,
          <year>2017</year>
          , pp.
          <fpage>752</fpage>
          -
          <lpage>762</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Yang</surname>
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Du</surname>
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tan</surname>
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            <given-names>Y.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Zhang</surname>
            <given-names>Y.-J.</given-names>
          </string-name>
          <article-title>Aag-stega: Automatic audio generation-based steganography</article-title>
          .
          <source>ArXiv</source>
          preprint arXiv:
          <year>1809</year>
          .03463,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Yang Z.-L.</surname>
          </string-name>
          ,
          <string-name>
            <surname>Guo</surname>
            <given-names>X.-Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen Z.-M.</surname>
          </string-name>
          ,
          <string-name>
            <surname>Huang Y.-F.</surname>
          </string-name>
          , and
          <string-name>
            <surname>Zhang</surname>
            <given-names>Y.-J.</given-names>
          </string-name>
          <article-title>Rnn-stega: Linguistic steganography based on recurrent neural networks</article-title>
          .
          <source>IEEE Transactions on Information Forensics and Security</source>
          , vol.
          <volume>14</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>1280</fpage>
          -
          <lpage>1295</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Yang</surname>
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jiang</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            <given-names>Y.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Zhang</surname>
            <given-names>Y.-J.</given-names>
          </string-name>
          <article-title>Rits: Real-time interactive text steganography based on automatic dialogue model</article-title>
          .
          <source>In International Conference on Cloud Computing and Security</source>
          . Springer,
          <year>2018</year>
          , pp.
          <fpage>253</fpage>
          -
          <lpage>264</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Yang</surname>
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jin</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            <given-names>Y.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Li H</surname>
          </string-name>
          .
          <article-title>Automatically generate steganographic text based on Markov model and Huffman coding</article-title>
          .
          <source>ArXiv</source>
          preprint arXiv:
          <year>1811</year>
          .04720,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Johnson</surname>
            <given-names>N. F.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Sallee</surname>
            <given-names>P. A.</given-names>
          </string-name>
          <article-title>Detection of hidden information, covert channels and information flows</article-title>
          .
          <source>Wiley Handbook of Science and Technology for Homeland Security</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Theohary</surname>
            <given-names>C. A.</given-names>
          </string-name>
          <article-title>Terrorist use of the internet: Information operations in cyberspace</article-title>
          .
          <source>DIANE Publishing</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Sedighi</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cogranne</surname>
            <given-names>R.</given-names>
          </string-name>
          , and Fridrich J.
          <article-title>Content-adaptive steganography by minimizing statistical detectability</article-title>
          .
          <source>IEEE Transactions on Information Forensics and Security</source>
          , vol.
          <volume>11</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>221</fpage>
          -
          <lpage>234</lpage>
          ,
          <year>Feb 2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Couchot</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Couturier</surname>
            <given-names>R.</given-names>
          </string-name>
          , and Guyeux C.
          <article-title>STABYLO: steganography with adaptive, bbs, and binary embedding at low cost</article-title>
          .
          <source>Annales des Télécommunications</source>
          , vol.
          <volume>70</volume>
          , no.
          <issue>9-10</issue>
          , pp.
          <fpage>441</fpage>
          -
          <lpage>449</lpage>
          ,
          <year>2015</year>
          . [Online]. DOI: http://dx.doi.org/10.1007/s12243-015-0466-7
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Holub</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fridrich</surname>
            <given-names>J.</given-names>
          </string-name>
          , and Denemark T.
          <article-title>Universal distortion function for steganography in an arbitrary domain</article-title>
          .
          <source>EURASIP Journal on Information Security</source>
          , vol.
          <year>2014</year>
          , no.
          <issue>1</issue>
          ,
          <year>2014</year>
          . [Online]. DOI: http://dx.doi.org/10.1186/
          <fpage>1687</fpage>
          -417X-2014-1
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Li</surname>
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Li</surname>
            <given-names>X.</given-names>
          </string-name>
          <article-title>A new cost function for spatial image steganography</article-title>
          .
          <source>in 2014 IEEE International Conference on Image Processing (ICIP)</source>
          . IEEE,
          <year>2014</year>
          , pp.
          <fpage>4206</fpage>
          -
          <lpage>4210</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Holub</surname>
            <given-names>V.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Fridrich</surname>
            <given-names>J. J.</given-names>
          </string-name>
          <article-title>Designing steganographic distortion using directional filters</article-title>
          .
          <source>In WIFS. IEEE</source>
          ,
          <year>2012</year>
          , pp.
          <fpage>234</fpage>
          -
          <lpage>239</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Pevny</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Filler</surname>
            <given-names>T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Bas</surname>
            <given-names>P.</given-names>
          </string-name>
          <article-title>Using high-dimensional image models to perform highly undetectable steganography</article-title>
          .
          <source>In Information Hiding - 12th International Conference, IH 2010</source>
          , Calgary, AB, Canada, June 28-30,
          <year>2010</year>
          , Revised Selected Papers, ser. Lecture Notes in Computer Science, R. Böhme,
          <string-name>
            <given-names>P. W. L.</given-names>
            <surname>Fong</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Safavi-Naini</surname>
          </string-name>
          , Eds., vol.
          <volume>6387</volume>
          . Springer,
          <year>2010</year>
          , pp.
          <fpage>161</fpage>
          -
          <lpage>177</lpage>
          . [Online]. DOI: http://dx.doi.org/10.1007/978-3-642-16435-4
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Fridrich</surname>
            <given-names>J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kodovsky</surname>
            <given-names>J.</given-names>
          </string-name>
          <article-title>Multivariate Gaussian model for designing additive distortion for steganography</article-title>
          .
          <source>In Acoustics, Speech, and Signal Processing (ICASSP)</source>
          ,
          <source>2013 IEEE International Conference on, May</source>
          <year>2013</year>
          , pp.
          <fpage>2949</fpage>
          -
          <lpage>2953</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Holub</surname>
            <given-names>V.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Fridrich</surname>
            <given-names>J. J.</given-names>
          </string-name>
          <article-title>Low-complexity features for JPEG steganalysis using undecimated DCT</article-title>
          .
          <source>IEEE Trans. Information Forensics and Security</source>
          , vol.
          <volume>10</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>219</fpage>
          -
          <lpage>228</lpage>
          ,
          <year>2015</year>
          . [Online]. DOI: http://dx.doi.org/10.1109/TIFS.2014.2364918
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Kodovsky</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fridrich</surname>
            <given-names>J. J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Holub</surname>
            <given-names>V.</given-names>
          </string-name>
          <article-title>Ensemble classifiers for steganalysis of digital media</article-title>
          .
          <source>IEEE Transactions on Information Forensics and Security</source>
          , vol.
          <volume>7</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>432</fpage>
          -
          <lpage>444</lpage>
          ,
          <year>2012</year>
          . [Online]. DOI: http://dx.doi.org/10.1109/TIFS.2011.2175919
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Holub</surname>
            <given-names>V.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Fridrich</surname>
            <given-names>J. J.</given-names>
          </string-name>
          <article-title>Random projections of residuals for digital image steganalysis</article-title>
          .
          <source>IEEE Trans. Information Forensics and Security</source>
          , vol.
          <volume>8</volume>
          , no.
          <issue>12</issue>
          , pp.
          <fpage>1996</fpage>
          -
          <lpage>2006</lpage>
          ,
          <year>2013</year>
          . [Online]. DOI: http://dx.doi.org/10.1109/TIFS.2013.2286682
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Dang-Nguyen</surname>
            <given-names>D.-T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pasquini</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Conotter</surname>
            <given-names>V.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Boato</surname>
            <given-names>G.</given-names>
          </string-name>
          <article-title>RAISE: a raw images dataset for digital image forensics</article-title>
          .
          <source>In Proceedings of the 6th ACM Multimedia Systems Conference, MMSys</source>
          <year>2015</year>
          , Portland, OR, USA, March 18-20,
          <year>2015</year>
          ,
          <string-name>
            <given-names>W. T.</given-names>
            <surname>Ooi</surname>
          </string-name>
          , W.-c. Feng, and F. Liu, Eds. ACM,
          <year>2015</year>
          , pp.
          <fpage>219</fpage>
          -
          <lpage>224</lpage>
          . [Online]. Available at: http://dl.acm.org/citation.cfm?id=2713168
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <article-title>F5-steganography. The world's leading software development platform - GitHub</article-title>
          . Available at: https://github.com/matthewgao/F5-steganography
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23. Steghide. Sourceforge. Available at: http://steghide.sourceforge.net
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>24. OpenStego. Available at: https://www.openstego.com</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Dang-Nguyen</surname>
            <given-names>D.-T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pasquini</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Conotter</surname>
            <given-names>V.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Boato</surname>
            <given-names>G.</given-names>
          </string-name>
          <article-title>RAISE: a raw images dataset for digital image forensics</article-title>
          .
          <source>In Proceedings of the 6th ACM Multimedia Systems Conference, MMSys</source>
          <year>2015</year>
          , Portland, OR, USA, March 18-20,
          <year>2015</year>
          ,
          <string-name>
            <given-names>W. T.</given-names>
            <surname>Ooi</surname>
          </string-name>
          , W.-c. Feng, and F. Liu, Eds. ACM,
          <year>2015</year>
          , pp.
          <fpage>219</fpage>
          -
          <lpage>224</lpage>
          . [Online]. Available at: http://dl.acm.org/citation.cfm?id=2713168
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>LeCun</surname>
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bengio</surname>
            <given-names>Y.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Hinton</surname>
            <given-names>G.</given-names>
          </string-name>
          <article-title>Deep learning</article-title>
          .
          <source>Nature</source>
          , vol.
          <volume>521</volume>
          , no.
          <issue>7553</issue>
          , pp.
          <fpage>436</fpage>
          -
          <lpage>444</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Krizhevsky</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sutskever</surname>
            <given-names>I.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Hinton</surname>
            <given-names>G. E.</given-names>
          </string-name>
          <article-title>Imagenet classification with deep convolutional neural networks</article-title>
          .
          <source>In Advances in neural information processing systems</source>
          ,
          <year>2012</year>
          , pp.
          <fpage>1097</fpage>
          -
          <lpage>1105</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Wan</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zeiler</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cun</surname>
            <given-names>Y. L.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Fergus</surname>
            <given-names>R.</given-names>
          </string-name>
          <article-title>Regularization of neural networks using dropconnect</article-title>
          .
          <source>In Proceedings of the 30th International Conference on Machine Learning (ICML-13)</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>1058</fpage>
          -
          <lpage>1066</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Xu</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            <given-names>H.-Z.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Shi</surname>
            <given-names>Y.-Q.</given-names>
          </string-name>
          <article-title>Structural design of convolutional neural networks for steganalysis</article-title>
          .
          <source>IEEE Signal Processing Letters</source>
          , vol.
          <volume>23</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>708</fpage>
          -
          <lpage>712</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>