<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Non-Linear Analytic Prediction of IP Addresses for Supporting Cyber Attack Detection and Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alfredo Cuzzocrea</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Enzo Mumolo</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Edoardo Fadda</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Selim Soufargi</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carson K. Leung</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Politecnico di Torino &amp; ISIRES</institution>
          ,
          <addr-line>Torino</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Manitoba</institution>
          ,
          <addr-line>Winnipeg, MB</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Trieste</institution>
          ,
          <addr-line>Trieste</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>iDEA Lab, University of Calabria</institution>
          ,
          <addr-line>Rende</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Computer network systems are often subject to several types of attacks. For example, the distributed Denial of Service (DDoS) attack introduces an excessive traffic load on a web server to make it unusable. A popular method for detecting attacks is to use the sequence of source IP addresses to detect possible anomalies. With the aim of predicting the next IP address, the probability density function of the IP address sequence is estimated. Predicting the source IP addresses of future accesses to the server makes it possible to detect anomalous requests. In this paper, we treat the sequence of IP addresses as a numerical sequence and develop a nonlinear analysis of that sequence, based on Volterra kernels and Hammerstein models. The experiments carried out with datasets of source IP address sequences show that the prediction errors obtained with Hammerstein models are smaller than those obtained both with the Volterra kernels and with sequence clustering by means of the K-Means algorithm.</p>
      </abstract>
      <kwd-group>
<kwd>Cyber Attack</kwd>
        <kwd>Distributed Denial of Service</kwd>
        <kwd>Hammerstein Models</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>User modeling is an important task for web applications dealing with large traffic flows. User models can be used for a variety of applications, such as predicting future situations or classifying current states. Furthermore, user modeling can improve the detection and mitigation of Distributed Denial of Service (DDoS) attacks [1, 2, 3], improve the quality of service (QoS) [4], support click fraud detection, and optimize traffic management. In peer-to-peer (P2P) overlay networks, IP models can also be used for optimizing request routing [5]. These techniques are used by servers to decide how to manage the current traffic. In this context, outlier detection methods are also often used when only one class is known. If, for example, an Intrusion Prevention System wants to mitigate DDoS attacks, it has usually only seen the normal traffic class before, and it has to detect the outlier class by its different behavior. In this paper we deal with the management of DDoS attacks because they have become a major threat on the internet. These attacks are carried out using large-scale networks of infected PCs (bots or zombies) that combine their bandwidth and computational power in order to overload a publicly available service and deny it to legitimate users. Due to the open structure of the internet, all public servers are vulnerable to DDoS attacks. The bots are usually acquired automatically by hackers who use software tools to scan the network, detecting vulnerabilities and exploiting the target machines. Furthermore, there is a strong need to mitigate DDoS attacks near the target, which seems to be the only viable solution in the current internet infrastructure. The aim of such a protection system is to limit the destabilizing effect of the attacks on the server by identifying malicious requests. There are multiple strategies for dealing with DDoS attacks. The most effective ones are the near-target filtering solutions: they estimate normal user behavior based on IP packet header information, and during an attack the access of outliers is denied. One parameter that all methods have in common is the source IP address of the users; it is the main discriminant for DDoS traffic classification. However, the methods of storing IP addresses and estimating their density in the huge IP address space differ. In this paper, we present a novel approach based on system identification techniques and, in particular, on Hammerstein models.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Non-Linear Analytic Prediction of IP Addresses</title>
<p>Data-driven identification of mathematical models of physical (i.e., nonlinear) systems starts with representing the system as a black box: while we may have access to the inputs and outputs, the internal mechanisms are totally unknown to us. Once a model type is chosen to represent the system, its parameters are estimated through an optimization algorithm so that the model eventually mimics, at a certain level of fidelity, the inner mechanism of the nonlinear system or process from its inputs and outputs. This approach is, for instance, widely used in the related big data analytics area (e.g., [6, 7, 8, 9, 10, 11, 12, 13, 14]).</p>
<p>In this work, we consider a particular sub-class of nonlinear predictors: the linear-in-the-parameters (LIP) predictors. LIP predictors are characterized by a linear dependence of the predictor output on the predictor coefficients. Such predictors are inherently stable, and they converge to a globally minimum solution (in contrast to other types of nonlinear filters, whose cost function may exhibit many local minima), avoiding the undesired possibility of getting stuck in a local minimum. Let us consider a causal, time-invariant, finite-memory, continuous nonlinear predictor as described in (1).</p>
<p>x̂(n) = f[x(n − 1), . . . , x(n − N)] (1)
where f[·] is a continuous function, x(n) is the input signal and x̂(n) is the predicted sample. We can expand f[·] with a series of basis functions f_i(·), as shown in (2).</p>
<p>x̂(n) = ∑_{i=1}^{∞} h(i) f_i[x(n − i)] (2)
where the h(i) are proper coefficients. To make (2) realizable, we truncate the series to the first N terms, thus obtaining
x̂(n) = ∑_{i=1}^{N} h(i) f_i[x(n − i)] (3)
In the general case, a linear-in-the-parameters nonlinear predictor is described by the input-output relationship reported in (4),
x̂(n) = H⃗ X⃗(n) (4)
where H⃗ is a row vector containing the predictor coefficients and X⃗(n) is the corresponding column vector whose elements are nonlinear combinations and/or expansions of the input samples.</p>
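<p>As a concrete illustration of the LIP idea above, the following sketch (with hypothetical helper names, not taken from the paper) evaluates a predictor whose output is linear in the coefficients even though it is nonlinear in the input: the prediction is the inner product of a coefficient vector with a nonlinear expansion of the past samples.</p>
<preformat>
```python
# Sketch of a linear-in-the-parameters (LIP) predictor.  The names
# `expand` and `lip_predict` are illustrative, not from the paper.

def expand(past):
    """Example expansion: the past samples plus their pairwise products."""
    x = list(past)
    for i in range(len(past)):
        for j in range(i, len(past)):
            x.append(past[i] * past[j])
    return x

def lip_predict(coeffs, past):
    """x_hat(n) = H . X(n), with X(n) a nonlinear expansion of the past."""
    x = expand(past)
    assert len(coeffs) == len(x)
    return sum(h * xi for h, xi in zip(coeffs, x))

# Two past samples -> expansion [x1, x2, x1*x1, x1*x2, x2*x2]
print(lip_predict([0.5, 0.25, 0.0, 0.1, 0.0], [2.0, 4.0]))  # 2.8
```
</preformat>
<p>Whatever the expansion, the output stays linear in the coefficients, which is what makes the estimation problems of Section 3 convex.</p>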
      <sec id="sec-2-1">
        <title>2.1. Linear Predictor</title>
<p>Linear prediction is a well-known technique with a long history [15]. Given a time series x⃗, linear prediction is the optimum approximation of sample x(n) with a linear combination of the N most recent samples. That means that the linear predictor is described as in (5):
x̂(n) = ∑_{i=1}^{N} h1(i) x(n − i) (5)
In matrix terms, the linear predictor is
x̂(n) = H⃗ X⃗(n) (6)
where the coefficient and input vectors are reported in (7) and (8):
H⃗ = [h1(1) h1(2) . . . h1(N)] (7)
X⃗(n) = [x(n − 1) x(n − 2) . . . x(n − N)]ᵀ (8)</p>
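<p>A minimal sketch of the linear predictor of eq. (5), with an illustrative (not from the paper) toy sequence:</p>
<preformat>
```python
# Linear prediction: x_hat(n) = sum_{i=1..N} h1(i) * x(n-i).
def linear_predict(h1, history):
    """history[-1] is x(n-1), history[-2] is x(n-2), and so on."""
    N = len(h1)
    return sum(h1[i] * history[-1 - i] for i in range(N))

# Toy example: the ramp 1, 2, 3, ... is exactly predicted by
# h1 = [2, -1], i.e. x(n) = 2 x(n-1) - x(n-2).
hist = [1.0, 2.0, 3.0, 4.0]
print(linear_predict([2.0, -1.0], hist))  # 5.0
```
</preformat>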
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Non-Linear Predictor based on Volterra Series</title>
<p>As with linear prediction, nonlinear prediction is the optimum approximation of sample x(n) with a nonlinear combination of the N most recent samples. Popular nonlinear predictors are based on Volterra series [16]. A Volterra predictor based on a Volterra series truncated to the second term is reported in (9):
x̂(n) = ∑_{i=1}^{N1} h1(i) x(n − i) + ∑_{i=1}^{N2} ∑_{j=i}^{N2} h2(i, j) x(n − i) x(n − j) (9)
where the symmetry of the Volterra kernel (the h2 coefficients) is considered. In matrix terms, the Volterra predictor is represented in (10):
x̂(n) = H⃗ X⃗(n) (10)
where the coefficient and input vectors are reported in (11) and (12):
H⃗ = [h1(1) h1(2) . . . h1(N1) h2(1, 1) h2(1, 2) . . . h2(N2, N2)] (11)
X⃗(n) = [x(n − 1) x(n − 2) . . . x(n − N1) x²(n − 1) x(n − 1)x(n − 2) . . . x²(n − N2)]ᵀ (12)</p>
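<p>A sketch of the second-order Volterra predictor of eq. (9), storing only the upper-triangular (symmetric) kernel entries; the kernel values are illustrative, not from the paper:</p>
<preformat>
```python
# Second-order Volterra predictor: linear taps h1 plus a symmetric
# quadratic kernel h2(i, j), j >= i.
def volterra_predict(h1, h2, history):
    """h2 is a dict keyed by (i, j), 1 <= i <= j <= N2."""
    pred = sum(h1[i - 1] * history[-i] for i in range(1, len(h1) + 1))
    for (i, j), k in h2.items():
        pred += k * history[-i] * history[-j]
    return pred

h1 = [0.5, 0.2]
h2 = {(1, 1): 0.1, (1, 2): 0.05, (2, 2): 0.0}
print(volterra_predict(h1, h2, [3.0, 2.0]))  # history[-1]=2, history[-2]=3
```
</preformat>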
      </sec>
      <sec id="sec-2-3">
<title>2.3. Non-Linear Predictor based on Functional Link Artificial Neural Networks (FLANN)</title>
<p>A FLANN is a single-layer neural network without a hidden layer. The nonlinear relationships between input and output are captured through a functional expansion of the input signal exploiting suitable orthogonal polynomials. Many authors have used, for example, trigonometric, Legendre and Chebyshev polynomials; however, the basis functions most frequently used in FLANN for functional expansion are trigonometric polynomials [17]. The FLANN predictor can be represented by eq. (13):
x̂(n) = ∑_{i=1}^{N1} h1(i) x(n − i) + ∑_{i=1}^{N2} ∑_{j=1}^{N2} h2(i, j) cos[iπ x(n − j)] + ∑_{i=1}^{N2} ∑_{j=1}^{N2} h3(i, j) sin[iπ x(n − j)] (13)
Also in this case the FLANN predictor can be represented using the matrix form x̂(n) = H⃗ X⃗(n), where the coefficient vector collects the h1, h2 and h3 coefficients,
H⃗ = [h1(1) . . . h1(N1) h2(1, 1) . . . h2(N2, N2) h3(1, 1) . . . h3(N2, N2)]
and the input vector collects the input samples and their trigonometric expansions,
X⃗(n) = [x(n − 1) . . . x(n − N1) cos[π x(n − 1)] cos[π x(n − 2)] . . . sin[π x(n − 1)] sin[π x(n − 2)] . . .]ᵀ</p>
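<p>The trigonometric functional expansion of eq. (13) can be sketched as follows (function names are illustrative, not from the paper):</p>
<preformat>
```python
import math

# FLANN: expand the input with cos/sin functional links, then combine
# everything linearly (the output is still linear in the coefficients).
def flann_expand(history, N1, N2):
    x = [history[-i] for i in range(1, N1 + 1)]
    for i in range(1, N2 + 1):
        for j in range(1, N2 + 1):
            x.append(math.cos(i * math.pi * history[-j]))
    for i in range(1, N2 + 1):
        for j in range(1, N2 + 1):
            x.append(math.sin(i * math.pi * history[-j]))
    return x

def flann_predict(coeffs, history, N1, N2):
    x = flann_expand(history, N1, N2)
    return sum(h * xi for h, xi in zip(coeffs, x))

feat = flann_expand([0.5, 1.0], N1=2, N2=1)
print(feat)  # [1.0, 0.5, cos(pi), sin(pi)] ~ [1.0, 0.5, -1.0, 0.0]
```
</preformat>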
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Non-Linear Predictors based on Hammerstein Models</title>
<p>Previous research [18] has shown that many real nonlinear systems, spanning from electromechanical systems to audio systems, can be modeled using a static nonlinearity, which captures the system nonlinearities, in series with a linear function, which captures the system dynamics, as shown in Figure 1.</p>
<p>[Figure 1: block diagram of the Hammerstein model: a static nonlinearity followed by a FIR filter with output y(n).]</p>
<p>Indeed, the front end of the so-called Hammerstein model is formed by a nonlinear function whose input is the system input. Of course, the type of nonlinearity depends on the actual physical system to be modeled. The output of the nonlinear function is hidden and is fed as input to the linear function. In the following, we assume that the nonlinearity is a finite polynomial expansion and that the linear dynamics are realized with a Finite Impulse Response (FIR) filter. Furthermore, in contrast with [18], we assume a mean error analysis and postpone the analysis in the robust framework to future work. In other words, the output of the nonlinearity is
z(n) = c(2) x²(n) + c(3) x³(n) + . . . + c(P) x^P(n) = ∑_{p=2}^{P} c(p) x^p(n) (17)
On the other hand, the output of the FIR filter is:
y(n) = h0(1) z(n − 1) + h0(2) z(n − 2) + . . . + h0(N) z(n − N) = ∑_{i=1}^{N} h0(i) z(n − i) (18)</p>
<p>Substituting (17) into (18), we have:
y(n) = ∑_{i=1}^{N} h0(i) z(n − i) = ∑_{i=1}^{N} h0(i) ∑_{p=2}^{P} c(p) x^p(n − i) = ∑_{p=2}^{P} ∑_{i=1}^{N} h0(i) c(p) x^p(n − i) (19)
Setting g(p, i) = h0(i) c(p), we write
y(n) = ∑_{p=2}^{P} ∑_{i=1}^{N} g(p, i) x^p(n − i) (20)
This equation can be written in matrix form as
x̂(n) = H⃗ X⃗(n) (21)</p>
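<p>The Hammerstein cascade of eqs. (17)-(18), a static polynomial followed by a FIR filter, can be sketched as follows (coefficient values are illustrative, not from the paper):</p>
<preformat>
```python
# Hammerstein predictor: static polynomial nonlinearity
# z(n) = sum_{p=2..P} c(p) x(n)^p, then FIR filtering
# y(n) = sum_{i=1..N} h0(i) z(n-i).
def hammerstein_predict(c, h0, history):
    """c[k] is the coefficient of power k+2; h0[i-1] weights z(n-i)."""
    def z(x):
        return sum(cp * x ** (p + 2) for p, cp in enumerate(c))
    return sum(h0[i] * z(history[-1 - i]) for i in range(len(h0)))

c = [1.0, 0.5]   # z(x) = x^2 + 0.5 x^3
h0 = [0.2, 0.1]  # two-tap FIR
print(hammerstein_predict(c, h0, [1.0, 2.0]))  # 0.2*z(2) + 0.1*z(1)
```
</preformat>
<p>Note that, by eq. (20), the composite model is again linear in the products g(p, i) = h0(i) c(p), so the estimation machinery of Section 3 applies unchanged.</p>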
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Predictor Parameters Estimation</title>
<p>So far we have seen that all the predictors can be expressed, at time instant n, as</p>
<p>x̂(n) = H⃗ᵀ(n) X⃗(n) (24)
with different definitions of the input vector X⃗(n) and of the parameter vector H⃗(n). There are two well-known approaches for estimating the optimal parameter vector.</p>
      <sec id="sec-3-1">
        <title>3.1. Block-based Approach</title>
<p>The minimum mean square estimation is based on the minimization of the mathematical expectation of the squared prediction error e(n) = x(n) − x̂(n):
E[e²] = E[(x(n) − x̂(n))²] = E[(x(n) − H⃗ᵀ X⃗(n))²] (25)
The minimization of (25) is obtained by setting to zero the gradient of the mathematical expectation of the squared prediction error:
∇_H E[e²] = E[∇_H e²] = E[2 e(n) ∇_H e] = 0 (26)
which leads to the well-known unique solution
H⃗opt = R⁻¹ P⃗ (27)
where
R = E[X⃗(n) X⃗ᵀ(n)] (28)
is the statistical auto-correlation matrix of the input vector X⃗(n), and
P⃗ = E[x(n) X⃗(n)] (29)
is the statistical cross-correlation vector between the signal x(n) and the input vector X⃗(n). The mathematical expectations of the auto- and cross-correlation are estimated using the sample averages
R(n) = ∑_{i=1}^{n} X⃗(i) X⃗ᵀ(i),  P⃗(n) = ∑_{i=1}^{n} x(i) X⃗(i) (30)
For the Hammerstein predictor, the coefficient and input vectors are reported in (22) and (23):
G = [g(2, 1) g(2, 2) . . . g(2, N); g(3, 1) g(3, 2) . . . g(3, N); . . . ; g(P, 1) g(P, 2) . . . g(P, N)] (22)
X⃗(n) = [x²(n − 1) x²(n − 2) . . . x²(n − N); x³(n − 1) . . . x³(n − N); . . . ; x^P(n − 1) . . . x^P(n − N)]ᵀ (23)</p>
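<p>The block approach of eqs. (27)-(30) can be sketched as follows: accumulate the sample correlations over the whole block, then solve the normal equations. A tiny Gaussian-elimination routine (illustrative, any linear solver would do) stands in for a linear-algebra library; the signal is a toy ramp, not the paper's data.</p>
<preformat>
```python
# Block (Wiener) estimation: H = R^{-1} p with R, p sample averages.
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def block_estimate(signal, N):
    """Fit h so that x(n) ~ sum_{i=1..N} h(i) x(n-i) over the block."""
    R = [[0.0] * N for _ in range(N)]
    p = [0.0] * N
    for n in range(N, len(signal)):
        X = [signal[n - 1 - i] for i in range(N)]
        for i in range(N):
            p[i] += signal[n] * X[i]
            for j in range(N):
                R[i][j] += X[i] * X[j]
    return solve(R, p)

# The ramp 1..8 is exactly predicted by h = [2, -1].
h = block_estimate([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0], N=2)
print([round(v, 6) for v in h])
```
</preformat>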
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Adaptive Approach</title>
<p>Let us consider a general second-order term of a Volterra predictor,
y2(n) = ∑_{i=0}^{N−1} ∑_{j=0}^{N−1} h2(i, j) x(n − i) x(n − j) (31)
It can be generalized to a higher-order term as
yk(n) = ∑_{i1=1}^{N} · · · ∑_{ik=1}^{N} hk(i1, · · · , ik) x(n − i1) · · · x(n − ik) (32)</p>
<p>By defining
ci = hk(i1, · · · , ik) (33)
and
fi(n) = x(n − i1) · · · x(n − ik) (34)
eq. (32) can be rewritten as follows:
yk(n) = ∑_{i=1}^{M} ci fi(n) (35)</p>
<p>For the sake of simplicity and without loss of generality, we consider a Volterra predictor based on a Volterra series truncated to the second term, whose parameter and input vectors at time t are
H⃗(t) = [h1(1), · · · , h1(N1), h2(1, 1), · · · , h2(N2, N2)] (36)
X⃗(t) = [x(t − 1), · · · , x(t − N1), x²(t − 1), x(t − 1)x(t − 2), · · · , x²(t − N2)] (37)
so that
x̂(t) = H⃗ᵀ(t) X⃗(t). (38)</p>
<p>In order to estimate the best parameters H⃗(t), we consider the following loss function,
J(H⃗) = ∑_{i=0}^{t} λ^{t−i} [x(i) − H⃗ᵀ(t) X⃗(i)]² (39)
where λ^{t−i} weights the relative importance of each squared error. In order to find the H⃗(t) that minimizes the convex function (39), it is enough to impose that its gradient is zero, i.e.,
∇_H J(H⃗) = 0 (40)
which is equivalent to
R(t) H⃗(t) = P⃗(t) (41)
where
R(t) = ∑_{i=0}^{t} λ^{t−i} X⃗(i) X⃗ᵀ(i) (42)
P⃗(t) = ∑_{i=0}^{t} λ^{t−i} x(i) X⃗(i) (43)
It follows that the best H⃗(t) can be computed by
H⃗(t) = R⁻¹(t) P⃗(t) (44)</p>
<p>Both correlation estimates can be updated recursively,
R(t) = λ R(t − 1) + X⃗(t) X⃗ᵀ(t) (45)
P⃗(t) = λ P⃗(t − 1) + x(t) X⃗(t) (46)
and, by the matrix inversion lemma, the inverse correlation matrix can be updated without any explicit inversion:
k⃗(t) = R⁻¹(t − 1) X⃗(t) / [λ + X⃗ᵀ(t) R⁻¹(t − 1) X⃗(t)] (47)
R⁻¹(t) = λ⁻¹ [R⁻¹(t − 1) − k⃗(t) X⃗ᵀ(t) R⁻¹(t − 1)] (48)
By defining the a-priori error
α(t) = x(t) − H⃗ᵀ(t − 1) X⃗(t) (49)
the parameter estimate is finally updated as
H⃗(t) = H⃗(t − 1) + k⃗(t) α(t) (50)
It should be noted that, by using the recursions (45)-(50), the estimate adapts at each step in order to decrease the error; the system structure is thus somewhat similar to the Kalman filter.</p>
<p>Finally, we define the estimation error as
e(t) = x(t) − H⃗ᵀ(t) X⃗(t) (51)
It is worth noting that the computation of the predicted value from eq. (38) requires 6 Ntot + 2 N²tot operations, where Ntot = N1 + N2 (N2 + 1)/2.</p>
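<p>A minimal sketch of the exponentially weighted recursive (RLS-style) update described above, applied to a toy ramp signal rather than the paper's data; variable names mirror the gain vector, inverse correlation matrix and a-priori error of the derivation:</p>
<preformat>
```python
# One step of exponentially weighted recursive least squares:
# gain k(t), inverse-correlation update, correction by the a-priori error.
def rls_update(P, H, X, x, lam=0.99):
    """P is the inverse correlation matrix (list of lists), H the params."""
    PX = [sum(P[i][j] * X[j] for j in range(len(X))) for i in range(len(X))]
    denom = lam + sum(Xi * PXi for Xi, PXi in zip(X, PX))
    k = [v / denom for v in PX]                   # gain vector
    err = x - sum(h * xi for h, xi in zip(H, X))  # a-priori error
    H = [h + ki * err for h, ki in zip(H, k)]
    P = [[(P[i][j] - k[i] * PX[j]) / lam for j in range(len(X))]
         for i in range(len(X))]
    return P, H, err

# Track the ramp 1, 2, 3, ... with a 2-tap linear-in-the-parameters model.
P = [[100.0, 0.0], [0.0, 100.0]]  # large initial P ~ weak prior
H = [0.0, 0.0]
sig = [float(v) for v in range(1, 30)]
for n in range(2, len(sig)):
    P, H, err = rls_update(P, H, [sig[n - 1], sig[n - 2]], sig[n])
print(abs(err))  # a-priori error shrinks as H approaches [2, -1]
```
</preformat>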
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments</title>
<p>In order to prove the effectiveness of the proposed approach, in this section we present our experimental results for a real dataset. Specifically, we consider the requests made to the 1998 World Cup Web site between April 30, 1998 and July 26, 1998<sup>1</sup>. During this period of time the site received 1,352,804,107 requests. The fields of the request structure contain the following information: (i) timestamp: the time of the request, stored as the number of seconds since the Epoch; the timestamp has been converted to GMT to allow for portability (during the World Cup the local time was 2 hours ahead of GMT (+0200), so in order to determine the local time, each timestamp must be adjusted by this amount); (ii) clientID: a unique integer identifier for the client that issued the request; due to privacy concerns these mappings cannot be released; note that each clientID maps to exactly one IP address, and the mappings are preserved across the entire data set, that is, if IP address 0.0.0.0 mapped to clientID X on day Y, then any request in any of the data sets containing clientID X also came from IP address 0.0.0.0; (iii) objectID: a unique integer identifier for the requested URL; these mappings are also 1-to-1 and are preserved across the entire data set; (iv) size: the number of bytes in the response; (v) method: the method contained in the client's request (e.g., GET); (vi) status: this field contains two pieces of information: the 2 highest-order bits contain the HTTP version indicated in the client's request (e.g., HTTP/1.0), while the remaining 6 bits indicate the response status code (e.g., 200 OK); (vii) type: the type of file requested, generally based on the file extension (.html) or the presence of a parameter list; (viii) server: indicates which server handled the request; the upper 3 bits indicate which region the server was at, while the remaining bits indicate which server at the site handled the request.</p>
<p>In the dataset, 87 days are reported. We use the first one in order to initialise the estimator in (35), and we use the others as a test set by using a rolling-horizon method (as in [18]). In particular, for each day t we compute the estimation by using all the IP observations in the previous days [0, t − 1]. The results are reported in Figure 2. The increase of errors in June is due to the sudden increase in the number of different IPs accessing the website at the start of the competition (see [19]). It should be noted that the estimation error decreases exponentially despite dealing with several millions of IPs.</p>
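<p>The rolling-horizon evaluation described above can be sketched as follows. The IP-to-integer mapping and the `fit`/`predict` stand-ins are illustrative assumptions (any of the predictors of Sections 2-3 could be plugged in); the two-day toy data is not from the World Cup dataset.</p>
<preformat>
```python
import ipaddress

# Map dotted-quad IP addresses to integers so the request sequence can be
# treated as a numerical sequence, then score each day t on a model fit
# on days [0, t-1].
def ip_to_number(ip):
    return int(ipaddress.ip_address(ip))

def rolling_horizon(days, fit, predict):
    """days: list of per-day lists of IP strings. Returns per-day errors."""
    errors = []
    for t in range(1, len(days)):
        train = [ip_to_number(ip) for day in days[:t] for ip in day]
        model = fit(train)
        test = [ip_to_number(ip) for ip in days[t]]
        err = sum(abs(predict(model, test[:k]) - test[k])
                  for k in range(1, len(test))) / max(len(test) - 1, 1)
        errors.append(err)
    return errors

# Trivial stand-in predictor: always predict the previous value.
errs = rolling_horizon(
    [["10.0.0.1", "10.0.0.2"], ["10.0.0.2", "10.0.0.3"]],
    fit=lambda train: None,
    predict=lambda model, past: past[-1])
print(errs)  # one error value for day 1
```
</preformat>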
<p><sup>1</sup>ftp://ita.ee.lbl.gov/html/contrib/WorldCup.html</p>
<p>Since the computation of the optimal coefficients H⃗(t) may require some time, we measure the percentage of available data that our approach needs in order to provide good results. In particular, in this experiment we consider the average estimation error made by the model at time t when considering only a subset of the IPs observed in the interval [0, t − 1]. The experimental results on the real data are depicted in Figure 3.</p>
<p>[Figure 3: average estimation error for Hammerstein models, Volterra kernels and sequence clustering.]</p>
<p>It should be noted that the Hammerstein model outperforms both the Volterra kernels and the clustering techniques. In more detail, the clustering techniques are the least performing. This is due to the nature of the clustering techniques, which exploit the geometric information of the data more than their time dependency. We highlight that, although the calculation of H⃗(t) is computationally intensive, this does not affect the real-time applicability of the method. In fact, the access decision is taken by considering the estimator x̂(t), which is computed once per day. Thus, the computation of H⃗(t) does not need to be fast.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and Future Work</title>
<p>In this paper, we presented a new way to deal with cyber attacks by using Hammerstein models. Experimental results clearly confirm the effectiveness of the proposed techniques on a real data set, outperforming other well-known techniques. Future work will have two objectives. First, we want to consider the problem in a stochastic optimization setting, as for example in [20]. Second, we want to test the approach on other case studies, by also exploiting knowledge management methodologies (e.g., [21, 22]).</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
<p>This work is partially supported by NSERC (Canada) and the University of Manitoba.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[2] H.-X. Tan, W. Seah, Framework for statistical filtering against DDoS attacks in MANETs, 2006, 8 pp. doi:10.1109/ICESS.2005.57.</p>
      <p>[3] G. Pack, J. Yoon, E. Collins, C. Estan, On filtering of DDoS attacks based on source address prefixes, 2006, pp. 1–12. doi:10.1109/SECCOMW.2006.359537.</p>
      <p>[4] Y. Yang, C.-H. Lung, The role of traffic forecasting in QoS routing - a case study of time-dependent routing, 2005, pp. 224–228, Vol. 1. doi:10.1109/ICC.2005.1494351.</p>
      <p>[5] A. Agrawal, H. Casanova, Clustering hosts in P2P and global computing platforms, 2003, pp. 367–373. doi:10.1109/CCGRID.2003.1199389.</p>
      <p>[6] A. Cuzzocrea, R. Moussa, G. Xu, OLAP*: Effectively and efficiently supporting parallel OLAP over big data, in: Model and Data Engineering - Third International Conference, MEDI 2013, Amantea, Italy, September 25-27, 2013, Proceedings, 2013, pp. 38–49.</p>
      <p>[7] G. Chatzimilioudis, A. Cuzzocrea, D. Gunopulos, N. Mamoulis, A novel distributed framework for optimizing query routing trees in wireless sensor networks via optimal operator placement, J. Comput. Syst. Sci. 79 (2013) 349–368.</p>
      <p>[8] A. Cuzzocrea, E. Bertino, Privacy preserving OLAP over distributed XML data: A theoretically-sound secure-multiparty-computation approach, J. Comput. Syst. Sci. 77 (2011) 965–987.</p>
      <p>[9] A. Cuzzocrea, V. Russo, Privacy preserving OLAP and OLAP security, in: Encyclopedia of Data Warehousing and Mining, Second Edition (4 Volumes), 2009, pp. 1575–1581.</p>
      <p>[10] A. Bonifati, A. Cuzzocrea, Storing and retrieving XPath fragments in structured P2P networks, Data Knowl. Eng. 59 (2006) 247–269.</p>
      <p>[11] R. C. Camara, A. Cuzzocrea, G. M. Grasso, C. K. Leung, S. B. Powell, J. Souza, B. Tang, Fuzzy logic-based data analytics on predicting the effect of hurricanes on the stock market, in: 2018 IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2018, Rio de Janeiro, Brazil, July 8-13, 2018, IEEE, 2018, pp. 1–8.</p>
      <p>[12] P. Braun, A. Cuzzocrea, C. K. Leung, A. G. M. Pazdor, S. K. Tanbeer, G. M. Grasso, An innovative framework for supporting frequent pattern mining problems in IoT environments, in: O. Gervasi, B. Murgante, S. Misra, E. N. Stankova, C. M. Torre, A. M. A. C. Rocha, D. Taniar, B. O. Apduhan, E. Tarantino, Y. Ryu (Eds.), Computational Science and Its Applications - ICCSA 2018 - 18th International Conference, Melbourne, VIC, Australia, July 2-5, 2018, Proceedings, Part V, volume 10964 of Lecture Notes in Computer Science, Springer, 2018, pp. 642–657.</p>
      <p>[13] A. Cuzzocrea, C. Mastroianni, G. M. Grasso, Private databases on the cloud: Models, issues and research perspectives, in: J. Joshi, G. Karypis, L. Liu, X. Hu, R. Ak, Y. Xia, W. Xu, A. Sato, S. Rachuri, L. H. Ungar, P. S. Yu, R. Govindaraju, T. Suzumura (Eds.), 2016 IEEE International Conference on Big Data, BigData 2016, Washington DC, USA, December 5-8, 2016, IEEE Computer Society, 2016, pp. 3656–3661.</p>
      <p>[14] A. Cuzzocrea, Accuracy control in compressed multidimensional data cubes for quality of answer-based OLAP tools, in: 18th International Conference on Scientific and Statistical Database Management, SSDBM 2006, 3-5 July 2006, Vienna, Austria, Proceedings, IEEE Computer Society, 2006, pp. 301–310.</p>
      <p>[15] J. Makhoul, Linear prediction: A tutorial review, Proceedings of the IEEE 63 (1975) 561–580.</p>
      <p>[16] Z. Peng, C. Changming, Volterra series theory: A state-of-the-art review, Chinese Science Bulletin (Chinese Version) 60 (2015) 1874. doi:10.1360/N972014-01056.</p>
      <p>[17] H. Zhao, J. Zhang, Adaptively combined FIR and functional link artificial neural network equalizer for nonlinear communication channel, IEEE Transactions on Neural Networks 20 (2009) 665–674.</p>
      <p>[18] V. Cerone, E. Fadda, D. Regruto, A robust optimization approach to kernel-based nonparametric error-in-variables identification in the presence of bounded noise, in: 2017 American Control Conference (ACC), IEEE, 2017. doi:10.23919/acc.2017.7963056.</p>
      <p>[19] M. Arlitt, T. Jin, A workload characterization study of the 1998 World Cup web site, Hewlett-Packard Laboratories.</p>
      <p>[20] E. Fadda, G. Perboli, R. Tadei, Customized multi-period stochastic assignment problem for social engagement and opportunistic IoT, Computers &amp; Operations Research 93 (2018) 41–50.</p>
      <p>[21] A. Cuzzocrea, Combining multidimensional user models and knowledge representation and management techniques for making web services knowledge-aware, Web Intelligence and Agent Systems 4 (2006) 289–312.</p>
      <p>[22] M. Cannataro, A. Cuzzocrea, C. Mastroianni, R. Ortale, A. Pugliese, Modeling adaptive hypermedia with an object-oriented approach and XML, in: M. Levene, A. Poulovassilis (Eds.), Proceedings of the Second International Workshop on Web Dynamics, WebDyn@WWW 2002, Honolulu, HW, USA, May 7, 2002, volume 702 of CEUR Workshop Proceedings, CEUR-WS.org, 2002, pp. 35–44.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Goldstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lampert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Reif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Stahl</surname>
          </string-name>
          , T. Breuel,
          <article-title>Bayes optimal ddos mitigation by adaptive history-based ip filtering</article-title>
          , in: Seventh International Conference on Networking (icn
          <year>2008</year>
          ),
          <year>2008</year>
          , pp.
          <fpage>174</fpage>
          -
          <lpage>179</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>