<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Modelling the Multi-Layer Artificial Neural Network for Internet Traffic Forecasting: The Model Selection Design Issues</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Mba</forename><forename type="middle">O</forename><surname>Odim</surname></persName>
							<email>odimm@run.edu.ng</email>
							<affiliation key="aff0">
								<orgName type="department">Computer Science Department Redeemer&apos;s</orgName>
								<orgName type="institution">University Ede</orgName>
								<address>
									<country key="NG">Nigeria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jacob</forename><forename type="middle">A</forename><surname>Gbadeyan</surname></persName>
							<email>jagbadeyan@unilorin.edu.ng</email>
							<affiliation key="aff1">
								<orgName type="department">Mathematics Department</orgName>
								<orgName type="institution">University of Ilorin Ilorin</orgName>
								<address>
									<country key="NG">Nigeria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Joseph</forename><forename type="middle">S</forename><surname>Sadiku</surname></persName>
							<email>jssadiku@unilorin.edu.ng</email>
							<affiliation key="aff2">
								<orgName type="department">Computer Science Department</orgName>
								<orgName type="institution">University of Ilorin Ilorin</orgName>
								<address>
									<country key="NG">Nigeria</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Modelling the Multi-Layer Artificial Neural Network for Internet Traffic Forecasting: The Model Selection Design Issues</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">6289A91CD533C3E3495BC8B2F67F9D7C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T00:54+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>CCS Concepts</term>
					<term>Computing Methodologies → Artificial Intelligence → Machine learning → Machine learning approaches → Neural Networks</term>
					<term>Internet Traffic</term>
					<term>Time Series Forecasting</term>
					<term>Machine learning</term>
					<term>Multi-layer Artificial Neural Network</term>
					<term>Design issues</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Internet traffic forecasting models with learning ability, such as the artificial neural network (ANN), have been growing in popularity in recent times due to their impressive performance in modelling the high degree of variability and nonlinearity of internet traffic. This study examined the impact of some design issues on the performance of the multi-layer artificial neural network for internet traffic forecasting. The traffic forecasting was modelled as a standard time series problem, and the multilayer artificial neural network was designed to perform the time series function mapping. The input lags were varied from 1 to 24, and training epoch values of 200, 500, and 1000 were used on one- and two-hidden-layer networks. The learning algorithm was backpropagation with a 0.1 learning rate and 0.9 momentum on the logistic sigmoid activation function. The model was implemented in Visual Basic and validated with four categories of classified time series internet traffic from a branch residential network of a firm in Nigeria. Varying predictive performance without any consistent pattern was observed on the issues considered; however, input lag 1 gave the worst performance in all cases for the HOURLY traffic, and three of the four traffic categories demonstrated the superiority of two hidden layers over one. Although the epoch values of 200, 500 and 1000 showed no consistent performance variations, the epoch value 200 outperformed the others in the model selections. The study revealed that the input lags, number of hidden layers and epoch values could impact the traffic forecasting performance of the multilayer perceptron, and that performance could be considerably improved by careful selection of those parameters through experimentation.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">INTRODUCTION</head><p>Accurate information about offered traffic is required for efficient resource provisioning and general capacity planning of an Internet service. The inability of most statistical methods to model the high variability of internet traffic accurately, and their lack of reasoning capabilities, have triggered an increasing number of studies that employ non-traditional statistical methods, including machine learning. Furthermore, traditional summary statistics, particularly the sample mean and variance, are unstable metrics under the high variability of internet traffic, and are therefore not reliable statistics for summarising traffic properties <ref type="bibr" target="#b1">[1]</ref>. Machine learning techniques, such as the Artificial Neural Network (ANN), employ mechanisms that allow computers to evolve behaviour based on knowledge gained from dynamic observations. A machine learning technique based on nonlinear elements is often referred to as a neural network. Neural networks are networks of nonlinear elements interconnected through adjustable weights, and they play a prominent role in machine learning. The ANN emerged with the aim of imitating the information processing of the human brain. Through learning, ANNs can determine nonlinear relationships in a data set by associating the corresponding output with input patterns. The multilayer artificial neural network, among other machine learning models, has shown impressive results in forecasting studies <ref type="bibr" target="#b2">[2,</ref><ref type="bibr" target="#b3">3,</ref><ref type="bibr" target="#b4">4,</ref><ref type="bibr" target="#b5">5,</ref><ref type="bibr">6,</ref><ref type="bibr" target="#b7">7,</ref><ref type="bibr" target="#b8">8,</ref><ref type="bibr" target="#b9">9]</ref>. 
However, applying an ANN to a given forecasting task is hard, as basic modelling issues must be carefully considered for enhanced precision. These issues include the network architecture, learning parameters and data pre-processing methods <ref type="bibr">[6,</ref><ref type="bibr" target="#b8">8]</ref>. The inconsistencies in performance reports on the design issues in the literature were also noted in <ref type="bibr" target="#b8">[8]</ref>. In <ref type="bibr" target="#b9">[9]</ref> it was argued that the ANN technique should not be applied arbitrarily, as has sometimes been suggested and even practised in the internet forecasting domain <ref type="bibr" target="#b10">[10,</ref><ref type="bibr" target="#b11">11]</ref>.</p><p>This paper examines the impact of the number of input lags, hidden neurons and training epochs on the precision of the multilayer artificial neural network in forecasting internet traffic.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">RELATED WORK</head><p>Quite a number of research efforts have been reported in the literature on seeking appropriate models for forecasting Internet traffic.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Internet Traffic Forecasting: Statistical Methods</head><p>In <ref type="bibr" target="#b12">[12]</ref> a comparative study on suitable statistical methods for network traffic estimation was conducted. In the paper, several estimation methods for IP network traffic were studied. The study showed that non-linear time series models could model and forecast traffic better than the classical linear time series models. Anand in <ref type="bibr" target="#b13">[13]</ref> investigated a non-linear time series model, the Generalised Autoregressive Conditional Heteroskedasticity (GARCH) model, for internet traffic modelling. The study showed that the forecasting algorithm was accurate when compared with actual traffic. Although nonlinear statistical models can capture the burstiness of network traffic, the models are parametric in nature and therefore require knowledge of the distribution of the traffic. In addition, they are analytical and therefore require explicit programming to clearly specify the algorithmic steps. To take advantage of machine learning paradigms, the application of machine learning techniques to internet traffic forecasting has been on the increase.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Machine Learning and Artificial Neural Network for Internet Traffic Forecasting</head><p>A vast number of ongoing research efforts have explored machine learning techniques for internet traffic prediction, the results of which have demonstrated their superiority to statistical forecasting methods. A concurrent neuro-fuzzy model to discover and analyse useful knowledge from available Web log data was proposed in <ref type="bibr" target="#b14">[14]</ref>. The study used a self-organizing map for pattern analysis and a fuzzy inference system to capture the chaotic trend, providing short-term (hourly) and long-term (daily) web traffic trend predictions. Empirical results demonstrated that the proposed approach was efficient for mining and predicting web traffic. A study in <ref type="bibr" target="#b15">[15]</ref> presented a neural network ensemble (NNE) for the prediction of TCP/IP traffic from a time series forecasting (TSF) point of view. The NNE approach was compared with TSF methods (Holt-Winters and ARIMA) and was found to compete favourably with them. In <ref type="bibr" target="#b16">[16]</ref> least squares support vector machines were applied to the problem of accurately predicting non-peak traffic; the method had good generalization ability and guaranteed global minima. The study in <ref type="bibr" target="#b17">[17]</ref> presented a neural network ensemble approach and two adapted time series methods (ARIMA and Holt-Winters) for forecasting the amount of traffic in TCP/IP based networks. In the experiments, the neural ensemble achieved the best results for 5-minute and hourly data, while Holt-Winters was the best option for the daily forecasts. The study in <ref type="bibr" target="#b10">[10]</ref> investigated ensembles of artificial neural networks for predicting long-term internet traffic. 
The proposed prediction models were compared with the classic Holt-Winters method. Prangchumpol in <ref type="bibr" target="#b18">[18]</ref> presented an approach to predicting incoming and outgoing data rates in a network system using a data mining (machine learning) technique, association rule discovery. The result of the study showed that the technique could predict future network traffic.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Design Issues with Forecasting with Artificial Neural Network</head><p>A detailed state-of-the-art presentation on forecasting with artificial neural networks was made in <ref type="bibr" target="#b8">[8]</ref>. The study showed that, overall, ANNs gave satisfactory performance in forecasting, but went on to point out the inconsistencies in performance reports on design issues in the literature. The inconsistencies were attributed to the trial-and-error methodology adopted in most studies. Faraway and Chatfield <ref type="bibr" target="#b9">[9]</ref> argued that it was unwise to apply ANN models blindly in black-box mode, as had sometimes been suggested. <ref type="bibr">Shamsuddin,</ref><ref type="bibr">et al. in [7]</ref> investigated the effect of applying different numbers of input nodes, activation functions and preprocessing techniques on the performance of a backpropagation network in time series revenue forecasting. The findings showed that the performance of an ANN model could be considerably improved by careful selection of those parameters. In <ref type="bibr" target="#b19">[19]</ref>, the performance of two learning algorithms, linear regression and the standard backpropagation neural network, was compared on the prediction of four major stock market indexes. The comparison showed that the neural network approach gave better prediction accuracy than the linear regression model. Chabaa et al. in <ref type="bibr" target="#b20">[20]</ref> presented an ANN based on the multi-layer perceptron for analysing time series of measured internet traffic data over IP networks. A comparison of several training algorithms demonstrated the efficiency and accuracy of the Levenberg-Marquardt and resilient backpropagation algorithms. 
Chukwuchekwa in <ref type="bibr" target="#b21">[21]</ref> compared the performance of the backpropagation gradient descent technique and a genetic algorithm on some pattern recognition problems. The backpropagation (BP) algorithm was found to outperform the genetic algorithm in that instance. The study suggested that caution should be applied before using other algorithms as substitutes for the BP algorithm, especially in classification problems. In <ref type="bibr" target="#b2">[2]</ref>, an evaluation of several learning rules for adjusting ANN weights was carried out on the popular airline passenger data set. The Levenberg-Marquardt backpropagation algorithm showed the best performance among the learning rules. Various degrees of performance were observed in <ref type="bibr" target="#b22">[22]</ref> on examining the impact of input lags of the multilayer perceptron in forecasting internet traffic on a two-layered network. In <ref type="bibr" target="#b23">[23]</ref> a survey of research and application issues in Web usage mining based on various mining techniques was conducted to provide some understanding for designing suitable mining algorithms.</p><p>This review demonstrated the impressive results of applying machine learning techniques, such as artificial neural networks, to forecasting Internet traffic, while also raising concerns over the little or no consideration given by researchers to the design issues. This paper therefore presents results from a study of the impacts of some multi-layer perceptron design issues on internet traffic forecasting.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">METHODOLOGY</head><p>The traffic forecasting was modelled as a standard time series problem, and the multilayer artificial neural network was designed to perform the time series function mapping.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Time series for Traffic Forecasting</head><p>Traffic forecasting is a standard time series prediction task. The goal is to approximate the function that relates the future values of a variable to the previous observations of that variable <ref type="bibr" target="#b24">[24]</ref>. In some situations, such as internet traffic, the data are non-stationary and chaotic. In such situations, one general assumption is that the historical data incorporate all the behaviour required to capture the dependency between the future traffic and that of the past. Therefore, the historical data are the major player in the forecasting process. The second assumption for modelling and forecasting the dynamics of the traffic is that its values are expressed as a discrete time series <ref type="bibr" target="#b2">[2,</ref><ref type="bibr" target="#b3">3]</ref>. A discrete time series is a vector {y_t} of observations made at regular intervals, t = 1, 2, 3, …, N. For the time series forecasting problem, the inputs are typically the past observations of the data series and the output is the future value. If multi-step-ahead forecasts are required, it is possible to proceed in one of two ways. Firstly, construct a new architecture with several outputs, where each output would have separate weights for each connection to the neurons. Secondly, "feed back" the one-step-ahead forecast to replace the lag-1 value as one of the input variables; the same architecture can then be used to construct the two-step-ahead forecast, and so on <ref type="bibr" target="#b16">[16]</ref>. This study adopted the latter iterative approach because of its numerical simplicity and because it requires fewer weights to be estimated.</p></div>
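The lag-based pattern construction described above can be sketched in a few lines. This is an illustrative Python sketch, not the authors' Visual Basic implementation, and the function name is hypothetical.

```python
def make_lagged_patterns(series, k):
    """Turn a series {y_t} into training patterns: each input is the
    k previous observations, the target is the one-step-ahead value."""
    inputs, targets = [], []
    for t in range(k, len(series)):
        inputs.append(series[t - k:t])  # k lagged observations
        targets.append(series[t])       # next value to forecast
    return inputs, targets

# With k = 2, the series [1, 2, 3, 4, 5] yields three patterns:
# inputs [[1, 2], [2, 3], [3, 4]] and targets [3, 4, 5].
X, y = make_lagged_patterns([1, 2, 3, 4, 5], 2)
```

For multi-step-ahead forecasting under the iterative approach the study adopted, the predicted value would simply be appended to the series and the window slid forward.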
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">The Multilayer Neural Network</head><p>The neural network is a powerful model for solving complex problems: it has a natural potential for solving nonlinear problems, can easily achieve input-output mapping, and is therefore well suited to prediction problems <ref type="bibr" target="#b26">[26]</ref>. The basic features of the multilayer perceptron include:</p><p>i. The model of each neuron in the network includes a nonlinear activation function that is differentiable.</p><p>ii. The network contains one or more layers that are hidden from both the input and output nodes.</p><p>iii. The network exhibits a high degree of connectivity, the extent of which is determined by the synaptic weights of the network.</p><p>Figure 1. Architecture of Multilayer Perceptron with two hidden layers</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.1">A Neural Model</head><p>The node is the basic unit of the Artificial Neural Network. Each node sums many inputs x_1, x_2, …, x_n from the environment or from other nodes, with each input modified by an adjustable node weight (Figure <ref type="figure" target="#fig_2">2</ref>). The sum of these weighted inputs is added to an adjustable threshold for the node and then passed through a modifying (activation) function that determines the final output.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 2. Nonlinear model of a neuron [26]</head><p>The neural model in Figure <ref type="figure" target="#fig_2">2</ref> includes an externally applied bias, denoted by b_k. The bias has the effect of increasing or lowering the net input of the activation function, depending on whether it is positive or negative, respectively. Mathematically, we may describe the neuron k depicted in Figure <ref type="figure" target="#fig_2">2</ref> by the following equations:</p><formula xml:id="formula_0">u_k = Σ_{j=1}^{m} w_kj x_j (4) and y_k = φ(u_k + b_k)<label>(5)</label></formula><p>where the induced local field is</p><formula xml:id="formula_1">v = u_k + b_k (6)</formula><p>The bias b_k is an external parameter of neuron k.</p><p>The activation function, denoted by φ(v), defines the output of a neuron in terms of the induced local field v. It is this function (also called the transfer function) that determines the relationship between the inputs and outputs of a node and of a network. In general, the activation function introduces a degree of nonlinearity that is valuable for most ANN applications. Among these functions, the sigmoid function is very popular. It is a strictly increasing function that exhibits a graceful balance between linear and nonlinear behaviour. The logistic sigmoid is defined as in (<ref type="formula" target="#formula_2">7</ref>)</p><formula xml:id="formula_2">φ(v) = 1 / (1 + e^(−v)) (7)</formula><p>A logistic sigmoid function assumes a continuous range of values from 0 to 1. Additional types of activation functions can be found in <ref type="bibr" target="#b8">[8]</ref>. Among these functions, the logistic transfer function is the most popular choice <ref type="bibr" target="#b8">[8]</ref>.</p></div>
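Equations (4)-(7) can be illustrated with a minimal Python sketch of a single neuron; the function name and example values are illustrative, not from the paper.

```python
import math

def neuron_output(x, w, b):
    """Single neuron: linear combiner u_k = sum_j w_kj * x_j (4),
    induced local field v = u_k + b_k (6), and output
    y_k = phi(v) with the logistic sigmoid phi(v) = 1/(1 + e^-v) (7)."""
    u = sum(w_j * x_j for w_j, x_j in zip(w, x))  # equation (4)
    v = u + b                                     # equation (6)
    return 1.0 / (1.0 + math.exp(-v))             # equation (7)

# A zero net input sits at the sigmoid's midpoint, so the output is 0.5.
y = neuron_output([1.0, -1.0], [0.5, 0.5], 0.0)
```

Note how the bias b simply shifts the induced local field, matching the affine transformation described for Figure 2.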
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.2">Training of artificial neural networks</head><p>An ANN has to be trained before it can be put to use. The goal of the training is to find the logical relationship in the given input/output data. There are two learning strategies: supervised and unsupervised. This study employs the supervised learning strategy. Supervised learning typically operates with two data sets: a training set and a test set. The training set is used for estimating the arc weights, while the test set is used for measuring the generalization ability of the network. Training is used to gain generalised knowledge about the system under consideration, and testing is used to predict (forecast) the system behaviour using the knowledge gained. On the other hand, unsupervised techniques, such as reinforcement learning, are independent of training data and operate by directly interacting with the environment.</p><p>The training algorithm employed is backpropagation. It is a supervised training strategy and a popular method for training the multilayer perceptron. The training proceeds in two phases <ref type="bibr" target="#b26">[26]</ref>:</p><p>1. In the forward phase, the synaptic weights of the network are fixed and the input signal is propagated through the network, layer by layer, until it reaches the output. Thus, in this phase, changes are confined to the activation potentials and outputs of the neurons in the network. 2. In the backward phase, an error signal is produced by comparing the output of the network with the desired response. The resulting error signal is propagated through the network, again layer by layer, but this time in the backward direction. In this second phase, successive adjustments are made to the synaptic weights of the network.</p><p>In <ref type="bibr" target="#b5">[5]</ref> it is also reported that backpropagation is the most computationally straightforward algorithm for training the multi-layer perceptron. 
They summarised the algorithm's steps as follows. </p></div>
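The two training phases can be sketched for a single-hidden-layer network with logistic activations throughout. This is an illustrative Python sketch under those assumptions (momentum omitted for brevity), not the paper's Visual Basic implementation.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def train_step(x, target, W1, b1, W2, b2, lr=0.1):
    """One backpropagation step on a k-input, one-hidden-layer,
    one-output network. W1/b1 and W2 are mutated in place; the
    updated output bias is returned along with the output."""
    # Forward phase: weights fixed, signal propagated layer by layer.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + bj)
         for row, bj in zip(W1, b1)]
    y = sigmoid(sum(w * hj for w, hj in zip(W2, h)) + b2)
    # Backward phase: compare output with the desired response and
    # propagate the error signal back, adjusting the synaptic weights.
    d_out = (y - target) * y * (1.0 - y)          # output-layer delta
    d_hid = [d_out * W2[j] * h[j] * (1.0 - h[j])  # hidden-layer deltas
             for j in range(len(h))]
    for j in range(len(W2)):
        W2[j] -= lr * d_out * h[j]
    for j, dj in enumerate(d_hid):
        for i in range(len(x)):
            W1[j][i] -= lr * dj * x[i]
        b1[j] -= lr * dj
    return y, b2 - lr * d_out

# Repeated steps on one pattern drive the output toward the target.
W1, b1, W2, b2 = [[0.1, -0.2], [0.3, 0.1]], [0.0, 0.0], [0.2, -0.1], 0.0
first, b2 = train_step([1.0, 0.5], 1.0, W1, b1, W2, b2)
for _ in range(500):
    y, b2 = train_step([1.0, 0.5], 1.0, W1, b1, W2, b2)
```

The delta terms use the logistic derivative y(1 − y), which is why a differentiable activation function is listed above as a basic feature of the multilayer perceptron.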
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Data collection and Description</head><p>Internet traffic data were collected as hourly averages in kilobits/s of the TCP/IP traffic of a company's residential network from January 1, 2010 to September 30, 2010 (making up 6552 data points each for the IN and OUT traffic), and as daily traffic data from January 1 to December 31, 2010 (making up 365 data points each for the IN and OUT traffic), using PRTG (Paessler Router Traffic Grapher), a network monitoring and bandwidth usage tool from the company Paessler. A bandwidth of 20 Mbps was allocated statically for upload (Traffic IN) and 20 Mbps for download (Traffic OUT) for the period under consideration.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.1">Data Pre-processing/ Normalisation</head><p>Nonlinear activation functions such as the logistic function restrict the possible output of a node to a range, typically (0, 1) or (-1, 1). Input data are therefore normalised to avoid computational problems, to meet algorithm requirements and to facilitate network learning. Four methods for input normalization are summarized in <ref type="bibr" target="#b8">[8]</ref>. This study employs the linear transformation to [0, 1], defined as y_n = (y_o - y_min)/(y_max - y_min)</p><p>where y_n and y_o represent the normalized and original data, and y_min and y_max are the minimum and maximum of the column or row, respectively.</p></div>
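The linear transformation defined above can be sketched directly; this is an illustrative Python sketch with a hypothetical function name.

```python
def minmax_normalise(values):
    """Linear transformation to [0, 1]:
    y_n = (y_o - y_min) / (y_max - y_min)."""
    y_min, y_max = min(values), max(values)
    return [(v - y_min) / (y_max - y_min) for v in values]

# minmax_normalise([10, 15, 20]) -> [0.0, 0.5, 1.0]
scaled = minmax_normalise([10, 15, 20])
```

The minimum and maximum would need to be remembered so that network outputs can be mapped back to the original traffic scale after forecasting.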
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.2">Training and Testing set</head><p>Eighty percent (80%) of the data, that is, 5241.6 approximated to 5242 points, was used for training the network, while twenty percent (20%), that is, 1310.4 approximated to 1310 points, was used for testing the generalisation (predictive) capability of the network, for each of the HOURLY_IN and HOURLY_OUT traffic flows. Likewise, a training set of 80% and a testing set of 20% were used for each of the DAILY traffic categories, that is, 292 data points for training and 73 for testing.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Finding the appropriate complexity of the Network</head><p>For a time series forecasting problem, a training pattern consists of a fixed number of lagged observations of the series <ref type="bibr" target="#b7">[7]</ref>. The number of inputs (lagged observations) was varied from 1 to 24, excluding the bias. One and two hidden layers were considered. The number of hidden nodes was set equal to the number of input nodes; in several studies, networks with the number of hidden nodes equal to the number of input nodes are reported to have better forecasting performance <ref type="bibr" target="#b8">[8]</ref>. One output node was used, for a one-step-ahead forecast. The model of the network is therefore k, k, 1 for one hidden layer or k, k, k, 1 for two, where k represents the number of lagged observations (input variables). The epoch values used were 200, 500, and 1000. The best model, according to <ref type="bibr" target="#b18">[18]</ref>, is the one that gives the best result on the test set. The logistic sigmoid activation function was used <ref type="bibr" target="#b8">[8]</ref>. The error-correction backpropagation algorithm with a learning rate of 0.1 and a momentum of 0.9 was used to train the network.</p></div>
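The model-selection sweep described above (lags 1-24, one or two hidden layers, epochs 200/500/1000, least test RMSE wins) can be sketched as a skeleton. Here `evaluate` is a hypothetical placeholder standing in for training a k,k,1 or k,k,k,1 network and returning its test-set RMSE; it is not part of the paper.

```python
from itertools import product

def select_model(evaluate):
    """Return (rmse, lags, hidden_layers, epochs) with the least
    test RMSE over the paper's grid of candidate configurations."""
    best = None
    for k, layers, epochs in product(range(1, 25),      # input lags 1-24
                                     (1, 2),            # hidden layers
                                     (200, 500, 1000)): # training epochs
        rmse = evaluate(k, layers, epochs)  # placeholder: train + test
        if best is None or rmse < best[0]:
            best = (rmse, k, layers, epochs)
    return best
```

The grid has 24 × 2 × 3 = 144 candidate models per traffic category, which matches the experimental character of the study: no configuration is assumed optimal in advance.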
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5">Stopping and Evaluation Criteria</head><p>Training stopped after the specified number of epochs in each run. An SSE-based objective (cost) function is minimised during the training process. The measure of accuracy employed is the Root Mean Square Error (RMSE), defined as RMSE = sqrt( (1/n) Σ_{t=1}^{n} (ŷ_t - y_t)² ), where n is the total number of sample group observations, ŷ_t is the predicted (computed) value and y_t is the target value at time t. RMSE is one of the most commonly used measures of forecast error, examining how close the forecast is to the actual value <ref type="bibr" target="#b5">[5]</ref>. The best model is the one that gives the best result on the test set, that is, the model with the least RMSE on the testing set <ref type="bibr" target="#b27">[27]</ref>.</p></div>
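The RMSE definition above translates directly into code; an illustrative Python sketch:

```python
import math

def rmse(predicted, actual):
    """Root Mean Square Error: sqrt((1/n) * sum_t (yhat_t - y_t)^2),
    where n is the number of paired observations."""
    n = len(actual)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

# A perfect forecast gives 0; one miss of 2 over 3 points gives sqrt(4/3).
err = rmse([1.0, 2.0, 4.0], [1.0, 2.0, 2.0])
```

Because RMSE is computed on normalised [0, 1] data here, the reported values (e.g. 0.0766984) are on the normalised scale, not in kilobits/s.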
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">RESULTS AND DISCUSSION</head><p>The system was implemented in Visual Basic. The RMSE values of the various models were recorded and compared based on the design issues considered. The results are presented and discussed in this section.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">HOURLY_IN traffic</head><p>The RMSE values of the testing (prediction) results of the various models, based on the number of input lags, the number of hidden layers and the training epochs on one- and two-hidden-layer networks respectively, were compared for the HOURLY_IN traffic. Figure <ref type="figure">3</ref> depicts these results.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 3. ANN model selection for the HOURLY_IN traffic</head><p>There were varying degrees of performance, with no regular pattern, among the input lags, between the one- and two-hidden-layer networks, and among the various epoch values used. Nevertheless, the worst performance in all cases occurred at input lag 1. The least RMSE of this experiment, 0.0766984, occurred at input lag 24 with 200 training epochs on the two-hidden-layer network. Therefore, the best model for forecasting the HOURLY_IN traffic uses input lag 24 and 200 training epochs with two hidden layers.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">HOURLY_OUT traffic</head><p>The RMSE values of the testing (prediction) results of the various models were compared for the HOURLY_OUT traffic. The results are shown in Figure <ref type="figure" target="#fig_4">4</ref>. There were also varying performance values with no particular pattern on the issues for the HOURLY_OUT traffic. As with HOURLY_IN, the worst performance was recorded at input lag 1 in all cases. The least RMSE, 0.0621992, occurred at input lag 13 with 200 training epochs on the two-hidden-layer network. Therefore, the best model for forecasting the HOURLY_OUT traffic uses input lag 13 and 200 training epochs with two hidden layers.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">DAILY_IN traffic</head><p>Figure <ref type="figure" target="#fig_5">5</ref> presents the prediction RMSE of the various models for the DAILY_IN traffic. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4">The DAILY_OUT traffic</head><p>Figure <ref type="figure" target="#fig_7">7</ref> compares the best prediction models selected for the traffic categories. The HOURLY traffic categories had better prediction performance than their DAILY counterparts. This could be attributed to the very large sample size used for the HOURLY traffic; it has been reported that ANNs for forecasting perform better with large sample sizes than with small ones (<ref type="bibr">Zhang et al. [8]</ref> and Zhang et al. <ref type="bibr" target="#b26">[26]</ref>). In addition, Figure <ref type="figure" target="#fig_7">7</ref> reveals that different forecasting models may exist for different traffic categories, even when the traffic categories all come from the same network operator. This study observed different forecasting models for the various traffic categories based on the issues considered. The findings suggest that careful consideration of the design issues is indispensable for improving the predictive performance of a multi-layer artificial neural network, rather than applying it to internet traffic forecasting blindly. Although there is no generally accepted technique for determining the optimal design parameters, an improved predictive model is feasible through experimentation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">CONCLUSION</head><p>This study examined the impacts of some important design issues in modelling a multilayer perceptron artificial neural network for Internet traffic forecasting. The traffic forecasting was modelled as a standard time series problem, and a multilayer artificial neural network was designed to perform the time series function mapping. The mechanism was implemented in a Visual Basic programming environment and tested with real Internet traffic data through experimentation with the various design issues considered. Although no particular pattern of performance was observed, the study showed that forecasting performance can be affected by the number of input lags, hidden layers and training epochs. Although the study did not attempt to determine optimal values for the various factors considered, it has shown that careful experimentation is required to choose appropriate values for each of the design issues. Therefore, the multilayer perceptron should not be applied blindly to Internet traffic forecasting.</p></div>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>Suppose {y_t}, t = 1, 2, …, N, is a time series of the traffic loads; then the basic problem is to estimate a future traffic value such as y_{N+k}, where the integer k is called the lead time or the forecasting horizon [25]. For the univariate method, forecasts of a given traffic load are based on a model fitted only to the past observations of the given time series, so that the forecast ŷ(N, k) depends only on y_1, y_2, …, y_N. The estimate of y_{N+1} is computed as a weighted sum of the past observations: the training pattern is composed of a fixed number (n) of lagged observations of the series. The weights to be used in the ANN model are estimated from the data by minimizing the sum of squares of the within-sample one-step-ahead forecast errors over the first part of the time series, called the training set. The last part of the time series, called the test set, is kept in reserve so that genuine out-of-sample (ex ante) forecasts can be made and compared with the actual observations. Equations (1) and (2) give a one-step-ahead forecast, as they use the actual observed values of all lagged variables as inputs. If multi-step-ahead forecasts are required, it is possible to proceed in one of two ways: firstly, construct a new architecture with several outputs, each output having separate weights for each connection to the neurons; secondly, feed the one-step-ahead forecast back to replace the lag-1 value as one of the input variables, so that the same architecture can be used to construct the two-step-ahead forecast, and so on.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>x 1 , x 2 ,</head><label>2</label><figDesc>…, x m are the input signals; w k1 , w k2 , …, w km are the respective synaptic weights of neuron k; u k is the linear combiner output due to the input signals; b k is the "bias"; φ(.) is the activation function; and y k is the output signal of the neuron. The use of the bias b k has the effect of applying an affine transformation to the output v k of the linear combiner in the model; this is shown by v k = u k + b k .</figDesc></figure>
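The single-neuron model above can be expressed directly in code. This is a minimal sketch assuming a logistic sigmoid for the activation φ (the caption does not fix a particular activation function):

```python
import math

def neuron_output(x, w, b):
    """Output of one neuron k: y_k = phi(v_k), where u_k is the linear
    combiner of inputs x with synaptic weights w, v_k = u_k + b_k is the
    affine-transformed combiner output, and phi is a logistic sigmoid
    (an assumed choice of activation)."""
    u = sum(wi * xi for wi, xi in zip(w, x))  # linear combiner u_k
    v = u + b                                 # affine transformation v_k
    return 1.0 / (1.0 + math.exp(-v))         # phi(v_k)

y = neuron_output([1.0, 2.0], [0.5, -0.25], b=0.0)  # v = 0, so y = 0.5
```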
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4 .</head><label>4</label><figDesc>Figure 4. ANN model selection for the HOURLY_OUT traffic</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5 .</head><label>5</label><figDesc>Figure 5. ANN model selection for the DAILY_IN traffic. Different performance values were also observed, with no particular pattern across the various prediction models. The least RMSE of this experiment, 0.116691, occurred at input lag 3 with 1000 training epochs on a two-hidden-layer network. Therefore, the best model for forecasting the DAILY_IN traffic uses input lag 3 with 1000 training epochs and two hidden layers.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 6 .</head><label>6</label><figDesc>Figure 6 presents the RMSE of the various prediction models for the DAILY_OUT traffic.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 7 .</head><label>7</label><figDesc>Figure 7. Summary of the least RMSE for model selection of the traffic categories</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Back-propagation training procedure</figDesc><table><row><cell>1. Obtain a set of training patterns.</cell></row><row><cell>2. Set up the ANN model, consisting of a number of input neurons, hidden neurons, and output neurons.</cell></row><row><cell>3. Set the learning rate (h) and momentum rate (a).</cell></row><row><cell>4. Initialize all connection weights (W ij and W jk ) and bias weights (q k and q j ) to random values.</cell></row><row><cell>5. Set the minimum error E min / number of epochs.</cell></row><row><cell>6. Start training by applying the input patterns one at a time, propagating through the layers, then calculating the total error.</cell></row><row><cell>7. Back-propagate the error through the output and hidden layers and adapt W jk and q k .</cell></row><row><cell>8. Back-propagate the error through the hidden and input layers and adapt W ij and q j .</cell></row><row><cell>9. Check if Error &lt; E min or the maximum number of epochs is reached. If not, repeat steps 6-9; otherwise, stop training.</cell></row></table></figure>
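The training steps in the table above can be sketched as a minimal back-propagation loop with momentum. The hyper-parameter defaults mirror the settings reported in the paper (h = 0.1, a = 0.9, 200 epochs), but the implementation details here (one hidden layer, sigmoid activations, the `train_mlp` name) are our own illustrative choices, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train_mlp(X, y, n_hidden=3, h=0.1, a=0.9, e_min=1e-4, max_epochs=200):
    """Minimal back-propagation with momentum, following steps 1-9:
    h is the learning rate, a the momentum rate, e_min the minimum error."""
    n_in = X.shape[1]
    # Step 4: random connection weights, zero biases (an assumed choice)
    W1 = rng.normal(0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 1));    b2 = np.zeros(1)
    dW1 = np.zeros_like(W1); dW2 = np.zeros_like(W2)
    for epoch in range(max_epochs):            # steps 6-9
        err = 0.0
        for xi, ti in zip(X, y):
            hid = sigmoid(xi @ W1 + b1)        # forward through hidden layer
            out = sigmoid(hid @ W2 + b2)       # forward through output layer
            e = ti - out
            err += float(e @ e)                # accumulate total squared error
            # Step 7: back-propagate error through the output layer
            d_out = e * out * (1 - out)
            # Step 8: back-propagate error through the hidden layer
            d_hid = (d_out @ W2.T) * hid * (1 - hid)
            # Weight updates with momentum term a * previous update
            dW2 = h * np.outer(hid, d_out) + a * dW2
            dW1 = h * np.outer(xi, d_hid) + a * dW1
            W2 += dW2; b2 += h * d_out
            W1 += dW1; b1 += h * d_hid
        if err < e_min:                        # step 9 stopping check
            break
    return W1, b1, W2, b2
```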
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1</head><label>1</label><figDesc>presents a summary of the model selection for forecasting the various traffic categories.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1 :</head><label>1</label><figDesc>Summary of the Forecasting Model Selection for the Traffic Categories</figDesc><table><row><cell></cell><cell>Hidden layers</cell><cell>Epochs</cell><cell>Learning rate</cell><cell>Momentum</cell><cell>Input lags</cell><cell>Least RMSE</cell></row><row><cell>HOURLY_IN</cell><cell>2</cell><cell>200</cell><cell>0.1</cell><cell>0.9</cell><cell>24</cell><cell>0.07669</cell></row><row><cell>HOURLY_OUT</cell><cell>2</cell><cell>200</cell><cell>0.1</cell><cell>0.9</cell><cell>13</cell><cell>0.06219</cell></row><row><cell>DAILY_IN</cell><cell>2</cell><cell>200</cell><cell>0.1</cell><cell>0.9</cell><cell>3</cell><cell>0.11521</cell></row><row><cell>DAILY_OUT</cell><cell>1</cell><cell>200</cell><cell>0.1</cell><cell>0.9</cell><cell>3</cell><cell>0.09941</cell></row><row><cell cols="7">For the HOURLY_IN traffic, predictive values based on 24 input lags on a two-hidden-layer network with 200 training epochs were deployed; for the HOURLY_OUT traffic, the study used 13 input lags with 200 training epochs on a two-hidden-layer network. The study deployed 3 input lags on two hidden layers of 3 neurons each, with 200 training epochs, for predicting the DAILY_IN traffic. For the DAILY_OUT traffic, 3 input lags and one hidden layer of three neurons with 200 training epochs were selected.</cell></row></table></figure>
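The least-RMSE criterion used in the model selection above amounts to scoring each candidate configuration on the held-out test set and keeping the minimum. A short sketch, with hypothetical candidate scores for illustration (the tuples and values below are examples, not the paper's full experiment grid):

```python
import math

def rmse(actual, predicted):
    """Root mean square error: the model-selection criterion."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# Hypothetical candidates keyed by (input lags, hidden layers, epochs),
# mapped to their test-set RMSE; the best model is the one with least RMSE.
candidates = {(24, 2, 200): 0.07669, (13, 2, 200): 0.06219}
best = min(candidates, key=candidates.get)  # -> (13, 2, 200)
```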
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0">CoRI'16, Sept 7-9, 2016, Ibadan, Nigeria.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>


<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Internet Measurement</title>
		<author>
			<persName><forename type="first">M</forename><surname>Crovella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Krishnamurthy</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2006">2006</date>
			<publisher>John Wiley &amp; Sons, Ltd</publisher>
			<pubPlace>England</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Seasonal Time Series Forecasting Models on Artificial Neural Network</title>
		<author>
			<persName><forename type="first">S</forename><surname>Benkacha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Benhra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>El Hassani</surname></persName>
		</author>
	<idno type="DOI">10.5120/20451-2805</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Applications</title>
		<imprint>
			<biblScope unit="volume">116</biblScope>
			<biblScope unit="issue">20</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Causal Method and Time Series Forecasting Model based on Artificial Neural Network</title>
		<author>
			<persName><forename type="first">S</forename><surname>Benkacha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Benhra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>El Hassani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Applications</title>
		<imprint>
			<biblScope unit="volume">75</biblScope>
			<biblScope unit="issue">7</biblScope>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Empirical prediction models for adaptive resource provisioning in the cloud</title>
		<author>
			<persName><forename type="first">S</forename><surname>Islam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Keung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Liu</surname></persName>
		</author>
	<idno type="DOI">10.1016/j.future.2011.05.027</idno>
	</analytic>
	<monogr>
		<title level="j">Future Generation Computer Systems</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="155" to="162" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Identification and prediction of internet traffic using artificial neural networks</title>
		<author>
			<persName><forename type="first">S</forename><surname>Chabaa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zeroual</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Antari</surname></persName>
		</author>
		<idno type="DOI">10.4236/jilsa.2010.23018</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Intelligent Learning Systems &amp; Applications</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="147" to="155" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Artificial neural network time Series modelling for revenue forecasting</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Shamsuddin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sallehuddin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">M</forename><surname>Yusof</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Chiang Mai J. Sci</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="411" to="426" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Topology aware internet forecasting using neural networks</title>
		<author>
			<persName><forename type="first">P</forename><surname>Cortez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sousa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rocha</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 17 th International Conference on Artificial Neural Networks</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<meeting>the 17 th International Conference on Artificial Neural Networks<address><addrLine>Porto, Portugal</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="volume">4669</biblScope>
			<biblScope unit="page" from="445" to="452" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Forecasting with artificial neural networks: The state of the art</title>
		<author>
			<persName><forename type="first">G</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">E</forename><surname>Patuwo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">Y</forename><surname>Hu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Forecasting</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="35" to="62" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Times series forecasting with neural networks: a comparative study using the airline data</title>
		<author>
			<persName><forename type="first">J</forename><surname>Faraway</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Chattfield</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Appl. Statist</title>
		<imprint>
			<biblScope unit="volume">47</biblScope>
			<biblScope unit="page" from="231" to="250" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">New models for long-term internet traffic forecasting using artificial neural networks and flow based information</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L F</forename><surname>Miguel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Penna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Nievola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Pellenz</surname></persName>
		</author>
		<idno type="DOI">10.1109/NOMS.2012.6212033</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 2012 IEEE Network Operations and Management Symposium</title>
				<meeting>2012 IEEE Network Operations and Management Symposium</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="1082" to="1088" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Multiscale Internet traffic forecasting using neural networks and time series methods</title>
		<author>
			<persName><forename type="first">P</forename><surname>Cortez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rocha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sousa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="143" to="155" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">A Comparative Study of the statistical Methods suitable for Network Traffic Estimation</title>
		<author>
			<persName><forename type="first">I</forename><surname>Mariam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Dadarlat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Iancu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 13 th WSEAS International Conference on Communications</title>
				<meeting>the 13 th WSEAS International Conference on Communications</meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="99" to="104" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Internet traffic modeling and forecasting using non-linear time series model Garch</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">N</forename><surname>Anand</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
		<respStmt>
			<orgName>Department of Electrical and Computing Engineering, College of Engineering, Kansas State University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">M.Sc. Thesis</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Intelligent web traffic mining and analysis</title>
		<author>
			<persName><forename type="first">X</forename><forename type="middle">.</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Abraham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">A</forename><surname>Smith</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Network and Computer Applications</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="147" to="165" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Internet Forecasting using Neural Networks</title>
		<author>
			<persName><forename type="first">P</forename><surname>Cortez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rocha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sousa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceeding of the International Joint Conference on Neural Network</title>
				<meeting>eeding of the International Joint Conference on Neural Network<address><addrLine>Vancouver)</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="2635" to="2642" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Comparison of parametric and nonparametric techniques for non-peak traffic forecasting</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">World Academy of Science, engineering and Technology</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Multi-scale internet traffic forecasting using neural networks and time series methods</title>
		<author>
			<persName><forename type="first">P</forename><surname>Cortez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rocha</surname></persName>
		</author>
		<author>
			<persName><forename type="middle">P</forename><surname>Sousa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="143" to="155" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Network traffic prediction algorithm based on data mining technique</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Prangchumpol</surname></persName>
		</author>
		<ptr target="http://www.waset.org" />
	</analytic>
	<monogr>
		<title level="j">World Academy of Science</title>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
		<respStmt>
			<orgName>Engineering and Technology. International Science Index</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Computational neural network for global stock Indexes Prediction</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">W T</forename><surname>Fok</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">W L</forename><surname>Tam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of World Congress on Engineering</title>
				<meeting>World Congress on Engineering<address><addrLine>London, UK</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2008-07-02">2008. July 2 -4, 2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Identification and Prediction of Internet traffic using artificial neural networks</title>
		<author>
			<persName><forename type="first">S</forename><surname>Chabaa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zeroual</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Antari</surname></persName>
		</author>
		<idno type="DOI">10.4236/jilsa.2010.23018</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Intelligent Learning Systems &amp; Applications</title>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Comparing the performance of backpropagation algorithm and genetic algorithms in pattern recognition problems</title>
		<author>
			<persName><forename type="first">U</forename><forename type="middle">J</forename><surname>Chukwuchekwa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Information Systems</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="7" to="12" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">A neural network model for improved internet service resource provisioning</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">O</forename><surname>Odim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Gbadeyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Sadiku</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">British Journal of Mathematics &amp; Computer Science</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="2418" to="2434" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Evolving trends and its application in web usage mining: a survey</title>
		<author>
			<persName><forename type="first">V</forename><surname>Dogne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Jain</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of soft computing and engineering</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="98" to="101" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Study on internet traffic prediction models</title>
		<author>
			<persName><forename type="first">G</forename><surname>Rutka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lauks</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electronics and Electrical Engineering. -Kaunas: Technologija</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="47" to="50" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Chatfield</surname></persName>
		</author>
		<title level="m">The analysis of time series: An introduction</title>
				<meeting><address><addrLine>London</addrLine></address></meeting>
		<imprint>
			<publisher>Chapman &amp; Hall</publisher>
			<date type="published" when="1992">1992</date>
		</imprint>
	</monogr>
	<note>4 th ed</note>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Haykin</surname></persName>
		</author>
		<title level="m">Neural networks and learning machines</title>
				<meeting><address><addrLine>New Jersey</addrLine></address></meeting>
		<imprint>
			<publisher>Pearson Education, Inc</publisher>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
	<note>3 rd ed</note>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Simulation Study of Artificial Neural Networks for Nonlinear Time-series Forecasting</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">P</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">E</forename><surname>Patuwo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">Y A</forename><surname>Hu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer &amp; Operations research</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="381" to="396" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
