<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>OINOS, an application suite for the performance evaluation of classifiers</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Emanuele Paracone</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dept. of Civil Engineering and Computer Science, University of Rome "Tor Vergata", Rome</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <fpage>48</fpage>
      <lpage>52</lpage>
      <abstract>
        <p>The last few years have seen major developments in machine learning (ML) techniques, and their application has spread to many fields. The success of ML on a specific problem depends strongly on the approach used and on the dataset formatting, and not only on the type of ML algorithm employed. Tools that allow the user to evaluate different classification approaches on the same problem, and their efficacy with different ML algorithms, are therefore becoming crucial. In this paper we present OINOS, a suite written in Python and Bash aimed at evaluating the performance of different ML algorithms. This tool allows the user to tackle a classification problem with different classifiers and dataset formatting strategies, and to extract the related performance metrics. The tool is presented and then tested on the classification of two diagnostic classes from a public electroencephalography (EEG) database. The flexibility and ease of use of this tool allowed us to easily compare the performances of the different classifiers while varying the dataset formatting, and to determine the best approach, obtaining an accuracy of almost 75%. OINOS is an open-source project, so its use and sharing are encouraged.</p>
      </abstract>
      <kwd-group>
        <kwd>Machine learning</kwd>
        <kwd>Classification</kwd>
        <kwd>EEG</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        In recent years we have witnessed the development of
new machine learning (ML) techniques and the improvement
of existing ones, and their application has expanded into
many fields [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]–[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. At the same time, the Python programming
language has seen a surge in popularity across the sciences,
and in particular in neuroscience [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], for reasons that include
its readability, its modularity, and the large number of libraries available.
Python's versatility is today evident in its range of uses.
When carrying out classification, regression and/or
clustering on a specific problem, it is useful to evaluate the
performances of different ML tools and of different dataset
formatting strategies, in order to study their behaviour in
the different scenarios.
      </p>
      <p>In this work we present OINOS, a suite for the evaluation
of classifier performances, composed of a set of modules for
the comparison of ML algorithms with respect to different
dataset partitioning strategies. OINOS is written in Python and
Bash, and is implemented as an application for the execution of
multithreaded benchmarks.</p>
      <p>©2019 for this paper by its authors. Use permitted under Creative Commons
License Attribution 4.0 International (CC BY 4.0).</p>
      <p>In order to give an example of application, we considered
electroencephalography (EEG) data related to the problem of
alcoholism prediction, i.e., the classification between patients
suffering from alcoholism and healthy subjects, based on EEG
time series of one second of brain activity. This dataset has been
chosen for its high prediction complexity and because the
data is publicly available (at https://kdd.ics.uci.edu/databases/
eeg/eeg.html). In this problem the ML tools learn to glean the
correlations among the fluctuations of the brain signals obtained
from the different channels and their dependence on the
subject's pathological state.</p>
      <p>The use of a custom dataset partitioning procedure allowed us
to obtain satisfactory performances without having to overload
the data preprocessing. OINOS made it easy for us to explore
alternative approaches to train the classifiers.</p>
      <p>The analyzed data belongs to a test involving 122 subjects.
From each of them a set of 120 trials has been collected.
Each trial consists in the measurement of 1 s of EEG
signals caught from 64 electrodes placed on the subject's scalp.
During the trials, the subjects were alternately exposed to three kinds of
stimuli: a single image, two matching images or two
non-matching images. Since the subjects belong to the two
categories alcoholic and non-alcoholic, and the stimuli to the three
kinds single, matching and non-matching, the EEG data have
been labeled through these two coordinates (e.g., if a trial has
been caught from an alcoholic patient while he was looking
at a non-matching couple of figures, the trial label will be
alc-non-matching).</p>
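      <p>The two-coordinate labeling just described can be sketched in a few lines of Python; a minimal illustration, where the dictionary field names (subject_type, stimulus) are hypothetical and not taken from the OINOS sources:</p>

```python
# Sketch of the two-coordinate labeling described above.
# The field names below are illustrative, not the OINOS ones.
def make_label(subject_type, stimulus):
    """Combine pathology ('alc' or 'ctrl') and stimulus
    ('single', 'matching', 'non-matching') into one label."""
    return "{}-{}".format(subject_type, stimulus)

# an alcoholic subject watching a non-matching pair of
# figures yields the label "alc-non-matching"
trial = {"subject_type": "alc", "stimulus": "non-matching"}
print(make_label(trial["subject_type"], trial["stimulus"]))
```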
    </sec>
    <sec id="sec-2">
      <title>II. EXECUTION</title>
      <p>Here we describe the structure of the presented tool and its
operation modes.</p>
      <p>
        The algorithms and the logic underlying the classification
processes of OINOS are implemented through the library
scikit-learn [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]–[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>The component modules are:
1) main: the entry point of the suite. This block is
responsible for the execution and the orchestration of the individual
modules;
2) OINOS core: the main component. It implements the
logic of the comparison among the different ML algorithms;
3) datalogger: the module for the output management and
the experiment reports.</p>
      <p>A. Use</p>
      <p>In order to start OINOS it is necessary to execute the
starter located in the root directory of the project, through the
command: $ ./start.</p>
      <p>In this way the program will return:
================
= OINOS V1.0 =
================
Select an option:
1. learn from the ’Alcoholic’ dataset from UCI Knowledge
Discovery in Databases
2. learn from the ’Wrist’ dataset from NeuCube
[1,2, quit]:</p>
      <p>From this menu it is possible to select the dataset on which
the prediction algorithms must be tested.</p>
      <sec id="sec-2-1">
        <title>B. Alcoholic</title>
        <p>By selecting the first option, OINOS will acquire the
datasets of the UCI Knowledge Discovery in
Databases archive. Before starting the execution, OINOS will ask
the user:
1) to specify the dataset among those available;
2) to specify the destination path for the output;
3) to specify which portion of the data (i.e., the ratio), with
respect to the overall dataset, will be used for testing;
4) to specify the number of executions of the same
prediction test. This setting is important
because, once the cardinality of the training and
test sets is fixed (through the ratio), the related elements are
randomly selected; at each run the
performances will therefore vary depending on the specific training set
of each experiment, and it may be reasonable to study
them in a statistical sense.</p>
        <p>At the end of this configuration phase the comparison
between the prediction algorithms is executed.</p>
      </sec>
      <sec id="sec-2-2">
        <title>a) Dataset description</title>
        <p>The datasets shown by the starter
are named with a suffix that indicates the dataset cardinality.
For example, if data 100 is selected by the user, the
prediction will be executed on a sample of 100 elements.
If a ratio of 0.2 is specified, the data will be distributed
using 80 elements as training set and the remaining 20 as test
set.</p>
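      <p>The ratio-based split described above can be reproduced with scikit-learn, on which OINOS is based; a sketch assuming a 100-element dataset and a ratio of 0.2 (the exact call shown here is illustrative, not the one in the OINOS sources):</p>

```python
from sklearn.model_selection import train_test_split

# 100 dummy samples with 64 features each (one per electrode),
# purely illustrative stand-ins for the EEG trials.
X = [[0.0] * 64 for _ in range(100)]
y = [i % 2 for i in range(100)]

# test_size plays the role of the ratio: 0.2 leaves
# 80 elements for training and 20 for testing,
# randomly selected at each call.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=0)

print(len(X_train), len(X_test))  # 80 20
```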
        <p>b) Execution: During the execution, messages of four
comparison categories are printed on the standard output;
each category corresponds to a different classification to be
presented to the prediction algorithms:
1) alcoholic-control: the EEG time series of the dataset are
classified as pertaining to alcoholic (alcohol) or healthy
(i.e., control) patients;
2) single-matching-non matching images: the EEG time
series pertain to participants who are shown a single flickering
image (single), two identical alternating images (matching)
or two different alternating images (non matching);
3) single-matching-non matching images for alcoholic and
control: the intersection of the two previous
classifications is considered (alcoholic patient watching a single
image, alcoholic patient watching two identical images,
etc.);
4) alcoholic-control extended: the six classes of the
previous step are considered and projected onto the two classes
alcoholic and control.</p>
        <p>The execution goes through these categories in four phases,
showing the results of the test on the screen in terms of:
classification (ALC-CTRL, SGL-MATCH-NONMATCH,
ALC-CTRL/SGL-MATCH-NONMATCH and ALC-CTRL EXT, respectively);
overall cardinality of the dataset (for example, 100 for
data 100);
cardinality of the test set (for example, 20 for ratio = 0.2);
type of classifier under test;
accuracy, precision, recall and F1 score of the classifier.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Performance metrics</title>
        <p>The metrics are computed as
Acc = (TP + TN) / (TP + TN + FP + FN) (1)
Pr = TP / (TP + FP) (2)
Rec = TP / (TP + FN) (3)
F1 = 2 (Pr · Rec) / (Pr + Rec) (4)
where TP stands for true positive, TN for true negative,
FP for false positive and FN for false negative. A TP is an
outcome where the model correctly predicts the positive
class; similarly, a TN is an outcome where the model
correctly predicts the negative class. A FP is an outcome
where the model incorrectly predicts the positive class,
and a FN is an outcome where the model incorrectly
predicts the negative class.</p>
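      <p>Equations (1)-(4) can be computed directly from the four confusion counts; a self-contained sketch:</p>

```python
def compute_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1 score from the
    confusion counts, following Eqs. (1)-(4)."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # Eq. (1)
    pr = tp / (tp + fp)                     # Eq. (2)
    rec = tp / (tp + fn)                    # Eq. (3)
    f1 = 2 * pr * rec / (pr + rec)          # Eq. (4)
    return acc, pr, rec, f1

# Example with illustrative counts from a 20-element test set.
acc, pr, rec, f1 = compute_metrics(tp=8, tn=7, fp=2, fn=3)
print(round(acc, 2), round(pr, 2), round(rec, 2), round(f1, 2))
# 0.75 0.8 0.73 0.76
```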
        <p>c) Output: When the execution is finished, the output will
be available at the path specified during the configuration phase:
a Microsoft Excel file (.xlsx) with the report, as described
above;
a figure with the comparison between the different
accuracy values.</p>
      </sec>
      <sec id="sec-2-4">
        <title>C. Unattended mode</title>
        <p>With this option it is possible to directly call the
relative Python sources.</p>
        <p>This allows the execution in unattended mode, useful
for the implementation of custom procedures and benchmarks.
The related scripts are ./bin/oinos.py and ./bin/wrist.py
respectively; the switch -h enables the help, which returns the
following information to the user:
$ ./bin/oinos.py -h
usage:
$ python bin/main.py -d &lt;data path&gt; -r &lt;testing data
ratio&gt; -o
&lt;output path&gt;
example:
$ ./bin/oinos.py -d data 100 -r 0.3 -v -o out
——————————————————————
$ ./bin/wrist.py -h
usage:
$ python bin/main.py -r &lt;testing data ratio&gt; -o &lt;output
path&gt;
example:
$ ./bin/oinos.py -r 0.3 -v -o out</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>III. DATASET: ALCOHOLIC</title>
      <sec id="sec-3-1">
        <title>A. Dataset interpretation</title>
        <p>
          This dataset comes from the Knowledge
Discovery in Databases Archive of the University of California,
Irvine [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], and is part of a larger dataset on the detection of the
genetic predisposition of human beings to alcoholism [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
        <p>In our case, the gathered data comes from an experiment
conducted on 122 subjects, each of which underwent 120 trials
of the same task.</p>
        <p>
          The task consists of one second of EEG activity recorded while
the subject is asked to watch, alternately:
a single image (case identified as single, i.e., SNGL);
two identical images (matching, i.e., MATCH);
two different images (non matching, i.e., NONMATCH).
For each presented stimulus, ten trials of one second of
activity, recorded by 64 electrodes, have been gathered.
The electrodes were located on the head of the subject, to record
fluctuations of postsynaptic activity [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], sampled at 256 Hz.
We implemented our comparisons between the classifiers by
considering the four classifications described in Section
II-B. In the next section we will show the strong points and
the advantages of this approach.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>B. Classification: a bottom up approach</title>
        <p>Using the metadata of the experiment, the samples have
been subdivided with respect to the type of stimulus given
to the subject (SNGL, MATCH, NONMATCH) or on the
basis of the type of subject (alcoholic, control). Different tests
have been carried out to evaluate the performances of the classifiers
considered, using different configurations (only subject type,
only stimulus type, combined).</p>
        <p>Unfortunately, not even the classifiers that achieved the greatest success
during repeated runs were able to reach satisfying
performances.</p>
        <p>Therefore we implemented a different method. In addition to
the three types of prediction described above, we implemented
a fourth one, alcoholic-control extended (i.e., alc-ctrl ext), able to project
the classifications obtained by the combined configuration (six
classes, one for each combination of pathology and stimulus)
onto the two classes alcoholic and control; we therefore classified
the data in the most stringent way, then moved back up in abstraction
to generalize the final solution.</p>
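        <p>The bottom-up projection just described reduces to a simple mapping on the predicted labels; a minimal sketch, where the six combined label names are illustrative, not the exact strings used by OINOS:</p>

```python
# Project the six combined classes (pathology-stimulus)
# onto the two classes 'alc' and 'ctrl', as in the
# alc-ctrl extended (bottom-up) approach.
def project(label):
    """Keep only the pathology coordinate of a combined label,
    i.e. everything before the first hyphen."""
    return label.split("-", 1)[0]

predicted = ["alc-single", "ctrl-matching", "alc-non-matching"]
print([project(p) for p in predicted])  # ['alc', 'ctrl', 'alc']
```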
        <p>This new way of predicting the classes alcoholic and control
has significantly improved the performances of the classifiers,
as we illustrate below.</p>
        <p>We have conducted a benchmark of 100 consecutive runs to
analyze the performances of the following classifiers:
K Nearest Neighbors
Linear SVM
RBF SVM
Linear SVC
Gaussian Process
Decision Tree (with max depth = 5)
Decision Tree (with max depth = 10)
Random Forest
Gradient Boosting Classifier
Neural Net
AdaBoost
Naive Bayes
Linear Discriminant Analysis
QDA</p>
        <p>After the execution of the runs, each one of the
metrics has been averaged. Although the performances are not very
good, it has to be noted that the introduction of the
alc-ctrl extended approach has significantly improved the performance of
some of the classifiers.</p>
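        <p>The repeated-run averaging can be sketched with scikit-learn; an illustrative benchmark where the classifier list is a small subset of the one above, the data is synthetic rather than the EEG dataset, and the number of runs is reduced for brevity (the paper uses 100):</p>

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in: 100 samples, 64 features (one per electrode).
X, y = make_classification(n_samples=100, n_features=64, random_state=0)

classifiers = {
    "K Nearest Neighbors": KNeighborsClassifier(),
    "Linear SVC": LinearSVC(),
}

n_runs = 10  # reduced from the 100 consecutive runs of the paper
for name, clf in classifiers.items():
    accs = []
    for run in range(n_runs):
        # a new random split at each run, so that performances
        # can be studied in a statistical sense
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=run)
        clf.fit(X_tr, y_tr)
        accs.append(accuracy_score(y_te, clf.predict(X_te)))
    print("{}: mean accuracy = {:.2f}".format(name, np.mean(accs)))
```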
        <p>The two types of classifications, alcoholic-control (in
blue) and alcoholic-control extended (in orange), have been
compared, underlining the benefits of the latter approach. The
results are summarized in figures 3, 4, 5, 6.</p>
        <p>Among the different classifiers tested, it is worth
highlighting the cases of Linear SVC and Neural Network. Their
classifications for alcoholic-control were just above the average
of the other classifiers, with accuracy and precision near
60%. Such performances have considerably improved with the
extended approach.</p>
      </sec>
      <sec id="sec-3-3">
        <title>C. The UCI KDD Archive</title>
        <p>The University of California Knowledge Discovery in
Databases Archive (UCI KDD Archive) openly shares different
datasets with the aim of making them usable for machine
learning research (http://kdd.ics.uci.edu/). On the website, the
datasets are available indexed by typology and semantic area.
The EEG data selected for our study are categorized in the
section Time Series - EEG (http://kdd.ics.uci.edu/databases/
eeg/eeg.data.html).</p>
      </sec>
      <sec id="sec-3-4">
        <title>IV. CONCLUSIONS</title>
        <p>In this work we presented OINOS, a suite for the evaluation
of classifier performances, composed of a set of modules
for the comparison of several ML algorithms with respect to
different datasets. We faced a classification problem based on
neurophysiology data (i.e., EEG time series): distinguishing
alcoholic from non-alcoholic subjects during the execution of a
task. Through the performance evaluation of a set of
classifiers we found the best configuration among the proposed
classifiers and dataset formatting strategies. Despite the large
cardinality of the dataset, the need for alternative approaches to
dataset formatting, to facilitate the learning of the classifiers,
has emerged. The use of a custom procedure allowed us to
find a way to improve the classification.</p>
        <p>Fig. 6. F1 results are summarized, taking into account the considered classifiers.
In orange are shown the results obtained by adopting the bottom-up approach,
whereas in blue are shown the results of the "normal" approach.</p>
        <p>
          To show how OINOS can be used, here we have performed
the evaluation of classifiers for an application related to the
biomedical field. Nevertheless, such kinds of tools are of
great help in many other fields [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], such as finance [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], face
recognition [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], and communication systems [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], where they
could prove useful for evaluating the performance of recent
communication algorithms (e.g., [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]). Finally, since
some ML strategies are based on neural networks, a future
development could be that of complementing classical artificial
neural networks (ANNs) with bio-inspired spiking neural
networks (SNNs) [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]–[
          <xref ref-type="bibr" rid="ref22">22</xref>
          ], since such approaches
are recently proving to be appropriate for the classification/prediction of
spatio-temporal stream data [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]–[
          <xref ref-type="bibr" rid="ref25">25</xref>
          ], and of comparing their
performances on problems which are
classically faced with ANNs [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ], [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ].
        </p>
        <p>OINOS is an open-source project, available at https://gitlab.
com/knizontes/oinos.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Angra</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Ahuja</surname>
          </string-name>
          , “
          <article-title>Machine learning and its applications: A review,</article-title>
          ” in
          <source>2017 International Conference on Big Data Analytics and Computational Intelligence (ICBDAC)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>57</fpage>
          -
          <lpage>60</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Malhotra</surname>
          </string-name>
          , “
          <article-title>A systematic review of machine learning techniques for software fault prediction</article-title>
          ,” Applied Soft Computing, vol.
          <volume>27</volume>
          , pp.
          <fpage>504</fpage>
          -
          <lpage>518</lpage>
          ,
          <year>2015</year>
          . [Online]. Available: http://www.sciencedirect.com/science/ article/pii/S1568494614005857
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Matta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. C.</given-names>
            <surname>Cardarilli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. Di</given-names>
            <surname>Nunzio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fazzolari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Giardino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Re</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Silvestri</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Span</surname>
          </string-name>
          , “
          <article-title>Q-rts: a real-time swarm intelligence based on multi-agent q-learning,” Electronics Letters</article-title>
          , vol.
          <volume>55</volume>
          , no.
          <issue>10</issue>
          , pp.
          <fpage>589</fpage>
          -
          <lpage>591</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G. C.</given-names>
            <surname>Cardarilli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. Di</given-names>
            <surname>Nunzio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fazzolari</surname>
          </string-name>
          , M. Re, and
          <string-name>
            <given-names>S.</given-names>
            <surname>Span</surname>
          </string-name>
          , “
          <article-title>Awsom, an algorithm for high-speed learning in hardware self-organizing maps</article-title>
          ,
          <source>” IEEE Transactions on Circuits and Systems II: Express Briefs</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>1</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Coco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Laudani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Riganti Fulginei</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Salvini</surname>
          </string-name>
          , “
          <article-title>Team problem 22 approached by a hybrid artificial life method,” COMPELThe international journal for computation and mathematics in electrical and electronic engineering</article-title>
          , vol.
          <volume>31</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>816</fpage>
          -
          <lpage>826</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Coco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Laudani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. R.</given-names>
            <surname>Fulginei</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Salvini</surname>
          </string-name>
          , “
          <article-title>Bacterial chemotaxis shape optimization of electromagnetic devices,” Inverse Problems in Science and Engineering</article-title>
          , vol.
          <volume>22</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>910</fpage>
          -
          <lpage>923</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>E.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bednar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Diesmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gewaltig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hines</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Davison</surname>
          </string-name>
          , “Python in neuroscience,” Frontiers in Neuroinformatics, vol.
          <volume>9</volume>
          , no.
          <issue>11</issue>
          , pp.
          <fpage>62</fpage>
          -
          <lpage>76</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] INRIA, “
          <article-title>Scikit learn</article-title>
          .” [Online]. Available: https://scikit-learn.org
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] --, “Scikit eeg.” [Online]. Available: http://kdd.ics.uci.edu/databases/ eeg/eeg.data.html</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>F.</given-names>
            <surname>Pedregosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Varoquaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gramfort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Michel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Thirion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grisel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Blondel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prettenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Weiss</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Dubourg</surname>
          </string-name>
          , “
          <article-title>Scikit-learn: Machine learning in python</article-title>
          ,
          <source>” Journal of machine learning research</source>
          , vol.
          <volume>12</volume>
          , pp.
          <fpage>2825</fpage>
          -
          <lpage>2830</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S. D.</given-names>
            <surname>Bay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. F.</given-names>
            <surname>Kibler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Pazzani</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Smyth</surname>
          </string-name>
          , “
          <article-title>The uci kdd archive of large data sets for data mining research and experimentation,” SIGKDD explorations</article-title>
          , vol.
          <volume>2</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>81</fpage>
          -
          <lpage>85</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G.</given-names>
            <surname>Susi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ye-Chen</surname>
          </string-name>
          , J. de Frutos Lucas, G. Niso, and
          <string-name>
            <given-names>F.</given-names>
            <surname>Maestú</surname>
          </string-name>
          , “
          <article-title>Neurocognitive aging and functional connectivity using magnetoencephalography,” in Oxford research encyclopedia of psychology and aging</article-title>
          . Oxford: Oxford University press,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Soofi</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Awan</surname>
          </string-name>
          , “
          <article-title>Classification techniques in machine learning: Applications and issues</article-title>
          ,
          <source>” Journal of Basic &amp; Applied Sciences</source>
          , vol.
          <volume>13</volume>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>B.</given-names>
            <surname>Henrique</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Amorim Sobreiro</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Kimura</surname>
          </string-name>
          , “
          <article-title>Literature review: Machine learning techniques applied to financial market prediction,” Expert Systems with Applications</article-title>
          , vol.
          <volume>124</volume>
          , pp.
          <fpage>226</fpage>
          -
          <lpage>251</lpage>
          ,
          <year>2019</year>
. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S095741741930017X
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>H.</given-names>
            <surname>Filali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Riffi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Mahraz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Tairi</surname>
          </string-name>
          , “
          <article-title>Multiple face detection based on machine learning</article-title>
,” in
          <source>2018 International Conference on Intelligent Systems and Computer Vision (ISCV)</source>
          ,
          <year>April 2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>O.</given-names>
            <surname>Simeone</surname>
          </string-name>
          , “
          <article-title>A very brief introduction to machine learning with applications to communication systems</article-title>
,”
          <source>IEEE Transactions on Cognitive Communications and Networking</source>
          , vol.
          <volume>4</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>648</fpage>
          -
          <lpage>664</lpage>
          ,
          <year>Dec 2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Detti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Orru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Paolillo</surname>
          </string-name>
          , G. Rossi,
          <string-name>
            <given-names>P.</given-names>
            <surname>Loreti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bracciale</surname>
          </string-name>
          , and
          <string-name>
<given-names>N.</given-names>
            <surname>Blefari Melazzi</surname>
          </string-name>
          , “
<article-title>Application of information centric networking to NoSQL databases</article-title>
          ,” in
          <source>2017 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN)</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Detti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bracciale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Loreti</surname>
          </string-name>
          , G. Rossi, and
          <string-name>
<given-names>N.</given-names>
            <surname>Blefari Melazzi</surname>
          </string-name>
          , “
<article-title>A cluster-based scalable router for information centric networks</article-title>
          ,”
          <source>Computer Networks</source>
          , vol.
          <volume>142</volume>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>G.</given-names>
            <surname>Susi</surname>
          </string-name>
          ,
          <string-name>
<given-names>L.</given-names>
            <surname>Antón Toro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Canuet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
<surname>López</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
<surname>Maestú</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. R.</given-names>
            <surname>Mirasso</surname>
          </string-name>
          , and E. Pereda, “
<article-title>A neuro-inspired system for online learning and recognition of parallel spike trains, based on spike latency, and heterosynaptic STDP</article-title>
          ,”
          <source>Frontiers in Neuroscience</source>
          , vol.
          <volume>12</volume>
          , p.
          <fpage>780</fpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>N. K.</given-names>
            <surname>Kasabov</surname>
          </string-name>
          , “
<article-title>NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data</article-title>
          ,”
          <source>Neural Networks</source>
          , vol.
          <volume>52</volume>
          , pp.
          <fpage>62</fpage>
          -
          <lpage>76</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>G.</given-names>
            <surname>Susi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cristini</surname>
          </string-name>
          , and
          <string-name>
<given-names>M.</given-names>
            <surname>Salerno</surname>
          </string-name>
          , “
<article-title>Path multimodality in a feedforward SNN module, using LIF with latency model</article-title>
          ,”
          <source>Neural Network World</source>
          , vol.
<volume>26</volume>
          , no.
          <issue>4</issue>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>S.</given-names>
            <surname>Acciarito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. C.</given-names>
            <surname>Cardarilli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cristini</surname>
          </string-name>
          ,
          <string-name>
<given-names>L.</given-names>
            <surname>Di Nunzio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fazzolari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Khanal</surname>
          </string-name>
          , M. Re, and G. Susi, “
<article-title>Hardware design of LIF with latency neuron model with memristive STDP synapses</article-title>
          ,”
          <source>Integration, the VLSI Journal</source>
          , vol.
          <volume>59</volume>
          , no. C, pp.
          <fpage>81</fpage>
          -
          <lpage>89</lpage>
          , Sep.
          <year>2017</year>
. [Online]. Available: https://doi.org/10.1016/j.vlsi.2017.05.006
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>N.</given-names>
            <surname>Kasabov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. M.</given-names>
            <surname>Scott</surname>
          </string-name>
          , E. Tu,
          <string-name>
            <given-names>S.</given-names>
            <surname>Marks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Sengupta</surname>
          </string-name>
          , E. Capecci,
          <string-name>
            <given-names>M.</given-names>
            <surname>Othman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Doborjeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Murli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hartono</surname>
          </string-name>
          ,
<string-name>
            <given-names>J. I.</given-names>
            <surname>Espinosa-Ramos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. B.</given-names>
            <surname>Alvi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Taylor</surname>
          </string-name>
          , V. Feigin,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gulyaev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mahmoud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.-G.</given-names>
            <surname>Hou</surname>
          </string-name>
          , and
          <string-name>
<given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          , “
<article-title>Evolving spatio-temporal data machines based on the NeuCube neuromorphic framework: Design methodology and selected applications</article-title>
          ,”
          <source>Neural Networks</source>
          , vol.
          <volume>78</volume>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>G.</given-names>
            <surname>Lo Sciuto</surname>
          </string-name>
          , G. Susi, G. Cammarata, and G. Capizzi, “
          <article-title>A spiking neural network-based model for anaerobic digestion process</article-title>
,” in
          <source>2016 International Symposium on Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>996</fpage>
          -
          <lpage>1003</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>S.</given-names>
            <surname>Brusca</surname>
          </string-name>
          , G. Capizzi,
          <string-name>
<given-names>G.</given-names>
            <surname>Lo Sciuto</surname>
          </string-name>
          , and G. Susi, “
<article-title>A new design methodology to predict wind farm energy production by means of a spiking neural network-based system</article-title>
          ,”
          <source>International Journal of Numerical Modelling: Electronic Networks, Devices and Fields</source>
          , vol.
          <volume>32</volume>
          , no.
          <issue>4</issue>
          , p.
          <fpage>e2267</fpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tealab</surname>
          </string-name>
          , “
          <article-title>Time series forecasting using artificial neural networks methodologies: A systematic review</article-title>
,”
          <source>Future Computing and Informatics Journal</source>
          , vol.
          <volume>3</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>334</fpage>
          -
          <lpage>340</lpage>
          ,
          <year>2018</year>
          . [Online]. Available: http://www.sciencedirect.com/science/article/pii/S2314728817300715
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>G.</given-names>
            <surname>Capizzi</surname>
          </string-name>
          ,
          <string-name>
<given-names>G.</given-names>
            <surname>Lo Sciuto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Monforte</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          , “
<article-title>Cascade feed forward neural network-based model for air pollutants evaluation of single monitoring stations in urban areas</article-title>
          ,”
          <source>International Journal of Electronics and Telecommunications</source>
          , vol.
          <volume>61</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>327</fpage>
          -
          <lpage>332</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>