<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>How Fast is AI in Pharo? Benchmarking Linear Regression</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleksandr Zaitsev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sebastian Jordan-Montaño</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stéphane Ducasse</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Arolla</institution>
          ,
          <addr-line>25 Rue du Louvre, 75001 Paris</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Univ. Lille, Inria</institution>
          ,
          <addr-line>CNRS, Centrale Lille, UMR 9189 CRIStAL, Park Plaza, Parc scientifique de la Haute-Borne, 40 Av. Halley Bât A, 59650 Villeneuve-d'Ascq</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Like many other modern programming languages, Pharo is spreading into computationally demanding fields such as machine learning, big data, and cryptocurrency. This raises the need for fast numerical computation libraries. In this work, we propose to speed up low-level computations by calling routines from highly optimized external libraries, e.g., LAPACK or BLAS, through the foreign function interface (FFI). As a proof of concept, we build a prototype implementation of linear regression based on the DGELSD routine of LAPACK. Using three benchmark datasets of different sizes, we compare the execution time of our algorithm against a pure Pharo implementation and scikit-learn, a popular Python library for machine learning. We show that LAPACK&amp;Pharo is up to 2103 times faster than pure Pharo. We also show that scikit-learn is 5 to 8 times faster than our prototype, depending on the size of the data. Finally, we demonstrate that pure Pharo is up to 15 times faster than the equivalent implementation in pure Python. These findings can lay the foundation for future work in building fast numerical libraries for Pharo and using them in higher-level libraries such as pharo-ai.</p>
      </abstract>
      <kwd-group>
        <kwd>pharo</kwd>
        <kwd>benchmarking</kwd>
        <kwd>machine learning</kwd>
        <kwd>foreign function interface</kwd>
        <kwd>lapack</kwd>
        <kwd>linear regression</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Pharo (https://pharo.org/) is an open-source, dynamically-typed, reflective, object-oriented
programming language inspired by Smalltalk [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The Pharo community is growing and spreading its
applications into various domains of modern computer science: big data [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ], machine
learning [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], cryptocurrency, and the internet of things (IoT) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Those domains require fast numerical
algorithms capable of working with large datasets. Modern programming languages provide
such libraries and use them as a backend for higher-level ones such as scikit-learn, a popular
machine learning library. Pharo also provides a library for numerical computations — PolyMath.
However, as we will see in this paper, the current implementation of certain algorithms in Pharo
is significantly slower than the ones provided by similar libraries in Python and R, which delegate
the low-level computations to fast routines compiled in Fortran or C.
      </p>
      <p>In this paper, we propose to speed up numerical computations in Pharo by calling
low-level routines from highly optimized external libraries such as LAPACK or BLAS. As a
proof of concept, we propose a prototype implementation of the linear regression algorithm in
Pharo based on the DGELSD routine of LAPACK. We benchmark our algorithm against both
an alternative implementation in pure Pharo and scikit-learn — a widely used Python library
for machine learning.</p>
      <p>The Pharo &amp; LAPACK implementation is 1820 times faster than pure Pharo on a small
dataset (200K rows) and 2103 times faster on a medium dataset (1M rows). This serves as a
proof of concept that the speed of numerical algorithms, and more specifically machine learning,
in Pharo can be significantly improved by calling highly optimized external libraries through
FFI. We also show that scikit-learn is still 5 to 8 times faster than our prototype implementation
in Pharo (depending on the size of the data), which might be caused by data preprocessing and
other optimizations. We propose to further explore this difference in future work. Finally,
we show that the pure Pharo implementation of linear regression is about 5 to 15 times faster
than the equivalent implementation in pure Python. This can be explained by the just-in-time
compilation performed by the Pharo virtual machine.</p>
      <p>The rest of this paper is structured as follows. In Section 2, we discuss the possibility
of implementing a machine learning library in Pharo and the challenges associated with it. In
Section 3, we explain the different implementations of the linear regression algorithm that we
have selected for benchmarking. In Section 4, we briefly explain the foreign function interface
(FFI) and how it can be used to call C and Fortran routines from Pharo. Section 5 describes our
experiment setup, Section 6 presents the results, Section 7 discusses the threats to validity, and
Sections 8 and 9 cover related work and conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Can We Do Machine Learning in Pharo?</title>
      <p>Modern machine learning is resource demanding. It requires fast algorithms capable of
processing large volumes of data. Many algorithms in machine learning are based on matrix
algebra and numerical optimization. This raises the need for fast libraries that implement basic
algebraic operations such as matrix-vector multiplication, as well as highly optimized algebraic
algorithms such as singular value decomposition, Cholesky decomposition, eigenvector and
eigenvalue calculation, etc., for different types of matrices, i.e., symmetric, orthogonal,
Hermitian, etc.</p>
      <p>In Pharo, there is PolyMath — a library for numerical computations and scientific computing.
With its 60 packages and 903 green tests (measured at https://github.com/PolyMathOrg/PolyMath
on June 1, 2022), PolyMath provides a wide range of algorithms that cover various aspects of
mathematics: differential equations, complex numbers, fuzzy algorithms, etc. It also provides
matrix algebra, statistical distributions, and numerical optimization methods that can be
particularly useful for machine learning. That being said, some algorithms implemented in
PolyMath are very slow compared to similar numerical libraries in other languages. For
example, as we will see in Section 6, the linear regression implementation based on the singular
value decomposition (SVD) provided by PolyMath takes almost 5 hours to converge on a dataset
with 5,000 rows and 5 columns.</p>
      <p>All the code, the datasets, and the instructions on how to reproduce our experiment are
available at https://anonymous.4open.science/r/lapack-experiment-55FB.</p>
      <p>[Figure 1: scikit-learn in Python depends on SciPy, which delegates to LAPACK; analogously,
pharo-ai could depend on PolyMath, which would delegate to LAPACK.]</p>
      <p>There is also an ongoing effort in the Pharo community to implement tools for data science,
machine learning, artificial intelligence, data mining, etc. One such library is pharo-ai — a
collection of different machine learning algorithms implemented in Pharo: linear and logistic
regression, support vector machines, K-means, Naive Bayes, etc. It is inspired by scikit-learn,
a similar machine learning library in Python. scikit-learn is a popular library that has many
industrial applications. Many of its algorithms are very fast because internally scikit-learn
depends on the mathematical library SciPy, which in turn depends on LAPACK — an efficient
low-level library implemented in Fortran that provides fast algorithms for linear algebra.</p>
      <p>We propose to improve the speed of PolyMath and pharo-ai by calling LAPACK routines
through the foreign function interface — a technique for calling external functions that is
already supported by Pharo (see Section 4). In Figure 1, we show how LAPACK is used by
scikit-learn in Python and propose how it could be used by the pharo-ai library. In this scheme,
PolyMath implements various high-level mathematical algorithms by delegating low-level
algebraic operations to LAPACK. Then pharo-ai simply depends on PolyMath to implement
fast machine learning algorithms.</p>
      <p>
        As a proof of concept, in this paper we present a prototype implementation of linear regression
in Pharo based on LAPACK. We benchmark the training time of our model and show that it is
almost 500 times faster than the pure Pharo implementation. For a non-exhaustive list of
machine learning libraries and tools in Pharo, see https://github.com/pharo-ai/awesome-pharo-ml.
      </p>
    </sec>
    <sec id="sec-2b">
      <title>3. Linear Regression and How it is Implemented</title>
      <p>
        Linear Regression is a machine learning algorithm that models the relationship between an
input matrix X and an output vector y [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>Here X = (x_ij) is the input matrix with m rows and n columns, y = (y_1, …, y_m) is the
output vector, and θ = (θ_0, θ_1, …, θ_n) is the vector of parameters.</p>
      <p>The task of linear regression is to find a set of parameters θ = (θ_0, …, θ_n) such that the
predictions ŷ = h_θ(x) are as close as possible to the real output values y. The function h_θ
(also known as the hypothesis function) is defined as:</p>
      <p>h_θ(x) = θ_0 + θ_1·x_1 + ⋯ + θ_n·x_n</p>
      <p>This is an optimization problem: we need to minimize the objective function, which is often
defined as the mean squared error:</p>
      <p>J(θ) = (1/m) ∑_{i=1}^{m} (y_i − h_θ(X_{i,:}))²</p>
      <p>We have selected linear regression for benchmarking because: (1) it is one of the most
well-known and commonly used machine learning algorithms; (2) it can be solved by finding a
minimum-norm solution to the linear least squares problem — an algorithm of linear algebra
that is implemented in LAPACK.</p>
      <p>Gradient Descent Solution. Gradient descent is an iterative numerical optimization
technique that is used by many machine learning algorithms, including linear regression.</p>
      <p>It is based on the fact that the derivative of a function at a given point is positive if the
function is increasing and negative if it is decreasing. This means that by subtracting the value
of the partial derivative ∂J(θ)/∂θ_j, scaled by a parameter α (called the learning rate), from the
current value of the parameter θ_j, we decrease the cost J(θ):</p>
      <p>θ_j^(t) = θ_j^(t−1) − α·∂J(θ)/∂θ_j</p>
      <p>Least Squares Solution. Alternatively, the problem of linear regression can be expressed
in terms of linear least squares. In this case, Xθ = y is viewed as a system of linear equations
where θ is the unknown. Unless all points of the dataset lie on a straight line, this system has
no solution. However, it is possible to find the closest approximation to the solution by solving
the system Xθ = ŷ, where ŷ is the orthogonal projection of y onto the column space of X (thus
the distance between the true output y and the predicted output ŷ is the smallest). This is the
same as finding the optimal parameters θ̂ by minimizing the norm:</p>
      <p>θ̂ = argmin_θ ||y − Xθ||_2</p>
      <p>In this work, we compare the following implementations of linear regression:
1. An iterative gradient descent implementation in pure Pharo and pure Python.
2. A least squares implementation in Pharo (our prototype) and Python (scikit-learn), both
based on a routine from the LAPACK library.</p>
    </sec>
    <sec id="sec-3">
      <title>4. Foreign Function Interface in Pharo</title>
      <p>
        Like many modern programming languages, Pharo supports the Foreign Function Interface (FFI) —
the mechanism that allows the call of routines from another programming language [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. One
of the common uses is to speed up computations in interpreted or virtual machine languages
by calling routines from natively compiled libraries (e.g., *.dll files on Windows or *.so files on
Linux and Mac). Here is an example of calling a C function uint clock() from Pharo using FFI:
self
ffiCall: #(uint clock())
library: 'libc.so.6'.
      </p>
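      <p>For comparison, the same foreign call can be made from Python with the ctypes module of the standard library. This is our illustrative sketch, not part of the paper's experiment; note that the shared library name differs between platforms, which is why we locate it with find_library:</p>

```python
import ctypes
import ctypes.util

# Locate and load the C standard library (e.g., libc.so.6 on Linux)
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the foreign signature of clock(): no arguments, integral return value
libc.clock.restype = ctypes.c_long
libc.clock.argtypes = []

ticks = libc.clock()  # processor time used by the program, in clock ticks
```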
      <p>The method fiCall:library: is understood by every object in Pharo. It accepts two arguments
which specify the shared library and the signature of a method that should be called. The value
returned by this method will be the same as the return value of the external function that is
being called.</p>
      <p>Calling a LAPACK routine. The Linear Algebra PACKage (LAPACK) is a Fortran library that
provides routines for solving systems of simultaneous linear equations, least-squares solutions
of linear systems of equations, eigenvalue problems, singular value problems, matrix
factorizations, etc. In this work, we use one specific routine of LAPACK that computes the
minimum-norm solution to a real linear least squares problem — DGELSD. It accepts as input a
matrix X and a vector y, and finds the solution vector θ̂ which minimizes the norm ||y − Xθ||_2
(see Section 3).</p>
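      <p>In Python, the same routine is reachable without writing any FFI glue: SciPy's lstsq wrapper lets the caller select the gelsd driver explicitly. A small sketch (our illustration; SciPy is assumed to be installed):</p>

```python
import numpy as np
from scipy.linalg import lstsq

# Overdetermined system: fit y = theta_0 + theta_1 * x through four points
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

# lapack_driver='gelsd' selects the minimum-norm least squares routine (xGELSD)
theta, residues, rank, singular_values = lstsq(A, b, lapack_driver='gelsd')
```

      <p>For this system the minimizer is theta = (3.5, 1.4), i.e., the regression line y = 3.5 + 1.4x.</p>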
    </sec>
    <sec id="sec-4">
      <title>5. Experiment Setup</title>
      <p>
        Research Questions In our study, we answer the following research questions:
• RQ.1 - Measuring LAPACK speedup. How much time improvement can we achieve by
calling LAPACK from Pharo?
• RQ.2 - Comparing to scikit-learn. How does Pharo &amp; LAPACK implementation compare to
the one provided by scikit-learn?
• RQ.3 - Comparing pure Pharo with Python. How does pure Pharo implementation of linear
regression compare to equivalent pure Python implementation?
8https://www.netlib.org/lapack/explore-html/d7/d3b/group__double_g_esolve_
ga94bd4a63a6dacf523e25f617719f752.html
Datasets for benchmarking We generated three datasets of diferent sizes using the make_regression()
method of scikit-learn with a random linear regression model with a fixed seed. The sizes of
those datasets can be seen in Table 1. The number of columns was fixed at 20 and the number
of rows was gradually increased. We did this because the height (number of rows) and width
(number of columns) of a dataset have diferent efect on the training time [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and we did not
want to mix them.
      </p>
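      <p>A NumPy-only sketch of this generation scheme (our approximation of what make_regression() produces; the helper name, coefficient range, and noise level are our own assumptions, and we use a small row count for illustration):</p>

```python
import numpy as np

def generate_dataset(n_rows, n_cols=19, noise_std=10.0, seed=42):
    """Generate a regression dataset of the shape used in the experiment:
    n_cols standard-normal input columns plus an output column produced
    by a zero-intercept linear model with Gaussian noise."""
    rng = np.random.default_rng(seed)
    X = rng.normal(loc=0.0, scale=1.0, size=(n_rows, n_cols))
    true_coef = rng.uniform(0.0, 100.0, size=n_cols)  # random linear model
    y = X @ true_coef + rng.normal(scale=noise_std, size=n_rows)
    return X, y

# The experiment used 200K and more rows; 10,000 keeps this sketch lightweight
X, y = generate_dataset(10_000)
```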
      <p>The first 19 columns represent the input matrix X. They contain floating point numbers that
are normally distributed around 0 with standard deviation 1. The last column of each dataset
represents the output vector y, generated by applying a non-biased linear regression model with
Gaussian noise.</p>
      <p>Measuring the execution time. In this study, we only measure the time that it takes to
train the linear regression model, as training is generally the most time-consuming part of the
machine learning workflow. In Pharo, we measure time using the timeToRun method:
time := [ "... some code..." ] timeToRun.</p>
      <p>In Python, we use the time library:
import time
start_time = time.time()
# ... some code ...
elapsed_time = time.time() - start_time</p>
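      <p>Put together, one timed run can be sketched as follows (a sketch only; train() is a hypothetical stand-in for the actual model fitting):</p>

```python
import time

def train():
    """Hypothetical stand-in for model training; in the experiment, the
    timed region contains the actual fitting of the regression model."""
    total = 0.0
    for i in range(100_000):
        total += i * 0.5
    return total

start_time = time.time()
train()
elapsed = time.time() - start_time  # training time in seconds
```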
    </sec>
    <sec id="sec-5">
      <title>6. Results</title>
      <p>In this section, we answer the research questions and discuss the results of our experiments.
The benchmarks were run on a MacBook Pro 2015 with a 4-core Intel i7 processor at 2.2 GHz
and 16 GB of RAM, running macOS Monterey v12.4. To run the benchmarks, we closed all the
applications that could be closed. In addition, we disconnected the Internet and did not use the
computer while the experiment was in progress. The computer was plugged into the charger
for the duration of the experiment.</p>
      <sec id="sec-5-1">
        <title>RQ.1 - Measuring LAPACK speedup</title>
        <p>To answer the first research question, we measured the execution time of two implementations
of linear regression on the three datasets discussed in Section 5. The first implementation was
written in pure Pharo (no external libraries) and based on gradient descent. The second
implementation was based on least squares, implemented by calling the DGELSD LAPACK
routine from Pharo. As can be seen in Table 2, the LAPACK implementation was 1820 times
faster on the small dataset and 2103 times faster on the medium dataset. We could not measure
the execution time of the pure Pharo implementation on the large dataset because it took too
long and we had to stop it after 5 hours. The LAPACK implementation took about 15 seconds
to converge on the same dataset.</p>
        <p>Summary: Calling LAPACK routines from Pharo can provide a significant speedup. The
linear regression implemented with Pharo&amp;LAPACK is 462 times faster than the pure
Pharo implementation when measured on the small dataset and 284 times faster on the
medium dataset.</p>
      </sec>
      <sec id="sec-5-2">
        <title>RQ.2 - Comparing to scikit-learn</title>
        <p>We compared the execution time of the LAPACK-based implementation of linear regression in
Pharo to the one provided by the LinearRegression class in scikit-learn, which is also based on
the DGELSD routine of LAPACK. As can be seen in Table 3, scikit-learn is 8 times faster than
our implementation on the small dataset, 8 times faster on the medium dataset, and 5 times
faster on the large dataset. Considering that scikit-learn is a well-designed library with many
industrial users and our implementation is a prototype, we hypothesize that this difference is
due to various optimizations performed by scikit-learn. For example, it uses the highly
optimized NumPy data structures. A further study is needed to explain this phenomenon.</p>
        <p>Summary: The scikit-learn implementation is 8 times faster than our prototype
implementation in Pharo on the small and medium datasets, and 5 times faster on the large
dataset.</p>
      </sec>
      <sec id="sec-5-3">
        <title>RQ.3 - Comparing pure Pharo with pure Python</title>
        <p>We also compare the pure Pharo and pure Python implementations of linear regression based
on gradient descent. As can be seen in Table 4, Pharo is 5 times faster than Python on the small
dataset and 15 times faster on the medium dataset. We could not measure the execution time
on the large dataset because in both cases it took too long and we had to stop the experiment.</p>
        <p>Summary: The implementation of gradient descent-based linear regression in pure Pharo
(no external libraries) is about 5-15 times faster than the equivalent implementation in
pure Python, depending on the size of the dataset.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>7. Threats to Validity</title>
      <p>
        We consider the four types of validity threats that were presented by Wohlin et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]: internal,
external, construct, and conclusion validity. Below, we discuss the threats to internal and
external validity. We did not find any threats to conclusion and construct validity.</p>
      <p>Internal Validity
• The implementation of linear regression with LAPACK was based on least squares, while
the pure Pharo implementation was based on gradient descent. This poses a threat to
internal validity because the time difference may be affected by the choice of algorithm.
• The implementation that we propose for Pharo is a prototype, while scikit-learn is an
industry standard that contains many optimizations. We studied the source code
of scikit-learn as much as possible, but this still constitutes a validity threat.
• Measuring the execution time is always prone to noise introduced by other processes
running on the computer. To reduce this noise, we closed all unnecessary applications,
disconnected the Internet, plugged the computer into the charger, and killed all
processes that could be stopped.
      </p>
      <p>External Validity
• In this work, we demonstrated how the training time of linear regression can be reduced
using LAPACK. The same technique can be used for many other machine learning
algorithms: logistic regression, neural networks, etc. Nonetheless, there are also algorithms
that cannot benefit from it because, unlike linear regression, they do not require heavy
matrix-vector calculations. Some examples include k-means and k-nearest neighbours.
Those algorithms can still be boosted with optimized C code called through FFI, but this
requires a separate study.</p>
    </sec>
    <sec id="sec-7">
      <title>8. Related Work</title>
      <p>
        Over the years, Pharo has been successfully applied in various domains of scientific
computing and numerical optimization. Many of those applications use the algorithms provided by
PolyMath [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] — a library for scientific computing in Pharo. Other applications depend on the
algorithms of artificial intelligence and machine learning [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Bergel et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] use neural
networks and a visual environment to verify various properties related to the source code.
Cota et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] use genetic algorithms for automatic test generation. Pharo is also used for
agent-based modeling by CORMAS (Common Pool Resources and Multi-Agent Systems) — a
platform dedicated to natural and common-pool resources management [
        <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
        ]. All those
applications could benefit from speeding up numerical computations in Pharo.
      </p>
      <p>
        In this work, we benchmark our prototype implementation of linear regression against
scikit-learn — a Python machine learning library that is widely used in both research and industry [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
        Pedregosa et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] report that scikit-learn outperforms other machine learning libraries in
Python (MLP, PyBrain, pymvpa, MDP, and Shogun) in 4 out of 6 classical machine learning
algorithms.
      </p>
      <p>To the best of our knowledge, this is the first attempt to implement a fast numerical library in
Pharo by calling an external library such as LAPACK. It is also the first study to benchmark
machine learning algorithms in Pharo.</p>
    </sec>
    <sec id="sec-8">
      <title>9. Conclusion</title>
      <p>In this work we presented a proof of concept that demonstrates how numerical computations
in Pharo can be boosted by calling routines from highly optimized external libraries through
FFI. We built a prototype implementation of the linear regression algorithm in Pharo based on
the DGELSD routine from LAPACK. We measured the execution time on three benchmark
datasets and compared it to the time needed to train the analogous models in scikit-learn and
in pure Pharo. We show that the LAPACK-based implementation is up to 2103 times faster than
pure Pharo but still 5 to 8 times slower than the scikit-learn implementation, which also uses
LAPACK underneath. We also show that the pure Pharo implementation is up to 15 times faster
than the equivalent model implemented in pure Python. This interesting finding can be
explained by the just-in-time compilation performed by the Pharo virtual machine.</p>
    </sec>
    <sec id="sec-9">
      <title>10. Acknowledgements</title>
      <p>We are grateful to Vincent Aranega and Guillermo Polito for their valuable advice and
explanations of just-in-time compilation in Pharo. Oleksandr Zaitsev also thanks the Arolla
software company for financing his work during this study.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A. P.</given-names>
            <surname>Black</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ducasse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Nierstrasz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pollet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Cassou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Denker</surname>
          </string-name>
          , Pharo by Example, Square Bracket Associates, Kehrsatz, Switzerland,
          <year>2009</year>
          . URL: http://books.pharo.org.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Marra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Polito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. Gonzalez</given-names>
            <surname>Boix</surname>
          </string-name>
          ,
          <article-title>A debugging approach for live big data applications</article-title>
          ,
          <source>Science of Computer Programming</source>
          <volume>194</volume>
          (
          <year>2020</year>
          )
          102460. doi:10.1016/j.scico.2020.102460.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Marra</surname>
          </string-name>
          ,
          <article-title>A Live Debugging Approach for Big Data Processing Applications</article-title>
          ,
          <source>Ph.D. thesis, Vrije Universiteit Brussel</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bergel</surname>
          </string-name>
          , Agile Artificial Intelligence in Pharo, Apress,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Oliveira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Costiou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ducasse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Stinckwich</surname>
          </string-name>
          ,
          <article-title>A PharoThings Tutorial</article-title>
          , Square Bracket Associates,
          <year>2021</year>
          . URL: https://github.com/SquareBracketAssociates/Booklet-APharoThingsTutorial.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Géron</surname>
          </string-name>
          ,
          <article-title>Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow: Concepts, tools, and techniques to build intelligent systems</article-title>
          ,
          <source>O'Reilly Media, Inc.</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>G.</given-names>
            <surname>Polito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ducasse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Tesone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Brunzie</surname>
          </string-name>
          ,
          <source>Unified FFI - Calling Foreign Functions from Pharo</source>
          ,
          <year>2020</year>
          . URL: http://books.pharo.org/booklet-ufi/.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C.</given-names>
            <surname>Wohlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Runeson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Höst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Ohlsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Regnell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wesslén</surname>
          </string-name>
          ,
          <article-title>Experimentation in software engineering: an introduction</article-title>
          , Kluwer Academic Publishers, Norwell, MA, USA,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Besset</surname>
          </string-name>
          ,
          <article-title>Object-Oriented Implementation of Numerical Methods An Introduction with Pharo</article-title>
          , Square Bracket Associates,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bergel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Melatagia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Stinckwich</surname>
          </string-name>
          ,
          <article-title>An api and visual environment to use neural network to reason about source code</article-title>
          ,
          <source>in: Conference Companion of the 2nd International Conference on Art, Science, and Engineering of Programming</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>117</fpage>
          -
          <lpage>120</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Cota Vidaure</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. Cusi</given-names>
            <surname>Lopez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Sandoval Alcocer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bergel</surname>
          </string-name>
          ,
          <article-title>TestEvoViz: Visual introspection for genetically-based test coverage evolution</article-title>
          ,
          <source>in: VISSOFT</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bommel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Becu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Le Page</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bousquet</surname>
          </string-name>
          ,
          <article-title>Cormas: an agent-based simulation platform for coupling human decisions with computerized dynamics</article-title>
          ,
          <source>in: Simulation and gaming in the network society</source>
          , Springer,
          <year>2016</year>
          , pp.
          <fpage>387</fpage>
          -
          <lpage>410</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>C.</given-names>
            <surname>Le Page</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Becu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bommel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bousquet</surname>
          </string-name>
          ,
          <article-title>Participatory agent-based simulation for renewable resource management: the role of the cormas simulation platform to nurture a community of practice</article-title>
          ,
          <source>Journal of Artificial Societies and Social Simulation</source>
          <volume>15</volume>
          (
          <issue>1</issue>
          ) (
          <year>2012</year>
          )
          <fpage>16</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Binder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Richter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Schratz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Pfisterer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Coors</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Au</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Casalicchio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kotthoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bischl</surname>
          </string-name>
          ,
          <article-title>mlr3: A modern object-oriented machine learning framework in R</article-title>
          ,
          <source>Journal of Open Source Software</source>
          <volume>4</volume>
          (
          <year>2019</year>
          )
          <fpage>1903</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>F.</given-names>
            <surname>Pedregosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Varoquaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gramfort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Michel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Thirion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grisel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Blondel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prettenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Weiss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dubourg</surname>
          </string-name>
          , et al.,
          <article-title>Scikit-learn: Machine learning in Python</article-title>
          ,
          <source>Journal of Machine Learning Research</source>
          <volume>12</volume>
          (
          <year>2011</year>
          )
          <fpage>2825</fpage>
          -
          <lpage>2830</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>