<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>GMLS-Nets: A machine learning framework for unstructured data</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nathaniel Trask</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ravi G. Patel</string-name>
          <email>rgpatelg@sandia.gov</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ben J. Gross</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paul J. Atzberger</string-name>
          <email>atzberg@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Sandia National Laboratories Center for Computing Research</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of California Santa Barbara Department of Mathematics and Mechanical Engineering</institution>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Data fields sampled on irregularly spaced points arise in many science and engineering applications. For regular grids, Convolutional Neural Networks (CNNs) gain benefits from weight sharing and invariances. We generalize CNNs by introducing methods for data on unstructured point clouds using Generalized Moving Least Squares (GMLS). GMLS is a nonparametric meshfree technique for estimating linear bounded functionals from scattered data, and has emerged as an effective technique for solving partial differential equations (PDEs). By parameterizing the GMLS estimator, we obtain learning methods for linear and non-linear operators with unstructured stencils. The requisite calculations are local, embarrassingly parallelizable, and supported by a rigorous approximation theory. We show how the framework may be used for unstructured physical data sets to perform operator regression, develop predictive dynamical models, and obtain feature extractors for engineering quantities of interest. The results show the promise of these architectures as foundations for data-driven model development in scientific machine learning applications.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Many scientific and engineering applications require
processing data sets sampled on irregularly spaced points. Consider,
e.g., GIS data associating geospatial locations with
measurements, or scientific simulations with unstructured meshes.
This need is amplified by the recent surge of interest in
scientific machine learning (SciML) [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] targeting the application
of data-driven techniques to the sciences. In this setting, data
typically takes the form of, e.g., synthetic simulation data from
meshes, or sensor data associated with data sites evolving
under partially known dynamics. This data is often scarce or
highly constrained, and it has been proposed that successful
SciML strategies will leverage prior knowledge to enhance
information gained from such data [
        <xref ref-type="bibr" rid="ref18 ref25">18, 25</xref>
        ]. One may exploit
physical properties such as transformation symmetries,
conservation structure, or solution regularity [
        <xref ref-type="bibr" rid="ref18 ref6 ref9">6, 9, 18</xref>
        ]. This new
application space necessitates ML architectures capable of
utilizing such knowledge.
      </p>
      <p>Sandia National Laboratories is a multimission laboratory
managed and operated by National Technology and Engineering
Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell
International, Inc., for the U.S. Department of Energy's National
Nuclear Security Administration under contract DE-NA-0003525.
This paper describes objective technical results and analysis. Any
subjective views or opinions that might be expressed in the paper
do not necessarily represent the views of the U.S. Department of
Energy or the United States Government.</p>
      <p>Copyright © 2020, for this paper by its authors. Use permitted under
Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>For data sampled on regular grids, Convolutional Neural
Networks (CNNs) are widely used to exploit translation
invariance and hierarchical structure to extract features from
data. Here we generalize this technique to the SciML setting
by introducing GMLS-Nets based on the scattered data
approximation theory underlying GMLS. Similar to how CNNs
learn stencils which benefit from weight-sharing,
GMLS-Nets operate by using local reconstructions to learn operators
between function spaces. The resulting architecture is
similarly interpretable and serves as an effective generalization
of CNNs to unstructured data, while providing mechanisms
to incorporate knowledge of underlying physics.</p>
      <p>In this work we show how GMLS-Nets may be used in a
SciML setting. Our results show GMLS-Nets are an effective
tool to discover PDEs, which may be used as a foundation
to construct data-driven models while preserving physical
invariants like conservation principles. We also show they
may be used to improve traditional scientific components,
such as time integrators. We show they also can be used
to regress engineering quantities of interest from scientific
simulation data. Finally, we briefly show GMLS-Nets can
perform reasonably relative to ConvNets on traditional
computer vision benchmarks. These results indicate the promise
of GMLS-Nets to support data-driven modeling efforts in
SciML applications. Implementations in TensorFlow and
PyTorch are available at https://github.com/rgp62/gmls-nets and
https://github.com/atzberg/gmls-nets.</p>
      <sec id="sec-1-1">
        <title>Generalized Moving Least Squares (GMLS)</title>
        <p>Generalized Moving Least Squares (GMLS) is a
nonparametric functional regression technique to construct
approximations of linear, bounded functionals from scattered
samples of an underlying field by solving local least-squares
problems. On a Banach space $\mathbb{V}$ with dual space $\mathbb{V}^*$, we aim
to recover an estimate of a given target functional $\tau_{\tilde{x}}[u] \in \mathbb{V}^*$
acting on $u = u(x) \in \mathbb{V}$, where $x, \tilde{x}$ denote associated
locations in a compactly supported domain $\Omega \subset \mathbb{R}^d$. We assume
$u$ is characterized by an unstructured collection of sampling
functionals, $\Lambda(u) := \{\lambda_j(u)\}_{j=1}^{N} \subset \mathbb{V}^*$.</p>
        <p>To construct this estimate, we consider $P \subset \mathbb{V}$ and seek an
element $p^* \in P$ which provides an optimal reconstruction of
the samples in the following weighted-$\ell_2$ sense:
$$p^* = \underset{p \in P}{\operatorname{argmin}} \sum_{j=1}^{N} \left( \lambda_j(u) - \lambda_j(p) \right)^2 \omega(\lambda_j, \tilde{x}). \quad (1)$$
Here $\omega(\lambda_j, \tilde{x})$ is a positive, compactly supported kernel
function establishing spatial correlation between the
target functional and sampling set. If one associates locations
$X_h := \{x_j\}_{j=1}^{N}$ with $\Lambda(u)$, then one may consider
radial kernels $\omega = W(\|x_j - \tilde{x}\|_2)$ with support $r &lt; \epsilon$.</p>
        <p>Assuming the basis $P = \operatorname{span}\{\phi_1, \dots, \phi_{\dim(P)}\}$, and
denoting $\Phi(x) = \{\phi_i(x)\}_{i=1,\dots,\dim(P)}$, the optimal reconstruction
may be written in terms of an optimal coefficient vector $a(u)$,
$$p^* = \Phi(x)^\top a(u). \quad (2)$$
Provided one has knowledge of how the target functional
acts on $P$, the final GMLS estimate may be obtained by
applying the target functional to the optimal reconstruction,
$$\tau^h_{\tilde{x}}[u] = \tau_{\tilde{x}}(\Phi)^\top a(u). \quad (3)$$</p>
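        <p>To make the estimator concrete, the following minimal NumPy sketch (our illustration, not the interface of the released packages; the name gmls_coefficients is hypothetical) assembles the weighted least-squares problem of Eqn. 1 for point-evaluation sampling functionals and a 1D monomial basis, then applies a known target functional to the recovered coefficient vector $a(u)$:</p>
        <preformat>
import numpy as np

def gmls_coefficients(x_tilde, xs, us, m=2, eps=1.0, p=2):
    """Solve Eqn. (1) for point-evaluation sampling functionals:
    weighted least squares for the coefficients a(u) at x_tilde."""
    # 1D monomial basis phi_i(x) = (x - x_tilde)^i, i = 0..m.
    P = np.stack([(xs - x_tilde) ** i for i in range(m + 1)], axis=1)
    # Compactly supported weight W(r) = (1 - r/eps)_+^p.
    w = np.maximum(1.0 - np.abs(xs - x_tilde) / eps, 0.0) ** p
    # Normal equations of the weighted-l2 problem: (P^T W P) a = P^T W u.
    return np.linalg.solve(P.T @ (w[:, None] * P), P.T @ (w * us))

# Example: estimate tau[u] = u''(x_tilde) for u = sin from scattered data.
rng = np.random.default_rng(0)
xs = np.sort(rng.uniform(-1.0, 1.0, 40))
a = gmls_coefficients(0.5, xs, np.sin(xs), m=3)
# For this centered basis, applying tau to the reconstruction gives 2*a[2].
print(2 * a[2], -np.sin(0.5))   # the two values agree closely
</preformat>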
        <p>
          Sufficient conditions for the existence of solutions to Eqn.
1 depend only upon the unisolvency of $\Lambda$ over $\mathbb{V}$, the
distribution of samples $X_h$, and mild conditions on the domain
$\Omega$; they are independent of the choice of $\tau_{\tilde{x}}$. For theoretical
underpinnings and recent applications, we refer readers to [
          <xref ref-type="bibr" rid="ref5 ref16 ref29 ref30">5,
16, 29, 30</xref>
          ].
        </p>
        <p>GMLS has primarily been used to obtain point estimates
of differential operators to develop meshfree discretizations
of PDEs. The abstraction of GMLS, however, provides a
mathematically rigorous approximation theory framework which
may be applied to a wealth of problems, whereby one may
tailor the choice of $\tau_{\tilde{x}}$, $\Lambda$, $P$ and $\omega$ to a given application. In
the current work, we will assume the action of $\tau_{\tilde{x}}$ on $P$ is
unknown, and introduce a parameterization $\tau_{\tilde{x},\xi}(\Phi)$, where $\xi$
denotes hyperparameters to be inferred from data. Classically,
GMLS is restricted to linear bounded target functionals; we
will also consider a novel nonlinear extension by considering
estimates of the form</p>
        <p>
          $$\tau^h_{\tilde{x}}[u] = q_{\tilde{x},\xi}(a(u)), \quad (4)$$
where $q_{\tilde{x},\xi}$ is a family of nonlinear operators parameterized
by $\xi$ acting upon the GMLS reconstruction. Where
unambiguous, we will drop the $\tilde{x}$ dependence of operators and
simply write, e.g., $\tau^h[u] = q_\xi(a(u))$. We have recently used
related non-linear variants of GMLS to develop solvers for
PDEs on manifolds in [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ].
        </p>
        <p>For simplicity, in this work we specialize as follows. Let $\Lambda$
be point evaluations on $X_h$; let $P$ be $\pi_m(\mathbb{R}^d)$, the space of
$m$th-order polynomials; let $W(r) = (1 - r/\epsilon)^p_+$, where $f_+$
denotes the positive part of a function $f$ and $p \in \mathbb{N}$. We
stress however that this framework supports much broader
applications. Consider, e.g., learning from flux data related to
$H(\mathrm{div})$-conforming discretizations, where one may select
as sampling functional $\lambda_i(u) = \int_{f_i} u \cdot dA$, or consider the
physical constraints that may be imposed by selecting $P$ to
be divergence free or to satisfy a differential equation.</p>
        <p>We illustrate now the connection between GMLS and
convolutional networks in the case of a uniform grid, $X_h \subset \mathbb{Z}^d$.
Consider a sampling functional $\lambda_j(u) = (u(x_j) - u(x_i))$,
and assume the parameterization $\tau_{\tilde{x},\xi}(\phi_\alpha) = \xi_\alpha$, $\alpha = 1, \dots, \dim(P)$,
with $x_{i,j} = x_i - x_j$. Then the GMLS estimate is given explicitly
at a point $x_i$ by
$$\tau^h_{x_i}[u] = \sum_{\alpha,\beta,j} \xi_\alpha \left[ \sum_k \phi_\alpha(x_k) W(x_{i,k}) \phi_\beta(x_k) \right]^{-1}_{\alpha\beta} \phi_\beta(x_j) W(x_{i,j}) \left( u_j - u_i \right). \quad (5)$$
Contracting terms involving $\alpha$, $\beta$ and $k$, we may write
$\tau^h_{x_i}[u] = \sum_j c(\xi; \lambda)_{ij} (u_j - u_i)$. The collection of stencil
coefficients at $x_i \in X_h$ is $\{c(\xi; \lambda)_{ij}\}_j$. Therefore, one
application of GMLS is to build stencils similar to
convolutional networks. A major distinction is that GMLS can
handle scattered data sets, and a judicious selection of $\Lambda$, $P$
and $\omega$ can be used to inject prior information. Alternatively,
one may interpret the regression over $P$ as an encoding in a
low-dimensional space well-suited to characterize common
operators. For continuous functions, for example, an
operator's action on the space of polynomials is often sufficient
to obtain a good approximation. Unlike CNNs, there is no
need to handle boundary effects; GMLS-Nets instead learn
one-sided stencils.</p>
      </sec>
      <sec id="sec-1-2">
        <title>GMLS-Nets</title>
        <p>From an ML perspective, GMLS estimation consists of two
parts: (i) data is encoded via the coefficient vector $a(u)$,
providing a compression of the data in terms of $P$; (ii) the
operator is regressed over $P^*$, which is equivalent to finding a
function $q: a(u) \to \mathbb{R}$. We propose GMLS-Layers
encoding this process in Figure 1.</p>
        <p>This architecture accepts input channels indexed by $\alpha$,
which consist of components of the data vector-field $[u]_\alpha$
sampled over the scattered points $X_h$. We allow for different
sampling points for each channel, which may be helpful for
heterogeneous data. Each of these input channels is then used
to obtain an encoding of the input field as the vector $a(u)$
identifying the optimal representer in $P$.</p>
        <p>We next select our parameterization of the functional
via $q_\xi$, which may be any family of functions trainable by
back-propagation. We will consider two cases in this work,
appropriate for linear and non-linear operators. In the
linear case we consider $q_\xi(a) = \xi^\top a$, which is sufficient to
exactly reproduce differential operators. For the nonlinear
case we parameterize with a multi-layer perceptron (MLP),
$q_\xi(a) = \mathrm{MLP}_\xi(a)$. Note that in the case of a linear activation
function, the single-layer MLP model reduces to the linear
model.</p>
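        <p>As a concrete illustration of these two parameterizations, the following PyTorch sketch (our construction; the class GMLSLayer1D and its arguments are hypothetical and are not the API of the released gmls-nets packages) computes the encoding $a(u)$ at each target point by batched weighted least squares and applies either a linear $q_\xi$ or a small MLP:</p>
        <preformat>
import torch
import torch.nn as nn

class GMLSLayer1D(nn.Module):
    """One GMLS-Layer for 1D scattered data: encode samples as polynomial
    coefficients a(u) (Eqns. 1-2), then apply a trainable functional
    q_xi, either a linear map or an MLP (Eqn. 4)."""
    def __init__(self, order=2, eps=0.5, p=2, hidden=None):
        super().__init__()
        self.order, self.eps, self.p = order, eps, p
        dim = order + 1
        if hidden is None:   # linear case: q(a) = xi^T a
            self.q = nn.Linear(dim, 1, bias=False)
        else:                # nonlinear case: q(a) = MLP(a)
            self.q = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))

    def encode(self, xs, us, targets):
        d = xs[None, :] - targets[:, None]             # (T, N) offsets
        P = torch.stack([d ** k for k in range(self.order + 1)], dim=-1)
        w = torch.clamp(1 - d.abs() / self.eps, min=0) ** self.p
        M = torch.einsum('tn,tni,tnj->tij', w, P, P)   # normal equations
        r = torch.einsum('tn,tni,n->ti', w, P, us)
        return torch.linalg.solve(M, r)                # a(u) per target

    def forward(self, xs, us, targets):
        return self.q(self.encode(xs, us, targets)).squeeze(-1)

# Usage: estimate an operator value at every sample point.
layer = GMLSLayer1D(order=3, hidden=32)
xs = torch.linspace(0.0, 1.0, 50)
out = layer(xs, torch.sin(6.28 * xs), targets=xs)      # shape (50,)
</preformat>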
        <p>Nonlinearity may thus be handled within a single
nonlinear GMLS-Layer, or by stacking multiple linear GMLS-layers
with intermediate ReLUs, the latter mapping more
directly onto traditional CNN constructions.</p>
        <p>[Figure 1: GMLS-Layer: input channels of scattered data are encoded as GMLS coefficients $a(u)$, which a mapping (e.g. an MLP) takes to the output channels.]</p>
        <p>We next introduce pooling operators applicable to unstructured data,
whereby for each point $x_i$ in a given target point cloud $X_h^{\mathrm{target}}$,
$\ell(x_i) = F(\{x_j \mid j \in X_h,\ |x_j - x_i| &lt; \epsilon\})$. Here $F$ represents
the pooling operator (e.g. max, average, etc.), as sketched below. With this
collection of operators, one may construct architectures similar
to CNNs by stacking GMLS-Layers together with pooling
layers and other NN components. Strided GMLS-Layers
generalizing strided CNN stencils may be constructed by
choosing target sites on a second, smaller point cloud.</p>
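        <p>Such a pooling operator may be sketched as follows (a minimal illustration under the same assumptions as the layer sketch above; pool_scatter is a hypothetical helper, not part of the released packages):</p>
        <preformat>
import torch

def pool_scatter(feats, xs, targets, eps=0.25, mode='max'):
    """Unstructured pooling: for each target point, aggregate the features
    of all source points within distance eps (max or average)."""
    d = torch.cdist(targets[:, None], xs[:, None])   # (T, N) distances
    mask = d &lt; eps                                   # epsilon-neighborhoods
    out = []
    for i in range(mask.shape[0]):
        nbr = feats[mask[i]]          # features of points near target i
        out.append(nbr.max(0).values if mode == 'max' else nbr.mean(0))
    return torch.stack(out)

# Strided GMLS: pool from the full cloud onto a smaller random subset.
xs = torch.rand(200)
feats = torch.rand(200, 16)
targets = xs[torch.randperm(200)[:50]]
coarse = pool_scatter(feats, xs, targets)            # (50, 16)
</preformat>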
      </sec>
      <sec id="sec-1-3">
        <title>Relation to other work.</title>
        <p>
          Many recent works aim to generalize CNNs away from the
limitations of data on regular grids [
          <xref ref-type="bibr" rid="ref12 ref8">8, 12</xref>
          ]. This includes work
on handling inputs in the form of directed and undirected
graphs [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], processing graphical data sets in the form of
meshes and point-clouds [
          <xref ref-type="bibr" rid="ref14 ref17 ref5">14, 17</xref>
          ], and in handling scattered
sub-samplings of images [
          <xref ref-type="bibr" rid="ref19 ref8">8, 19</xref>
          ]. Broadly, these works: (i) use
the spectral theory of graphs and generalize convolution in the
frequency domain [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], (ii) develop localized notions similar to
convolution operations and kernels in the spatial domain [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ].
GMLS-Nets is most closely related to the second approach.
        </p>
        <p>
          The closest works include SplineCNNs [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], MoNet [
          <xref ref-type="bibr" rid="ref10 ref11">10,
11</xref>
          ], KP-Conv [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ], and SpiderCNN [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]. In each of these
methods a local spatial convolution kernel is approximated
by a parameterized family of functions: open/closed
B-splines [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], a Gaussian correlation kernel [
          <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
          ], or a
kernel function based on a learnable combination of
radial ReLu’s [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ]. The SpiderCNNs share many similarities
with GMLS-Nets, using a kernel that is based on a learnable
degree-three Taylor polynomial that is taken in product with
a learnable radial piecewise-constant weight function [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ].
A key distinction of GMLS-Nets is that operators are
regressed directly over the dual space V without constructing
shape/kernel functions. Both approaches provide ways to
approximate the action of a processing operator that aggregates
over scattered data.
        </p>
        <p>
          We also mention other meshfree learning frameworks:
PointNet [
          <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
          ] and Deep Sets [
          <xref ref-type="bibr" rid="ref17 ref5">17</xref>
          ], but these are aimed
primarily at set-based data and geometric processing tasks for
segmentation and classification. Additionally, Radial Basis
Function (RBF) networks are built upon similar
approximation theory [
          <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
          ].
        </p>
        <p>
          Related work on operator regression in a SciML context
include [
          <xref ref-type="bibr" rid="ref15 ref21 ref22 ref23 ref26 ref27 ref9">4, 9, 15, 21–23, 26, 27</xref>
          ]. In PINNs [
          <xref ref-type="bibr" rid="ref23 ref27">23, 27</xref>
          ], a versatile
framework based on DNNs is developed to regress both linear
and non-linear PDE models while exploiting physics
knowledge. In [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] and PDE-Nets [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], CNNs are used to learn
stencils to estimate operators. In [
          <xref ref-type="bibr" rid="ref15 ref9">9, 15</xref>
          ] dictionary learning
is used along with sparse optimization methods to identify
dynamical systems to infer physical laws associated with
time-series data. In [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ], regression is performed over a class
of nonlinear pseudodifferential operators, formed by
composing neural network parameterized Fourier multipliers and
pointwise functionals.
        </p>
        <p>
          GMLS-Nets can be used in conjunction with the above
methods. GMLS-Nets have the distinction of moving
beyond reliance on CNNs on regular grids; they do not
need moment conditions to impose accuracy and
interpretability of filters for estimating differential operators [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], and do
not require as strong assumptions about the particular form of
the PDE or a pre-defined dictionary as in [
          <xref ref-type="bibr" rid="ref15 ref27">15, 27</xref>
          ]. We expect
that prior knowledge exploited globally in PINNs methods
may be incorporated into the GMLS-Layers. In particular,
the ability to regress natively over solver degrees of freedom
will be particularly useful for SciML applications.
        </p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>Results</title>
      <sec id="sec-2-1">
        <title>Learning differential operators and identifying governing equations.</title>
        <p>Many data sets arising in the sciences are generated by
processes for which there are expected governing laws
expressible in terms of ordinary or partial differential equations.
GMLS-Nets provide natural features to regress such
operators from observed state trajectories or responses to
fluctuations. We consider the two settings
$$\frac{\partial u}{\partial t} = \mathcal{L}[u(t, x)] \quad \text{and} \quad \mathcal{L}[u(x)] = f(x).$$
The $\mathcal{L}[u]$ can be a linear or non-linear operator. When the
data are snapshots of the system state $u^n = u(t^n)$ at discrete
times $t^n = n\Delta t$, we use estimators based on
$$\frac{u^{n+1} - u^n}{\Delta t} = \mathcal{L}[\{u^k\}_{k \in K}; \xi]. \quad (6)$$</p>
        <p>In the case that $K = \{n+1\}$, this corresponds to using an
implicit Euler scheme to model the dynamics. Many other
choices are possible, and later we shall discuss estimators
with conservation properties. The learning capabilities of
GMLS-Nets to regress differential operators are shown in
Fig. 2. As we shall discuss in more detail, this can be used
to identify the underlying dynamics and obtain governing
equations.</p>
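        <p>Concretely, with the implicit Euler choice $K = \{n+1\}$, operator regression reduces to minimizing the residual of Eqn. 6 over snapshot pairs. A minimal PyTorch training-loop sketch (our illustration; model stands for any GMLS-layer module, e.g. the hypothetical GMLSLayer1D sketched earlier):</p>
        <preformat>
import torch

# model: any module estimating L[u] on the sample points; u_snaps: tensor
# (T, N) of snapshots u^n on points xs; dt: timestep between snapshots.
def train_operator(model, xs, u_snaps, dt, steps=1000, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        u_n, u_np1 = u_snaps[:-1], u_snaps[1:]
        # Implicit Euler residual of Eqn. (6) with K = {n+1}:
        # (u^{n+1} - u^n)/dt = L[u^{n+1}; xi]
        Lu = torch.stack([model(xs, u, xs) for u in u_np1])
        loss = torch.mean(((u_np1 - u_n) / dt - Lu) ** 2)
        loss.backward()
        opt.step()
    return model
</preformat>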
      </sec>
      <sec id="sec-2-2">
        <title>Long-time integrators: discretization-native data-driven modeling.</title>
        <p>
          The GMLS framework provides useful ways to target and
sample arbitrary functionals. In a data transfer context, this
has been leveraged to couple heterogeneous codes. For
example, one may sample the flux degrees of freedom of a
Raviart-Thomas finite element space and target cell integral
degrees of freedom of a finite volume code to perform native
data transfer. This avoids the need to perform intermediate
projections/interpolations [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. Motivated by this, we
demonstrate that GMLS may be used to learn discretization-native
data-driven models, whereby dynamics are learned in the
natural degrees of freedom for a given model. This provides
access to structure-preserving properties such as conservation,
e.g., conservation of mass in a physical system.
        </p>
        <p>We take as a source of training data the following analytic
solution to the 1D unsteady advection-diffusion equation with
advection and diffusion coefficients $a$ and $\nu$ on the interval
$\Omega = [0, 30]$:
$$u_{ex}(x, t) = \frac{1}{\sqrt{4 \pi \nu t}} \exp\left( -\frac{\left(x - (x_0 + a t)\right)^2}{4 \nu t} \right). \quad (8)$$
To construct a finite difference model (FDM), we assume
a node set $N = \{x_0 = 0, x_1, \dots, x_{N-1}, x_N = 30\}$. To
construct a finite volume model (FVM), we construct the set
of cells $C = \{[x_i, x_{i+1}] : x_i, x_{i+1} \in N,\ i \in \{0, \dots, N-1\}\}$,
with associated cell measure $\mu(c_i) = |x_{i+1} - x_i|$ and set of
oriented boundary faces $F_i = \partial c_i = \{x_{i+1}, x_i\}$. We then
assume for uniform timestep $\Delta t = t^{n+1} - t^n$ the implicit
Euler schemes
$$\frac{u_i^{n+1} - u_i^n}{\Delta t} = \mathcal{L}_{FDM}[u^{n+1}; \xi]_i, \quad (9)$$
$$\frac{u_i^{n+1} - u_i^n}{\Delta t} = \frac{1}{\mu(c_i)} \sum_{f \in F_i} \mathcal{L}_{FVM}[u^{n+1}; \xi]_f, \quad (10)$$
where the FVM operator regresses oriented fluxes on the faces $F_i$.
For the advection-diffusion equation in the limit $\Delta t \to 0$,
$\mathcal{L}_{FDM,ex} = -a \nabla u + \nu \nabla^2 u$ and $\mathcal{L}_{FVM,ex} = -a u + \nu \nabla u$. By
construction, for any choice of hyperparameters the FVM
will be locally conservative. In this sense, the physics of mass
conservation is enforced strongly via the discretization, and
we parameterize only an empirical closure for fluxes; GMLS
naturally enables such native flux regression.</p>
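        <p>The conservation property can be seen directly in code. The sketch below (our illustration; an explicit update is shown for brevity, whereas the experiments train the implicit Euler schemes of Eqns. 9-10, and flux is an arbitrary stand-in for the learned GMLS closure) shows that the total mass changes only through the boundary fluxes, whatever the closure:</p>
        <preformat>
import numpy as np

def fvm_step(u, dx, dt, flux):
    """One conservative finite-volume update: cell averages change only by
    differences of face fluxes, so total interior mass is conserved for
    ANY flux closure, learned or exact."""
    F = flux(u)                          # fluxes on the N+1 faces
    return u - dt / dx * (F[1:] - F[:-1])

def flux(u, a=1.0, nu=0.1, dx=0.1):
    # Stand-in closure (a learned GMLS flux would replace this):
    # upwind advective plus diffusive flux, F = a*u - nu*du/dx.
    F = np.zeros(len(u) + 1)
    F[1:-1] = a * u[:-1] - nu * (u[1:] - u[:-1]) / dx
    return F                             # zero flux through the boundaries

dx, dt = 0.1, 0.01
u = np.exp(-(np.linspace(0.0, 30.0, 300) - 5.0) ** 2)
mass0 = u.sum() * dx
u = fvm_step(u, dx, dt, flux)
print(abs(u.sum() * dx - mass0))         # ~1e-16: conserved by construction
</preformat>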
        <p>We use a single linear GMLS-Net layer to parameterize
both $\mathcal{L}_{FDM}$ and $\mathcal{L}_{FVM}$, and train over a single timestep by
using Eqn. 8 to evaluate the exact time increment in Eqns.
9-10. We perform gradient descent to minimize the RMS of the
residual with respect to $\xi$. For the FDM and FVM we use a
cubic and quartic polynomial space, respectively. Recall that
to resolve the diffusive and advective timescales one would
select a timestep of roughly $\Delta t_{CFL} = \min\left( \frac{\Delta x}{2 a}, \frac{\Delta x^2}{4 \nu} \right)$.</p>
        <p>After regressing the operator, we solve the extracted
scheme to advance from $\{u_i^0 = u(x_i, t_0)\}_i$ to $\{u_i^{t_{final}}\}_i$.
As implicit Euler is unconditionally stable, one may
select $\Delta t \gg \Delta t_{CFL}$ at the expense of introducing
numerical dissipation, "smearing" the solution. We consider
$\Delta t \in \{0.1\, \Delta t_{CFL},\ \Delta t_{CFL},\ 10\, \Delta t_{CFL}\}$ and compare both
the learned FDM/FVM dynamics to those obtained with
a standard discretization (i.e. letting $\mathcal{L}_{FDM} = \mathcal{L}_{FDM,ex}$).
From Fig. 3 we observe that for $\Delta t / \Delta t_{CFL} \le 1$ both the
regressed and reference models agree well with the analytic
solution. However, for $\Delta t = 10\, \Delta t_{CFL}$, we see that while the
reference models are overly dissipative, the regressed models
match the analytic solution. Inspection of the $\ell_2$ norm of the
solutions at $t_{final}$ in Table 1 indicates that, as expected, the
classical solutions corresponding to $\mathcal{L}_{FDM,ex}$ and $\mathcal{L}_{FVM,ex}$
converge as $O(\Delta t)$. The regressed FDM is consistently more
accurate than the exact operator. Most interesting, the error of the
regressed FVM is roughly independent of $\Delta t$, providing a 20×
improvement in accuracy over the classical model. This
preliminary result suggests that GMLS-Nets offer promise as a
tool to develop non-dissipative implicit data-driven models.
We suggest that this is due to the ability of GMLS-Nets to
regress higher-order differential operator corrections to the
discrete time dynamics, similar to, e.g.,
Lax-Friedrichs/Lax-Wendroff schemes.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Data-driven modeling from molecular dynamics.</title>
        <p>In science and engineering applications, there are often
high-fidelity descriptions of the physics based on molecular
dynamics. One would like to extract continuum descriptions
to allow for predictions over longer time/length-scales or to
reduce computational costs. Coarse-grained modeling efforts
also have similar aims while retaining molecular degrees
of freedom. Each seeks lower-fidelity models that are able
to accurately predict important statistical moments of the
high-fidelity model over longer timescales. As an example,
consider a mean-field continuum model derived by
coarse-graining a molecular dynamics simulation. Classically, one
may pursue homogenization analysis to carefully derive such
a continuum model, but such techniques are typically
problem-specific and can become technical. We illustrate here how
GMLS-Nets can be used to extract a conservative continuum
PDE model from particle-level simulation data.</p>
        <p>Brownian motion has as its infinitesimal generator the
unsteady diffusion equation [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. As a basic example, we
will extract a 1D diffusion equation to predict the
long-term density of a cloud of particles undergoing
pseudo-1D Brownian motion. We consider the periodic domain
$\Omega = [0, 1] \times [0, 0.1]$, and generate a collection of $N_p$
particles with initial position $x_p(t=0)$ drawn from the uniform
distribution $U[0, 0.5] \times U[0, 0.1]$.</p>
        <p>Due to this initialization and domain geometry, the particle
density is statistically one-dimensional. We estimate the
density field $\rho(x, t)$ along the first dimension by constructing a
collection $C$ of $N$ uniform-width cells $c_i$ and building a histogram,
$$\rho(x, t) = \sum_{p=1}^{N_p} \frac{1}{\mu(c_i)} \mathbf{1}_{x_p(t) \in c_i}, \qquad x \in c_i.$$
The $\mathbf{1}_{x \in A}$ is the indicator function taking unit value for $x \in A$
and zero otherwise.</p>
        <p>We evolve the particle positions $x_p(t)$ under 2D Brownian
motion (the density will remain statistically 1D as the
particles evolve). In the limit $N_p/N \to \infty$, the particle density
satisfies a diffusion equation, and we can scale the Brownian
motion increments to obtain a unit diffusion coefficient in
this limit.</p>
        <p>As the ratio $N_p/N$ is finite, there is substantial noise in the
extracted density field. We obtain a low-pass filtered density,
$\tilde{\rho}(x, t)$, by convolving $\rho(x, t)$ with a Gaussian kernel of width
twice the histogram bin width.</p>
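        <p>The data-generation pipeline may be sketched as follows (a minimal illustration; the parameter values here are ours, not the paper's exact settings):</p>
        <preformat>
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
Np, N, dt = 100_000, 50, 1.0          # particles, cells, timestep
x = rng.uniform(0.0, 0.5, Np)         # initial positions ~ U[0, 0.5]

def density(x, N):
    """Histogram density on N uniform cells of [0, 1]; integrates to Np."""
    counts, edges = np.histogram(x, bins=N, range=(0.0, 1.0))
    return counts / (edges[1] - edges[0])

def filtered_density(x, N):
    # Gaussian kernel of width twice the bin width (sigma = 2 bins).
    return gaussian_filter1d(density(x, N), sigma=2.0, mode='wrap')

for _ in range(10):                   # evolve under Brownian motion
    x += np.sqrt(2.0 * dt) * rng.standard_normal(Np)   # unit diffusion
    x %= 1.0                          # periodic domain
rho = filtered_density(x, N)          # smoothed density snapshot
</preformat>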
        <p>We use the FVM scheme in the same manner as in the
previous section. In particular, we regress a flux that matches
the increment between $\tilde{\rho}(x, t{=}10)$ and $\tilde{\rho}(x, t{=}12)$ over $2\Delta t$. This
window was selected since the regression at $t = 0$ is ineffective,
as the density approximates a Heaviside function. Such near-discontinuities
are poorly represented with polynomials and
consequently not expected to train well. Additionally, we
train over a time interval of $2\Delta t$; in general, $k\Delta t$ steps
can be used to help mollify high-frequency temporal noise.</p>
        <p>To show how the GMLS-Net's inferred operator can be
used to make predictions, we evolve the regressed FVM
for one hundred timesteps and compare to the density field
obtained from the particle solver. We apply Dirichlet
boundary conditions $\rho(0, t) = \rho(1, t) = 1$ and initial conditions
matching the histogram $\rho(x, t=0)$. Again, the FVM by
construction is conservative: it is easily shown that for all
$t$, $\int_\Omega \rho \, dx = N_p$. A time series summarizing the
evolution of density in both the particle solver and the regressed
continuum model is provided in Fig. 4. While this is a
basic example, it illustrates the potential of GMLS-Nets in
constructing continuum-level models from molecular data.
These techniques could also have an impact on data-driven
approaches for numerical methods, such as projective
integration schemes.</p>
      </sec>
      <sec id="sec-2-4">
        <title>Image processing: MNIST benchmark.</title>
        <p>While image processing is not the primary application area
we intend, GMLS-Nets can be used for tasks such as
classification. For the common MNIST benchmark task, we compare
the use of GMLS-Nets with CNNs in Figure 5. The CNNs use kernel
size 5, zero-padding, max-pool reduction 2, channel sizes
16, 32, and an FC layer as a linear map to soft-max prediction of the
categories. The GMLS-Nets use the same architecture with a
GMLS layer using a polynomial basis of monomials in $x, y$ up to
degree $p_{order} = 4$.</p>
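        <p>For reference, the stated CNN baseline corresponds to the following PyTorch sketch (our reading of the architecture described above):</p>
        <preformat>
import torch.nn as nn

cnn_baseline = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(2),                   # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(2),                   # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),         # FC linear map to class logits
)
# The GMLS-Net variant replaces each Conv2d with a GMLS-Layer whose basis
# is monomials in x, y up to degree 4, pooling onto coarser point sets.
</preformat>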
        <p>
          We find that despite the features extracted by GMLS-Nets
being more restricted than those of a general CNN, there is only a
modest decrease in accuracy on the basic MNIST task.
We do expect larger differences on more sophisticated image
tasks. This basic test illustrates how GMLS-Nets with a
polynomial basis extract features closely associated with taking
derivatives of the data field. We emphasize that for other choices
of basis $P$ and sampling functionals $\lambda_j$, other features
may be extracted. For polynomials with terms in dictionary
order, coefficients are shown in Fig. 5. Notice the clear trends
and directional dependence on increases and decreases in the
image intensity, indicating $c[1] \sim \partial_x$ and $c[2] \sim \partial_y$. Given
the history of PDE modeling, for many classification and
regression tasks arising in the sciences and engineering, we
expect such derivative-based features extracted by
GMLS-Nets to be useful.
        </p>
      </sec>
      <sec id="sec-2-5">
        <title>GMLS-Net on unstructured fluid simulation data.</title>
        <p>We consider the application of GMLS-Nets to unstructured
data sets representative of scientific machine learning
applications. Many hydrodynamic flows can be experimentally
characterized using velocimetry measurements. While
velocity fields can be estimated even for complex geometries, in
such measurements one often does not have direct access
to other fields, such as the pressure. However, integrated
quantities of interest, such as drag, are fundamental for performing
engineering analysis, and yet depend upon both the velocity
and pressure. This limits the level of characterization that
can be accomplished when using velocimetry data alone. We
construct GMLS-Net architectures that allow for prediction
of the drag directly from unstructured fluid velocity data,
without any direct measurement of the pressure.</p>
        <p>We illustrate the ideas using flow past a cylinder of radius
$L$. This provides a well-studied canonical problem whose
drag is fully characterized experimentally in terms of the
Reynolds number, $Re = U L / \nu$. For incompressible flow
past a cylinder, one may apply dimensional analysis to relate
drag $F_d$ to the Reynolds number via the drag coefficient $C_d$:
$$F_d = \tfrac{1}{2} \rho U_\infty^2 A \, C_d(Re).$$
The $U_\infty$ is the free-stream velocity, $A$ is the frontal area of the
cylinder, and $C_d: \mathbb{R} \to \mathbb{R}$. Such analysis requires in practice
engineering judgement to identify relevant dimensionless
groups. After such considerations, this allows one to collapse the
relevant experimental parameters $(\rho, U_\infty, A, L, \nu)$ onto a
single curve.</p>
        <p>For the purposes of training a GMLS-Net, we construct a
synthetic data set by solving the Reynolds-averaged
Navier-Stokes (RANS) equations with a steady-state finite volume
code. Let $L = \rho = 1$ and consider $U \in [0.1, 20]$ and
$\nu \in [10^{-2}, 10^{8}]$. We consider a $k$-$\epsilon$ turbulence model
with inlet conditions consistent with a 10% turbulence
intensity and a mixing length corresponding to the inlet size. From
the solution, we extract the velocity field $u$ at cell centers
to obtain an unstructured point cloud $X_h$. We compute $C_d$
directly from the simulations. We then obtain an
unstructured data set of 400 flow samples $\Lambda(u)_i$ over $X_h$, with
associated labels $C_d$. We emphasize that although $U_\infty$ and $\nu$ are used to
generate the data, they are not included as features, and the
Reynolds number is therefore hidden.</p>
        <p>We remark that the $k$-$\epsilon$ model is well known to perform
poorly for flows with strong curvature, such as recirculation
zones. Here, in our proof-of-concept demonstration, we treat
the RANS $k$-$\epsilon$ solution as ground truth for simplicity,
despite its shortcomings, and acknowledge that a more physical
study would consider ensemble averages of LES/DNS data
in 3D. We aim here only to illustrate the potential utility of
GMLS-Nets in a scientific setting for processing such
unstructured data sets.</p>
        <p>As an architecture, we provide two input channels for the
two velocity components to three stacked GMLS layers. The
first layer acts on the cell centers, and intermediate pooling
layers down-sample to random subsets of $X_h$. We conclude
with a linear activation layer to extract the drag coefficient
as a single scalar output. We randomly select 80% of the
samples for training, and use the remainder as a test set. We
quantify accuracy using the root-mean-square error, which we
find to be below 1.5%.</p>
        <p>The excellent predictive capability demonstrated in Fig. 6
highlights GMLS-Nets' ability to provide an effective means
of regressing engineering quantities of interest directly from
velocity flow data; the GMLS-Net architecture is able to
identify a latent low-dimensional parameter space which is
typically found by hand using dimensional analysis. This
similarity relationship across Reynolds numbers is
identified despite the fact that the network does not have direct access to the
viscosity parameter. These initial results indicate some of the
potential of GMLS-Nets in processing unstructured data sets
for scientific machine learning applications.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Conclusions</title>
      <p>
        We have introduced GMLS-Nets for processing scattered
data sets leveraging the framework of GMLS. GMLS-Nets
allow for generalizing convolutional networks to scattered
data, while still benefiting from underlying translational
invariances and weight sharing. The GMLS-layers provide
feature extractors that are particularly natural for regressing
differential operators, developing dynamical models, and
predicting quantities of interest associated with physical systems.
GMLS-Nets were demonstrated to be capable of obtaining
dynamical models for long-time integration beyond the
limits of traditional CFL conditions, for making predictions of
density evolution of molecular systems, and for predicting
directly from flow data quantities of interest in fluid
mechanics. These initial results indicate some promising capabilities
of GMLS-Nets for use in data-driven modeling in scientific
machine learning applications.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Appendix: Derivation of Gradients of the Operator $\tau_{x_i}[u]$</title>
      <p>We give here some details on the derivation of the gradients for
the learnable GMLS operator $\tau[u]$ and intermediate steps. This
can be used in implementations for back-propagation and other
applications.</p>
      <p>GMLS works by mapping data to a local polynomial fit in a region
$\Omega_i$ around $x_i$ with $p^*(x) \approx u(x)$ for $x \in \Omega_i$. To find the optimal
fitting polynomial $p^*(x) \in \mathbb{V}$ to the function $u(x)$, we can consider
the case with $\lambda_j(x) = \delta(x - x_j)$ and weight function $w_{ij} =
w(x_i - x_j)$. In a region around a reference point $x$ the optimization
problem can be expressed parametrically in terms of coefficients $a$
as
$$a^*(x_i) = \underset{a \in \mathbb{R}^m}{\operatorname{argmin}} \sum_j \left( u_j - p(x_j)^\top a \right)^2 w_{ij}.$$
We write for short $p(x_j) = p(x_j; x_i)$, where the basis elements
in fact do depend on $x_i$. Typically, for polynomials we just use
$p(x_j; x_i) = p(x_j - x_i)$. This is important in the case we want to
take derivatives in the input values $x_i$ of the expressions.</p>
      <p>We can compute the derivative in $a_\ell$ and set it to zero to obtain
the normal equations $M(x_i)\, a^*(x_i) = r(x_i)$, where
$$M(x_i) = \sum_j p(x_j) w_{ij}\, p(x_j)^\top, \qquad r(x_i) = \sum_j w_{ij}\, p(x_j)\, u_j.$$
Typically, the weights of the target functional will not be spatially
dependent, $q(x_i) = q_0$.
Throughout, we shall denote this simply as $q$ and assume there is
no spatial dependence, unless otherwise indicated.</p>
      <sec id="sec-3-1">
        <title>Derivatives of ~ in xi, a(xi), and q.</title>
        <p>The derivative of $M$ in $x_i$ is given by
$$\frac{\partial M}{\partial x_i} = \sum_j \left[ \frac{\partial p(x_j; x_i)}{\partial x_i} p(x_j; x_i)^\top w_{ij} + p(x_j; x_i) \frac{\partial p(x_j; x_i)^\top}{\partial x_i} w_{ij} + p(x_j; x_i) p(x_j; x_i)^\top \frac{\partial w_{ij}}{\partial x_i} \right].$$
The derivatives in $r$ are given by
$$\frac{\partial r}{\partial x_i} = \sum_j \left[ \frac{\partial p(x_j; x_i)}{\partial x_i} u_j w_{ij} + p(x_j; x_i) u_j \frac{\partial w_{ij}}{\partial x_i} \right].$$
Since $M a^* = r$, differentiating the normal equations gives
$$\frac{\partial a^*}{\partial x_i} = M^{-1} \left( \frac{\partial r}{\partial x_i} - \frac{\partial M}{\partial x_i} a^* \right).$$
The full derivative of the linear operator $\tilde{\tau} = q(x_i)^\top a^*(x_i)$ can be expressed as
$$\frac{\partial \tilde{\tau}}{\partial x_i} = \frac{\partial q}{\partial x_i}^\top a^*(x_i) + q(x_i)^\top \frac{\partial a^*}{\partial x_i}.$$
In the constant case $q(x_i) = q_0$, the derivative of $\tilde{\tau}$ simplifies to
$$\frac{\partial \tilde{\tau}}{\partial x_i} = q_0^\top M^{-1} \left( \frac{\partial r}{\partial x_i} - \frac{\partial M}{\partial x_i} a^* \right).$$
The derivatives of the other terms follow more readily. For the
derivative of the linear operator $\tilde{\tau}(x_i) = q_0^\top a^*(x_i)$ in the
coefficients $a(x_i)$, we have
$$\frac{\partial \tilde{\tau}}{\partial a^*} = q_0.$$</p>
        <p>In the case of nonlinear operators $\tilde{\tau} = q(a(x_i))$ there are further
dependencies beyond just $x_i$ and $a(x_i)$, and less explicit
expressions. For example, when using MLPs there may be a hierarchy of
trainable weights $w$. The derivatives of the non-linear operator can
be expressed through the chain rule,
$$\frac{\partial \tilde{\tau}}{\partial x_i} = \nabla_a q\big(a^*(x_i)\big)^\top \frac{\partial a^*}{\partial x_i}, \qquad \frac{\partial \tilde{\tau}}{\partial w} = \frac{\partial q}{\partial w}\big(a^*(x_i)\big),$$
which in practice are computed by automatic differentiation.</p>
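        <p>In practice these gradients need not be implemented by hand: since $a^*(x_i)$ is the solution of a small linear system, automatic differentiation propagates through the solve and reproduces the expressions above. A minimal PyTorch sketch (our illustration):</p>
        <preformat>
import torch

def a_star(x_i, xs, us, order=2, eps=1.0, p=2):
    """Optimal coefficients a*(x_i); differentiable in x_i via autograd."""
    d = xs - x_i
    P = torch.stack([d ** k for k in range(order + 1)], dim=1)
    w = torch.clamp(1 - d.abs() / eps, min=0) ** p
    M = P.T @ (w[:, None] * P)       # M(x_i) = sum_j w_ij p p^T
    r = P.T @ (w * us)               # r(x_i) = sum_j w_ij p u_j
    return torch.linalg.solve(M, r)

x_i = torch.tensor(0.3, requires_grad=True)
xs = torch.linspace(0.0, 1.0, 30)
us = torch.sin(xs)
q0 = torch.randn(3)
tau = q0 @ a_star(x_i, xs, us)       # linear operator: tau = q0^T a*(x_i)
tau.backward()                        # applies the chain rule derived above
print(x_i.grad)                       # d tau / d x_i
</preformat>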
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>D.S.</given-names>
            <surname>Broomhead</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Lowe</surname>
          </string-name>
          . “
          <article-title>Multivariable Functional Interpolation and Adaptive Networks”</article-title>
          .
          <source>In: Complex Systems 2.1</source>
          (
          <issue>1988</issue>
          ), pp.
          <fpage>321</fpage>
          -
          <lpage>355</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>T.</given-names>
            <surname>Poggio</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Girosi</surname>
          </string-name>
          . “
          <article-title>Networks for approximation and learning”</article-title>
          .
          <source>In: Proceedings of the IEEE 78.9</source>
          (
          <issue>1990</issue>
          ), pp.
          <fpage>1481</fpage>
          -
          <lpage>1497</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>Ioannis</given-names>
            <surname>Karatzas</surname>
          </string-name>
          and
          <string-name>
            <given-names>Steven E.</given-names>
            <surname>Shreve</surname>
          </string-name>
          .
          <source>Brownian Motion and Stochastic Calculus</source>
          . Springer,
          <year>1998</year>
          , pp.
          <fpage>47</fpage>
          -
          <lpage>127</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Lagaris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Likas</surname>
          </string-name>
          , and
          <string-name>
            <surname>D. I. Fotiadis.</surname>
          </string-name>
          “
          <article-title>Artificial neural networks for solving ordinary and partial differential equations”</article-title>
          .
          <source>In: IEEE Transactions on Neural Networks 9.5</source>
          (
          <issue>1998</issue>
          ), pp.
          <fpage>987</fpage>
          -
          <lpage>1000</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <given-names>Holger</given-names>
            <surname>Wendland</surname>
          </string-name>
          .
          <source>Scattered Data Approximation</source>
          . Vol.
          <volume>17</volume>
          . Cambridge University Press,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Susanne</given-names>
            <surname>Brenner</surname>
          </string-name>
          and
          <string-name>
            <given-names>Ridgway</given-names>
            <surname>Scott</surname>
          </string-name>
          .
          <source>The Mathematical Theory of Finite Element Methods</source>
          . Springer,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Franco</given-names>
            <surname>Scarselli</surname>
          </string-name>
          et al. “
          <article-title>The Graph Neural Network Model”</article-title>
          .
          <source>In: Trans. Neur. Netw</source>
          .
          <volume>20</volume>
          .1 (
          <issue>Jan</issue>
          .
          <year>2009</year>
          ), pp.
          <fpage>61</fpage>
          -
          <lpage>80</lpage>
          . ISSN:
          <fpage>1045</fpage>
          -
          <lpage>9227</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Joan</given-names>
            <surname>Bruna</surname>
          </string-name>
          et al. “
          <article-title>Spectral networks and locally connected networks on graphs”. English (US)</article-title>
          .
          <source>In: International Conference on Learning Representations (ICLR2014)</source>
          , CBLS,
          <year>April 2014</year>
          .
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Steven L.</given-names>
            <surname>Brunton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Joshua L.</given-names>
            <surname>Proctor</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. Nathan</given-names>
            <surname>Kutz</surname>
          </string-name>
          . “
          <article-title>Discovering governing equations from data by sparse identification of nonlinear dynamical systems”</article-title>
          .
          <source>In: Proceedings of the National Academy of Sciences 113.15</source>
          (
          <year>2016</year>
          ), pp.
          <fpage>3932</fpage>
          -
          <lpage>3937</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Thomas</surname>
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Kipf</surname>
            and
            <given-names>Max</given-names>
          </string-name>
          <string-name>
            <surname>Welling</surname>
          </string-name>
          . “
          <article-title>Semi-Supervised Classification with Graph Convolutional Networks”</article-title>
          .
          <source>In: ArXiv abs/1609.02907</source>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Federico</given-names>
            <surname>Monti</surname>
          </string-name>
          et al. “
          <article-title>Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs”</article-title>
          .
          <source>In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          (
          <year>2016</year>
          ), pp.
          <fpage>5425</fpage>
          -
          <lpage>5434</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>M. M. Bronstein</surname>
          </string-name>
          et al. “
          <article-title>Geometric Deep Learning: Going beyond Euclidean data”</article-title>
          .
          <source>In: IEEE Signal Processing Magazine 34.4</source>
          (
          <issue>2017</issue>
          ), pp.
          <fpage>18</fpage>
          -
          <lpage>42</lpage>
          . ISSN:
          <fpage>1053</fpage>
          -
          <lpage>5888</lpage>
          . DOI: 10.1109/MSP.2017.2693418.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Charles R.</given-names>
            <surname>Qi</surname>
          </string-name>
          et al. “
          <article-title>PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation”</article-title>
          .
          <source>In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          .
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Charles Ruizhongtai</given-names>
            <surname>Qi</surname>
          </string-name>
          et al. “
          <article-title>PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space”</article-title>
          .
          <source>In: Advances in Neural Information Processing Systems</source>
          <volume>30</volume>
          . Ed. by I. Guyon et al. Curran Associates, Inc.,
          <year>2017</year>
          , pp.
          <fpage>5099</fpage>
          -
          <lpage>5108</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Samuel H.</given-names>
            <surname>Rudy</surname>
          </string-name>
          et al. “
          <article-title>Data-driven discovery of partial differential equations”</article-title>
          .
          <source>In: Science Advances 3.4</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Nathaniel</given-names>
            <surname>Trask</surname>
          </string-name>
          , Mauro Perego, and Pavel Bochev.
          <article-title>“A high-order staggered meshless method for elliptic problems”</article-title>
          .
          <source>In: SIAM Journal on Scientific Computing 39.2</source>
          (
          <issue>2017</issue>
          ),
          <fpage>A479</fpage>
          -
          <lpage>A502</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Manzil</given-names>
            <surname>Zaheer</surname>
          </string-name>
          et al. “
          <article-title>Deep Sets”</article-title>
          .
          <source>In: Advances in Neural Information Processing Systems</source>
          <volume>30</volume>
          . Ed. by I. Guyon et al. Curran Associates, Inc.,
          <year>2017</year>
          , pp.
          <fpage>3391</fpage>
          -
          <lpage>3401</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Atzberger</surname>
          </string-name>
          . “
          <article-title>Importance of the Mathematical Foundations of Machine Learning Methods for Scientific and Engineering Applications”</article-title>
          .
          <source>In: SciML2018 Workshop</source>
          , position paper, https://arxiv.org/abs/1808.02213 (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fey</surname>
          </string-name>
          et al. “
          <article-title>SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels”</article-title>
          .
          <source>In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition</source>
          .
          <year>2018</year>
          , pp.
          <fpage>869</fpage>
          -
          <lpage>877</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Paul Allen</given-names>
            <surname>Kuberry</surname>
          </string-name>
          , Pavel B. Bochev, and Kara J. Peterson
          .
          <article-title>A virtual control meshfree coupling method for non-coincident interfaces</article-title>
          .
          <source>Tech. rep. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Zichao</given-names>
            <surname>Long</surname>
          </string-name>
          et al. “
          <article-title>PDE-Net: Learning PDEs from Data”</article-title>
          .
          <source>In: Proceedings of the 35th International Conference on Machine Learning. Ed. by Jennifer Dy and Andreas Krause</source>
          . Vol.
          <volume>80</volume>
          .
          <source>Proceedings of Machine Learning Research. Stockholmsmssan</source>
          , Stockholm Sweden: PMLR,
          <year>2018</year>
          , pp.
          <fpage>3208</fpage>
          -
          <lpage>3216</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Ravi</surname>
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>Patel</surname>
            and
            <given-names>Olivier</given-names>
          </string-name>
          <string-name>
            <surname>Desjardins</surname>
          </string-name>
          . “
          <article-title>Nonlinear integro-differential operator regression with neural networks”</article-title>
          .
          <source>In: ArXiv abs/1810.08552</source>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Maziar</given-names>
            <surname>Raissi</surname>
          </string-name>
          and George Em Karniadakis. “
          <article-title>Hidden physics models: Machine learning of nonlinear partial differential equations”</article-title>
          .
          <source>In: Journal of Computational Physics</source>
          <volume>357</volume>
          (
          <year>2018</year>
          ), pp.
          <fpage>125</fpage>
          -
          <lpage>141</lpage>
          . ISSN:
          <fpage>0021</fpage>
          -
          <lpage>9991</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>Yifan</given-names>
            <surname>Xu</surname>
          </string-name>
          et al. “
          <article-title>SpiderCNN: Deep Learning on Point Sets with Parameterized Convolutional Filters”</article-title>
          . In: Computer Vision - ECCV 2018. Ed. by Vittorio Ferrari et al. Cham: Springer International Publishing,
          <year>2018</year>
          , pp.
          <fpage>90</fpage>
          -
          <lpage>105</lpage>
          . ISBN: 978-3-030-01237-3.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>Nathan</given-names>
            <surname>Baker</surname>
          </string-name>
          et al.
          <source>Workshop report on basic research needs for scientific machine learning: Core technologies for artificial intelligence. Tech. rep. USDOE Office of Science (SC)</source>
          , Washington, DC (United States),
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>Yohai</given-names>
            <surname>Bar-Sinai</surname>
          </string-name>
          et al. “
          <article-title>Learning data-driven discretizations for partial differential equations”</article-title>
          .
          <source>In: Proceedings of the National Academy of Sciences 116.31</source>
          (
          <year>2019</year>
          ), pp.
          <fpage>15344</fpage>
          -
          <lpage>15349</lpage>
          . ISSN:
          <fpage>0027</fpage>
          -
          <lpage>8424</lpage>
          . DOI: 10.1073/pnas.1814058116.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>M.</given-names>
            <surname>Raissi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Perdikaris</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.E.</given-names>
            <surname>Karniadakis</surname>
          </string-name>
          . “
          <article-title>Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations”</article-title>
          .
          <source>In: Journal of Computational Physics</source>
          <volume>378</volume>
          (
          <year>2019</year>
          ), pp.
          <fpage>686</fpage>
          -
          <lpage>707</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>Hugues</given-names>
            <surname>Thomas</surname>
          </string-name>
          et al. “
          <article-title>KPConv: Flexible and deformable convolution for point clouds”</article-title>
          .
          <source>In: Proceedings of the IEEE International Conference on Computer Vision</source>
          .
          <year>2019</year>
          , pp.
          <fpage>6411</fpage>
          -
          <lpage>6420</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>BJ</given-names>
            <surname>Gross</surname>
          </string-name>
          et al. “
          <article-title>Meshfree methods on manifolds for hydrodynamic flows on curved surfaces: a generalized moving least-squares (GMLS) approach”</article-title>
          .
          <source>In: Journal of Computational Physics</source>
          (
          <year>2020</year>
          ), p.
          <fpage>109340</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>Nathaniel</given-names>
            <surname>Trask</surname>
          </string-name>
          , Pavel Bochev, and Mauro Perego.
          <article-title>“A conservative, consistent, and scalable meshfree mimetic method”</article-title>
          .
          <source>In: Journal of Computational Physics</source>
          <volume>409</volume>
          (
          <year>2020</year>
          ), p.
          <fpage>109187</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>1r:</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>