<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Neural Ordinary Differential Equations for Data-Driven Reduced Order Modeling of Environmental Hydrodynamics</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sourav Dutta</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Peter Rivera-Casillas</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Matthew W. Farthing</string-name>
          <email>matthew.w.farthing@erdc.dren.mil</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>U.S. Army Engineer Research and Development Center, Coastal and Hydraulics Laboratory</institution>
          ,
          <addr-line>Vicksburg, MS</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Model reduction for fluid flow simulation continues to be of great interest across a number of scientific and engineering fields. Here, we explore the use of Neural Ordinary Differential Equations, a recently introduced family of continuous-depth, differentiable networks (Chen et al. 2018), as a way to propagate latent-space dynamics in reduced order models. We compare their behavior with two classical non-intrusive methods based on proper orthogonal decomposition and radial basis function interpolation, as well as dynamic mode decomposition. The test problems we consider include incompressible flow around a cylinder as well as real-world applications of shallow water hydrodynamics in riverine and estuarine systems. Our findings indicate that Neural ODEs provide an elegant framework for stable and accurate evolution of latent-space dynamics, with promising potential for extrapolatory predictions. However, in order to facilitate their widespread adoption for large-scale systems, significant effort needs to be directed at accelerating their training times. This will enable a more comprehensive exploration of the hyperparameter space for building generalizable Neural ODE approximations over a wide range of system dynamics.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Despite the trend of hardware improvements and significant
gains in the algorithmic efficiency of standard
discretization procedures, high-fidelity numerical simulation of
engineering systems governed by nonlinear partial differential
equations still poses a prohibitive computational challenge
        <xref ref-type="bibr" rid="ref70">(Quarteroni, Manzoni, and Negri 2016)</xref>
        for several
decision-making applications involving control
        <xref ref-type="bibr" rid="ref17 ref55 ref69">(Proctor, Brunton,
and Kutz 2016)</xref>
        , optimal design and multi-fidelity
optimization
        <xref ref-type="bibr" rid="ref67">(Peherstorfer, Willcox, and Gunzburger 2016)</xref>
        , and/or
uncertainty quantification
        <xref ref-type="bibr" rid="ref47 ref79">(Sapsis and Majda 2013)</xref>
        .
Reduced order models (ROMs) offer a valuable alternative way
to simulate such dynamical systems with considerably
reduced computational cost
        <xref ref-type="bibr" rid="ref12 ref15 ref78">(Benner, Gugercin, and Willcox
2015)</xref>
        .
      </p>
      <p>
        Reduced basis (RB) methods
        <xref ref-type="bibr" rid="ref70">(Quarteroni, Manzoni, and
Negri 2016)</xref>
        constitute a family of widely popular ROM
techniques that are usually implemented with an
offline-online decomposition paradigm. The offline stage involves
the construction of a solution-dependent, linear basis space
spanned by a set of RB “modes”, which are extracted from
a collection of high-fidelity solutions, also called snapshots.
The RB “modes” can be thought of as a set of global
basis functions spanning a linear subspace that can be used to
approximate the dynamics of the high-fidelity model. The
most well known method to extract the reduced basis is
called proper orthogonal decomposition (POD)
        <xref ref-type="bibr" rid="ref13 ref83">(Sirovich
1987; Berkooz, Holmes, and Lumley 1993)</xref>
        , which is
particularly effective when the coherent structures of the flow can
be hierarchically ranked in terms of their energy content.
      </p>
      <p>
        In the online stage of traditional RB methods, a linear
combination of the reduced order RB modes is used to
approximate the high-fidelity solution for a new
configuration of flow parameters. The procedure adopted to
compute the expansion coefficients leads to the classification
of these methods into two broad categories: intrusive and
non-intrusive. In an intrusive RB method, the expansion
coefficients are determined by the solution of a reduced
order system of equations, which is typically obtained via a
Galerkin or Petrov-Galerkin projection of the high-fidelity
(full-order) system onto the RB space
        <xref ref-type="bibr" rid="ref16 ref3 ref56 ref61">(Lozovskiy, Farthing,
and Kees 2017)</xref>
        . Typically, this projection and solution
involve modification of the high-fidelity simulator, hence the
label intrusive. For linear systems, Galerkin projection is the
most popular choice. However, in the presence of
nonlinearities, an affine expansion of the nonlinear (or non-affine)
differential operator must be recovered in order to make the
evaluation of the projection-based reduced model
independent of the number of DOFs of the high-fidelity solution.
      </p>
      <p>
        Several different techniques, collectively referred to as
hyper-reduction methods
        <xref ref-type="bibr" rid="ref7">(Amsallem et al. 2015)</xref>
        , have been
proposed to address this problem. These include the
empirical interpolation method (EIM), its discrete counterpart
DEIM
        <xref ref-type="bibr" rid="ref22">(Chaturantabut and Sorensen 2010)</xref>
        , “gappy POD”
        <xref ref-type="bibr" rid="ref92">(Willcox 2006)</xref>
        , as well as the residual DEIM method
        <xref ref-type="bibr" rid="ref95">(Xiao
et al. 2014)</xref>
        . Beyond the need for hyper-reduction to
recover efficiency, in complex nonlinear problems it is also
common that some of the intrinsic structures present in the
high-fidelity model may be lost during order reduction
using Galerkin projection-based approaches. This is because
the Galerkin projection approach inherently assumes that
the residual generated by the truncated representation of the
high-fidelity model is orthogonal to the reduced basis space,
which leads to the loss of higher-order nonlinear
interaction terms in the reduced representation. This can result in
qualitatively wrong solutions or instability issues
        <xref ref-type="bibr" rid="ref6">(Amsallem
and Farhat 2012)</xref>
        . As a remedy, Petrov-Galerkin
projection based approaches have been proposed
        <xref ref-type="bibr" rid="ref19 ref20">(Carlberg,
Bou-Mosleh, and Farhat 2011; Carlberg et al. 2013; Fang et al.
2013)</xref>
        .
      </p>
      <p>
        An alternative family of methods to address the issues of
instability and loss of efficiency in the intrusive ROM
frameworks is represented by non-intrusive reduced order models
(NIROMs), and forms the subject of this study. The primary
advantage of this class of methods is that complex
modifications to the source code describing the physical model can
be avoided, thus making it easier to develop reduced models
when the legacy or proprietary source codes are not
available. In these methods, instead of a Galerkin-type
projection, the expansion coefficients for the reduced solution are
obtained via interpolation on the space of a reduced basis
extracted from snapshot data. However, since the reduced
dynamics generally belong to nonlinear, matrix manifolds,
a variety of interpolation techniques have been proposed
that are capable of enforcing the constraints characterizing
those manifolds. Regression-based non-intrusive methods
have been proposed that, among others, use artificial
neural networks (ANNs), in particular multi-layer perceptrons
        <xref ref-type="bibr" rid="ref42 ref45">(Hesthaven and Ubbiali 2018)</xref>
        , Gaussian process regression
(GPR)
        <xref ref-type="bibr" rid="ref21 ref36 ref43 ref57 ref76">(Guo and Hesthaven 2019)</xref>
        , and radial basis function
(RBF)
        <xref ref-type="bibr" rid="ref10 ref47 ref79">(Audouze, De Vuyst, and Nair 2013)</xref>
        to perform the
interpolation.
      </p>
      <p>
        Here, we will explore an alternative approach to
propagating latent-space dynamics based on Neural ODEs, which are
a family of continuous-depth, differentiable networks that
can be seen as an extension of ResNets in the limit of a zero
discretization step size
        <xref ref-type="bibr" rid="ref21 ref27 ref36 ref43 ref57 ref76">(Dupont, Doucet, and Teh 2019)</xref>
        .
Details of our approach follow below. In addition, we consider
two NIROM techniques that will serve as benchmarks in our
numerical experiments and provide comparisons with the Neural ODE
approach: (a) linear dimension reduction via POD with
latent-space evolution via radial basis function (RBF)
interpolation, and (b) dynamic mode decomposition (DMD).
We then proceed with several numerical
experiments based on incompressible flow around a cylinder and
shallow water hydrodynamics in order to evaluate the
methods’ performance for fast replay applications in complex
fluid-dynamics problems.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Methodology</title>
      <p>The standard ROM development framework usually consists
of three stages:
1. identification of a low-dimensional latent (or
reduced-order) space,
2. determining a latent-space representation of the nonlinear
dynamical system in terms of the reduced basis and
modeling the evolution of the system of modal coefficients,
and
3. reconstruction in the high-fidelity space for validation and
analysis.</p>
      <p>
        Machine learning techniques can be introduced at any of
these stages. For example, many works have explored the
use of deep learning-based approaches like autoencoders as
a way to introduce a nonlinear alternative for dimension
reduction
        <xref ref-type="bibr" rid="ref40 ref42 ref45 ref63">(Lusch, Nathan Kutz, and Brunton 2018;
Ghorbanidehno et al. 2021)</xref>
        . Combining these methods with
data-driven latent-space propagation (for example via fully
connected or recurrent neural networks) leads to a fully
non-intrusive approach
        <xref ref-type="bibr" rid="ref42 ref45">(Gonzalez and Balajewicz 2018)</xref>
        . On the
other hand, one can also combine nonlinear dimension
reduction with intrusive projection to create a hybrid method
(Lee and Carlberg
        <xref ref-type="bibr" rid="ref35 ref5">2020; Kim et al. 2020</xref>
        ). In this work,
we study three different data-driven strategies for accurate
learning of system dynamics within the context of linear
dimension reduction. In the first two methods we adopt the
POD technique for identification of an optimal global
basis. For the latent space evolution, we utilize a kernel-based
multivariate interpolation method called radial basis
function (RBF) interpolation, and a machine learning strategy
designed for sequential learning of time-series data called
neural ordinary differential equations (NODE). In the third
strategy, the three stages of ROM development are combined
by using a classical modal decomposition technique
called dynamic mode decomposition (DMD), which is
supported by the rigorous mathematical framework of Koopman mode
theory.
      </p>
      <sec id="sec-2-1">
        <title>Proper orthogonal decomposition</title>
        <p>
          POD is a popular technique for dimension reduction
          <xref ref-type="bibr" rid="ref9">(Antoulas and Sorensen 2001)</xref>
          of the solution manifold of a
dynamical system by determining a linear reduced space
spanned by an orthogonal basis with an associated energetic
hierarchy.
          <xref ref-type="bibr" rid="ref86">(Taira et al. 2020)</xref>
          provides an excellent overview
of POD as well as a comparison with other
dimension-reduction techniques.
        </p>
        <p>Consider a snapshot matrix S = [v̂_1, …, v̂_M] ∈ R^(N×M)
containing a collection of M high-fidelity snapshots of the
solution manifold from time t = 0 to t = T, such that
v̂_k ∈ R^N is the kth snapshot with the temporal mean value
removed, i.e., v̂_k = v_k − v̄, where v̄ = (1/M) Σ_{i=1}^{M} v_i is the
time-averaged solution. The goal of the POD procedure is to
identify a linear subspace Φ = span{φ_1, …, φ_r}, (r ≪ M),
which approximates the solution manifold optimally with
respect to the L2-norm.</p>
        <p>The POD basis can be efficiently extracted by performing
a “thin” singular value decomposition (SVD) of the
snapshot matrix, S = Ũ Σ̃ Ṽ^T, where Σ̃ = diag(σ_1, …, σ_R) is
an R × R diagonal matrix containing the singular values
arranged in decreasing order of magnitude, σ_1 ≥ σ_2 ≥ … ≥ σ_R,
and R &lt; min{N, M} is the rank of S. Ũ and Ṽ are N × R
and M × R matrices respectively, whose columns are the
orthonormal left and right singular vectors of S such that
Ũ^T Ũ = I_R = Ṽ^T Ṽ. The columns φ_n of the matrix Ũ are
ordered corresponding to the singular values σ_n, and these
provide the desired POD basis. Let Φ denote the matrix of
the first m columns of Ũ, Ψ the matrix of the
first m columns of Ṽ, and Σ a diagonal matrix containing
the first m singular values from Σ̃. Then the high-fidelity
solution v^n at time t_n can be approximated as,
v^n ≈ v̄ + Φ z^n = v̄ + Σ_{i=1}^{m} z_i^n φ_i,
(1)
where z^n ∈ R^m is a vector of modal coefficients with
respect to the reduced basis. The modal coefficient matrix
Z = Φ^T S constitutes our training data for the latent-space
learning methods. By the Eckart-Young-Mirsky
theorem, the POD basis provides an optimal rank-m
approximation Ŝ = Φ Σ Ψ^T of the snapshot matrix S with a desired
level of accuracy, ε_POD.</p>
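        <p>The offline POD stage described above can be sketched in a few lines of NumPy. The snapshot matrix here is synthetic stand-in data and all variable names are illustrative; only the decomposition steps mirror the text:</p>
        <p>
```python
import numpy as np

# Synthetic stand-in for a snapshot matrix: N spatial DOFs, M snapshots
rng = np.random.default_rng(0)
N, M, m = 200, 50, 5
S_raw = rng.standard_normal((N, M))

# Subtract the temporal mean, as in the definition of the centered snapshots
v_bar = S_raw.mean(axis=1, keepdims=True)
S = S_raw - v_bar

# "Thin" SVD: columns of U are the POD modes, ordered by singular value
U, sigma, VT = np.linalg.svd(S, full_matrices=False)

Phi = U[:, :m]                 # reduced basis (first m POD modes)
Z = Phi.T @ S                  # modal coefficients: latent-space training data

# Optimal rank-m approximation per the Eckart-Young-Mirsky theorem
S_hat = Phi @ np.diag(sigma[:m]) @ VT[:m, :]
```
        </p>
        <p>Reconstruction in the high-fidelity space is then v ≈ v̄ + Φz, mirroring eq. (1).</p>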
        <p>
          The POD method has been successfully applied in
statistics
          <xref ref-type="bibr" rid="ref49">(Jolliffe 1986)</xref>
          , signal analysis and pattern recognition
          <xref ref-type="bibr" rid="ref25">(Deheuvels and Martynov 2008)</xref>
          , ocean models
          <xref ref-type="bibr" rid="ref89 ref92">(Vermeulen
and Heemink 2006)</xref>
          , air pollution models
          <xref ref-type="bibr" rid="ref31 ref95">(Fang et al. 2014)</xref>
          ,
convective Boussinesq flows
          <xref ref-type="bibr" rid="ref15 ref78">(San and Borggaard 2015)</xref>
          , and
Shallow Water Equation (SWE) models
          <xref ref-type="bibr" rid="ref62 ref84">(Stefanescu, Sandu,
and Navon 2014; Lozovskiy et al. 2016)</xref>
          .
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Latent space evolution</title>
        <p>In this section, we outline two non-intrusive methods for
modeling the evolution of time-series data in the latent space
defined by the POD basis. RBF interpolation is a classical,
data-driven, kernel-based method for computing an
approximate continuous response surface that aligns with the given
multivariate data. The second technique, NODE, is a
neural-network-based method to predict the continuous
evolution of the latent state vector over time, and is designed
to preserve memory effects within the architecture.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Radial basis function interpolation</title>
        <p>
          For simplicity, let the time evolution of the modal coefficients z
be represented as a semi-discrete dynamical system,
ż = f(z, t), with z^0 = Φ^T (v^0 − v̄),
(2)
where all the information about the temporal dynamics,
including the effects of any numerical stabilization of the
high-fidelity solver and all the nonlinear terms, is embedded in
f(z, t). In the POD-RBF NIROM framework
          <xref ref-type="bibr" rid="ref28">(Dutta et al.
2020)</xref>
          , instead of the Galerkin projection, the components of
the time-derivative function f_j (j = 1, …, m) are
approximated using RBF interpolation.
        </p>
        <p>Let F_j denote an RBF approximation of the time-derivative
function f_j, defined by a linear combination of N_i
instances of a radial basis function φ,
F_j(z) = Σ_{k=1}^{N_i} α_{j,k} φ(‖z − ẑ_k‖), j = 1, …, m,
(3)
where {ẑ_k | k = 1, …, N_i} denotes the set of interpolation
centers and α_{j,k} (k = 1, …, N_i) is the unknown
interpolation coefficient corresponding to the kth center for the jth
component of the modal coefficient. These interpolation
coefficients are computed by enforcing the interpolation
function F_j to exactly match the time derivative of the modal
coefficients at N_e test points (N_e ≥ N_i). Choosing the
centers and the test points identically from the set of snapshot
modal coefficients as {z^l | l = 0, …, M − 1}, such that
N_i = N_e = M, and making some simplifying assumptions
leads to a symmetric, linear system of M equations to solve
for the unknown interpolation coefficients α_{j,k},</p>
        <p>
          A α_j = g_j, for j = 1, …, m,
(4)
where A is the symmetric interpolation matrix with entries
A_{l,k} = φ(‖z^l − ẑ_k‖) and g_j collects the sampled time
derivatives of the jth modal coefficient at the test points.
The coefficients α_j define a unique RBF interpolant which
can then be used to approximate eq. (2) and generate a
non-intrusive model for the evolution of the modal coefficients.
In this work, a first-order forward Euler scheme has been
employed for the discretization of the time derivative, and a
strictly positive-definite Matérn C0 kernel, given by φ(r) =
e^(−cr), has been adopted, where r is the Euclidean distance
and c is the RBF shape factor
          <xref ref-type="bibr" rid="ref33">(Fasshauer 2007)</xref>
          .
        </p>
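        <p>This POD-RBF evolution can be sketched compactly in NumPy using the exponential Matérn C0 kernel and forward-Euler derivative samples. The toy latent trajectory and the shape factor below are illustrative assumptions, not the paper's data:</p>
        <p>
```python
import numpy as np

# Toy latent trajectory standing in for the POD modal coefficients Z
m, M, dt, c = 2, 50, 0.05, 1.0
t = np.arange(M) * dt
Z = np.stack([np.cos(t), np.sin(2 * t)])           # shape (m, M)

# Forward-Euler samples of the time derivative at the first M-1 snapshots
centers = Z[:, :-1].T                              # interpolation centers z^l
G = ((Z[:, 1:] - Z[:, :-1]) / dt).T                # sampled derivatives g_j

def kernel(X, Y, c):
    """Matern C0 kernel phi(r) = exp(-c r) on pairwise Euclidean distances."""
    r = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return np.exp(-c * r)

# Symmetric linear system A alpha_j = g_j, solved for all components j at once
A = kernel(centers, centers, c)
alpha = np.linalg.solve(A, G)

def F(z):
    """RBF approximation of the latent dynamics f(z, t)."""
    return (kernel(z[None, :], centers, c) @ alpha)[0]

# Non-intrusive replay of the trajectory with forward Euler, as in the text
z = Z[:, 0].copy()
for _ in range(M - 1):
    z = z + dt * F(z)
```
        </p>
        <p>Because the centers and test points coincide with the snapshots, the interpolant matches the sampled derivatives exactly, and the forward-Euler replay reproduces the training trajectory.</p>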
        <p>
          Adopting RBF interpolation for modeling the latent-space
evolution of the modal coefficients has been shown to be
quite successful for nonlinear, time-dependent partial
differential equations (PDEs)
          <xref ref-type="bibr" rid="ref28 ref96">(Xiao et al. 2015; Dutta et al.
2020)</xref>
          , nonlinear, parametrized PDEs
          <xref ref-type="bibr" rid="ref10 ref47 ref79 ref98">(Audouze, De Vuyst,
and Nair 2013; Xiao et al. 2017)</xref>
          , and aerodynamic shape
optimization
          <xref ref-type="bibr" rid="ref47 ref79">(Iuliano and Quagliarella 2013)</xref>
          , to name a few.
        </p>
        <p>
          Neural ordinary differential equations. Recurrent neural
network (RNN) architectures like LSTM and GRU are
often employed to encode time-series data and forecast
future states, as their internal memory-preserving
architecture allows them to incorporate state information over a
sequence of input data. Although RNNs have seen great
success in natural language processing tasks, they have had
relatively limited success in high-fidelity scientific computing
applications
          <xref ref-type="bibr" rid="ref34 ref57 ref90">(Ferrandis et al. 2019; Wang, Ripamonti, and
Hesthaven 2020)</xref>
          , as it has been observed that a sequence
generated by an RNN may fail to preserve the temporal
regularity of the underlying signal, and thus may not
represent true continuous dynamics (Chen et al. 2018). With
deep neural networks (DNN) such as ResNet, the
evolution of the features over the network depth is equivalent
to solving an ordinary differential equation (ODE) such as
dz/dt = F(z, θ) using the forward Euler method, and this
connection between ResNet’s architecture and numerical
integrators has been explored in detail by
          <xref ref-type="bibr" rid="ref21 ref36 ref43 ref57 ref76">(Ruthotto and Haber
2019)</xref>
          and others. Several other deep learning methods have
been proposed for learning ODEs and PDEs. These include
using PDE-based networks
          <xref ref-type="bibr" rid="ref21 ref36 ref43 ref57 ref59 ref76">(Long, Lu, and Dong 2019)</xref>
          ,
training DNNs using physics-informed soft penalty constraints
          <xref ref-type="bibr" rid="ref21 ref36 ref43 ref57 ref71 ref76">(Raissi, Perdikaris, and Karniadakis 2019)</xref>
          , and using sparse
regularizers and regression
          <xref ref-type="bibr" rid="ref17 ref69">(Brunton et al. 2016; Champion
et al. 2019)</xref>
          , to name a few.
        </p>
        <p>
          Chen et al. (2018) proposed a “continuous-depth”
neural network called ODE-Net that effectively replaces the
layers in ResNet-like architectures with a trainable ODE
solver. The memory efficiency and stability of this neural
ordinary differential equation (NODE) approach was further
improved in
          <xref ref-type="bibr" rid="ref21 ref27 ref36 ref38 ref43 ref57 ref76">(Gholami, Keutzer, and Biros 2019; Dupont,
Doucet, and Teh 2019)</xref>
          and others.
          <xref ref-type="bibr" rid="ref64">(Maulik et al. 2020)</xref>
          applied the NODE framework to obtain latent space closure
models for ROMs of a one-dimensional advecting shock
problem and a one-dimensional Burgers’ turbulence
problem that exhibits multiscale behavior in the wavenumber
space. Some other notable recent applications of NODE
include the identification of ODE or PDE models from
time-dependent data
          <xref ref-type="bibr" rid="ref57 ref85">(Sun, Zhang, and Schaeffer 2020)</xref>
          , modeling
of irregularly spaced time-series data
          <xref ref-type="bibr" rid="ref21 ref36 ref43 ref57 ref75 ref76">(Rubanova, Chen, and
Duvenaud 2019)</xref>
          , and modeling of spatio-temporal information
in video signals
          <xref ref-type="bibr" rid="ref50">(Kanaa et al. 2019)</xref>
          . Finlay et al. (2020) used
a combination of optimal transport theory and stability
regularizations to propose a neural-ODE generative model that
can be efficiently trained on large-scale datasets. Here we
further explore the application of the POD-NODE
methodology to complex, real-world flows characterized by systems
of two-dimensional, nonlinear PDEs.
        </p>
        <p>We assume that the time evolution of the modal
coefficients of the high-fidelity dynamical system in the latent
space can be modeled using a (first-order) ODE,
dz/dt = F(t, z(t)), with z(0) = z^0, z ∈ R^d, d ≥ 1.
(5)
The goal is to obtain a NN approximation F̂ of the dynamics
function F such that dz_net/dt (t, z) = F̂(t, z, ω). The full
procedure can be outlined as follows:</p>
        <sec id="sec-2-3-1">
          <title>Training procedure</title>
          <p>1. Compute the time series of modal coefficients
[z^0, …, z^(M−1)] for t ∈ {0, …, M − 1}, where z^k ∈ R^m.
2. Initialize a NN approximation of the dynamics function
F̂(t, z, ω), where ω represents the initial NN parameters.
3. The NN parameters are optimized iteratively through the
following steps.
(a) Compute the approximate forward time trajectory of
the modal coefficients by solving eq. (5) using a
standard ODE integrator as,
ẑ^(M−1) = ODESolve(F̂, ω, z^0, t_0, t_(M−1)).
(6)
(b) The free parameters of the NODE model are
{ω, t_0, t_(M−1)}. Evaluate the differentiable loss function
L(ẑ^(M−1)) = L(ODESolve(F̂, ω, z^0, t_0, t_(M−1))).
(c) To optimize the loss, compute gradients with respect to
the free parameters. Similar to the usual backpropagation
algorithm, this can be achieved by first computing
the gradient ∂L/∂z(t), and then performing a reverse
traversal through the intermediate states of the ODE
integrator. For a memory-efficient implementation, the
adjoint method (Chen et al. 2018) can be used to
backpropagate the errors by solving an adjoint system for
the augmented state vector b = [∂L/∂z, ∂L/∂ω, ∂L/∂t]^T
backwards in time from t_(M−1) to t_0.
(d) The gradient ∂L/∂ω (t = 0) computed in the previous step
is used to update the parameters ω by using an
optimization algorithm like RMSProp or Adam.
4. The trained NODE approximation of the dynamics
function can be used to compute predictions for the time
trajectory of the modal coefficients.</p>
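          <p>The training loop can be illustrated with a self-contained toy example. The sketch below is an assumption-laden stand-in for a real TFDiffEq model: it fits a linear dynamics function F̂(z) = Wz by backpropagating directly through the hidden forward-Euler steps of the solver (the direct alternative to the adjoint method in step 3(c)); the two-mode system, step sizes, and learning rate are all illustrative:</p>
          <p>
```python
import numpy as np

# Ground-truth latent dynamics (a 2D rotation) used to generate "snapshots"
dt, M = 0.1, 20
W_true = np.array([[0.0, -1.0], [1.0, 0.0]])
z0 = np.array([1.0, 0.0])

def ode_solve(W, z0):
    """Forward-Euler stand-in for ODESolve on dz/dt = W z."""
    traj = [z0]
    for _ in range(M):
        traj.append(traj[-1] + dt * (W @ traj[-1]))
    return np.array(traj)                       # shape (M+1, 2)

target = ode_solve(W_true, z0)      # step 1: latent training trajectory
W = np.zeros((2, 2))                # step 2: initialize the parameters
lr, losses = 0.01, []
for _ in range(3000):               # step 3: iterative optimization
    traj = ode_solve(W, z0)         # 3(a) forward trajectory
    err = traj - target
    losses.append(0.5 * np.sum(err ** 2))       # 3(b) differentiable loss
    # 3(c) reverse traversal through the solver's intermediate states
    a, dW = err[M].copy(), np.zeros_like(W)
    for k in range(M - 1, -1, -1):
        dW += dt * np.outer(a, traj[k])   # dL/dW contribution of step k
        a = a + dt * (W.T @ a) + err[k]   # discrete adjoint-state recursion
    W -= lr * dW                    # 3(d) plain gradient-descent update
pred = ode_solve(W, z0)             # step 4: replay prediction
```
          </p>
          <p>In practice the dynamics function is a deep network and the update is delegated to RMSProp or Adam, but the gradient flow through the solver is the same.</p>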
          <p>
            In this work, we utilize the TFDiffEq (https://github.com/
titu1994/tfdiffeq) library, which runs on the TensorFlow eager
execution platform, to train the NODE models. Although a
single-layer architecture already guarantees approximation
upper bounds according to the universal approximation theorem
            <xref ref-type="bibr" rid="ref11">(Barron 1993)</xref>
            ,
deeper networks with up to four layers as well as several
linear and nonlinear activation functions are also explored
due to their possibly improved expressivity for more
complex nonlinear dynamics
            <xref ref-type="bibr" rid="ref100">(Zhang et al. 2019)</xref>
            . RMSProp is
adopted for loss minimization with an initial learning rate
of 0.001, a staircase decay function with a range of variable
decay schedules, and a momentum coefficient of 0.9. NODE
predictions of comparable accuracy were obtained for all
the numerical experiments by using both the adjoint method
and direct backpropagation of gradients through
the hidden steps of the ODE solver. However, for large-scale
training data the latter method may lead to memory issues,
especially while computing on GPU nodes.
          </p>
        </sec>
      </sec>
      <sec id="sec-2-4">
        <title>Dynamic mode decomposition</title>
        <p>
          As a final point of comparison, we consider dynamic mode
decomposition (DMD). DMD is a data-driven ROM
technique that represents the temporal dynamics of a
complex, nonlinear system
          <xref ref-type="bibr" rid="ref55 ref81">(Schmid 2010; Kutz et al. 2016)</xref>
          as the combination of a few linearly evolving, spatially
coherent modes that oscillate at a fixed frequency, and
which are closely related to the eigenvectors of the
infinite-dimensional Koopman operator
          <xref ref-type="bibr" rid="ref53 ref65">(Koopman 1931; Mezić
2013)</xref>
          . Consider the following snapshot matrices
containing a few temporally-equispaced snapshots of a
high-dimensional dynamical system:
X = [v^0, v^1, …, v^(M−1)], X′ = [v^1, v^2, …, v^M],
where v^k ∈ R^N is the kth solution snapshot, N is the
number of spatial degrees of freedom of the discretized
system, and M is the total number of temporal snapshots.
DMD involves the identification of the best-fit linear
operator A_X that relates the above matrices as X′ = A_X X,
and computing its eigenvalues and eigenvectors. Computing
a least-squares approximation of A_X using the Moore-Penrose
pseudoinverse (†) may pose computational challenges due to
the size of the discrete dynamical system. For computational
efficiency, the exact DMD algorithm (adopted here) avoids
computing the Moore-Penrose pseudoinverse by projecting
the operator onto a reduced space obtained by POD, as outlined in
          <xref ref-type="bibr" rid="ref16 ref3 ref56">(Alla
and Kutz 2017)</xref>
          .
        </p>
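        <p>The exact DMD algorithm outlined above can be demonstrated in NumPy on synthetic snapshots generated by a known low-rank linear operator; the latent dynamics, lift matrix, and truncation rank below are illustrative assumptions:</p>
        <p>
```python
import numpy as np

# Synthetic snapshots from a rank-4 linear system: two rotation-decay blocks
# in a latent space, lifted to R^N by a random matrix (illustrative setup).
rng = np.random.default_rng(1)
N, M, r = 64, 40, 4

def block(rho, theta):
    return rho * np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])

A_lat = np.zeros((4, 4))
A_lat[:2, :2], A_lat[2:, 2:] = block(0.95, 0.3), block(0.9, 0.7)

C = rng.standard_normal((N, 4))         # lift latent states to R^N
x, snaps = np.ones(4), []
for _ in range(M + 1):
    snaps.append(C @ x)
    x = A_lat @ x
snaps = np.array(snaps).T               # N x (M+1) snapshot matrix

X, Xp = snaps[:, :-1], snaps[:, 1:]     # X' = A_X X

# Exact DMD: project A_X onto the rank-r POD subspace of X instead of
# forming the N x N pseudoinverse-based operator explicitly
U, s, Vh = np.linalg.svd(X, full_matrices=False)
Ur, Sr_inv, Vr = U[:, :r], np.diag(1.0 / s[:r]), Vh[:r, :].conj().T
A_tilde = Ur.conj().T @ Xp @ Vr @ Sr_inv        # r x r projected operator
lam, Weig = np.linalg.eig(A_tilde)              # DMD eigenvalues
Phi_dmd = Xp @ Vr @ Sr_inv @ Weig               # exact DMD modes
```
        </p>
        <p>Because the synthetic dynamics are exactly rank-4 and linear, the projected operator recovers the latent eigenvalues, and the rank-r fit reproduces X′ from X.</p>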
        <p>
          In recent years, Koopman mode theory has provided a
rigorous theoretical background for an efficient modal
decomposition in problems describing oscillations and other
nonlinear dynamics using DMD
          <xref ref-type="bibr" rid="ref73 ref74">(Rowley et al. 2009)</xref>
          . Several
variants of the DMD algorithm have been proposed
          <xref ref-type="bibr" rid="ref1 ref16 ref17 ref3 ref55 ref56 ref69">(Proctor,
Brunton, and Kutz 2016; Kutz, Fu, and Brunton 2016;
Alekseev et al. 2016; Le Clainche and Vega 2017)</xref>
          and have been
successfully applied as efficient ROM techniques for
determining the optimal global basis modes for nonlinear,
time-dependent problems
          <xref ref-type="bibr" rid="ref15 ref16 ref3 ref56 ref78">(Bistrian and Navon 2015, 2017)</xref>
          . For
non-parametrized PDEs, DMD presents an efficient
framework that combines all three stages of ROM
development to learn a linear operator in an optimal least-squares
sense. However, this approach cannot be directly applied to
parametrized problems
          <xref ref-type="bibr" rid="ref4">(Alsayyari et al. 2021)</xref>
          .
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Numerical experiments</title>
      <p>In this section, we first assess the performance of different
NODE architectures for a benchmark flow problem
characterized by the incompressible Navier-Stokes equations
(NSE), and then further evaluate the relative performance
of all three NIROM models for two real-world applications
governed by the shallow water equations (SWE). The
POD-RBF and DMD NIROM training runs were performed on
a MacBook Pro 2018 with a 2.9 GHz 6-core Intel Core i9
processor and 32 GB of 2400 MHz DDR4 RAM. The NODE
models were trained in serial on Vulcanite, a high-performance
computer at the U.S. Army Engineer Research and
Development Center DoD Supercomputing Resource Center
(ERDC-DSRC). Vulcanite is equipped with NVIDIA Tesla
V100 PCIe GPU accelerator nodes with 32 GB of
memory per node.</p>
      <sec id="sec-3-1">
        <title>Flow around a cylinder</title>
        <p>This problem simulates a time-periodic fluid flow through
a 2D pipe with a circular obstacle. The flow domain is a
rectangular pipe with a circular hole of radius r = 0.05,
denoted by Ω = [0, 2.2] × [−0.2, 0.21] \ B_r(0.2, 0). The
flow is governed by the incompressible Navier-Stokes equations,
∂u/∂t + ∇ · (u ⊗ u) − ν Δu + ∇p = 0 in Ω,
(7)
∇ · u = 0 in Ω,
(8)
where u denotes the velocity, p the pressure, ⊗ is the outer
product (dyadic product) given by a ⊗ b = ab^T, and
ν = 0.001 is the kinematic viscosity. No-slip boundary
conditions are specified along the lower and upper walls, and
on the boundary of the circular obstacle. A parabolic inflow
velocity profile is prescribed on the left wall,
u(0, y) = (4U(0.21 − y)(y + 0.2) / 0.41^2, 0),
(9)
and zero-gradient outflow boundary conditions on the right
wall. High-fidelity simulation data is obtained with
OpenFOAM using an unstructured mesh with 14605 nodes at
Re = 100, such that the flow exhibits the periodic shedding
of von Kármán vortices. 313 training snapshots are collected
for t ∈ [2.5, 5.0] seconds with Δt = 0.008 seconds, and the
NIROM predictions are obtained for t ∈ [2.5, 6.0] seconds
with Δt = 0.002 seconds.</p>
        <p>
          A large collection of NODE architectures and
hyperparameter configurations was trained for 50000 epochs, and
details of the best 8 models are presented in Table 1. A
fourth-order Runge-Kutta solver was found to be the
optimal choice in terms of both accuracy and efficiency among
all the available solvers, ranging from the fixed-step
forward Euler and midpoint solvers to the adaptive-step
Dormand-Prince (dopri5) solver. The “tanh” and “elu”
activation functions were found to be the most effective among
all the available linear and nonlinear activation functions.
Due to the nature of the activation functions, the networks
with “tanh” activations were found to train better when
every element of the input state vector was individually scaled
to lie in [−1, 1], while networks with “elu”
activations trained better without scaling of the input vectors.
Augmentation of input states as outlined in
          <xref ref-type="bibr" rid="ref21 ref27 ref36 ref43 ref57 ref76">(Dupont, Doucet,
and Teh 2019)</xref>
          was found to have no significant impact on
the training. The RMSProp optimizer was found to be
equally effective when paired with either a step decay or an
exponential decay schedule. However, further numerical
experiments are necessary to study the efficiency of
alternative first-order and second-order optimization methods. The
number of decay steps was varied in discrete increments
between 5000 and 25000, and decay rates ranging from 0.1 to
0.9 were studied. It was observed that a lower initial learning
rate (≤ 0.001) combined with either larger decay steps and
smaller decay rates or vice versa led to a desirable training
trajectory. Fig. 1 shows the evolution of the 1st, 3rd, and 5th
latent-space modal coefficients for the pressure and the
x-velocity solutions, obtained using the best 8 NODE models.
All the models generate accurate predictions at a finer
temporal resolution than the training data, and show excellent
agreement with the high-fidelity solution even while
extrapolating outside the training data (5 ≤ t ≤ 6 seconds).
        </p>
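As a concrete illustration of the latent-space propagation described above, the sketch below integrates a single-hidden-layer “tanh” network with a fixed-step fourth-order Runge-Kutta solver. It is a minimal NumPy mock-up in which random weights stand in for a trained model; the layer width of 256 mirrors the architecture used later for the shallow water examples, but all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; random weights stand in for a trained network.
latent_dim, hidden = 8, 256
W1 = rng.normal(0.0, 0.1, (hidden, latent_dim + 1))  # +1 input for time t
b1 = rng.normal(0.0, 0.1, hidden)
W2 = rng.normal(0.0, 0.1, (latent_dim, hidden))
b2 = rng.normal(0.0, 0.1, latent_dim)

def f(z, t):
    """Learned right-hand side dz/dt = f(z, t) of the latent-space ODE."""
    x = np.concatenate([z, [t]])
    return W2 @ np.tanh(W1 @ x + b1) + b2

def rk4_step(z, t, dt):
    """One classical fourth-order Runge-Kutta step for dz/dt = f(z, t)."""
    k1 = f(z, t)
    k2 = f(z + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(z + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(z + dt * k3, t + dt)
    return z + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Propagate an initial latent state over normalized time t in [0, 1];
# the prediction step may be finer than the training resolution.
z, t, dt = rng.normal(size=latent_dim), 0.0, 0.01
trajectory = [z]
for _ in range(100):
    z = rk4_step(z, t, dt)
    t += dt
    trajectory.append(z)
```

In practice the same propagation is done with a differentiable ODE solver (e.g. torchdiffeq) so that the network weights can be trained by backpropagating through, or adjoint-solving, the integration.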
        <p>Fig. 2 compares the time trajectories of the spatial root
mean square errors (RMSE) in the high-fidelity space for
two of the best NODE models with two DMD NIROM
solutions obtained using truncation levels of r = 20 and r = 8.
It is encouraging to note that even though the NODE
solutions are computed using a latent-space representation
roughly comparable in size to the DMD solution with the smaller
truncation level (r = 8), they are superior in accuracy to
the coarsely truncated DMD solutions. Furthermore, unlike
the POD-RBF solution, which is trained with a first-order
Euler time discretization, the NODE solutions did not exhibit
any significant loss of accuracy over time, even while
predicting outside the training region. It is, however, important
to note that the training time for any new NODE
architecture was extremely high (see Table 1) compared to
generating a POD-RBF or a DMD NIROM model, which
usually required less than a minute in most cases. Such long
training times may pose a significant challenge for
exhaustive exploration of the design space for optimal
architectures and hyperparameters, and may hinder the adoption of
existing packages for automated architecture search. Thus,
a concerted effort needs to be directed towards accelerating
NODE training times and towards constraining the design
space by a priori identification of promising architectures.</p>
        <p>[Table 1: training configurations of the best eight NODE models, NODE1–NODE8.]</p>
        <sec id="sec-3-1-3">
          <title>Shallow water flow examples</title>
          <p>[Fig. 2: spatial RMSE vs. time (2.5 to 6.0 seconds) for pressure and x-velocity; curves for DMD(20), DMD(8), NODE3, and NODE5.]</p>
          <p>The next two numerical examples involve flows governed by
the depth-averaged SWE, which is written in a conservative
residual formulation as</p>
          <p>
            R(q) ≡ ∂q/∂t + ∂px/∂x + ∂py/∂y = 0, (10)
where the state variable q = [h, uxh, uyh]ᵀ consists of the
flow depth, h, and the discharges in the x and y directions,
given by uxh and uyh, respectively.
the flux vectors px, py and the high-fidelity model
equations are available in
            <xref ref-type="bibr" rid="ref28">(Dutta et al. 2020)</xref>
            . The high-fidelity
numerical solutions of the SWE are obtained using the 2D
depth-averaged module of the Adaptive Hydraulics (AdH)
finite element suite, which is a U.S. Army Corps of
Engineers (USACE) high-fidelity, finite element resource for 2D
and 3D dynamics
            <xref ref-type="bibr" rid="ref88">(Trahan et al. 2018)</xref>
            .
          </p>
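For illustration, a minimal discrete version of the residual above can be written with the standard conservative shallow water fluxes (hydrostatic pressure term g·h²/2 included; bed friction and the other source terms present in the AdH model are omitted here, so this is a simplified sketch rather than the high-fidelity formulation). A still-water state should yield a zero residual:

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def swe_residual(h, uxh, uyh, dqdt, dx, dy):
    """Discrete residual R(q) = dq/dt + d(px)/dx + d(py)/dy on a uniform grid.

    q = [h, uxh, uyh]; fluxes follow the standard conservative shallow
    water form with the pressure term G*h**2/2 (source terms omitted).
    """
    ux, uy = uxh / h, uyh / h
    px = np.stack([uxh, ux * uxh + 0.5 * G * h**2, uy * uxh])
    py = np.stack([uyh, ux * uyh, uy * uyh + 0.5 * G * h**2])
    # Central-difference flux divergence via np.gradient.
    return dqdt + np.gradient(px, dx, axis=1) + np.gradient(py, dy, axis=2)

# A lake-at-rest state (constant depth, zero discharge) has zero residual.
h = np.full((16, 16), 2.0)
zero = np.zeros_like(h)
R = swe_residual(h, zero, zero, np.zeros((3, 16, 16)), dx=10.0, dy=10.0)
```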
          <p>Tidal flow in San Diego Bay This numerical example
involves the simulation of tide-driven flow in the San Diego
Bay in California, USA. The AdH high-fidelity model
consists of N = 6311 nodes, uses tidal data obtained from the
NOAA/NOS CO-OPS website at a tailwater elevation inflow
boundary, and has no-flow boundary conditions everywhere
else. Further details are available in
            <xref ref-type="bibr" rid="ref28">(Dutta et al. 2020)</xref>
            .
          </p>
          <p>The training space is generated using 1801 high-fidelity
snapshots obtained between t = 41 minutes and t = 50
hours at a time interval of Δt = 100 seconds. The
predicted ROM solutions are computed for the same time
interval with Δt = 50 seconds. A latent space of dimension
265 is generated by using a POD truncation tolerance of
εPOD = 5 × 10⁻⁷ for each solution component. The RBF
NIROM approximation is computed using a shape factor,
c = 0.01. The simulation time points provided as input to
the NODE model are normalized to lie in t ∈ [0, 1]. The
‘dopri5’ ODE solver is adopted for computing the hidden
states both forward and backward in time. Drawing on the
conclusions of the cylinder example, a network consisting of
a single hidden layer with 256 neurons is deployed, and the
RMSProp optimizer with an initial learning rate of 0.001,
a staircase decay rate of 0.5 every 5000 epochs, and a
momentum of 0.9 is utilized for training the model over 20000
epochs. For the DMD NIROM, the simulation time points
are normalized to a unit time step, and a truncation level of
r = 115 is used to compute the DMD eigen-spectrum.</p>
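The latent dimension follows from an energy-based POD truncation criterion applied to the snapshot matrix. The sketch below shows one common form of that criterion (keep the smallest r whose neglected singular-value energy fraction falls below the tolerance); the exact definition used here may differ slightly:

```python
import numpy as np

def pod_basis(S, tol):
    """Truncated POD basis of a snapshot matrix S (n_space x n_time).

    Keeps the smallest r such that the neglected fraction of
    singular-value energy, 1 - sum(s[:r]**2)/sum(s**2), drops below tol.
    """
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r], r

# Toy snapshot matrix with prescribed singular values 1, 1/2, 1/4, ...
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(200, 40)))
V, _ = np.linalg.qr(rng.normal(size=(40, 40)))
S = (Q * 2.0 ** -np.arange(40)) @ V.T
Phi, r = pod_basis(S, 1e-6)  # neglected energy fraction is 4**-r
```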
          <p>Figure 3 shows the NIROM solutions (top row) for ux at
t = 17.36 hours and the corresponding error plots.</p>
          <p>Figure 4 shows the spatial RMSE over time for the
depth (left) and the x-velocity (right) NIROM solutions. The
NODE NIROM solution has accuracy comparable to the
DMD NIROM solution and, unlike the RBF NIROM
solution, does not exhibit any appreciable accumulation of error
over time.</p>
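The spatial RMSE reported in Figs. 2, 4, and 6 is the root mean square of the nodal error at each time instant, computed in the high-fidelity space after lifting the ROM solution back from the latent space; a minimal sketch (array names are illustrative):

```python
import numpy as np

def spatial_rmse(q_rom, q_hf):
    """Spatial root mean square error at each time instant.

    q_rom, q_hf: (n_nodes, n_times) arrays holding the ROM and
    high-fidelity solutions in the full high-fidelity space.
    """
    return np.sqrt(np.mean((q_rom - q_hf) ** 2, axis=0))

# A ROM that is off by a constant 0.05 everywhere has RMSE 0.05 at all times.
q_hf = np.zeros((6311, 4))  # e.g. N = 6311 nodes, 4 time instants
errors = spatial_rmse(q_hf + 0.05, q_hf)
```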
          <p>
            Riverine flow in Red River The final numerical example
involves an application of the 2D SWE to simulate riverine
flow in a section of the Red River in Louisiana, USA. The
AdH high-fidelity model uses N = 12291 nodes, has a
natural inflow velocity condition upstream, a tailwater elevation
boundary downstream, and no-flow boundary conditions along the river
bank. For further details see
            <xref ref-type="bibr" rid="ref28">(Dutta et al. 2020)</xref>
            .
          </p>
          <p>[Fig. 5: (a) RBF, (b) DMD, and (c) NODE solutions for ux at t = 3.61 hours; 0.18894 &lt; ux &lt; 0.31979.]</p>
          <p>The training space is generated by using 1081
high-fidelity snapshots obtained between t = 16.67 minutes and
t = 9.3 hours at a time interval of Δt = 30 seconds. The
predicted ROM solutions are computed for the same time
interval with Δt = 10 seconds. A latent space spanned by
54 modes is generated by using a POD truncation tolerance
of εPOD = 0.01 for each solution component. The RBF
NIROM approximation is computed using a shape factor,
c = 0.05. For consistency, the NODE network architecture
is kept identical to the San Diego example, and the
training is also performed for 20000 epochs. Also, as in the
previous example, the simulation time points for DMD
input are normalized to a unit time step. However, a smaller
truncation level of r = 30 is used to compute the DMD
eigen-spectrum.</p>
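For reference, a minimal sketch of the exact DMD procedure at a given truncation level r, with snapshots assumed equispaced at a unit time step as above (so the eigenvalues are one-step growth factors):

```python
import numpy as np

def dmd(X, r):
    """Exact DMD of a snapshot matrix X (n_space x n_times) at truncation r.

    Returns the DMD modes Phi and eigenvalues lam; snapshots are assumed
    equispaced with a unit time step.
    """
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    Ur, Vr, sr = U[:, :r], Vh[:r].conj().T, s[:r]
    Atilde = Ur.conj().T @ X2 @ (Vr / sr)  # r x r reduced operator
    lam, W = np.linalg.eig(Atilde)
    Phi = X2 @ (Vr / sr) @ W               # exact DMD modes
    return Phi, lam

# Sanity check on a known linear map with eigenvalues 0.9 and 0.5.
A = np.array([[0.9, 0.1], [0.0, 0.5]])
snaps = [np.array([1.0, 1.0])]
for _ in range(9):
    snaps.append(A @ snaps[-1])
Phi, lam = dmd(np.stack(snaps, axis=1), r=2)
```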
          <p>Figure 5 shows the NIROM solutions (top row) for ux at
t = 3.61 hours and the corresponding error plots.</p>
          <p>Figure 6 shows the spatial RMSE over time of the depth
(left) and the x-velocity (right) NIROM solutions for the Red
River example. It can be seen that the DMD NIROM
solution has a relatively higher RMSE owing to the lower
truncation level chosen for this example, while the RBF NIROM
is far more accurate. The NODE NIROM solution seems to
match the performance of the RBF NIROM solution. This
indicates that the NODE NIROM framework is successful
with two distinct real-world flow regimes and holds promise
for more widespread applicability to model the evolution of
latent space dynamics.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>We have studied Neural ODEs as a non-intrusive
machine-learning algorithm to model the evolution of modal
coefficients of a system of nonlinear, time-dependent PDEs in the
linearly embedded latent space characterized by a truncated
POD basis. Numerical experiments were carried out with a
benchmark periodic flow problem governed by the
incompressible Navier-Stokes equations and two real-world
applications of estuarine and riverine flow dynamics governed
by the two-dimensional shallow water equations. The NODE
formulation demonstrated a stable and accurate learning
trajectory in modeling reduced basis dynamics, even in
comparison to two classical ROM techniques utilizing dynamic
mode decomposition and radial basis function interpolation.
The DMD NIROM exhibited superior accuracy in most of
the examples and was found to be most promising for
long-term predictions. However, the POD-RBF NIROM
technique is easily applicable to parametrized model reduction
scenarios involving parametric training manifolds of very
high dimension, whereas the DMD algorithm does not have
a natural extension to such a setting. The POD-NODE
formulation also produced extremely promising extrapolatory
predictions for the flow around a cylinder example. This
presents an exciting prospect for future exploration since, even
for an isolated system unperturbed by unseen external
forcings, truly extrapolative prediction of reduced order
dynamics in flow regimes that do not correspond to the training data
is a rare feature for most well-established ROM frameworks.</p>
      <p>
        This study leads to several promising avenues of research.
To begin with, an exhaustive search for an optimal NODE
network architecture and optimal model hyperparameters
needs to be conducted for a wide range of flow dynamics
in order to gain insight into the learning trajectory and to
design more generalizable NODE NIROM formulations with
faster training times. With the goal of long-term predictive
formulations in mind, embedding uncertainty estimates in
the NODE NIROM framework might facilitate the
development of adaptive models capable of re-assessing learning
trajectories through in-situ measurements. The construction
of a set of response functions for modeling the prediction
error using machine learning
        <xref ref-type="bibr" rid="ref21 ref36 ref43 ref57 ref76">(Freno and Carlberg 2019)</xref>
        or
Gaussian Process Regression (GPR)
        <xref ref-type="bibr" rid="ref93">(Xiao 2019)</xref>
        is among recent
work in this direction. Another exciting field of study
would be to combine the NODE framework with
machine-learning strategies for the generation of nonlinear manifolds
(Lee and Carlberg
        <xref ref-type="bibr" rid="ref35 ref5">2020; Kim et al. 2020</xref>
        ) that are suitable
for an efficient reduced representation of the system
dynamics for advection-dominated problems and in the presence
of sharp gradients, where a truncated linear subspace offers a
poor solution representation. All the relevant data and codes
for this study will be made available in a public repository
at https://github.com/erdc/node nirom upon publication.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>
        This research was supported in part by an appointment of the
first author to the Postgraduate Research Participation
Program at the U.S. Army Engineer Research and Development
Center, Coastal and Hydraulics Laboratory (ERDC-CHL)
administered by the Oak Ridge Institute for Science and
Education through an interagency agreement between the U.S.
Department of Energy and ERDC. The authors would also
like to thank Dr. Gaurav Savant for his valuable help in using
the Adaptive Hydraulics suite (AdH)
        <xref ref-type="bibr" rid="ref88">(Trahan et al. 2018)</xref>
        for
the high-fidelity numerical simulation of the 2D shallow
water flow examples. Permission was granted by the Chief of
Engineers to publish this information.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Alekseev</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Bistrian</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Bondarev</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Navon</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>On linear and nonlinear aspects of dynamic mode decomposition</article-title>
          .
          <source>Int.</source>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>J.</given-names>
            <surname>Numer</surname>
          </string-name>
          .
          <source>Methods Fluids</source>
          <volume>82</volume>
          (
          <issue>6</issue>
          ):
          <fpage>348</fpage>
          -
          <lpage>371</lpage>
          . doi:
          <volume>10</volume>
          .1002/fld.4221.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Alla</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Kutz</surname>
            ,
            <given-names>J. N.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Nonlinear model order reduction via dynamic mode decomposition</article-title>
          .
          <source>SIAM J. Sci. Comput</source>
          .
          <volume>39</volume>
          (
          <issue>5</issue>
          ):
          <fpage>B778</fpage>
          --
          <lpage>B796</lpage>
          . doi:
          <volume>10</volume>
          .1137/16M1059308.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Alsayyari</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ; Perko´,
          <string-name>
            <given-names>Z.</given-names>
            ;
            <surname>Tiberga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ;
            <surname>Kloosterman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            ; and
            <surname>Lathouwers</surname>
          </string-name>
          ,
          <string-name>
            <surname>D.</surname>
          </string-name>
          <year>2021</year>
          .
          <article-title>A fully adaptive nonintrusive reduced-order modelling approach for parametrized time-dependent problems</article-title>
          .
          <source>Comput. Methods Appl</source>
          . Mech. Eng.
          <volume>373</volume>
          : 113483. doi:
          <volume>10</volume>
          .1016/j.cma.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <year>2020</year>
          .113483. URL https://doi.org/10.1016/j.cma.
          <year>2020</year>
          .
          <volume>113483</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Amsallem</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Farhat</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2012</year>
          .
          <article-title>Stabilization of projectionbased reduced order models</article-title>
          .
          <source>Int. J. Numer. Methods Eng</source>
          .
          <volume>91</volume>
          :
          <fpage>358</fpage>
          -
          <lpage>377</lpage>
          . doi:
          <volume>110</volume>
          .1002/nme.4274.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Amsallem</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ; Zahr,
          <string-name>
            <given-names>M.</given-names>
            ;
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            ; and
            <surname>Farhat</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.</surname>
          </string-name>
          <year>2015</year>
          .
          <article-title>Design optimization using hyper-reduced-order models</article-title>
          .
          <source>Struct. Multi. Opt.</source>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <volume>51</volume>
          (
          <issue>4</issue>
          ):
          <fpage>919</fpage>
          -
          <lpage>940</lpage>
          . doi:
          <volume>10</volume>
          .1007/s00158-014-1183-y.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Antoulas</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Sorensen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2001</year>
          .
          <article-title>Approximation of LargeScale Dynamical Systems: An overview</article-title>
          .
          <source>Int. J. Appl. Math. Comput. Sci</source>
          .
          <volume>11</volume>
          (
          <issue>5</issue>
          ):
          <fpage>1093</fpage>
          -
          <lpage>1121</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Audouze</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>De Vuyst</surname>
            , F.; and Nair,
            <given-names>P.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>Nonintrusive Reduced-Order Modeling of Parametrized Time-Dependent Partial Differential Equations</article-title>
          .
          <source>Numer. Methods Partial Differ. Equation</source>
          <volume>29</volume>
          (
          <issue>5</issue>
          ):
          <fpage>1587</fpage>
          -
          <lpage>1628</lpage>
          . doi:
          <volume>10</volume>
          .1002/num.21768.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Barron</surname>
            ,
            <given-names>A. R.</given-names>
          </string-name>
          <year>1993</year>
          .
          <article-title>Universal approximation bounds for superpositions of a sigmoidal function</article-title>
          .
          <source>IEEE Trans. Inf. Theory</source>
          <volume>39</volume>
          (
          <issue>3</issue>
          ):
          <fpage>930</fpage>
          -
          <lpage>945</lpage>
          . doi:
          <volume>10</volume>
          .1109/18.256500.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Benner</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Gugercin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Willcox</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>A Survey of Projection-Based Model Reduction Methods for Parametric Dynamical Systems</article-title>
          .
          <source>SIAM Rev</source>
          .
          <volume>57</volume>
          (
          <issue>4</issue>
          ):
          <fpage>483</fpage>
          -
          <lpage>531</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Berkooz</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ; Holmes,
          <string-name>
            <given-names>P.</given-names>
            ; and
            <surname>Lumley</surname>
          </string-name>
          ,
          <string-name>
            <surname>J.</surname>
          </string-name>
          <year>1993</year>
          .
          <article-title>The Proper Orthogonal Decomposition in the Analysis of Turbulent Flows</article-title>
          .
          <source>Annu. Rev.</source>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <given-names>Fluid</given-names>
            <surname>Mech</surname>
          </string-name>
          .
          <volume>25</volume>
          (
          <issue>1</issue>
          ):
          <fpage>539</fpage>
          -
          <lpage>575</lpage>
          . doi:
          <volume>10</volume>
          .1146/annurev.fl.
          <volume>25</volume>
          .010193.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Bistrian</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Navon</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>An improved algorithm for the shallow water equations model reduction: Dynamic Mode Decomposition vs POD</article-title>
          .
          <source>Int. J. Numer. Methods Fluids .</source>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Bistrian</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Navon</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Randomized dynamic mode decomposition for nonintrusive reduced order modelling</article-title>
          .
          <source>Int. J. Numer. Methods Eng</source>
          .
          <volume>112</volume>
          :
          <fpage>3</fpage>
          -
          <lpage>25</lpage>
          . doi:
          <volume>10</volume>
          .1002/nme.5499.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Brunton</surname>
            ,
            <given-names>S. L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Proctor</surname>
            ,
            <given-names>J. L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Kutz</surname>
            ,
            <given-names>J. N.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Bialek</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>S. A</surname>
          </string-name>
          .
          <volume>113</volume>
          (
          <issue>15</issue>
          ):
          <fpage>3932</fpage>
          -
          <lpage>3937</lpage>
          . ISSN 10916490. doi:
          <volume>10</volume>
          .1073/pnas.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Carlberg</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Bou-Mosleh</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Farhat</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2011</year>
          .
          <article-title>Efficient nonlinear model reduction via a least-squares Petrov-Galerkin projection and compressive tensor approximations</article-title>
          .
          <source>Int. J. Numer. Methods Eng</source>
          .
          <volume>86</volume>
          :
          <fpage>155</fpage>
          -
          <lpage>181</lpage>
          . doi:
          <volume>10</volume>
          .1002/nme.3050.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Carlberg</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Farhat</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Cortial</surname>
          </string-name>
          , J.; and
          <string-name>
            <surname>Amsallem</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>The GNAT method for nonlinear model reduction: Effective implementation and application to computational fluid dynamics and turbulent flows</article-title>
          .
          <source>J. Comput. Phys</source>
          .
          <volume>242</volume>
          :
          <fpage>623</fpage>
          -
          <lpage>647</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.jcp.
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          2019.
          <article-title>Data-driven discovery of coordinates and governing equations</article-title>
          .
          <source>Proc. Natl. Acad. Sci</source>
          . U. S. A.
          <volume>116</volume>
          (
          <issue>45</issue>
          ):
          <fpage>22445</fpage>
          -
          <lpage>22451</lpage>
          . ISSN 10916490. doi:
          <volume>10</volume>
          .1073/pnas.1906995116.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <surname>Chaturantabut</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Sorensen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2010</year>
          .
          <article-title>Nonlinear model reduction via Discrete Empirical Interpolation</article-title>
          .
          <source>SIAM J. Sci. Comput</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <volume>32</volume>
          (
          <issue>5</issue>
          ):
          <fpage>2737</fpage>
          -
          <lpage>2764</lpage>
          . doi:
          <volume>10</volume>
          .1137/090766498.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          2018.
          <article-title>Neural Ordinary Differential Equations</article-title>
          .
          <source>doi:10.1007/978-3- 662-55774-7 3.</source>
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <surname>Deheuvels</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ; and Martynov,
          <string-name>
            <surname>G.</surname>
          </string-name>
          <year>2008</year>
          .
          <article-title>A Karhunen-Loeve decomposition of a Gaussian process generated by independent pairs of exponential random variables</article-title>
          .
          <source>J. Func. Anal</source>
          .
          <volume>255</volume>
          (
          <issue>9</issue>
          ):
          <fpage>2363</fpage>
          -
          <lpage>2394</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <source>doi:10</source>
          .1016/j.jfa.
          <year>2008</year>
          .
          <volume>07</volume>
          .021.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <string-name>
            <surname>Dupont</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Doucet</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ; and Teh,
          <string-name>
            <surname>Y. W.</surname>
          </string-name>
          <year>2019</year>
          .
          <article-title>Augmented Neural ODEs</article-title>
          . URL http://arxiv.org/abs/
          <year>1904</year>
          .01681.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <string-name>
            <surname>Dutta</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; Farthing,
          <string-name>
            <given-names>M. W.</given-names>
            ;
            <surname>Perracchione</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            ;
            <surname>Savant</surname>
          </string-name>
          , G.; and Putti,
          <string-name>
            <surname>M.</surname>
          </string-name>
          <year>2020</year>
          .
          <article-title>A greedy non-intrusive reduced order model for shallow water equations</article-title>
          . URL https://arxiv.org/abs/
          <year>2002</year>
          .11329.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          2013.
          <article-title>Non-linear Petrov-Galerkin methods for reduced order hyperbolic equations and discontinuous finite element methods</article-title>
          .
          <source>J.</source>
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          <source>Comput. Phys</source>
          .
          <volume>234</volume>
          :
          <fpage>540</fpage>
          -
          <lpage>559</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.jcp.
          <year>2012</year>
          .
          <volume>10</volume>
          .011.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          <string-name><surname>Fang</surname>, <given-names>F.</given-names></string-name>;
          <string-name><surname>Zhang</surname>, <given-names>T.</given-names></string-name>;
          <string-name><surname>Pavlidis</surname>, <given-names>D.</given-names></string-name>;
          <string-name><surname>Pain</surname>, <given-names>C.</given-names></string-name>;
          <string-name><surname>Buchanan</surname>, <given-names>A.</given-names></string-name>; and
          <string-name><surname>Navon</surname>, <given-names>I.</given-names></string-name>
          <year>2014</year>.
          <article-title>Reduced order modelling of an unstructured mesh air pollution model and application in 2D/3D urban street canyons</article-title>.
          <source>Atmos. Env</source>.
          <volume>96</volume>: <fpage>96</fpage>-<lpage>106</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          <string-name><surname>Fasshauer</surname>, <given-names>G.</given-names></string-name>
          <year>2007</year>.
          <source>Meshfree Approximation Methods with MATLAB</source>, volume <volume>6</volume> of Interdisciplinary Mathematical Sciences. World Scientific Publishing Company.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          <string-name><surname>Ferrandis</surname>, <given-names>J. d. A.</given-names></string-name>;
          <string-name><surname>Triantafyllou</surname>, <given-names>M.</given-names></string-name>;
          <string-name><surname>Chryssostomidis</surname>, <given-names>C.</given-names></string-name>; and
          <string-name><surname>Karniadakis</surname>, <given-names>G.</given-names></string-name>
          <year>2019</year>.
          <article-title>Learning functionals via LSTM neural networks for predicting vessel dynamics in extreme sea states</article-title>.
          URL https://arxiv.org/abs/1912.13382.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          <year>2020</year>.
          <article-title>How to train your neural ODE: the world of Jacobian and kinetic regularization</article-title>.
          URL http://arxiv.org/abs/2002.02798.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          <string-name><surname>Freno</surname>, <given-names>B. A.</given-names></string-name>; and
          <string-name><surname>Carlberg</surname>, <given-names>K. T.</given-names></string-name>
          <year>2019</year>.
          <article-title>Machine-learning error models for approximate solutions to parameterized systems of nonlinear equations</article-title>.
          <source>Comput. Methods Appl. Mech. Eng</source>.
          <volume>348</volume>: <fpage>250</fpage>-<lpage>296</lpage>.
          doi:10.1016/j.cma.2019.01.024. URL https://doi.org/10.1016/j.cma.2019.01.024.
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          <string-name><surname>Gholami</surname>, <given-names>A.</given-names></string-name>;
          <string-name><surname>Keutzer</surname>, <given-names>K.</given-names></string-name>; and
          <string-name><surname>Biros</surname>, <given-names>G.</given-names></string-name>
          <year>2019</year>.
          <article-title>ANODE: Unconditionally Accurate Memory-Efficient Gradients for Neural ODEs</article-title>.
          URL https://arxiv.org/abs/1902.10298.
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          <string-name><surname>Ghorbanidehno</surname>, <given-names>H.</given-names></string-name>;
          <string-name><surname>Lee</surname>, <given-names>J.</given-names></string-name>;
          <string-name><surname>Farthing</surname>, <given-names>M.</given-names></string-name>;
          <string-name><surname>Hesser</surname>, <given-names>T.</given-names></string-name>;
          <string-name><surname>Darve</surname>, <given-names>E. F.</given-names></string-name>; and
          <string-name><surname>Kitanidis</surname>, <given-names>P. K.</given-names></string-name>
          <year>2021</year>.
          <article-title>Deep learning technique for fast inference of large-scale riverine bathymetry</article-title>.
          <source>Advances in Water Resources</source>
          <volume>147</volume>: <fpage>103715</fpage>. ISSN 0309-1708.
          doi:https://doi.org/10.1016/j.advwatres.2020.103715. URL http://www.sciencedirect.com/science/article/pii/S0309170819309418.
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          <string-name><surname>Gonzalez</surname>, <given-names>F. J.</given-names></string-name>; and
          <string-name><surname>Balajewicz</surname>, <given-names>M.</given-names></string-name>
          <year>2018</year>.
          <article-title>Deep convolutional recurrent autoencoders for learning low-dimensional feature dynamics of fluid systems</article-title>.
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          <string-name><surname>Guo</surname>, <given-names>M.</given-names></string-name>; and
          <string-name><surname>Hesthaven</surname>, <given-names>J.</given-names></string-name>
          <year>2019</year>.
          <article-title>Data-driven reduced order modeling for time-dependent problems</article-title>.
          <source>Comput. Methods Appl. Mech. Eng</source>.
          <volume>345</volume>: <fpage>75</fpage>-<lpage>99</lpage>.
          doi:10.1016/j.cma.2018.10.029.
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          <string-name><surname>Hesthaven</surname>, <given-names>J.</given-names></string-name>; and
          <string-name><surname>Ubbiali</surname>, <given-names>S.</given-names></string-name>
          <year>2018</year>.
          <article-title>Non-intrusive reduced order modeling of nonlinear problems using neural networks</article-title>.
          <source>J. Comput. Phys</source>.
          <volume>363</volume>: <fpage>55</fpage>-<lpage>78</lpage>.
          doi:10.1016/j.jcp.2018.02.037.
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          <string-name><surname>Iuliano</surname>, <given-names>E.</given-names></string-name>; and
          <string-name><surname>Quagliarella</surname>, <given-names>D.</given-names></string-name>
          <year>2013</year>.
          <article-title>Aerodynamic shape optimization via non-intrusive POD-based surrogate modelling</article-title>.
          In <source>2013 IEEE Congr. Evol. Comput. (CEC 2013)</source>, <fpage>1467</fpage>-<lpage>1474</lpage>. IEEE.
          doi:10.1109/CEC.2013.6557736.
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          <string-name><surname>Jolliffe</surname>, <given-names>I.</given-names></string-name>
          <year>1986</year>.
          <source>Principal Component Analysis</source>. Springer New York, USA.
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          <string-name><surname>Kanaa</surname>, <given-names>D.</given-names></string-name>;
          <string-name><surname>Voleti</surname>, <given-names>V.</given-names></string-name>;
          <string-name><surname>Kahou</surname>, <given-names>S.</given-names></string-name>; and
          <string-name><surname>Pal</surname>, <given-names>C.</given-names></string-name>
          <year>2019</year>.
          <article-title>Simple Video Generation using Neural ODEs</article-title>.
          In <source>Annu. Conf. Neural Inf. Process. Syst. 2019, NeurIPS 2019</source>. Vancouver, BC, Canada.
          URL https://voletiv.github.io/docs/publications/2019e NeurIPSW EncODEDec.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          <string-name><surname>Kim</surname>, <given-names>Y.</given-names></string-name>;
          <string-name><surname>Choi</surname>, <given-names>Y.</given-names></string-name>;
          <string-name><surname>Widemann</surname>, <given-names>D.</given-names></string-name>; and
          <string-name><surname>Zohdi</surname>, <given-names>T.</given-names></string-name>
          <year>2020</year>.
          <article-title>A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder</article-title>.
          URL http://arxiv.org/abs/2009.
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          <string-name><surname>Koopman</surname>, <given-names>B. O.</given-names></string-name>
          <year>1931</year>.
          <article-title>Hamiltonian Systems and Transformation in Hilbert Space</article-title>.
          <source>Proc. Natl. Acad. Sci. U. S. A</source>.
          <volume>17</volume>(<issue>5</issue>): <fpage>315</fpage>-<lpage>318</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          <year>2016</year>.
          <source>Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems</source>. Philadelphia, PA: Society for Industrial and Applied Mathematics, third edition. ISBN 978-1-61197-449-2. doi:10.1137/1.9781611974508.
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          <string-name><surname>Kutz</surname>, <given-names>J. N.</given-names></string-name>;
          <string-name><surname>Fu</surname>, <given-names>X.</given-names></string-name>; and
          <string-name><surname>Brunton</surname>, <given-names>S. L.</given-names></string-name>
          <year>2016</year>.
          <article-title>Multiresolution dynamic mode decomposition</article-title>.
          <source>SIAM J. Appl. Dyn. Syst</source>.
          <volume>15</volume>(<issue>2</issue>): <fpage>713</fpage>-<lpage>735</lpage>.
          doi:10.1137/15M1023543.
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>
          <string-name><surname>Le Clainche</surname>, <given-names>S.</given-names></string-name>; and
          <string-name><surname>Vega</surname>, <given-names>J. M.</given-names></string-name>
          <year>2017</year>.
          <article-title>Higher order dynamic mode decomposition</article-title>.
          <source>SIAM J. Appl. Dyn. Syst</source>.
          <volume>16</volume>(<issue>2</issue>): <fpage>882</fpage>-<lpage>925</lpage>.
          doi:10.1137/15M1054924.
        </mixed-citation>
      </ref>
      <ref id="ref57">
        <mixed-citation>
          <string-name><surname>Lee</surname>, <given-names>K.</given-names></string-name>; and
          <string-name><surname>Carlberg</surname>, <given-names>K. T.</given-names></string-name>
          <year>2020</year>.
          <article-title>Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders</article-title>.
          <source>J. Comput. Phys</source>.
          <volume>404</volume>: <fpage>108973</fpage>.
          doi:10.1016/j.jcp.2019.108973. URL https://doi.org/10.1016/j.jcp.2019.108973.
        </mixed-citation>
      </ref>
      <ref id="ref59">
        <mixed-citation>
          <string-name><surname>Long</surname>, <given-names>Z.</given-names></string-name>;
          <string-name><surname>Lu</surname>, <given-names>Y.</given-names></string-name>; and
          <string-name><surname>Dong</surname>, <given-names>B.</given-names></string-name>
          <year>2019</year>.
          <article-title>PDE-Net 2.0: Learning PDEs from data with a numeric-symbolic hybrid deep network</article-title>.
          <source>J. Comput. Phys</source>.
          <volume>399</volume>: <fpage>108925</fpage>.
          doi:10.1016/j.jcp.2019.108925. URL https://doi.org/10.1016/j.jcp.2019.108925.
        </mixed-citation>
      </ref>
      <ref id="ref61">
        <mixed-citation>
          <string-name><surname>Lozovskiy</surname>, <given-names>A.</given-names></string-name>;
          <string-name><surname>Farthing</surname>, <given-names>M.</given-names></string-name>; and
          <string-name><surname>Kees</surname>, <given-names>C.</given-names></string-name>
          <year>2017</year>.
          <article-title>Evaluation of Galerkin and Petrov-Galerkin model reduction for finite element approximations of the shallow water equations</article-title>.
          <source>Comput. Methods Appl. Mech. Eng</source>.
          <volume>318</volume>: <fpage>537</fpage>-<lpage>571</lpage>.
          doi:10.1016/j.cma.2017.01.027.
        </mixed-citation>
      </ref>
      <ref id="ref62">
        <mixed-citation>
          <string-name><surname>Lozovskiy</surname>, <given-names>A.</given-names></string-name>;
          <string-name><surname>Farthing</surname>, <given-names>M.</given-names></string-name>;
          <string-name><surname>Kees</surname>, <given-names>C.</given-names></string-name>; and
          <string-name><surname>Gildin</surname>, <given-names>E.</given-names></string-name>
          <year>2016</year>.
          <article-title>POD-based Model Reduction for Stabilized Finite Element Approximations of Shallow Water Flows</article-title>.
          <source>J. Comput. Appl. Math</source>.
          <volume>302</volume>: <fpage>50</fpage>-<lpage>70</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref63">
        <mixed-citation>
          <string-name><surname>Lusch</surname>, <given-names>B.</given-names></string-name>;
          <string-name><surname>Nathan Kutz</surname>, <given-names>J.</given-names></string-name>; and
          <string-name><surname>Brunton</surname>, <given-names>S. L.</given-names></string-name>
          <year>2018</year>.
          <article-title>Deep learning for universal linear embeddings of nonlinear dynamics</article-title>.
          <source>Nature Comm</source>.
          <volume>9</volume>(<issue>1</issue>). doi:https://doi.org/10.1038/s41467-018-07210-0.
        </mixed-citation>
      </ref>
      <ref id="ref64">
        <mixed-citation>
          <string-name><surname>Maulik</surname>, <given-names>R.</given-names></string-name>;
          <string-name><surname>Mohan</surname>, <given-names>A.</given-names></string-name>;
          <string-name><surname>Lusch</surname>, <given-names>B.</given-names></string-name>;
          <string-name><surname>Madireddy</surname>, <given-names>S.</given-names></string-name>;
          <string-name><surname>Balaprakash</surname>, <given-names>P.</given-names></string-name>; and
          <string-name><surname>Livescu</surname>, <given-names>D.</given-names></string-name>
          <year>2020</year>.
          <article-title>Time-series learning of latent-space dynamics for reduced-order model closure</article-title>.
          <source>Phys. D Nonlinear Phenom</source>.
          <volume>405</volume>: <fpage>132368</fpage>.
          doi:10.1016/j.physd.2020.132368. URL https://doi.org/10.1016/j.physd.2020.132368.
        </mixed-citation>
      </ref>
      <ref id="ref65">
        <mixed-citation>
          <string-name><surname>Mezić</surname>, <given-names>I.</given-names></string-name>
          <year>2013</year>.
          <article-title>Analysis of Fluid Flows via Spectral Properties of the Koopman Operator</article-title>.
          <source>Ann. Rev. Flu. Mech</source>.
          <volume>45</volume>(<issue>1</issue>): <fpage>357</fpage>-<lpage>378</lpage>.
          doi:10.1146/annurev-fluid-011212-140652.
        </mixed-citation>
      </ref>
      <ref id="ref67">
        <mixed-citation>
          <string-name><surname>Peherstorfer</surname>, <given-names>B.</given-names></string-name>;
          <string-name><surname>Willcox</surname>, <given-names>K.</given-names></string-name>; and
          <string-name><surname>Gunzburger</surname>, <given-names>M.</given-names></string-name>
          <year>2016</year>.
          <article-title>Optimal model management for multifidelity Monte Carlo estimation</article-title>.
          <source>SIAM J. Sci. Comput</source>.
          <volume>38</volume>(<issue>5</issue>): <fpage>A3163</fpage>-<lpage>A3194</lpage>.
          doi:10.1137/15M1046472.
        </mixed-citation>
      </ref>
      <ref id="ref69">
        <mixed-citation>
          <string-name><surname>Proctor</surname>, <given-names>J. L.</given-names></string-name>;
          <string-name><surname>Brunton</surname>, <given-names>S. L.</given-names></string-name>; and
          <string-name><surname>Kutz</surname>, <given-names>J. N.</given-names></string-name>
          <year>2016</year>.
          <article-title>Dynamic mode decomposition with control</article-title>.
          <source>SIAM J. Appl. Dyn. Syst</source>.
          <volume>15</volume>(<issue>1</issue>): <fpage>142</fpage>-<lpage>161</lpage>.
          doi:10.1137/15M1013857.
        </mixed-citation>
      </ref>
      <ref id="ref70">
        <mixed-citation>
          <string-name><surname>Quarteroni</surname>, <given-names>A.</given-names></string-name>;
          <string-name><surname>Manzoni</surname>, <given-names>A.</given-names></string-name>; and
          <string-name><surname>Negri</surname>, <given-names>F.</given-names></string-name>
          <year>2016</year>.
          <source>Reduced Basis Methods for Partial Differential Equations</source>. Springer, Cham.
          doi:10.1007/978-3-319-15431-2.
        </mixed-citation>
      </ref>
      <ref id="ref71">
        <mixed-citation>
          <string-name><surname>Raissi</surname>, <given-names>M.</given-names></string-name>;
          <string-name><surname>Perdikaris</surname>, <given-names>P.</given-names></string-name>; and
          <string-name><surname>Karniadakis</surname>, <given-names>G. E.</given-names></string-name>
          <year>2019</year>.
          <article-title>Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations</article-title>.
          <source>J. Comput. Phys</source>.
          <volume>378</volume>: <fpage>686</fpage>-<lpage>707</lpage>. ISSN 1090-2716.
          doi:10.1016/j.jcp.2018.10.045. URL https://doi.org/10.1016/j.jcp.2018.10.045.
        </mixed-citation>
      </ref>
      <ref id="ref73">
        <mixed-citation>
          <string-name><surname>Rowley</surname>, <given-names>C.</given-names></string-name>;
          <string-name><surname>Mezić</surname>, <given-names>I.</given-names></string-name>;
          <string-name><surname>Bagheri</surname>, <given-names>S.</given-names></string-name>;
          <string-name><surname>Schlatter</surname>, <given-names>P.</given-names></string-name>; and
          <string-name><surname>Henningson</surname>, <given-names>D.</given-names></string-name>
          <year>2009</year>.
          <article-title>Spectral analysis of nonlinear flows</article-title>.
          <source>J. Fluid Mech</source>.
          <volume>641</volume>: <fpage>115</fpage>-<lpage>127</lpage>.
          doi:10.1017/S0022112009992059.
        </mixed-citation>
      </ref>
      <ref id="ref75">
        <mixed-citation>
          <string-name><surname>Rubanova</surname>, <given-names>Y.</given-names></string-name>;
          <string-name><surname>Chen</surname>, <given-names>R. T. Q.</given-names></string-name>; and
          <string-name><surname>Duvenaud</surname>, <given-names>D. K.</given-names></string-name>
          <year>2019</year>.
          <article-title>Latent Ordinary Differential Equations for Irregularly-Sampled Time Series</article-title>.
          In Wallach, H.; Larochelle, H.; Beygelzimer, A.; d'Alché-Buc, F.; Fox, E.; and Garnett, R., eds.,
          <source>Advances in Neural Information Processing Systems</source>, volume <volume>32</volume>,
          <fpage>5320</fpage>-<lpage>5330</lpage>. Curran Associates, Inc.
          URL https://proceedings.neurips.cc/paper/2019/file/42a6845a557bef704ad8ac9cb4461d43-Paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref76">
        <mixed-citation>
          <string-name><surname>Ruthotto</surname>, <given-names>L.</given-names></string-name>; and
          <string-name><surname>Haber</surname>, <given-names>E.</given-names></string-name>
          <year>2019</year>.
          <article-title>Deep Neural Networks Motivated by Partial Differential Equations</article-title>.
          <source>J. Math. Imaging Vis</source>.
          <fpage>2</fpage>-<lpage>10</lpage>.
          doi:10.1007/s10851-019-00903-1.
        </mixed-citation>
      </ref>
      <ref id="ref78">
        <mixed-citation>
          <string-name><surname>San</surname>, <given-names>O.</given-names></string-name>; and
          <string-name><surname>Borggaard</surname>, <given-names>J.</given-names></string-name>
          <year>2015</year>.
          <article-title>Principal interval decomposition framework for POD reduced-order modeling of convective Boussinesq flow</article-title>.
          <source>Int. J. Numer. Methods Fluids</source>
          <volume>78</volume>(<issue>1</issue>): <fpage>37</fpage>-<lpage>62</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref79">
        <mixed-citation>
          <string-name>
            <surname>Sapsis</surname>
            ,
            <given-names>T. P.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Majda</surname>
            ,
            <given-names>A. J.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>Statistically accurate loworder models for uncertainty quantification in turbulent dynamical systems</article-title>
          .
          <source>Proc. Natl. Acad. Sci</source>
          . U. S. A.
          <volume>110</volume>
          (
          <issue>34</issue>
          ):
          <fpage>13705</fpage>
          -
          <lpage>13710</lpage>
          . doi:10.1073/pnas.1313065110.
        </mixed-citation>
      </ref>
      <ref id="ref81">
        <mixed-citation>
          <string-name>
            <surname>Schmid</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <year>2010</year>
          .
          <article-title>Dynamic mode decomposition of numerical and experimental data</article-title>
          .
          <source>J. Fluid Mech</source>
          .
          <volume>656</volume>
          (July 2010):
          <fpage>5</fpage>
          -
          <lpage>28</lpage>
          . doi:10.1109/PIERS-FALL.2017.8293532.
        </mixed-citation>
      </ref>
      <ref id="ref83">
        <mixed-citation>
          <string-name>
            <surname>Sirovich</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <year>1987</year>
          .
          <article-title>Turbulence and the dynamics of coherent structures. Part I: Coherent structures</article-title>
          .
          <source>Quart. Appl. Math</source>
          .
          <volume>45</volume>
          :
          <fpage>561</fpage>
          -
          <lpage>571</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref84">
        <mixed-citation>
          <string-name>
            <surname>Stefanescu</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Sandu</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Navon</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <year>2014</year>
          .
          <article-title>Comparison of POD reduced order strategies for the nonlinear 2D shallow water equations</article-title>
          .
          <source>Int. J. Numer. Methods Fluids</source>
          <volume>76</volume>
          (
          <issue>8</issue>
          ):
          <fpage>497</fpage>
          -
          <lpage>521</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref85">
        <mixed-citation>
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Schaeffer</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>NeuPDE: Neural Network Based Ordinary and Partial Differential Equations for Modeling Time-Dependent Data</article-title>
          .
          <source>Proc. Mach. Learn. Res</source>
          .
          <volume>107</volume>
          :
          <fpage>352</fpage>
          -
          <lpage>372</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref86">
        <mixed-citation>
          <string-name>
            <surname>Taira</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Hemati</surname>
            ,
            <given-names>M. S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Brunton</surname>
            ,
            <given-names>S. L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Duraisamy</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Bagheri</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Dawson</surname>
            ,
            <given-names>S. T.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Yeh</surname>
            ,
            <given-names>C. A.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>Modal analysis of fluid flows: Applications and outlook</article-title>
          .
          <source>AIAA J</source>
          .
          <volume>58</volume>
          (
          <issue>3</issue>
          ):
          <fpage>998</fpage>
          -
          <lpage>1022</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref87">
        <mixed-citation>
          <source>doi:10</source>
          .2514/1.J058462.
        </mixed-citation>
      </ref>
      <ref id="ref88">
        <mixed-citation>
          <string-name>
            <surname>Trahan</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Savant</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Berger</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Farthing</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>McAlpin</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Pettey</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Choudhary</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Dawson</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Formulation and application of the adaptive hydraulics three-dimensional shallow water and transport models</article-title>
          .
          <source>J. Comput. Phys</source>
          .
          <volume>374</volume>
          :
          <fpage>47</fpage>
          -
          <lpage>90</lpage>
          . doi:10.1016/j.jcp.2018.04.055.
        </mixed-citation>
      </ref>
      <ref id="ref89">
        <mixed-citation>
          <string-name>
            <surname>Vermeulen</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Heemink</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2006</year>
          .
          <article-title>Model-reduced variational data assimilation</article-title>
          .
          <source>Monthly Weather Review</source>
          <volume>134</volume>
          :
          <fpage>2888</fpage>
          -
          <lpage>2899</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref90">
        <mixed-citation>
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ripamonti</surname>
          </string-name>
          , N.; and
          <string-name>
            <surname>Hesthaven</surname>
            ,
            <given-names>J. S.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>Recurrent neural network closure of parametric POD-Galerkin reduced-order models based on the Mori-Zwanzig formalism</article-title>
          .
          <source>J. Comput. Phys</source>
          .
          <volume>410</volume>
          :
          <fpage>109402</fpage>
          . doi:10.1016/j.jcp.2020.109402.
        </mixed-citation>
      </ref>
      <ref id="ref92">
        <mixed-citation>
          <string-name>
            <surname>Willcox</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <year>2006</year>
          .
          <article-title>Unsteady flow sensing and estimation via the gappy Proper Orthogonal Decomposition</article-title>
          .
          <source>Comput. Fluids</source>
          <volume>35</volume>
          (
          <issue>2</issue>
          ):
          <fpage>208</fpage>
          -
          <lpage>226</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref93">
        <mixed-citation>
          <string-name>
            <surname>Xiao</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Error estimation of the parametric non-intrusive reduced order model using machine learning</article-title>
          .
          <source>Comput. Methods Appl. Mech. Eng</source>
          .
          <volume>355</volume>
          :
          <fpage>513</fpage>
          -
          <lpage>534</lpage>
          . doi:10.1016/j.cma.2019.06.018.
        </mixed-citation>
      </ref>
      <ref id="ref95">
        <mixed-citation>
          <string-name>
            <surname>Xiao</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Fang</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Buchan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Pain</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Navon</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Du</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <year>2014</year>
          .
          <article-title>Non-linear model reduction for the Navier-Stokes equations using residual DEIM method</article-title>
          .
          <source>J. Comput. Phys</source>
          .
          <volume>263</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          . doi:10.1016/j.jcp.2014.01.011.
        </mixed-citation>
      </ref>
      <ref id="ref96">
        <mixed-citation>
          <string-name>
            <surname>Xiao</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Fang</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Pain</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>Non-intrusive reduced-order modelling of the Navier-Stokes equations based on RBF interpolation</article-title>
          .
          <source>Int. J. Numer. Methods Fluids</source>
          <volume>79</volume>
          :
          <fpage>580</fpage>
          -
          <lpage>595</lpage>
          . doi:10.1002/fld.406.
        </mixed-citation>
      </ref>
      <ref id="ref98">
        <mixed-citation>
          <string-name>
            <surname>Xiao</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Fang</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Pain</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Navon</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>A parameterized non-intrusive reduced order model and error analysis for general time-dependent nonlinear partial differential equations and its applications</article-title>
          .
          <source>Comput. Methods Appl. Mech. Eng</source>
          .
          <volume>317</volume>
          :
          <fpage>868</fpage>
          -
          <lpage>889</lpage>
          . doi:10.1016/j.cma.2016.12.033.
        </mixed-citation>
      </ref>
      <ref id="ref100">
        <mixed-citation>
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Unterman</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Arodz</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Approximation Capabilities of Neural ODEs and Invertible Residual Networks</article-title>
          . CoRR abs/1907.12998. URL http://arxiv.org/abs/1907.12998.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>