<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>DeepXDE: A Deep Learning Library for Solving Differential Equations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Lu Lu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xuhui Meng</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhiping Mao</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>George Em Karniadakis</string-name>
          <email>karniadakis@brown.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Division of Applied Mathematics, Brown University Providence</institution>
          ,
          <addr-line>RI 02906 george</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Deep learning has achieved remarkable success in diverse applications; however, its use in solving partial differential equations (PDEs) has emerged only recently. Here, we present an overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation. PINNs solve inverse problems similarly to forward problems. We also present a Python library for PINNs, DeepXDE. DeepXDE supports complex-geometry domains based on the technique of constructive solid geometry, and enables the user code to be compact, resembling closely the mathematical formulation. We introduce the usage of DeepXDE, and we also demonstrate the capability of PINNs and the user-friendliness of DeepXDE for two different examples.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>More recently, solving partial differential equations (PDEs) via deep learning has emerged as a potentially new sub-field under the name of Scientific Machine Learning. To solve a PDE via deep learning, a key step is to constrain the neural network to minimize the PDE residual, and several approaches have been proposed to accomplish this. Compared to traditional mesh-based methods, such as the finite difference method and the finite element method, deep learning could be a mesh-free approach by taking advantage of automatic differentiation, and could break the curse of dimensionality. Among these approaches, one could use the PDE in strong form directly; in this form, automatic differentiation could be used directly to avoid truncation errors. This approach is called physics-informed neural networks (PINNs). An attractive feature of PINNs is that they can be used to solve inverse problems similarly to forward problems.</p>
      <p>In this paper, we present PINN algorithms implemented in a Python library, DeepXDE (https://github.com/lululxvi/deepxde). DeepXDE can be used to solve multi-physics problems, and supports complex-geometry domains based on the technique of constructive solid geometry (CSG), hence avoiding tedious and time-consuming computational geometry tasks. Last but not least, DeepXDE is designed to make the user code stay compact and manageable, resembling closely the mathematical formulation.</p>
    </sec>
    <sec id="sec-2">
      <title>1 Physics-informed neural networks</title>
      <p>We consider the PDE parameterized by $\lambda$ for the solution $u(\mathbf{x})$ with $\mathbf{x} = (x_1, \ldots, x_d)$ defined on a domain $\Omega \subset \mathbb{R}^d$:
$$f\left(\mathbf{x}; \frac{\partial u}{\partial x_1}, \ldots, \frac{\partial u}{\partial x_d}; \frac{\partial^2 u}{\partial x_1 \partial x_1}, \ldots, \frac{\partial^2 u}{\partial x_1 \partial x_d}; \ldots; \lambda\right) = 0, \qquad (1)$$
with suitable boundary conditions (BCs) $\mathcal{B}(u, \mathbf{x}) = 0$ on $\partial\Omega$. For time-dependent problems, we consider time $t$ as a special component of $\mathbf{x}$.</p>
      <fig id="fig-1">
        <caption><p>Schematic of a PINN for solving a diffusion equation: a feed-forward network $\mathrm{NN}(x, t; \theta)$ produces the surrogate $\hat{u}$; automatic differentiation forms the PDE residual $\frac{\partial \hat{u}}{\partial t} - \lambda \frac{\partial^2 \hat{u}}{\partial x^2}$ on the residual points $\mathcal{T}_f$, and the BC/IC residuals $\hat{u}(x, t) - g_D(x, t)$ and $\frac{\partial \hat{u}}{\partial n}(x, t) - g_R(u, x, t)$ on the points $\mathcal{T}_b$; the resulting loss is minimized to obtain the parameters $\theta^*$.</p></caption>
      </fig>
      <p>The algorithm of PINNs is shown visually in the schematic of Fig. 1, solving a diffusion equation. We explain each step as follows. In a PINN, we first construct a neural network $\hat{u}(\mathbf{x}; \theta)$ as a surrogate of the solution $u(\mathbf{x})$. Here, $\theta = \{\mathbf{W}^\ell, \mathbf{b}^\ell\}_{1 \le \ell \le L}$ is the set of all weight matrices and bias vectors in the network $\hat{u}$. One advantage of choosing neural networks as the surrogate of $u$ is that we can take the derivatives of $\hat{u}$ with respect to $\mathbf{x}$ by automatic differentiation. In the next step, we need to restrict $\hat{u}$ to satisfy the PDE and BCs. We only restrict $\hat{u}$ on some scattered points, i.e., the training data $\mathcal{T} = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_{|\mathcal{T}|}\}$ of size $|\mathcal{T}|$. $\mathcal{T}$ is comprised of the two sets $\mathcal{T}_f \subset \Omega$ and $\mathcal{T}_b \subset \partial\Omega$, which are the points in the domain and on the boundary, respectively. We refer to $\mathcal{T}_f$ and $\mathcal{T}_b$ as the sets of “residual points”. To measure the discrepancy between $\hat{u}$ and the constraints, we consider the loss defined as
$$\mathcal{L}(\theta; \mathcal{T}) = w_f \mathcal{L}_f(\theta; \mathcal{T}_f) + w_b \mathcal{L}_b(\theta; \mathcal{T}_b), \qquad (2)$$
where $\mathcal{L}_f(\theta; \mathcal{T}_f) = \frac{1}{|\mathcal{T}_f|} \sum_{\mathbf{x} \in \mathcal{T}_f} \left\| f\left(\mathbf{x}; \frac{\partial \hat{u}}{\partial x_1}, \ldots; \lambda\right) \right\|_2^2$, $\mathcal{L}_b(\theta; \mathcal{T}_b) = \frac{1}{|\mathcal{T}_b|} \sum_{\mathbf{x} \in \mathcal{T}_b} \left\| \mathcal{B}(\hat{u}, \mathbf{x}) \right\|_2^2$, and $w_f$ and $w_b$ are the weights. In the last step, the procedure of searching for a good $\theta$ by minimizing the loss $\mathcal{L}(\theta; \mathcal{T})$ using gradient-based optimizers is called “training”.</p>
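      <p>To make the training step concrete, the following is a minimal sketch of the loss in Eq. (2) for the diffusion equation of Fig. 1, written with TensorFlow automatic differentiation; the network architecture, the value of $\lambda$, and the helper names (pde_residual, pinn_loss) are our own illustrative choices, not part of DeepXDE.</p>
      <preformat>
import tensorflow as tf

# Surrogate u_hat(x, t; theta): a small feed-forward network (illustrative).
net = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="tanh"),
    tf.keras.layers.Dense(20, activation="tanh"),
    tf.keras.layers.Dense(1),
])
lam = 1.0  # diffusion coefficient lambda (known in the forward problem)

def pde_residual(x, t):
    # Automatic differentiation gives du/dt and d2u/dx2 exactly,
    # with no truncation error from a mesh.
    with tf.GradientTape() as outer:
        outer.watch(x)
        with tf.GradientTape(persistent=True) as inner:
            inner.watch([x, t])
            u = net(tf.concat([x, t], axis=1))
        u_t = inner.gradient(u, t)
        u_x = inner.gradient(u, x)
    u_xx = outer.gradient(u_x, x)
    return u_t - lam * u_xx  # residual of du/dt - lambda d2u/dx2 = 0

def pinn_loss(x_f, t_f, x_b, t_b, g_b, w_f=1.0, w_b=1.0):
    # L_f: mean squared PDE residual on the points T_f.
    loss_f = tf.reduce_mean(tf.square(pde_residual(x_f, t_f)))
    # L_b: mean squared boundary/initial error on the points T_b.
    loss_b = tf.reduce_mean(tf.square(net(tf.concat([x_b, t_b], axis=1)) - g_b))
    return w_f * loss_f + w_b * loss_b
</preformat>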
      <p>In inverse problems, there are some unknown parameters $\lambda$ in Eq. (1), but we have extra information on the points $\mathcal{T}_i$: $\mathcal{I}(u, \mathbf{x}) = 0$, for $\mathbf{x} \in \mathcal{T}_i$. PINNs solve inverse problems by adding an extra term to Eq. (2): $\mathcal{L}_i(\theta, \lambda; \mathcal{T}_i) = \frac{1}{|\mathcal{T}_i|} \sum_{\mathbf{x} \in \mathcal{T}_i} \left\| \mathcal{I}(\hat{u}, \mathbf{x}) \right\|_2^2$. We then optimize $\theta$ and $\lambda$ together: $\theta^*, \lambda^* = \arg\min_{\theta, \lambda} \mathcal{L}(\theta, \lambda; \mathcal{T})$.</p>
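      <p>Continuing the sketch above, the extra inverse-problem term can be added by promoting the unknown parameter to a trainable variable, so that a gradient-based optimizer updates $\lambda$ and $\theta$ together; again, all names are illustrative.</p>
      <preformat>
# Unknown PDE parameter lambda, now a trainable variable; it must be
# passed to the optimizer together with net.trainable_variables.
lam = tf.Variable(0.05)

def inverse_loss(x_i, u_i):
    # L_i: mean squared mismatch between the surrogate u_hat and the
    # extra observations u_i at the points T_i (inputs x_i = (x, t)).
    return tf.reduce_mean(tf.square(net(x_i) - u_i))
</preformat>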
    </sec>
    <sec id="sec-3">
      <title>2 DeepXDE usage</title>
      <p>In this section, we introduce the usage of DeepXDE. DeepXDE makes the code stay compact, resembling closely the mathematical formulation. Solving differential equations in DeepXDE is no more than specifying the problem using the built-in modules, including the computational domain (geometry and time), differential equations, ICs, BCs, constraints, training data, network architecture, and training hyperparameters. The workflow is shown in Procedure 1.</p>
      <fig id="fig-2">
        <caption><p>Examples of constructive solid geometry (CSG): two primitive geometries $A$ and $B$ combined via union (A | B), difference (A - B), and intersection (A &amp; B).</p></caption>
      </fig>
      <p>In DeepXDE, the built-in primitive geometries include interval, triangle, rectangle, polygon, disk, cuboid and sphere. Other geometries can be constructed from these primitives using three boolean operations: union (|), difference (-) and intersection (&amp;). This technique is called constructive solid geometry (CSG); see Fig. 2 for examples.</p>
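      <p>A minimal sketch of the CSG operations in DeepXDE's geometry module (the specific rectangle and disk are our own illustrative choices):</p>
      <preformat>
import deepxde as dde

# Two primitive geometries.
rectangle = dde.geometry.Rectangle(xmin=[0, 0], xmax=[2, 1])
disk = dde.geometry.Disk([1, 1], 0.5)

# The three boolean CSG operations described above.
union = rectangle | disk
difference = rectangle - disk
intersection = rectangle &amp; disk
</preformat>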
      <p>DeepXDE supports four standard BCs, including Dirichlet, Neumann, Robin, and Periodic, and a more general BC can be defined using OperatorBC. The initial condition can be defined using IC. There are two networks available in DeepXDE: feed-forward neural network (maps.FNN) and residual neural network (maps.ResNet). It is also convenient to choose different training hyperparameters, such as loss types, metrics, optimizers, learning rate schedules, initializations and regularizations.</p>
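      <p>For instance, the BCs and IC of a time-dependent problem on an interval can be specified as follows (a sketch; the particular boundary values and the initial profile are our own illustrative choices):</p>
      <preformat>
import numpy as np
import deepxde as dde

geom = dde.geometry.Interval(0, 1)
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

# Dirichlet BC u = 0 on the whole spatial boundary.
bc = dde.DirichletBC(geomtime, lambda x: 0, lambda x, on_boundary: on_boundary)
# Initial condition u(x, 0) = sin(pi x).
ic = dde.IC(
    geomtime, lambda x: np.sin(np.pi * x[:, 0:1]), lambda x, on_initial: on_initial
)
</preformat>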
      <p>As an example of an inverse problem, we consider a diffusion-reaction system in porous media, in which the evolution of the solute concentrations $C_A$, $C_B$ and $C_C$ ($A + 2B \rightarrow C$) is described by
$$\frac{\partial C_A}{\partial t} = D \frac{\partial^2 C_A}{\partial x^2} - k_f C_A C_B^2, \qquad \frac{\partial C_B}{\partial t} = D \frac{\partial^2 C_B}{\partial x^2} - 2 k_f C_A C_B^2,$$
for $x \in [0, 1]$ and $t \in [0, 10]$, with the IC $C_A(x, 0) = C_B(x, 0) = e^{-20x}$ and the BCs $C_A(0, t) = C_B(0, t) = 1$, $C_A(1, t) = C_B(1, t) = 0$. We estimate the diffusion coefficient $D = 2 \times 10^{-3}$ and the reaction rate $k_f = 0.1$ based on 40000 observations of the concentrations $C_A$ and $C_B$ in the spatiotemporal domain. The identified $D$ ($1.98 \times 10^{-3}$) and $k_f$ (0.0971) are displayed in Fig. 3.</p>
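      <p>A hedged sketch of how this estimation can be set up in DeepXDE: the unknowns $D$ and $k_f$ become trainable variables that enter the PDE residual (dde.Variable and the external_trainable_variables argument follow recent DeepXDE releases and may differ in older versions).</p>
      <preformat>
import deepxde as dde

D = dde.Variable(1.0)    # unknown diffusion coefficient (initial guess)
kf = dde.Variable(0.05)  # unknown reaction rate (initial guess)

def pde(x, y):
    # y has two components, C_A and C_B; the inputs are x = (x, t).
    ca, cb = y[:, 0:1], y[:, 1:2]
    ca_t = dde.grad.jacobian(y, x, i=0, j=1)
    cb_t = dde.grad.jacobian(y, x, i=1, j=1)
    ca_xx = dde.grad.hessian(y, x, component=0, i=0, j=0)
    cb_xx = dde.grad.hessian(y, x, component=1, i=0, j=0)
    # Residuals of the two reaction-diffusion equations above.
    return [
        ca_t - D * ca_xx + kf * ca * cb ** 2,
        cb_t - D * cb_xx + 2 * kf * ca * cb ** 2,
    ]
</preformat>
      <p>The 40000 concentration observations enter the loss as the extra term of Section 1, and passing external_trainable_variables=[D, kf] to Model.compile lets the optimizer update $D$ and $k_f$ jointly with the network weights.</p>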
      <fig id="fig-3">
        <caption><p>The identified diffusion coefficient $D$ and reaction rate $k_f$ versus the number of iterations ($\times 10^4$), compared with the true values $D = 2 \times 10^{-3}$ and $k_f = 0.1$.</p></caption>
      </fig>
      <boxed-text id="procedure-1">
        <label>Procedure 1</label>
        <caption><p>Usage of DeepXDE for solving differential equations.</p></caption>
        <list list-type="order">
          <list-item><p>Specify the computational domain using the geometry module.</p></list-item>
          <list-item><p>Specify the differential equations using the grammar of TensorFlow.</p></list-item>
          <list-item><p>Specify the boundary and initial conditions.</p></list-item>
          <list-item><p>Combine the geometry, PDE, and IC/BCs together into data.PDE or data.TimePDE for time-independent or time-dependent problems, respectively. To specify training data, we can either set the specific point locations, or only set the number of points and then DeepXDE will sample the required number of points on a grid or randomly.</p></list-item>
          <list-item><p>Construct a neural network using the maps module.</p></list-item>
          <list-item><p>Define a Model by combining the PDE problem in Step 4 and the neural net in Step 5.</p></list-item>
          <list-item><p>Call Model.compile to set the optimization hyperparameters, such as optimizer and learning rate. The weights in Eq. (2) can be set here by loss_weights.</p></list-item>
          <list-item><p>Call Model.train to train the network from random initialization or a pre-trained model using the argument model_restore_path. It is extremely flexible to monitor and modify the training behavior using callbacks.</p></list-item>
          <list-item><p>Call Model.predict to predict the PDE solution at different locations.</p></list-item>
        </list>
      </boxed-text>
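      <p>Putting Procedure 1 together, the following is a minimal sketch for a forward problem, the 1D Poisson equation $-u'' = \pi^2 \sin(\pi x)$ on $[-1, 1]$ with zero Dirichlet BCs; the problem, network size, and training settings are our own illustrative choices, and argument names such as epochs may differ across DeepXDE versions.</p>
      <preformat>
import numpy as np
import deepxde as dde
from deepxde.backend import tf

def pde(x, y):
    # Step 2: the PDE residual, written with TensorFlow grammar.
    dy_xx = dde.grad.hessian(y, x)
    return -dy_xx - np.pi ** 2 * tf.sin(np.pi * x)

# Step 1: the computational domain.
geom = dde.geometry.Interval(-1, 1)
# Step 3: the boundary condition u = 0 on both endpoints.
bc = dde.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
# Step 4: combine geometry, PDE, and BC; sample 16 residual points in the
# domain and 2 on the boundary.
data = dde.data.PDE(geom, pde, bc, num_domain=16, num_boundary=2)
# Step 5: a feed-forward network (maps.FNN) with 3 hidden layers of width 50.
net = dde.maps.FNN([1] + [50] * 3 + [1], "tanh", "Glorot uniform")
# Step 6: the Model.
model = dde.Model(data, net)
# Step 7: optimization hyperparameters.
model.compile("adam", lr=0.001)
# Step 8: training.
model.train(epochs=10000)
# Step 9: predict the solution at new locations.
x = np.linspace(-1, 1, 101)[:, None]
u_pred = model.predict(x)
</preformat>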
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>