Graph-Informed Neural Networks

Søren Taverniers,1 Eric J. Hall,2 Markos A. Katsoulakis,3 Daniel M. Tartakovsky4

1 Palo Alto Research Center (PARC), 3333 Coyote Hill Road, Palo Alto, CA 94304, USA
2 Division of Mathematics, University of Dundee, Dundee, DD1 4HN, UK
3 Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, MA 01003, USA
4 Department of Energy Resources Engineering, Stanford University, Stanford, CA 94305, USA

ehall001@dundee.ac.uk (Eric J. Hall), tartakovsky@stanford.edu (Daniel M. Tartakovsky)


Abstract

Graph-Informed Neural Networks (GINNs) present a strategy for incorporating domain knowledge into scientific machine learning for complex physical systems. The construction utilizes probabilistic graphical models (PGMs) to incorporate expert knowledge, available data, constraints, etc. with physics-based models such as systems of ordinary and partial differential equations (ODEs and PDEs). Computationally intensive nodes in this hybrid model are replaced by the hidden nodes of a neural network (i.e., learned features). Once trained, the resulting GINN surrogate can cheaply generate physically relevant predictions at scale, thereby enabling robust sensitivity analysis and uncertainty quantification (UQ). As proof of concept, we build a GINN for a multiscale model of electrical double-layer capacitor dynamics embedded into a Bayesian network (BN) PDE hybrid model.
In recent years, several approaches have been proposed to inform deep neural networks (DNNs) of physical laws and constraints to ensure they produce physically sound predictions. Two main classes of DNNs for building surrogate representations of physics-based models described by PDEs have emerged: physics-informed NNs (PINNs) (Raissi, Perdikaris, and Karniadakis 2019) and "data-free" physics-constrained NNs (Zhu et al. 2019). Our approach uses the well-known concept of PGMs to embed domain knowledge, including correlations between control variables (CVs), into standard DNNs by modifying only their input layer structure and enabling the use of a standard penalty in the loss function, e.g., ℓ1 (lasso regression) or ℓ2 (ridge regression) regularization. This non-intrusive approach permits the use of off-the-shelf software like TensorFlow or PyTorch with minimal effort from the user, while remaining compatible with PINNs and other customized NN architectures, which can be used to replace individual computational bottlenecks in the physics-based representation.

GINNs are particularly suited to enhance the computational workflow for complex systems featuring intrinsic computational bottlenecks and intricate physical relations among input CVs. Hence, to showcase the potential of this approach, we apply a GINN to simulation-based decision-making in electrical double-layer (EDL) supercapacitors, where it is deployed to build highly accurate kernel density estimators (KDEs) for the probability density functions (PDFs) of relevant output quantities of interest (QoIs).

Copyright © 2021, for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Figure 1: A domain-aware PGM encoding structured priors on CVs serves as input to both the BN PDE (lower route) and trained GINN (upper route) for a homogenized model of ion diffusion in supercapacitors (Taverniers et al. 2020).

Constructing and training a GINN

Simulation-based decision-making for design tasks involving complex multiscale/multiphysics systems requires predicting the impact of tunable CVs on the system's QoIs. Typically, this is modeled by recasting the problem in a probabilistic framework where CVs and QoIs are represented as random quantities that can be sampled from their corresponding probability distributions. For most real-world applications, these are continuous, non-Gaussian variables that need to be characterized by their full PDF rather than through a finite set of moments.
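To make the probabilistic recasting concrete, a minimal sketch of sampling from a structured prior follows. The CV names, distributions, and parameters below are illustrative stand-ins (not the paper's actual priors for the supercapacitor testbed): a parent CV is drawn from a non-Gaussian marginal, and a child CV is drawn conditionally on it, as a BN edge would encode.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cvs(n):
    """Draw n joint samples of two toy control variables from a
    structured prior: 'porosity' is lognormal (non-Gaussian), and
    'debye' depends on porosity, mimicking a BN edge."""
    porosity = rng.lognormal(mean=-1.0, sigma=0.25, size=n)      # parent node
    debye = rng.gamma(shape=2.0, scale=0.05 * porosity, size=n)  # child node: scale tied to parent
    return porosity, debye

porosity, debye = sample_cvs(10_000)
# The BN edge induces positive correlation between the two CVs,
# which the GINN's input layer can then exploit.
corr = np.corrcoef(porosity, debye)[0, 1]
```

Samples drawn this way feed both the physics-based model (to generate training data) and, later, the trained surrogate (for cheap prediction).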
Figure 1 visualizes the construction of a GINN surrogate for a multiscale model of EDL supercapacitor dynamics. A BN, a type of directed acyclic PGM, systematically incorporates domain knowledge into the physics-based model through structured priors on CVs, resulting in a hybrid BN PDE model for macroscopic diffusion QoIs. The GINN retains the structured priors as inputs but replaces the hybrid model's computationally intensive nodes, related to upscaling via homogenization, with learned features to speed up the generation of QoIs while maintaining physical relevance.
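One way to read "modifying only the input layer structure" is as a binary mask derived from the PGM's adjacency, zeroing weights between CVs and first-layer hidden nodes that the graph does not connect, with an ℓ1 penalty added to the loss. The following numpy sketch is illustrative only: the mask, shapes, and penalty weight are invented for the example, not taken from the paper's architecture.

```python
import numpy as np

# Hypothetical graph-informed input layer: 3 CVs, 4 first-layer hidden nodes.
# mask[i, j] = 1 iff CV i is connected to hidden node j in the PGM.
n_cv, n_hidden = 3, 4
mask = np.array([[1, 1, 0, 0],    # CV 0 feeds hidden nodes 0, 1
                 [0, 1, 1, 0],    # CV 1 feeds hidden nodes 1, 2
                 [0, 0, 1, 1]])   # CV 2 feeds hidden nodes 2, 3
W = np.random.default_rng(1).normal(size=(n_cv, n_hidden)) * mask

def layer(x, W, b):
    """ReLU hidden layer; masked entries of W stay exactly zero."""
    return np.maximum(0.0, x @ W + b)

def l1_penalty(W, lam=1e-3):
    """Lasso-style regularizer to add to the training loss."""
    return lam * np.abs(W).sum()

x = np.ones((1, n_cv))
h = layer(x, W, np.zeros(n_hidden))
```

In an off-the-shelf framework the same effect is obtained by multiplying the kernel by the mask inside the forward pass and passing a standard ℓ1 or ℓ2 kernel regularizer, which is why the approach is non-intrusive.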
The GINN workflow, summarized in Fig. 2, consists of:

1. Data generation: Generate Nsam input-output (io) samples, divided into Ntrain training and Ntest test samples.
2. Training: Train the GINN with the Ntrain training samples.
3. Testing: Test the trained GINN's ability to handle unseen data using the Ntest test samples.
4. Repeat steps 1 through 3 (modifying Ntrain) until both the training and test error tolerances are satisfied.
5. Prediction: Draw N^pred_sam inputs from the structured priors on the CVs and predict corresponding QoIs with the trained GINN surrogate.

Figure 3: Estimated marginal densities for the QoIs in the supercapacitor testbed based on 8 × 10^3 samples computed with the hybrid BN PDE (solid/blue) or 10^7 samples computed with the GINN (dashed/red) (Hall et al. 2021).
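The five-step workflow can be sketched as a loop. Everything below is a stand-in for illustration: a cheap one-dimensional "physics model" and a polynomial least-squares fit play the roles of the hybrid BN PDE model and the GINN, respectively.

```python
import numpy as np

rng = np.random.default_rng(2)

def physics_model(x):
    """Stand-in for the expensive hybrid BN-PDE model (illustrative only)."""
    return np.sin(x) + 0.5 * x

def fit_surrogate(x, y, degree=5):
    """Stand-in 'training': a least-squares polynomial surrogate."""
    return np.polynomial.Polynomial.fit(x, y, degree)

tol, n_train = 0.05, 32
for _ in range(10):                       # Step 4: repeat 1-3 until tolerances are met
    x_train = rng.uniform(0, 3, n_train)  # Step 1: draw io samples from the prior
    x_test = rng.uniform(0, 3, n_train // 2)
    surrogate = fit_surrogate(x_train, physics_model(x_train))  # Step 2: train
    train_err = np.max(np.abs(surrogate(x_train) - physics_model(x_train)))
    test_err = np.max(np.abs(surrogate(x_test) - physics_model(x_test)))  # Step 3: test
    if train_err < tol and test_err < tol:
        break
    n_train *= 2                          # enlarge the training set and retry

# Step 5: cheap prediction at scale from the (toy) prior on the CV
x_pred = rng.uniform(0, 3, 100_000)
qoi_pred = surrogate(x_pred)
```

The key economics are visible even in the toy version: the expensive model is evaluated only on the small training and test sets, while predictions are drawn in bulk from the fitted surrogate.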
Figure 2: Overview of the global algorithm for GINN-based training, testing, and predicting (Hall et al. 2021).

GINN-based decision-making

A GINN's ability to cheaply generate io sample pairs can be leveraged to construct KDEs for the marginal and joint PDFs of QoIs with appropriate confidence intervals. Such nonparametric estimators form the building blocks for UQ tasks such as sensitivity analysis.

In Fig. 3, we plot KDEs for QoIs based on 8 × 10^3 samples simulated using the BN PDE (the minimum amount of io data needed to train the GINN) and on 10^7 samples predicted with the GINN. We find that the GINN-predicted KDEs do not include spurious features observed with the smaller, expensive-to-compute data set generated with the physics-based model, and achieve much tighter confidence intervals for an equivalent computational cost (since learning the GINN's parameters and predicting new data with the GINN carries a negligible computational expense).

Conclusions

Our full analysis, in (Hall et al. 2021; Taverniers et al. 2020), suggests that GINNs, which take structured PGMs as inputs, produce physically relevant QoIs that can be used to generate KDEs for robust and reliable sensitivity analysis and further UQ. Trained on a small set of high-fidelity input-output data from a domain-aware hybrid model, GINNs can quickly generate large amounts of output predictions, yielding an approach that is orders of magnitude faster than counterparts that rely on physics-based models alone.

Acknowledgments

This work was performed while S. T. was employed by Stanford University.

References

Hall, E. J.; Taverniers, S.; Katsoulakis, M. A.; and Tartakovsky, D. M. 2021. GINNs: Graph-Informed Neural Networks for Multiscale Physics. J. Comput. Phys. 433: 110192. doi:10.1016/j.jcp.2021.110192.

Raissi, M.; Perdikaris, P.; and Karniadakis, G. 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378: 686–707.

Taverniers, S.; Hall, E. J.; Katsoulakis, M. A.; and Tartakovsky, D. M. 2020. Mutual Information for Explainable Deep Learning of Multiscale Systems. ArXiv:2009.04570.

Zhu, Y.; Zabaras, N.; Koutsourelakis, P.-S.; and Perdikaris, P. 2019. Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. J. Comput. Phys. 394: 56–81.