Stiff-PINN: Physics-Informed Neural Network for Stiff Chemical Kinetics

Weiqi Ji 1, Weilun Qiu 2, Zhiyu Shi 2, Shaowu Pan 3, Sili Deng 1*

1 Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA
2 College of Engineering, Peking University, Beijing, China
3 Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI

silideng@mit.edu




Abstract

The recently developed physics-informed neural network (PINN) has achieved success in many science and engineering disciplines by encoding physics laws into the loss functions of the neural network, such that the network not only conforms to the measurements and the initial and boundary conditions but also satisfies the governing equations. This work first investigates the performance of PINN in solving stiff chemical kinetic problems whose governing equations are stiff ordinary differential equations (ODEs). The results elucidate the challenges of utilizing PINN in stiff ODE systems. Consequently, we employ the Quasi-Steady-State Assumption (QSSA) to reduce the stiffness of the ODE systems, and PINN can then be successfully applied to the converted non-stiff or mildly stiff systems. The results therefore suggest that stiffness could be the major reason for the failure of the regular PINN in the studied stiff chemical kinetic systems. The developed Stiff-PINN approach, which utilizes QSSA to enable PINN to solve stiff chemical kinetics, shall open the possibility of applying PINN to various reaction-diffusion systems involving stiff dynamics.

Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).


Introduction

Deep learning has enabled advances in many scientific and engineering disciplines, such as computer vision, natural language processing, and autonomous driving. Depending on the application, many different neural network architectures have been developed, including deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), and graph neural networks (GNNs). Some of them have also been employed for data-driven physics modeling [1–8], including turbulent flow modeling [9] and chemical kinetic modeling [10]. These different architectures introduce specific regularizations to the neural network based on the nature of the task, such as the scale and rotation invariance of the convolutional kernel in CNNs. Among them, the recently developed physics-informed neural network (PINN) approach [11–17] enables the construction of the solution space of differential equations using deep neural networks with space and time coordinates as the inputs. The governing equations (mainly differential equations) are enforced by minimizing their residual loss function, evaluated via automatic differentiation, so that they act as a physics regularization of the deep neural network. This framework permits solving differential equations (i.e., forward problems) and conducting parameter inference from observations (i.e., inverse problems). PINN has been employed to predict the solutions of the Burgers' equation, the Navier–Stokes equations, and the Schrödinger equation [12]. To enhance the robustness and generality of PINN, multiple variations of PINN have also been developed, such as Variational PINNs [18], Parareal PINNs [19], and nonlocal PINNs [20].
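To make the residual-loss construction concrete, the following is a minimal sketch of a PINN for a scalar toy ODE in PyTorch. It is not the authors' implementation: the network size, optimizer settings, number of iterations, and the illustrative right-hand side f are assumptions made here for demonstration only.

import torch

# A small network maps time t to the state y(t).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def f(y):
    # Hypothetical right-hand side of the toy ODE dy/dt = -y (illustration only).
    return -y

t0, y0 = 0.0, 1.0  # initial condition y(t0) = y0
t = torch.linspace(0.0, 5.0, 200).reshape(-1, 1).requires_grad_(True)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    optimizer.zero_grad()
    y = net(t)
    # dy/dt obtained by automatic differentiation of the network output.
    dydt = torch.autograd.grad(y, t, torch.ones_like(y), create_graph=True)[0]
    loss_res = torch.mean((dydt - f(y)) ** 2)            # ODE residual loss
    loss_ic = ((net(torch.tensor([[t0]])) - y0) ** 2).mean()  # initial-condition loss
    loss = loss_res + loss_ic
    loss.backward()
    optimizer.step()

For a chemical kinetic problem, the same construction would map t to the vector of species concentrations and use the reaction source terms as the right-hand side; the residual and initial-condition losses are combined in the same way.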
Despite the successful demonstrations of PINN in many of the above works, Wang et al. [21] investigated a fundamental mode of failure of PINN related to numerical stiffness, which leads to unbalanced back-propagated gradients between the loss function for the initial/boundary conditions and the loss function for the residuals of the differential equations during model training. In addition to numerical stiffness, physical stiffness might also impose new challenges on the training of PINN. While PINN has been applied to solve chemical reaction systems involving a single-step reaction [15], stiffness usually results from the nonlinearity and complexity of the reaction network, where the characteristic time scales of the species span a wide range of magnitudes.

Consequently, the challenges for PINN to accommodate stiff kinetics can potentially arise from several sources, including the high dimensionality of the state variables (i.e., the number of species), the high nonlinearity resulting from the interactions among species, and the imbalance in the loss functions for different state variables, since the species concentrations can span several orders of magnitude. Nonetheless, stiff chemical kinetics is essential for modeling almost every real-world chemical system, such as atmospheric chemistry and the environment, energy conversion and storage, materials and chemical engineering, and biomedical and pharmaceutical engineering. Enabling PINN to handle stiff kinetics will open the possibility of using PINN to facilitate the design and optimization of this wide range of chemical systems.

In chemical kinetics, the evolution of the species concentrations can be described by ordinary differential equation (ODE) systems with the net production rates of the species as the source terms. If the characteristic time scales of the species span a wide range of magnitudes, integrating the entire ODE system becomes computationally intensive. The Quasi-Steady-State Assumption (QSSA) has been widely adopted to simplify and solve stiff kinetic problems, especially in the 1960s when efficient ODE integrators were unavailable [22]. A canonical example of the utilization of QSSA is the Michaelis–Menten kinetic formula, which is still widely adopted to formulate enzyme reactions in biochemistry. Nowadays, QSSA is still widely employed in numerical simulations of reaction-transport systems to remove chemical stiffness and enable explicit time integration with relatively large time steps [23,24]. Moreover, imposing QSSA also reduces the number of state variables and transport equations by eliminating the fast species, such that the computational cost can be greatly reduced. From a physical perspective [22,25], QSSA identifies the species (termed QSS species) that are usually radicals with relatively low concentrations; their net production rates are much lower than their consumption and production rates and thus can be assumed to be zero. From a mathematical perspective [22], the stiffness of the ODEs can be characterized by the largest absolute eigenvalue of the Jacobian matrix, i.e., the Jacobian of the reaction source terms with respect to the species concentrations. QSSA identifies the species that correspond to the relatively large eigenvalues of the chemical Jacobian matrix and then approximates the ODEs with differential-algebraic equations, which reduces the magnitude of the largest eigenvalue of the Jacobian matrix and thus the stiffness.
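As a minimal illustration of this idea (a standard textbook example, not taken from this paper), consider the sequential mechanism A → B → C with rate constants k_1 ≪ k_2, so that the intermediate B is a QSS species:

\[
\frac{d[B]}{dt} = k_1 [A] - k_2 [B] \approx 0
\quad\Rightarrow\quad
[B] \approx \frac{k_1}{k_2}[A],
\qquad
\frac{d[C]}{dt} = k_2 [B] \approx k_1 [A].
\]

The fast variable [B] is replaced by an algebraic relation, the remaining ODEs no longer contain the large rate constant k_2, and the stiffness associated with it is removed.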
In the current work, we evaluate the performance of PINN in solving two classical stiff dynamics problems and compare it with the performance of Stiff-PINN, which incorporates QSSA into PINN to reduce the stiffness. While the current work focuses on PINN, the mitigation of stiffness via QSSA can also be applied to other data-driven approaches, such as neural ordinary differential equations.


Results

We present the results of regular-PINN and Stiff-PINN for solving the classical stiff ROBER problem, i.e.,

\[
\begin{aligned}
\frac{dy_1}{dt} &= -k_1 y_1 + k_3 y_2 y_3, \\
\frac{dy_2}{dt} &= k_1 y_1 - k_2 y_2^2 - k_3 y_2 y_3, \\
\frac{dy_3}{dt} &= k_2 y_2^2.
\end{aligned}
\]

The results are shown in Figure 1. It is found that the regular-PINN fails to capture the dynamics of such a stiff system, while Stiff-PINN with QSSA solves it successfully.

Figure 1. Solutions of the benchmark ROBER problem using the BDF solver (the exact solution), regular-PINN, and Stiff-PINN with QSSA. While the regular-PINN fails to predict the kinetic evolution of the stiff system, Stiff-PINN with QSSA works very well. The associated code can be found at https://github.com/DENG-MIT/Stiff-PINN.
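For context, the sketch below shows how the reference solution in Figure 1 could be reproduced with SciPy's stiff BDF integrator, together with the algebraic QSSA closure obtained by setting dy2/dt = 0 in the equations above. The rate constants k1 = 0.04, k2 = 3e7, and k3 = 1e4 are the standard values for this benchmark, and treating y2 as the QSS species is an assumption made here; neither is stated explicitly in this excerpt, and this is not the authors' code.

import numpy as np
from scipy.integrate import solve_ivp

# Standard ROBER rate constants (assumed; not given in this excerpt).
k1, k2, k3 = 0.04, 3.0e7, 1.0e4

def rober(t, y):
    # Right-hand side of the stiff ROBER ODE system defined above.
    y1, y2, y3 = y
    return [-k1 * y1 + k3 * y2 * y3,
            k1 * y1 - k2 * y2 ** 2 - k3 * y2 * y3,
            k2 * y2 ** 2]

# Reference ("exact") solution from a stiff BDF integrator, as in Figure 1.
sol = solve_ivp(rober, (0.0, 1.0e5), [1.0, 0.0, 0.0], method="BDF",
                t_eval=np.logspace(-5, 5, 200), rtol=1e-8, atol=1e-12)

def y2_qss(y1, y3):
    # QSSA closure: setting dy2/dt = 0 gives k1*y1 - k2*y2**2 - k3*y2*y3 = 0,
    # a quadratic in y2 whose non-negative root is taken.
    return (np.sqrt((k3 * y3) ** 2 + 4.0 * k2 * k1 * y1) - k3 * y3) / (2.0 * k2)

Substituting y2_qss into the equations for y1 and y3 eliminates the fast variable and yields the kind of converted non-stiff or mildly stiff system on which Stiff-PINN operates.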
References

[1] T. Qin, K. Wu, D. Xiu, Data driven governing equations approximation using deep neural networks, J. Comput. Phys. 395 (2019) 620–635. https://doi.org/10.1016/j.jcp.2019.06.042.
[2] Z. Long, Y. Lu, X. Ma, B. Dong, PDE-Net: Learning PDEs from data, arXiv preprint arXiv:1710.09668 (2017).
[3] M. Raissi, Z. Wang, M.S. Triantafyllou, G.E. Karniadakis, Deep learning of vortex-induced vibrations, J. Fluid Mech. 861 (2019) 119–137. https://doi.org/10.1017/jfm.2018.872.
[4] A.M. Schweidtmann, J.G. Rittig, A. König, M. Grohe, A. Mitsos, M. Dahmen, Graph Neural Networks for Prediction of Fuel Ignition Quality, Energy & Fuels 34 (2020) 11395–11407. https://doi.org/10.1021/acs.energyfuels.0c01533.
[5] Y. Bar-Sinai, S. Hoyer, J. Hickey, M.P. Brenner, Learning data-driven discretizations for partial differential equations, Proc. Natl. Acad. Sci. U.S.A. 116 (2019) 15344–15349. https://doi.org/10.1073/pnas.1814058116.
[6] K. Champion, B. Lusch, J. Nathan Kutz, S.L. Brunton, Data-driven discovery of coordinates and governing equations, Proc. Natl. Acad. Sci. U.S.A. 116 (2019) 22445–22451. https://doi.org/10.1073/pnas.1906995116.
[7] T. Xie, J.C. Grossman, Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties, Phys. Rev. Lett. (2018). https://doi.org/10.1103/PhysRevLett.120.145301.
[8] R.T.Q. Chen, Y. Rubanova, J. Bettencourt, D. Duvenaud, Neural ordinary differential equations, in: Adv. Neural Inf. Process. Syst., 2018, pp. 6571–6583. https://doi.org/10.2307/j.ctvcm4h3p.19.
[9] K. Duraisamy, G. Iaccarino, H. Xiao, Turbulence modeling in the age of data, Annu. Rev. Fluid Mech. (2019). https://doi.org/10.1146/annurev-fluid-010518-040547.
[10] R. Ranade, S. Alqahtani, A. Farooq, T. Echekki, An extended hybrid chemistry framework for complex hydrocarbon fuels, Fuel 251 (2019) 276–284. https://doi.org/10.1016/j.fuel.2019.04.053.
[11] M. Raissi, A. Yazdani, G.E. Karniadakis, Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations, Science 367 (2020) 1026–1030. https://doi.org/10.1126/science.aaw4741.
[12] M. Raissi, P. Perdikaris, G.E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, J. Comput. Phys. 378 (2019) 686–707. https://doi.org/10.1016/j.jcp.2018.10.045.
[13] D. Zhang, L. Lu, L. Guo, G.E. Karniadakis, Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems, 2019. https://doi.org/10.1016/j.jcp.2019.07.048.
[14] I.E. Lagaris, A. Likas, D.I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Trans. Neural Networks 9 (1998) 987–1000. https://doi.org/10.1109/72.712178.
[15] L. Lu, X. Meng, Z. Mao, G.E. Karniadakis, DeepXDE: A deep learning library for solving differential equations, (2019). http://arxiv.org/abs/1907.04502 (accessed February 16, 2020).
[16] X. Jin, S. Cai, H. Li, G.E. Karniadakis, NSFnets (Navier-Stokes Flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations, (2020). http://arxiv.org/abs/2003.06496.
[17] L. Sun, H. Gao, S. Pan, J.X. Wang, Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data, Comput. Methods Appl. Mech. Eng. (2020). https://doi.org/10.1016/j.cma.2019.112732.
[18] E. Kharazmi, Z. Zhang, G.E. Karniadakis, Variational Physics-Informed Neural Networks for Solving Partial Differential Equations, (2019) 1–24. http://arxiv.org/abs/1912.00873.
[19] X. Meng, Z. Li, D. Zhang, G.E. Karniadakis, PPINN: Parareal physics-informed neural network for time-dependent PDEs, Comput. Methods Appl. Mech. Eng. 370 (2020) 113250. https://doi.org/10.1016/j.cma.2020.113250.
[20] E. Haghighat, A.C. Bekar, E. Madenci, R. Juanes, A nonlocal physics-informed deep learning framework using the peridynamic differential operator, (2020). http://arxiv.org/abs/2006.00446.
[21] S. Wang, Y. Teng, P. Perdikaris, Understanding and mitigating gradient pathologies in physics-informed neural networks, (2020). http://arxiv.org/abs/2001.04536 (accessed August 26, 2020).
[22] T. Turányi, A.S. Tomlin, M.J. Pilling, On the error of the quasi-steady-state approximation, J. Phys. Chem. 97 (1993) 163–172. https://doi.org/10.1021/j100103a028.
[23] T. Lu, C.K. Law, C.S. Yoo, J.H. Chen, Dynamic stiffness removal for direct numerical simulations, Combust. Flame 156 (2009) 1542–1551. https://doi.org/10.1016/j.combustflame.2009.02.013.
[24] A. Felden, P. Pepiot, L. Esclapez, E. Riber, B. Cuenot, Including analytically reduced chemistry (ARC) in CFD applications, Acta Astronaut. 158 (2019) 444–459. https://doi.org/10.1016/j.actaastro.2019.03.035.
[25] C.K. Law, Combustion Physics, Cambridge University Press, Cambridge, 2006. https://doi.org/10.1017/CBO9780511754517.