Preface: Combining Artificial Intelligence and Machine Learning with Physical Sciences

Jonghyun Lee¹*, Eric F. Darve², Peter K. Kitanidis², Matthew W. Farthing³, Tyler Hesser³
¹ University of Hawai‘i at Mānoa, HI, USA
² Stanford University, CA, USA
³ U.S. Army Engineer Research and Development Center, MS, USA

* jonghyun.harry.lee@hawaii.edu
Copyright © 2020, held by the author(s). In J. Lee, E. F. Darve, P. K. Kitanidis, M. Farthing, T. Hesser (Eds.), Proceedings of the AAAI 2020 Spring Symposium on Combining Artificial Intelligence and Machine Learning with Physical Sciences, Stanford University, Palo Alto, California, USA, March 23-25, 2020.

This volume contains the contributed papers selected for the AAAI 2020 Spring Symposium on "Combining Artificial Intelligence and Machine Learning with Physical Sciences." The symposium was held on March 23-25, 2020, in virtual form because of the SARS-CoV-2 (COVID-19) outbreak.

The symposium aimed to present the current state of the art and to identify opportunities and gaps in AI/ML-based physics modeling and analysis. With recent advances in scientific data acquisition and high-performance computing, Artificial Intelligence (AI) and Machine Learning (ML) have received significant attention from the applied mathematics and physical science community. From successes reported by industry, academia, and the research community at large, we observe that AI and ML hold great potential for leveraging scientific domain knowledge to support new scientific discoveries and to enhance the development of physical models for complex natural and engineered systems.

Despite this progress, many open questions remain. Our current understanding of how and why AI/ML methods work, and of why they can be predictive, is limited. AI has been shown to outperform traditional methods in many cases, especially with high-dimensional, inhomogeneous data sets. However, a rigorous understanding of when AI/ML is the right approach is largely lacking: for what class of problems, underlying assumptions, available data sets, and constraints are these new methods best suited? The lack of interpretability in AI-based modeling and the related scientific theories makes them insufficient for high-impact, safety-critical applications such as medical diagnosis, national security, and environmental contamination and remediation.

Areas where deep learning methods have been demonstrated to outperform traditional numerical schemes include:

• Meshless methods. Deep Neural Networks (DNNs) do not require a grid and can directly map a spatial coordinate (x, y, z) to an output. This is critical in applications where meshing is difficult or the domain of interest is not clearly defined (e.g., for certain inverse modeling problems).

• Global schemes. DNNs allow approximating the solution without resorting to a local scheme based, for example, on piecewise polynomial approximation methods. In that respect, deep learning is closely related to spectral methods such as the Fourier decomposition.

• High-order and adaptive methods. The depth of DNNs has been associated with highly accurate representations of high-order schemes. For example, deep networks can efficiently represent high-order polynomials using relatively few layers. In addition, DNNs have also shown great accuracy when approximating functions with rapid changes or even discontinuous jumps.

• High-dimensional problems. DNNs are also very effective at representing high-dimensional problems, for example in certain applications in probability that represent the evolution of high-dimensional probability distributions. Applications to high-dimensional parabolic PDEs such as the nonlinear Black–Scholes equation, the Hamilton–Jacobi–Bellman equation, and the Allen–Cahn equation have also been demonstrated.

• Finally, Generative Adversarial Networks (GANs) offer new avenues to approximate complex probability density functions, to model stochastic processes, and for uncertainty quantification. They allow going beyond Gaussian process approximations to model more complex dependencies and distributions.

Some of the main limitations include:

• Difficulty of training a network. Training requires solving a complex non-convex optimization problem; the accuracy of the solution often depends, for example, on the choice of initial conditions.

• Difficulty of assessing the accuracy of deep learning predictions. DL is notoriously accurate when the input resembles points in the training data; however, there is less control over the accuracy as the test point moves away from the training set. Quantifying this error and predicting the accuracy of DL remain poorly understood.

• Tuning a DNN remains an art. Relatively few guidelines exist for determining the architecture of the network and tuning its hyperparameters (number of layers, their width, choice of activation function).

With transparency and a clear understanding of data-driven mechanisms, the desirable properties of AI can best be utilized to extend current methods for modeling physics and engineering problems. At the same time, handling the expensive training costs and large memory requirements of ever-increasing scientific data sets is becoming more and more important to guarantee scalable scientific machine learning.
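The "meshless" idea in the list above can be made concrete with a short sketch: a small neural network maps a coordinate x directly to an approximation of u(x), trained only on scattered sample points with no grid or mesh. The target function u(x) = sin(πx), the single tanh hidden layer, and the hand-written gradient-descent loop are illustrative assumptions chosen for this sketch; they are not taken from any of the symposium papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scattered training coordinates -- no mesh is ever built.
x = rng.uniform(-1.0, 1.0, size=(200, 1))
u = np.sin(np.pi * x)                  # assumed target u(x) = sin(pi*x)

# One hidden layer of 32 tanh units, mapping a coordinate to a value.
W1 = rng.normal(0.0, 1.0, size=(1, 32))
b1 = rng.normal(0.0, 0.5, size=32)
W2 = rng.normal(0.0, 0.1, size=(32, 1))
b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)           # forward pass
    pred = h @ W2 + b2
    err = pred - u                     # gradient of 0.5 * mean squared error
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h**2)   # backpropagate through tanh
    gW1 = x.T @ gh / len(x)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2     # plain gradient-descent updates
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(x @ W1 + b1) @ W2 + b2 - u) ** 2).mean())

# The trained network can be queried at any coordinate, which is the point
# of a meshless representation.
u_half = float(np.tanh(np.array([[0.5]]) @ W1 + b1) @ W2 + b2)
```

Physics-informed variants of this idea go further by adding the PDE residual, computed with automatic differentiation, to the training loss; the sketch above shows only the mesh-free data fit.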
The symposium focused on challenges and opportunities for increasing the scale, rigor, robustness, and reliability of physics-informed AI necessary for its routine use in science and engineering applications. The symposium also discussed bridging AI and engineering research to significantly advance diverse scientific areas and to transform the way science is done.

The accepted papers were presented over three days, with two invited talks each day. The symposium was broadcast live, and camera-ready presentations were posted on YouTube.

As editors of the proceedings, we are grateful to everyone who contributed to the symposium. We would like to thank the invited speakers:

• Lexing Ying, Stanford University
• Paris Perdikaris, University of Pennsylvania
• Maziar Raissi, University of Colorado, Boulder
• Marco Pavone, Stanford University
• Stefano Ermon, Stanford University
• Kevin Carlberg, University of Washington

for presenting their work to the audience of AAAI-MLPS 2020. We thank all authors who submitted their papers for consideration.
The AAAI-MLPS Program Committee includes:

• Peter Sadowski, University of Hawaii at Manoa, USA
• Mario Putti, University of Padova, Italy
• Hongkyu Yoon, Sandia National Laboratories, USA
• Nathaniel Trask, Sandia National Laboratories, USA
• Hojat Ghorbanidehno, Cisco Systems, USA
• Mojtaba Forghani, Stanford University, USA
• Mohammadamin Tavakoli, University of California, Irvine, USA

We also thank all Program Committee members and anonymous referees for their reviewing of the submissions. The work was carried out using the EasyChair system supported by AAAI, and we gratefully acknowledge AAAI.

Contents

A 2D Fully Convolutional Neural Network For Nearshore And Surf-Zone Bathymetry Inversion From Synthetic Imagery Of The Surf-Zone Using The Model Celeris
Adam Collins, Katherine L. Brodie, Spicer Bak, Tyler Hesser, Matthew W. Farthing, Douglas W. Gamble, and Joseph W. Long

A Weighted Sparse-Input Neural Network Technique Applied to Identify Important Features for Vortex-Induced Vibration
Leixin Ma, Themistocles Resvanis, and Kim Vandiver

Continuous Representation Of Molecules using Graph Variational Autoencoder
Mohammadamin Tavakoli and Pierre Baldi

Data-Driven Inverse Modeling with Incomplete Observations
Kailai Xu and Eric Darve

Deep Learning for Climate Models of the Atlantic Ocean
Anton Nikolaev, Ingo Richter, and Peter Sadowski

Deep Sensing of Ocean Wave Heights with Synthetic Aperture Radar
Brandon Quach, Yannik Glaser, Justin Stopa, and Peter Sadowski

DeepXDE: A Deep Learning Library for Solving Differential Equations
Lu Lu, Xuhui Meng, Zhiping Mao and George Em Karniadakis

Enforcing Constraints for Time Series Prediction in Supervised, Unsupervised and Reinforcement Learning
Panos Stinis

Event-Triggered Reinforcement Learning for Better Sample Efficiency: An Application to Buildings' Micro-Climate Control
Ashkan Haji Hosseinloo and Munther Dahleh

Finding Multiple Solutions of ODEs with Neural Networks
Marco Di Giovanni, David Sondak, Pavlos Protopapas and Marco Brambilla

Generalized Physics-Informed Learning through Language-Wide Differentiable Programming
Chris Rackauckas, Alan Edelman, Keno Fischer, Mike Innes, Elliot Saba, Viral B. Shah and Will Tebbutt

GMLS-Nets: A Machine Learning Framework for Unstructured Data
Nathaniel Trask, Ravi Patel, Paul Atzberger and Ben Gross

Nonlocal Physics-Informed Neural Networks - A Unified Theoretical and Computational Framework for Nonlocal Models
Marta D'Elia, George E. Karniadakis, Guofei Pang and Michael L. Parks

Permeability Prediction of Porous Media using Convolutional Neural Networks with Physical Properties
Hongkyu Yoon, Darryl Melander and Stephen J. Verzi

Physics-Informed Machine Learning for Real-time Reservoir Management
Maruti K. Mudunuru, Daniel O'Malley, Shriram Srinivasan, Jeffrey D. Hyman, Matthew R. Sweeney, Luke Frash, Bill Carey, Michael R. Gross, Nathan J. Welch, Satish Karra, Velimir V. Vesselinov, Qinjun Kang, Hongwu Xu, Rajesh J. Pawar, Tim Carr, Liwei Li, George D. Guthrie and Hari S. Viswanathan

Physics-Informed Spatiotemporal Deep Learning for Emulating Coupled Dynamical Systems
Anishi Mehta, Cory Scott, Diane Oyen, Nishant Panda and Gowri Srinivasan

Surfzone Topography-informed Deep Learning Techniques to Nearshore Bathymetry with Sparse Measurements
Yizhou Qian, Hojat Ghorbanidehno, Matthew Farthing, Ty Hesser, Peter K. Kitanidis and Eric F. Darve