<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Preface: Combining Artificial Intelligence and Machine Learning with Physical Sciences</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jonghyun Lee</string-name>
          <email>jonghyun.harry.lee@hawaii.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eric F. Darve</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Peter K. Kitanidis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Matthew W. Farthing</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tyler Hesser</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Copyright c 2020, Copyright held by the author(s). In J. Lee, E. F. Darve</institution>
          ,
          <addr-line>P. K. Kitanidis, M. Farthing, T. Hesser (Eds.)</addr-line>
          ,
          <institution>Proceedings of the AAAI 2020 Spring Symposium on Combining Artificial Intelligence and Machine Learning with Physical Sciences. Stanford University</institution>
          ,
          <addr-line>Palo Alto, California</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Stanford University</institution>
          ,
          <addr-line>CA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>U.S. Army Engineer Research and Development Center</institution>
          ,
          <addr-line>MS</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Hawai'i at Mānoa</institution>
          ,
          <addr-line>HI</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This volume contains the contributed papers selected for the AAAI 2020 Spring Symposium on “Combining Artificial Intelligence and Machine Learning with Physical Sciences.” The symposium was held from 23 to 25 March 2020 in virtual form because of the COVID-19 outbreak. The symposium aimed to present the current state of the art and to identify opportunities and gaps in AI/ML-based physics modeling and analysis. With recent advances in scientific data acquisition and high-performance computing, Artificial Intelligence (AI) and Machine Learning (ML) have received significant attention from the applied mathematics and physical science community. From successes reported by industry, academia, and the research community at large, we observe that AI and ML hold great potential for leveraging scientific domain knowledge to support new scientific discoveries and to enhance the development of physical models for complex natural and engineered systems. Despite this progress, many open questions remain. Our current understanding of how and why AI/ML methods work, and why they can be predictive, is limited. AI has been shown to outperform traditional methods in many cases, especially with high-dimensional, inhomogeneous data sets. Areas where deep learning methods have been demonstrated to outperform traditional numerical schemes include the following. Meshless methods: Deep Neural Networks (DNNs) do not require a grid and can directly map a spatial coordinate (x, y, z) to an output. This is critical in applications where meshing is difficult or the domain of interest is not clearly defined (e.g., for certain inverse modeling problems).</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Global schemes. DNNs allow approximating the solution</title>
      <p>without resorting to a local scheme based for example on
piecewise polynomial approximation methods. In that
respect, deep learning is closely related to spectral methods
such as the Fourier decomposition.
High-order and adaptive methods. The depth in DNNs has
been associated with highly accurate representations of
high-order schemes. For example, deep networks can
efficiently represent high-order polynomials using relatively
few layers. In addition, DNNs have also shown great
accuracy when approximating functions with rapid changes
or even discontinuous jumps.</p>
    </sec>
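    <sec id="sec-1a">
      <p>The meshless, global character described above can be sketched in a few lines. The network below is purely illustrative (random weights, hypothetical layer sizes, not taken from any paper in this volume); it only shows that a DNN surrogate maps a spatial coordinate (x, y, z) directly to an output and is queried pointwise, with no grid or mesh in sight.</p>
      <preformat>
```python
import numpy as np

# Illustrative only: a tiny random multilayer perceptron that maps a
# spatial coordinate (x, y, z) directly to a scalar output. No mesh or
# grid is involved; the network can be evaluated at arbitrary points.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 16))  # 3 input coordinates -> 16 hidden units
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1))  # 16 hidden units -> 1 scalar output
b2 = np.zeros(1)

def u(points):
    """Evaluate the network at coordinates of shape (n, 3)."""
    h = np.tanh(points @ W1 + b1)  # hidden-layer activations
    return (h @ W2 + b2).ravel()   # one scalar per query point

# Query arbitrary points in the domain -- no discretization step required.
pts = np.array([[0.0, 0.0, 0.0],
                [0.5, -1.0, 2.0]])
vals = u(pts)
```
      </preformat>
      <p>In an actual physics-informed setting the weights would of course be trained (e.g., against data or a PDE residual); the point here is only the calling convention: coordinates in, field values out.</p>
    </sec>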
    <sec id="sec-2">
      <title>High-dimensional problems. DNNs are also very effective</title>
      <p>in representing high-dimensional problems, for example
in certain applications in probability which represent the
evolution of high-dimensional probability distributions.</p>
    </sec>
    <sec id="sec-3">
      <title>Applications to high-dimensional parabolic PDEs such</title>
      <p>as the nonlinear Black–Scholes equation, the Hamilton–</p>
    </sec>
    <sec id="sec-4">
      <title>Jacobi–Bellman equation, and the Allen–Cahn equation have also been demonstrated.</title>
    </sec>
    <sec id="sec-5">
      <title>Finally, Generative Adversarial Networks offer new av</title>
      <p>enues to approximate complex probability density
functions to model stochastic processes and for uncertainty
quantification. They allow going beyond Gaussian
process approximations and model more complex
dependencies and distributions.</p>
      <p>However, a rigorous understanding of when AI/ML is the
right approach is largely lacking. That is, for what class of
problems, underlying assumptions, available data sets, and
constraints are these new methods best suited? The lack of
interpretability in AI-based modeling and related scientific
theories makes them insufficient for high-impact,
safetycritical applications such as medical diagnoses, national
security, as well as environmental contamination and
remediation. Some of the main limitations include:</p>
    </sec>
    <sec id="sec-6">
      <title>Difficulty to train a network. This requires solving a complex non-convex optimization problem. For example, the accuracy of the solution often depends on the choice of initial conditions.</title>
      <p>Difficulty to assess the accuracy of deep learning
predictions. DL is notoriously accurate when the input data
resembles similar points in the training data. However, there
is less control over the accuracy when the test point moves
away from the training set. Quantifying this error and
being able to predict the accuracy of DL is currently poorly
understood.</p>
    </sec>
    <sec id="sec-7">
      <title>Tuning a DNN remains an art. Relatively few guidelines exist to determine the architecture of the network and tune the hyperparameters (number of layers, depth, choice of activation function).</title>
      <p>With transparency and a clear understanding of
datadriven mechanisms, the desirable properties of AI should
be best utilized to extend current methods in modeling of
physics and engineering problems. At the same time,
handling expensive training costs and large memory
requirements for ever-increasing scientific data sets is becoming
more and more important to guarantee scalable science
machine learning.</p>
      <p>The symposium focused on challenges and
opportunities for increasing the scale, rigor, robustness, and
reliability of physics-informed AI necessary for routine use in
science and engineering applications. The symposium also
discussed bridging AI and engineering research to significantly
advance diverse scientific areas and transform the way
science is done.</p>
    </sec>
    <sec id="sec-8">
      <title>The accepted papers were presented over 3 days with two</title>
      <p>invited talks each day. The symposium was broadcast live
and camera-ready presentations were posted on Youtube.</p>
    </sec>
    <sec id="sec-9">
      <title>As editors of the proceedings we are grateful to everyone who contributed to the symposium. We would like to thank the invited speakers:</title>
    </sec>
    <sec id="sec-10">
      <title>Lexing Ying, Stanford University</title>
    </sec>
    <sec id="sec-11">
      <title>Paris Perdikaris, University of Pennsylvania</title>
    </sec>
    <sec id="sec-12">
      <title>Maziar Raissi, University of Colorado, Boulder</title>
    </sec>
    <sec id="sec-13">
      <title>Marco Pavone, Stanford University</title>
    </sec>
    <sec id="sec-14">
      <title>Stefano Ermon, Stanford University</title>
    </sec>
    <sec id="sec-15">
      <title>Kevin Carlberg, University of Washington</title>
      <p>for presenting their work to the audience of
AAAI</p>
    </sec>
    <sec id="sec-16">
      <title>MLPS2020. We thank all authors who submitted their papers for consideration. AAAI-MLPS Program Committee includes</title>
    </sec>
    <sec id="sec-17">
      <title>Peter Sadowski, University of Hawaii at Manoa, USA</title>
    </sec>
    <sec id="sec-18">
      <title>Mario Putti, University of Padova, Italy</title>
    </sec>
    <sec id="sec-19">
      <title>Hongkyu Yoon, Sandia National Laboratories</title>
    </sec>
    <sec id="sec-20">
      <title>Nathaniel Trask, Sandia National Laboratories</title>
    </sec>
    <sec id="sec-21">
      <title>Hojat Ghorbanidehno, Cisco Systems</title>
    </sec>
    <sec id="sec-22">
      <title>Mojtaba Forghani, Stanford University, USA</title>
    </sec>
    <sec id="sec-23">
      <title>Mohammadamin Tavakoli, University of California</title>
    </sec>
    <sec id="sec-24">
      <title>Irvine, USA</title>
    </sec>
    <sec id="sec-25">
      <title>We also thank all Program Committee members and anonymous referees for their reviewing of the submissions. The work was carried out using the EasyChair system supported by AAAI, and we gratefully acknowledge AAAI.</title>
      <p>Contents
A 2D Fully Convolutional Neural Network For
Nearshore And Surf-Zone Bathymetry Inversion
From Synthetic Imagery Of The Surf-Zone Using
The Model Celeris
Adam Collins, Katherine L. Brodie, Spicer Bak, Tyler
Hesser, Matthew W. Farthing, Douglas W. Gamble, and
Joseph W. Long
A Weighted Sparse-Input Neural Network
Technique Applied to Identify Important Features for
Vortex-Induced Vibration
Leixin Ma, Themistocles Resvanis, and Kim Vandiver
Deep Learning for Climate Models of the Atlantic
Ocean
Anton Nikolaev, Ingo Richter, and Peter Sadowski
Deep Sensing of Ocean Wave Heights with
Synthetic Aperture Radar
Brandon Quach, Yannik Glaser, Justin Stopa, and Peter
Sadowski
Enforcing Constraints for Time Series Prediction
in Supervised, Unsupervised and Reinforcement
Learning
Panos Stinis
Event-Triggered Reinforcement Learning for
Better Sample Efficiency; An Application to
Buildings’ Micro-Climate Control
Ashkan Haji Hosseinloo and Munther Dahleh
Finding Multiple Solutions of ODEs with Neural
Networks
Marco Di Giovanni, David Sondak, Pavlos Protopapas and
Marco Brambilla
Generalized Physics-Informed Learning through
Language-Wide Differentiable Programming
Chris Rackauckas, Alan Edelman, Keno Fischer, Mike
Innes, Elliot Saba, Viral B. Shah and Will Tebbutt
GMLS-Nets: A Machine Learning Framework
for Unstructured Data
Nathaniel Trask, Ravi Patel, Paul Atzberger and Ben Gross
Physics-Informed Machine Learning for
Realtime Reservoir Management
Maruti K. Mudunuru, Daniel O’Malley, Shriram
Srinivasan, Jeffrey D. Hyman, Matthew R. Sweeney, Luke
Frash, Bill Carey, Michael R. Gross, Nathan J. Welch,
Satish Karra, Velimir V. Vesselinov, Qinjun Kang,
Hongwu Xu, Rajesh J. Pawar, Tim Carr, Liwei Li, George
D. Guthrie and Hari S. Viswanathan
Physics-Informed Spatiotemporal Deep Learning
for Emulating Coupled Dynamical Systems
Anishi Mehta, Cory Scott, Diane Oyen, Nishant Panda and
Gowri Srinivasan
Continuous Representation Of Molecules using
Graph Variational Autoencoder
Mohammadamin Tavakoli and Pierre Baldi
Data-Driven Inverse Modeling with Incomplete
Observations
Kailai Xu and Eric Darve
DeepXDE: A Deep Learning Library for Solving
Differential Equations
Lu Lu, Xuhui Meng, Zhiping Mao and George Em
Karniadakis
Nonlocal Physics-Informed Neural Networks - A
Unified Theoretical and Computational
Framework for Nonlocal Models
Marta D’Elia, George E. Karniadakis, Guofei Pang and
Michael L. Parks
Permeability Prediction of Porous Media using
Convolutional Neural Networks with Physical
Properties
Hongkyu Yoon, Darryl Melander and Stephen J. Verzi</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>