<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Neural Network Emulator Optimizer: A Preliminary Study on Korean Microphysics Parameterization Model</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Sojung An</string-name>
          <email>sojungan@kiaps.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Inchae Na</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
<string-name>Tae-Jin Oh</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Junghan Kim</string-name>
          <email>jhkim@kiaps.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Korea Institute of Atmospheric Prediction Systems</institution>
          ,
          <addr-line>35 Boramae-ro, Dongjak-gu, Seoul</addr-line>
          ,
          <country country="KR">Korea</country>
        </aff>
      </contrib-group>
      <fpage>110</fpage>
      <lpage>119</lpage>
      <abstract>
        <p>Recent years have witnessed great progress in emulators based on neural networks (NNs). Current state-of-the-art emulator methods often apply a shallow NN to attain high performance on a physics system, which brings faster processing in resource-constrained environments. Although several works have focused on improving the accuracy of physics emulators, an effective and efficient method for tackling the time-consumption problem of existing systems at high resolution remains lacking. In this paper, we propose an optimal NN emulator of a microphysics (MPS) parameterization scheme to effectively solve this problem in a numerical weather prediction (NWP) model, the Korea Integrated Model (KIM) in particular. Specifically, we adopt a shallow NN to build an intelligent emulator, which can learn the feature map and estimate the vertical MPS forcing increment profiles. This study mainly relies on two technical contributions: (1) optimization: reviewing and improving models for simulating non-linear parameters; (2) feasibility: efficient computation with minimal loss of physical information. We validate the proposed model with four seasons (10-day forecasts at 200-second intervals) of KIM. Results indicate that the proposed single-layer network shows the best performance for emulating MPS in KIM. Our analyses will provide a guideline for optimal physical parameterization modeling.</p>
      </abstract>
      <kwd-group>
        <kwd>Emulator • Microphysics • Physical Parameterization • Neural Network • Feature Extraction</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        In recent decades, there have been considerable improvements in terms of
predictability in numerical weather prediction (NWP) models. Advancement in
computer hardware technology is considered one of the main drivers of
improvement in this area. However, increasing the spatio-temporal resolution of
NWP models, which is critical for better predictability, requires an exponential
increase in computational power. Thus, utilizing ever more computational
resources and/or achieving better model code optimization is continually
needed. Parameterization of subgrid-scale physical processes has been an active
research area ever since the birth of NWP. There are extensive research
activities on physics parameterization [
        <xref ref-type="bibr" rid="ref13 ref3 ref5 ref6 ref9">3, 5, 6, 9, 13</xref>
        ] utilizing Korea Integrated Model
(KIM), currently the operational NWP model in Korea, which is built upon
nonhydrostatic governing equations and discretized with spectral element and finite
difference methods in the horizontal and vertical dimensions, respectively [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The
physics module parameterizes subgrid-scale physical processes which cannot be
resolved in the grid-scale in NWP model. Subgrid-scale physics parameterization
development in most cases relies on observational data which are parameterized
to make it fit the observation. Also, physics parameterization is computed in
vertical columns in terms of the three-dimensional data grid structure and is
agnostic to the neighboring horizontal grids. These properties make machine
learning an ideal tool for developing physics parameterizations, as approximating
complex nonlinear mappings can be done effectively with, e.g., neural networks.
      </p>
      <p>
        Previous studies of parameterization based on machine learning can be
categorized into developing emulation-based methods for accelerated calculation
[
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref16 ref17 ref7 ref8">7, 8, 10–12, 16, 17</xref>
        ] and developing new empirical parameterizations based on
observation or high resolution model data [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Recently, a shallow Neural Network
emulator which covers the entire suite of physical parameterization has been
developed [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] which is based on a single hidden layer. Their model showed
satisfactory accuracy with a much faster execution in simulating nonlinear physics
modules compared to the original code. By exploiting faster computation of the
NN-based emulator approach, NWP models can be run effectively at a higher
resolution. There are competent studies using NN as emulators (e.g., Krasnopolsky
et al., Nadiga et al.) but studies regarding optimization of network architecture
are yet to be found. When artificial intelligence recognizes patterns in physics,
the structure of neurons, depth of layers, and the choice of activation functions
are important factors. For example, the Rectified Linear Unit (ReLU) given by
max(0, x) is the most often used activation function in the deep learning
community. However, when it comes to physics-informed neural networks, ReLU is
reported to result in spurious wiggles in the computed derivatives
representing fluid flows [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Thus, understanding the underlying physics of the target
system in detail is critical for efective network design.
      </p>
      <p>In this study, we explore various NN structures for representing the physics
of the atmosphere. Specifically, we compare a series of NN models to test their
effectiveness with combinations of the following options: (i) number of layers,
(ii) neuron structure, and (iii) activation function.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>This section describes several studies based on emulation similar to our
proposed model. We classify these studies into two groups: (i) single-physics and
(ii) unified-physics emulation.</p>
      <sec id="sec-2-1">
        <title>Single-physics emulations</title>
        <p>
          The state-of-the-art physics emulators include a step that optimizes the
hidden layers via machine learning, or approximates their weights using
various activation functions. Many previous studies have demonstrated that the
NN emulator approach can be applied successfully to speed up the computation
of a single physics module. Machine learning was first used to emulate longwave
radiation for the European Centre for Medium-Range Weather Forecasts models
[
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Krasnopolsky reduced computation time by one to two orders of magnitude
in decadal climate simulations [
          <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
          ]. Also, the authors verified that the speed
of longwave radiation emulators based on NN can be increased by 50 to 80
times [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
        <p>
          Aside from these, multilayer perceptrons have been proposed for predicting
nonlinear phenomena in the physics of the atmosphere influenced by a number of
factors. To accelerate expensive radiative transfer computations, deep NNs have
been applied to predict vertical profiles of longwave and shortwave radiative
fluxes in weather and climate models [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. Veerman developed an emulator of a radiation parameterization
based on a multilayer perceptron and leaky
ReLU [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Roh evaluated the forecast performance of radiation emulators with 300
to 56 neurons with sigmoid activation for cloud-resolving simulation [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. The
emulators surpassed the speed of the existing model for a single physics process.
However, when integrated with the main NWP model, the speed-up effect was
limited [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>A unified-physics emulation</title>
        <p>
          Traditional emulation-based methods parameterized each physical process
separately with an NN. Errors arising from these models had
a significant effect on the accuracy of the overall NWP model. As all physical
processes interact closely with one another, an error source in a particular
physics module can cause larger errors in other sub-physics. Belochitski
proposed a shallow-NN-based emulator of a complete suite of atmospheric physics
parameterizations [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Their model learns all physics domains, which are intimately
connected. These methods learn an encoder to extract the physical features and
maintain representation consistency by minimizing the reconstruction
error across all physics. Their implementation can handle long-term numerical
integration while providing 3 times faster computation than the original
physics module. However, these advantages come with a huge computational
cost, and it is hard to train on high-resolution data. Fortunately, the paper achieved
good results at a high resolution of 25 km, despite their model being trained with
100 km-resolution source data.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>A Network for Emulation-based Parameterizations</title>
      <p>This paper studies the problem of physics from the perspective of an NN and
proposes an emulation-based algorithm.</p>
      <sec id="sec-3-1">
        <title>Defining the Weighted Matrix</title>
        <p>A physics emulator learns input features on a network that maps the variables
into the next input. Let X(t) and Y(t) represent the input and output at time t,
respectively. Denote X = (x_0, x_1, · · · , x_n)^T ∈ R^n as the input information related
to physics observed on the network and Y ∈ R^m as the output information,
where n and m are the numbers of input and output dimensions, respectively.
The purpose of training is to find θ that maximizes the conditional probability
of input-output pairs in the training sets. At each time step t, the decoder receives
the features of the previous input. If the model is trained to predict the
output from the constantly updated input presented in the previous phase, we are
training the function Φ successively as

Φ : [x_0^(t), · · · , x_n^(t); θ] → [x_0^(t+1), · · · , x_m^(t+1); θ],   (1)

where t denotes the time step. Inputs and outputs consist of the same attributes at
each step, and each output attribute is connected in a recursive manner. For
any x_i, the outputs are calculated as y_j = Σ_i x_i w_{i,j},
where w_{i,j} is the weight between input x_i and output y_j of the physics.</p>
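        <p>As a concrete illustration, the recursive mapping Φ above can be sketched as a single weight matrix applied repeatedly to the state vector. The dimensions and random weights below are hypothetical placeholders, not the trained KIM emulator:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: the paper uses the same attributes for input and
# output at each step, so n = m here.
n = m = 8
W = rng.normal(scale=0.1, size=(n, m))  # weights w_ij between x_i and y_j
b = np.zeros(m)

def phi(x):
    """One emulator step: map the profile at time t to time t+1."""
    return x @ W + b

# Outputs are fed back as inputs in a recursive manner.
x = rng.normal(size=n)
for _ in range(3):
    x = phi(x)
print(x.shape)
```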
      </sec>
      <sec id="sec-3-2">
        <title>Parameterization Network of the Physical Feature Extraction</title>
        <p>Since the network is a matrix, for any weight w_{i,j} in the matrix, the output y_j is
defined as the following:

y_j = Σ_{i=1}^{M} x′_i w_{i,j} + b_j,   (2)

where x′_i is the normalized input. Each input attribute x is scaled into [0, 1] as

x′ = 1, if x &gt; max;
x′ = (x − min)/(max − min), if min ≤ x &lt; max;
x′ = 0, otherwise,   (3)

where the min and max range of each attribute is set manually to the physically
allowable values in KIM. We define the emulator network as a weight matrix W
representing the state of the entire set of attributes, concatenating each value of the
associated variables as W ∈ R^{n×m}:

W = [w_{1,1} w_{1,2} · · · w_{1,m}; w_{2,1} w_{2,2} · · · w_{2,m}; · · · ; w_{n,1} w_{n,2} · · · w_{n,m}],   (4)

Using the definition of layers, we can obtain the output matrix of size n × m.
Assume that there are L network layers, where the 0th layer is the input layer
and the lth (1 ≤ l ≤ L) layers are fully connected layers. For any fully connected
layer l ∈ [1, · · · , L], the output of the lth fully connected layer is
calculated using the following equation:

H_r^(l) = σ( Σ_{s=1}^{s_{l−1}} f_{θ_l} H_s^(l−1) ),   (5)

where s_{l−1} denotes the number of parameters in the (l − 1)th layer, H_s^(l) ∈ R^{s_{l−1}}
is the sth physics hidden parameter in the lth layer, and σ(·) is the activation
function. The neuron structure can be divided into the four following structures
according to the change in the number of neurons in each layer:

α : s_L &lt; {s_2, · · · , s_{l−1}} ≤ s_1;
β : s_L &lt; s_1 &lt; {s_2, · · · , s_{l−1}};
γ : {s_2, · · · , s_{l−1}} &lt; s_L &lt; s_1;
δ : s_L ≤ {s_2, · · · , s_{l−1}} &lt; s_1,   (6)

where {s_2, · · · , s_{l−1}} denotes the set of hidden-neuron counts connected to the
input layer H_s^(1), and s_L is the number of output neurons. Activation functions
tested in this study are described in Table 1. The s_{l−1}, the number of layers l, and σ(·),
defined in this section, are chosen as the optimized formula by experiments in the
next section. Finally, the errors between the actual scores and the predicted scores
are minimized by the L1-norm.</p>
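        <p>A minimal sketch of the normalization and the fully connected layer described above, assuming NumPy; the min/max bounds and random weights are hypothetical placeholders rather than KIM's actual attribute ranges:</p>

```python
import numpy as np

def normalize(x, lo, hi):
    """Clamp-and-scale an attribute into [0, 1]: 1 above the physically
    allowable maximum, (x - lo)/(hi - lo) inside the range, 0 below it."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def fc_layer(h_prev, weights, sigma=np.tanh):
    """Output of one fully connected layer: H_l = sigma(W_l @ H_{l-1})."""
    return sigma(weights @ h_prev)

# Example: normalize a temperature-like input, then pass it through two
# fully connected layers (hyperbolic tangent activation, as tested here).
rng = np.random.default_rng(1)
x = normalize(np.array([250.0, 300.0, 400.0]), lo=200.0, hi=350.0)
h1 = fc_layer(x, rng.normal(scale=0.1, size=(5, 3)))
h2 = fc_layer(h1, rng.normal(scale=0.1, size=(2, 5)))
print(x, h2.shape)
```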
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Evaluation</title>
      <p>In this section, we evaluate the NN models for emulating physics among the
models proposed in Section 3.</p>
      <sec id="sec-4-1">
        <title>Datasets and Implementation Details</title>
        <p>
          To verify the efficacy of the proposed methods, we conducted experiments with
an MPS dataset generated every 200 seconds from ERA5 reanalysis by the latest
KIM, version 3.6. MPS was selected for this first study because the
nonlinearity of MPS is hard to predict and it is one of the most time-consuming
parts of the atmospheric physical processes. The input and output attributes
are shown in Table 2. The dataset consists of 4 sets, one per season, to
account for temporal features. The dynamical core of the NWP model is a spectral-element
cubed-sphere nonhydrostatic model with a horizontally quasi-uniform resolution
of 100 km and 91 vertical layers (up to ~0.01 hPa) on a hybrid sigma-pressure vertical
coordinate. Data are extracted randomly from each KIM forecast, and
composed of 6,998,688 training sets. We use the other 1,749,672 sets for evaluation.
Our methods are implemented using PyTorch [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] and optimized using Adam
(learning rate λ = 10^-3). For fairness, we train all networks with a batch size of 128 for 500
epochs on random sets.
        </p>
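        <p>The training setup can be sketched as follows. This is a toy stand-in, not the paper's actual PyTorch pipeline: plain sign-gradient descent on a single linear layer with the L1 loss and synthetic data, borrowing the batch size, epoch count, and an assumed learning rate of 1e-3:</p>

```python
import numpy as np

rng = np.random.default_rng(42)

n = m = 4                                   # toy attribute counts
W = rng.normal(scale=0.1, size=(m, n))      # emulator weights being trained
target = rng.normal(size=(m, n))            # hypothetical "true" physics map

lr, batch, epochs = 1e-3, 128, 500          # hyperparameters from the paper
init_err = float(np.abs(W - target).mean())

for _ in range(epochs):
    X = rng.normal(size=(batch, n))         # one random training batch
    # Subgradient of the mean L1 error with respect to W.
    grad = np.sign(X @ W.T - X @ target.T).T @ X / batch
    W -= lr * grad

final_err = float(np.abs(W - target).mean())
print(init_err, final_err)
```

The real runs replace this hand-rolled update with PyTorch's Adam optimizer and the generated MPS training sets.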
        <p>We evaluate our emulation both quantitatively and qualitatively to
estimate the optimal emulator. The results of this experiment are evaluated with three
metrics shown in Eq. (7): root mean square error (RMSE), mean absolute
error (MAE), and peak signal-to-noise ratio (PSNR). Given the NN output e_i and
the KIM output ē_i, the evaluation functions can be written as

MAE = (1/n) Σ_{i=1}^{n} |e_i − ē_i|,
RMSE = sqrt( (1/n) Σ_{i=1}^{n} (e_i − ē_i)^2 ),
PSNR = 10 · log( max^2 / ( (1/n) Σ_{i=1}^{n} (e_i − ē_i)^2 ) ).   (7)</p>
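        <p>The three evaluation metrics can be written directly in code (assuming a base-10 logarithm for PSNR, as is standard; the sample vectors are illustrative only):</p>

```python
import numpy as np

def mae(e, e_ref):
    """Mean absolute error between emulator output e and reference e_ref."""
    return float(np.mean(np.abs(e - e_ref)))

def rmse(e, e_ref):
    """Root mean square error."""
    return float(np.sqrt(np.mean((e - e_ref) ** 2)))

def psnr(e, e_ref):
    """Peak signal-to-noise ratio: peak reference value squared over the MSE."""
    mse = np.mean((e - e_ref) ** 2)
    return float(10.0 * np.log10(np.max(e_ref) ** 2 / mse))

e = np.array([1.0, 2.0, 3.0])
e_ref = np.array([1.0, 2.0, 4.0])
print(mae(e, e_ref), rmse(e, e_ref), psnr(e, e_ref))
```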
      </sec>
      <sec id="sec-4-2">
        <title>Case Analysis</title>
        <p>For comparison, we set a benchmark case that consists of a 2-layer NN based on
the hyperbolic tangent and the neuron structure γ. Table 3 depicts the average
accuracy of emulations with different approaches. In our first experiment, we
evaluate the impact of learning with each structure to estimate the optimal depth
of NN layers. As L is 2 in the benchmark case, we set the hidden neurons to
α: s_2 = 822, β: s_2 = 1096, γ: s_2 = 400, and δ: s_2 = 548, respectively. The PSNR
result indicates that the average accuracy of emulation increases from 29.55 for
the γ method (commonly used in emulation) to 32.61 when applying the
β method. The β structure also shows the overall
best performance compared to the other network structures, as shown in the
evaluation metric scores. When the number of hidden neurons decreases relative to
the input, such as in the γ structure, loss of latent features is inevitable. These
losses can cause physical uncertainties, so the number of hidden neurons must be greater
than or equal to the number of output neurons.</p>
        <p>Let us compare the impact of learning according to layer depth and
activation function. All networks except the single-layer one apply fully connected
deep layers, followed by the chosen activation function and dropout (0.2). The approximation errors
presented above show that a shallow NN is capable of providing NN
emulations with small error. In particular, the single-layer network yields the best performance
compared to the other cases. As for the activation, the Leaky ReLU, with a bend in its
slope, shows the worst performance, consistent with previous studies. The results
demonstrate that the hyperbolic tangent activation is effective for emulating MPS,
providing accurate and visually promising results. The swish activation
function performed as well as the hyperbolic tangent, with an error of 2.14 × 10^-4. Generally, good
results are obtained from smooth functions with a high gradient.</p>
        <p>Finally, we compare the original KIM MPS output with the NN emulator
output. Fig. 1 illustrates the correlations between the outputs of the NN-based
emulation and the outputs of the MPS. The single-layer emulator, which
gives the best result in Table 3, was used to emulate the output of the MPS. The
figure shows outputs for 5-day input data integrated by KIM, in consideration
of spin-up.</p>
        <p>The average of the correlation coefficients is 0.998 (precipitation: 0.989, snow: 0.989,
temperature: 0.999, specific humidity: 0.999, and ratio: 0.997). Emulators with
other shallow NNs showed similar results. The KIM MPS output, the NN emulation, and
their difference in precipitation distribution are shown in Fig. 2. Precipitation
simulations of KIM and the NN are shown in (a) and (b); (c) is the difference of the
simulations, with the single-layer emulator. The precipitation distributions for the
KIM and NN emulator runs are very similar, showing little difference, as shown
in (c) of Fig. 2.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusion and Future Work</title>
      <p>As information technology evolves, the tasks of accelerating physics
parameterization and increasing the resolution of NWP have been gaining importance.
Traditional emulator methods tend to consider only the overall composition,
such as the kind of input data. This paper focused on building an optimal NN
targeted at the details of the network settings for physics emulation. Various
experiments were carried out to design the detailed structure of the network,
and we tried to design a network that can understand the characteristics of the
physics. Specifically, to overcome the drawbacks of existing models, we
attempted to optimize the NN details (i.e., neuron structure, number of
layers, and activation function). Through these experiments, parameterization of the
MPS using a single layer showed the best performance.</p>
      <p>In the case of MPS, the usage of deep layers does not bring the best result, but
it may bring good results in other physics parameterizations. The depth of the layers
can be changed according to the physics patterns and other elements (e.g., number
of profiles, resolution, parameterization type, and so on) used by the NWP model.
Thus, we want an emulation that is feasible not only for a single physics module but also across
physics modules. Ideally, we will achieve unified-physics emulation by considering
different physical patterns, leveraging dynamic NN modeling.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgement</title>
      <p>This work was carried out through the R&amp;D project "Development of the
Next-generation Operational System of Korea Institute of Atmospheric Prediction
Systems (KIAPS)", funded by the Korea Meteorological Administration
(KMA202002213).</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Belochitski</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Krasnopolsky</surname>
          </string-name>
          , V.:
          <article-title>Stable Emulation of an Entire Suite of Model Physics in a State-of-the-Art GCM using a Neural Network</article-title>
          , arXiv preprint arXiv:
          <volume>2103</volume>
          .
          <fpage>07028</fpage>
          , (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Chevallier</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , Chéruy,
          <string-name>
            <given-names>F.</given-names>
            ,
            <surname>Scott</surname>
          </string-name>
          ,
          <string-name>
            <surname>N. A.</surname>
          </string-name>
          , and Chédin, A.:
          <article-title>A neural network approach for a fast and accurate computation of a longwave radiative budget</article-title>
          .
          <source>Journal of applied meteorology</source>
          ,
          <volume>37</volume>
          (
          <issue>11</issue>
          ),
          <fpage>1385</fpage>
          -
          <lpage>1397</lpage>
          (
          <year>1998</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3. Han,
          <string-name>
            <given-names>J. Y.</given-names>
            ,
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. Y.</given-names>
            ,
            <surname>Sunny Lim</surname>
          </string-name>
          ,
          <string-name>
            <surname>K. S.</surname>
          </string-name>
          , and Han,
          <string-name>
            <surname>J</surname>
          </string-name>
          .:
          <article-title>Sensitivity of a cumulus parameterization scheme to precipitation production representation and its impact on a heavy rain event over Korea</article-title>
          .
          <source>Monthly Weather Review</source>
          ,
          <volume>144</volume>
          (
          <issue>6</issue>
          ),
          <fpage>2125</fpage>
          -
          <lpage>2135</lpage>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Hong</surname>
          </string-name>
          , S. Y. et al.,:
          <article-title>The Korean Integrated Model (KIM) system for global weather forecasting</article-title>
          .
          <source>Asia-Pacific Journal of Atmospheric Sciences</source>
          ,
          <volume>54</volume>
          (
          <issue>1</issue>
          ),
          <fpage>267</fpage>
          -
          <lpage>292</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>E. J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Hong</surname>
          </string-name>
          , S. Y.:
          <article-title>Impact of air-sea interaction on East Asian summer monsoon climate in WRF</article-title>
          .
          <source>Journal of Geophysical Research: Atmospheres</source>
          ,
          <volume>115</volume>
          (
          <issue>D19</issue>
          ) (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Koo</surname>
            ,
            <given-names>M. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Choi</surname>
            ,
            <given-names>H. J.</given-names>
          </string-name>
          , and Han, J. Y.:
          <article-title>A parameterization of turbulent-scale and mesoscale orographic drag in a global atmospheric model</article-title>
          .
          <source>Journal of Geophysical Research: Atmospheres</source>
          ,
          <volume>123</volume>
          (
          <issue>16</issue>
          ),
          <fpage>8400</fpage>
          -
          <lpage>8417</lpage>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Krasnopolsky</surname>
            ,
            <given-names>V. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fox-Rabinovitz</surname>
            ,
            <given-names>M. S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Chalikov</surname>
            ,
            <given-names>D. V.</given-names>
          </string-name>
          :
          <article-title>New Approach to Calculation of Atmospheric Model Physics: Accurate and Fast Neural Network Emulation of Longwave Radiation in a Climate Model</article-title>
          ,
          <source>Monthly Weather Review</source>
          <volume>133</volume>
          (
          <issue>5</issue>
          ),
          <fpage>1370</fpage>
          -
          <lpage>1383</lpage>
          (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Krasnopolsky</surname>
            ,
            <given-names>V. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fox-Rabinovitz</surname>
            ,
            <given-names>M. S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Belochitski</surname>
            ,
            <given-names>A. A.</given-names>
          </string-name>
          :
          <article-title>Decadal Climate Simulations Using Accurate and Fast Neural Network Emulation of Full, Longwave and Shortwave, Radiation</article-title>
          ,
          <source>Monthly Weather Review</source>
          ,
          <volume>136</volume>
          (
          <issue>10</issue>
          ),
          <fpage>3683</fpage>
          -
          <lpage>3695</lpage>
          (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>E. H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Park</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kwon</surname>
            ,
            <given-names>Y. C.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Hong</surname>
          </string-name>
          , S. Y.:
          <article-title>Impact of turbulent mixing in the stratocumulus-topped boundary layer on numerical weather prediction</article-title>
          .
          <source>Asia-Pacific Journal of Atmospheric Sciences</source>
          ,
          <volume>54</volume>
          (
          <issue>1</issue>
          ),
          <fpage>371</fpage>
          -
          <lpage>384</lpage>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Morcrette</surname>
            ,
            <given-names>J. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mozdzynski</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Leutbecher</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>A reduced radiation grid for the ECMWF Integrated Forecasting System, Monthly weather review</article-title>
          ,
          <volume>136</volume>
          (
          <issue>12</issue>
          ),
          <fpage>4760</fpage>
          -
          <lpage>4772</lpage>
          (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Nadiga</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krasnopolsky</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bayler</surname>
            ,
            <given-names>E. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mehra</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>H. C.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Behringer</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Neural Network Technique For:(a) Gap-Filling Of Satellite Ocean Color Observations, And (b) Bridging Multiple Satellite Ocean Color Missions</article-title>
          .
          <source>AGU Fall Meeting Abstracts</source>
          ,
          <elocation-id>IN43C-1755</elocation-id>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Pal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mahajan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Norman</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          :
          <article-title>Using deep neural networks as cost-effective surrogate models for super-parameterized E3SM radiative transfer</article-title>
          .
          <source>Geophysical Research Letters</source>
          ,
          <volume>46</volume>
          (
          <issue>11</issue>
          ),
          <fpage>6069</fpage>
          -
          <lpage>6079</lpage>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Park</surname>
            ,
            <given-names>R. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chae</surname>
            ,
            <given-names>J. H.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Hong</surname>
            ,
            <given-names>S. Y.</given-names>
          </string-name>
          :
          <article-title>A revised prognostic cloud fraction scheme in a global forecasting system</article-title>
          .
          <source>Monthly Weather Review</source>
          ,
          <volume>144</volume>
          (
          <issue>3</issue>
          ),
          <fpage>1219</fpage>
          -
          <lpage>1229</lpage>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Paszke</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gross</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Massa</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lerer</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bradbury</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chanan</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Chintala</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>PyTorch: An imperative style, high-performance deep learning library</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          ,
          <volume>32</volume>
          ,
          <fpage>8026</fpage>
          -
          <lpage>8037</lpage>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Raissi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yazdani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Karniadakis</surname>
            ,
            <given-names>G. E.</given-names>
          </string-name>
          :
          <article-title>Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations</article-title>
          .
          <source>Science</source>
          ,
          <volume>367</volume>
          (
          <issue>6481</issue>
          ),
          <fpage>1026</fpage>
          -
          <lpage>1030</lpage>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Roh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Song</surname>
            ,
            <given-names>H. J.</given-names>
          </string-name>
          :
          <article-title>Evaluation of neural network emulations for radiation parameterization in cloud resolving model</article-title>
          .
          <source>Geophysical Research Letters</source>
          ,
          <volume>47</volume>
          (
          <issue>21</issue>
          ),
          <elocation-id>e2020GL089444</elocation-id>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Veerman</surname>
            ,
            <given-names>M. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pincus</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stoffer</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Leeuwen</surname>
            ,
            <given-names>C. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Podareanu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Van Heerwaarden</surname>
            ,
            <given-names>C. C.</given-names>
          </string-name>
          :
          <article-title>Predicting atmospheric optical properties for radiative transfer computations using neural networks</article-title>
          .
          <source>Philosophical Transactions of the Royal Society A</source>
          ,
          <volume>379</volume>
          (
          <issue>2194</issue>
          ),
          <elocation-id>20200095</elocation-id>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>