=Paper= {{Paper |id=Vol-2964/article_74 |storemode=property |title=Convolutional LSTM for Planetary Boundary Layer Height (PBLH) Prediction |pdfUrl=https://ceur-ws.org/Vol-2964/article_74.pdf |volume=Vol-2964 |authors=Dorsa Ziaei,Jennifer Sleeman,Milton Halem,Vanessa Caicedo,Ruben Delgado,Belay Demoz |dblpUrl=https://dblp.org/rec/conf/aaaiss/ZiaeiSHCDD21 }} ==Convolutional LSTM for Planetary Boundary Layer Height (PBLH) Prediction== https://ceur-ws.org/Vol-2964/article_74.pdf
    Convolutional LSTM for Planetary Boundary Layer Height (PBLH) Prediction
Dorsa Ziaei,1 Jennifer Sleeman,1 Milton Halem,1 Vanessa Caicedo,2,3 Ruben Delgado,2,3 Belay Demoz2,3

1 University of Maryland, Baltimore County, Dept. of Computer Science & Electrical Engineering, Baltimore, MD 21250 USA
2 University of Maryland, Baltimore County, Department of Physics, Baltimore, MD 21250 USA
3 Joint Center for Earth Systems Technology, Baltimore, MD 21250 USA

dorsaz1@umbc.edu, jsleem1@umbc.edu, halem@umbc.edu, vacaiced@umbc.edu, delgado@umbc.edu, bdemoz@umbc.edu

Abstract

We describe new work that uses deep learning to learn temporal changes in Planetary Boundary Layer Height (PBLH). This work is performed in conjunction with a deep edge detection method that identifies edges in imagery based on ceilometer backscatter signal from LIDAR observations. We implement a convolutional Long Short Term Memory (LSTM) network to predict small temporal changes in PBLH estimates. In the presence of rain, clouds, and other unfavorable conditions, PBLH is challenging to estimate. The convolutional LSTM acts as an internal state representation of the external, partially observable environment, supplementing the deep edge detection method and providing a prediction of PBLH in the absence of a reliable estimation. Convolutional LSTMs trained on image-based frames that define the movements of artifacts in the images, such as Moving MNIST digits, have been used to predict the movement of these artifacts over a set of frames in a sequence. We show how a similar network can be extended to learn more complex movement across frames and to learn new information introduced at each frame. Utilizing the convolutional LSTM model with our proposed augmentation methodology applied to ten-minute frames, we predicted the movement of edges identified as the PBL over time with favorable accuracy. We show the results of the prediction of PBL-based edges and evaluate the performance using three different metrics.

Copyright (c) 2021 for the individual papers by the papers' authors. Copyright (c) 2021 for the volume as a collection by its editors. This volume and its papers are published under the Creative Commons License Attribution 4.0 International (CC BY 4.0).

Introduction

The Planetary Boundary Layer (PBL) is the area just above the earth's surface and is the bottom turbulent layer of the troposphere (Stull 1988). The height of the PBL, or PBLH, is identified as the top of the turbulent layer, and is used for air quality forecasting and for air pollution studies. The PBL contains most of the sources of pollution (Stull 1988). PBLH can be calculated using Weather Research and Forecasting models, radiosondes, and ground-based ceilometer observing systems that use LIDAR technology (Danchovski et al. 2019). There are a number of complexities that hinder accurate estimation of PBLH, such as clouds and transitions from day to night. There has been an effort to improve PBLH estimations by using LIDAR backscatter profiles (Talianu et al. 2006; Compton et al. 2013; Sawyer and Li 2013; Caicedo et al. 2017; Delgado et al. 2018). In previous work by Sleeman et al. (Sleeman et al. 2020), a machine learning derived PBLH (ML-PBLH) was described, based on a novel deep boundary layer edge detection method.

Figure 1: Lufft-CHM15K - UMBC - (left) 24 Hour LIDAR Backscatter Profiles and (right) Backscatter Image Boundary Detection (ML-PBLH) - 12/1/2016.

In Figure 1, we show an example of the backscatter profile and the edges detected for December 1, 2016, using backscatter from a Lufft-CHM15K ceilometer located at UMBC in Baltimore, MD. In Figure 2, we show the PBL heights estimated by our ML-PBLH method, denoted by the magenta points. As can be seen in Figure 2, from 0:00 to 9:00 UTC the edge detection method detects erroneous points due to the presence of unfavorable conditions.

Figure 2: Lufft-CHM15K - UMBC - 24 Hour LIDAR Backscatter Profiles and PBLH Points Generated from our Backscatter Image Boundary Detection (ML-PBLH) - 12/1/2016.

We address this problem by extending that work and utilizing a convolutional Long Short Term Memory (LSTM) network to predict small temporal changes in PBLH estimates. Convolutional LSTMs have previously been applied to datasets, such as Moving MNIST, to identify how MNIST digits move from frame to frame. These datasets use sets of frames with the same MNIST digits moving around the space across the frames.

We formulate PBLH estimation prediction as a spatio-temporal image sequence forecasting problem. In sequence forecasting, previously observed data points are used to predict a fixed length of future data points. We create a dataset of edges based on ceilometer backscatter profiles from December 1st 2016 to December 16th 2016.

The PBL data introduces two new complexities for convolutional LSTMs: 1) the frames have more information present than the datasets used in previous research, and 2) new information is introduced at each frame. Using the existing convolutional LSTM methods from previous research, when applied to the PBL data, the network was unable to learn to predict the small temporal changes. Our proposed augmentation methodology overcomes these challenges and enables the network to learn changes between frames.
Background

Developing an effective prediction model for PBLH estimates is challenging due to the atmospheric nature and spatio-temporal characteristics of the PBL. Previous studies on time series atmospheric dataset prediction have been based on conventional and mathematical approaches (Sun et al. 2014; Cheung and Yeung 2012; Reyniers 2008). The application of machine learning is a new perspective in this domain (Shi et al. 2015; Agrawal et al. 2019). A machine learning based model can be trained to predict sequences of data points in near real-time upon receiving new data, which may address the problem of continuous spatio-temporal data analysis better than traditional numerical methods. Recent advances in deep learning for sequential image prediction, such as recurrent neural network (RNN) and long short-term memory (LSTM) models (Cho et al. 2014; Donahue et al. 2015; Sutskever, Vinyals, and Le 2014; Karpathy and Fei-Fei 2017; Srivastava, Mansimov, and Salakhutdinov 2015; Xu et al. 2015), help to tackle the challenge of developing an effective prediction model for spatio-temporal datasets.

Related Work

Shi et al. (Shi et al. 2015) proposed the convolutional LSTM model for the precipitation nowcasting problem. In their work, the authors first showed the performance of the model on prediction of frames of the Moving MNIST dataset, which has been widely used for evaluating video prediction and image-sequence models (Srivastava, Mansimov, and Salakhutdinov 2015). They then applied their model to a radar echo dataset with 8148 training sequences and showed that the end-to-end convolutional LSTM model captured the motion of the clouds in the images. The radar echo dataset includes radar maps with minimal changes between frames, in terms of the shape of the clouds and spatial information.

Agrawal et al. (Agrawal et al. 2019) treated precipitation forecasting as an image-to-image translation problem. In their paper they utilized a U-Net convolutional neural network on a dataset from the multi-radar multi-sensor (MRMS) system developed by the NOAA National Severe Storms Laboratory (Zhang et al. 2016).

Yao et al. (Yao and Li 2017) adopted a convolutional neural network architecture to predict short-term precipitation on the CIKM AnalytiCup 2017 challenge dataset, which provided contestants with radar maps spanning 1.5 hours (ShenzhenMeteorological and AlibabaGroup 2017).

This study differs from previous efforts in that we apply this method to predict small changes in PBLH over time using edge-detected imagery. We describe our methodology to address the added complexities of our dataset. To the best of our knowledge this is the first time convolutional LSTMs have been used to try to predict changes in the PBLH.

Model Architecture

We utilized the convolutional LSTM architecture proposed by Shi et al. (Shi et al. 2015). The convolutional LSTM model consists of two networks of stacked LSTM layers: an encoding network and a forecasting network. The use of convolutional layers helps to represent the features of the image sequences. The encoding network compresses the input image sequence into a hidden state tensor, and the forecasting LSTM decompresses the hidden state to output the final prediction. The architecture of the model is shown in Figure 3.

Figure 3: ConvLSTM Architecture (Shi et al. 2015)

The power of this convolutional LSTM model comes from using convolutional LSTM layers and designing the input, hidden, and output vectors as 3D tensors. Convolutional layers are strong representation tools, which in combination with LSTM layers capture the spatio-temporal properties of the images. Encoding and forecasting with 3D tensors, where the last two dimensions correspond to rows and columns, helps to preserve the spatial information. Another key feature of the design is keeping the dimensions of all of the states the same by using zero padding. The prediction state has the same dimensions as the input state, so all of the states can be concatenated in the forecasting network and fed into a 1x1 convolutional layer to generate the final prediction.

The dataset pre-processing pipeline and the model have been implemented in Python using Keras, with libraries such as OpenCV, Pillow and Matplotlib for visualization of the results.
Figure 4: A sequence of 5 PBLH layer images with a 10-minute time interval from the raw dataset




                         Figure 5: A sequence of 5 PBLH layer images from the synthesized dataset


Methodology

We propose a methodology to solve the challenges of processing LIDAR-based backscatter profiles when unfavorable conditions are present. The PBLH edge detection dataset (Sleeman et al. 2020) is used for generating sequences of images (frames) of changing estimated PBLH edges at a 10-minute time interval, and morphological augmentation methods are applied so that a given next set of frames in the sequence can be predicted.

In multiple trials of training the model using the PBLH edge detection dataset, we observed that with frequent changes in the shape of the line over time and missing data points due to weather conditions, the model struggled to learn these frame-by-frame changes.

To help smooth the changes between frames of the sequences, we synthesized the images in the dataset using augmentation, which led to homogeneous sequences so that the model could capture the changes in the features and position of the line. We generated spatio-temporal sequences of estimated PBLH layer images, where each sequence shows the change of shape and location of the estimated PBLH edges across frames of images.

In this way, we mapped the complex estimated PBLH edge dataset to a smoother spatio-temporal dataset, which enabled the convolutional LSTM model to capture the changes between frames in a sequence. With the inclusion of our methodology the network is able to predict the estimated PBLH edges.

Dataset

To study the behavior of the convolutional LSTM model, we conducted an experiment to train the convolutional LSTM model with a dataset of PBL edge detection images for forecasting the next frames in a sequence. The images in this dataset are captured at a 10-minute time interval. A sequence of PBLH edge detection images from the dataset is shown in Figure 4. These images were generated using the method described in work by Sleeman et al. (Sleeman et al. 2020).

In comparison to the Moving MNIST dataset, the images in the PBLH edge detection dataset change frequently in terms of line shape and spatial information. The frames in the Moving MNIST dataset contain two repeating patterns (two digits), which move slightly within each frame. The estimated PBL present in the imagery changes shape in the pattern, thickness, and continuity of the line, and the location of the line changes in each frame, which is the biggest challenge for training an image sequence prediction model.

We structured the PBLH edge detection image dataset as sequences of five frames. In order to address the challenge of the high frequency of changes in the images from frame to frame, we reduced the variance by applying augmentation to the images. We augmented each image in the dataset with morphological transformations such as rotation and shift. The variance between images (frames) in the raw dataset was calculated as 361.575, and after augmentation the variance between images decreased to 275.700, which indicates that the change between images (frames) has been reduced.
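As a concrete illustration of this augmentation step, the sketch below applies a small rotation and shift to a single frame with OpenCV and computes a simple between-frame variance for a set of sequences. The rotation angle, shift offsets, and the particular variance formula are illustrative assumptions; the paper reports only that rotation and shift transformations were used, together with the resulting variance values.

# Sketch of the rotation/shift augmentation and a between-frame variance check.
# The rotation angle, shift offsets, and variance formula are assumptions.
import cv2
import numpy as np

def augment_frame(img, angle=2.0, dx=3, dy=0):
    """Apply a small rotation and translation to one edge-detection frame."""
    h, w = img.shape[:2]
    # Rotation about the image center.
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    out = cv2.warpAffine(img, rot, (w, h), borderValue=0)
    # Horizontal/vertical shift.
    shift = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(out, shift, (w, h), borderValue=0)

def mean_frame_variance(sequences):
    """Average per-pixel variance across the frames of each sequence,
    used here as a rough proxy for how much consecutive frames differ."""
    per_seq = [np.var(np.stack(seq, axis=0), axis=0).mean() for seq in sequences]
    return float(np.mean(per_seq))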
The raw images in the dataset are 885 x 656 pixels; we resized the images to several resolutions (i.e., 32x32, 64x64 and 128x128 pixels). We describe results for the 128x128 pixel images because, with higher resolution images, pixelation-based issues are less prominent (there is no need to apply interpolation). This implies there is some sensitivity to the number of pixels; however, more experimentation would be required to understand this sensitivity further.

We generated a dataset with approximately 10k sequences and used approximately 5000 sequences for training the model and for the held-out test dataset used for prediction. We trained the convolutional LSTM model with sequences of 128 x 128 pixel images. A sequence from the synthesized dataset is shown in Figure 5, which shows slight changes of shape and spatial information between frames. The third frame from the sequence in Figure 4 has been selected, augmented, and visualized in Figure 5 to show how the dataset has been simplified and synthesized.
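As a rough illustration of how such a dataset could be assembled, the following sketch resizes the edge-detection images to 128x128 pixels and slices them into five-frame sequences, split into the three input frames and two target frames used later at test time. The directory layout (edges/*.png), grayscale normalization, and sliding-window stride are illustrative assumptions, not the authors' exact pipeline.

# Sketch of sequence assembly: resize frames, group them into five-frame
# sequences, and split each sequence into 3 input and 2 target frames.
import glob
import cv2
import numpy as np

FRAME_SIZE = 128
SEQ_LEN = 5          # frames per sequence
N_INPUT = 3          # frames given to the model; the rest are the target

def load_frames(pattern="edges/*.png"):
    """Load time-ordered edge images, resized and scaled to [0, 1]."""
    frames = []
    for path in sorted(glob.glob(pattern)):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (FRAME_SIZE, FRAME_SIZE))
        frames.append(img.astype("float32") / 255.0)
    return frames

def make_sequences(frames):
    """Slide a window of SEQ_LEN frames and split into (input, target) pairs."""
    x, y = [], []
    for i in range(len(frames) - SEQ_LEN + 1):
        seq = np.stack(frames[i:i + SEQ_LEN], axis=0)[..., np.newaxis]
        x.append(seq[:N_INPUT])
        y.append(seq[N_INPUT:])
    return np.array(x), np.array(y)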

Experimental Results

As an experimental study, we trained the model for 15 epochs with the "logcosh" loss function and the ADAM optimizer, and used the trained model as a prediction tool on the test dataset. Figure 6 shows the result of prediction on two single test sequences. In the test phase, three frames from the sequence were considered as the input to the model and prediction was performed on the next two frames. By comparing the predicted frames with the ground truth, we observe that the trained model captured the transformation of the frames as well as slight changes in the shape of the estimated PBLH edge. The model captured the spatial change in the frames and predicted the next two frames in the sequence. The prediction model was successful in predicting the next frame (fourth frame); however, the fifth frame's prediction could be improved. In general, as the number of predicted frames increases, accuracy decreases. Our current focus is on improving the network for better prediction of multiple frames.
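Reusing the hypothetical build_convlstm_model, x_train, y_train, and x_test names from the sketches above, training with the reported settings might look as follows; the log-cosh loss, Adam optimizer, and 15 epochs come from the text, while the batch size and validation split are our assumptions.

# Training sketch: log-cosh loss, Adam optimizer, and 15 epochs as reported.
# batch_size and validation_split are illustrative assumptions.
model = build_convlstm_model()
model.compile(optimizer="adam", loss="logcosh")

history = model.fit(
    x_train, y_train,        # inputs (n, 3, 128, 128, 1), targets (n, 2, 128, 128, 1)
    batch_size=8,
    epochs=15,
    validation_split=0.1,
)

predicted = model.predict(x_test)   # predicted fourth and fifth frames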
Figure 6: Predicted frames and ground truth of two test PBLH edge detection sequences (a and b) with 128x128 pixel images.

For quantitative analysis, we applied the prediction model to a held-out test dataset with 3000 sequences and evaluated metrics such as accuracy; the Structural Similarity Index Metric (SSIM) (Larkin 2015), which measures the similarity between two images; probability of detection (POD) (Wehling et al. 2011), which quantifies the probability of detecting a specific flaw; and false alarm rate (FAR) (Barnes et al. 2009), which is the number of false positives that are expected to occur in a given image. Table 1 shows the results of evaluating the model using the above metrics.

Table 1: Evaluation of results on a held-out test dataset

Image size / Metric     Accuracy    SSIM     POD      FAR
128x128 images          97.67       83.88    98.10    3.89
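As an indication of how these metrics can be computed for a predicted frame, the sketch below uses scikit-image for SSIM and derives accuracy, POD, and FAR from per-pixel confusion counts of binarized edge maps. The 0.5 threshold and the false-alarm-ratio form FP / (TP + FP) are our assumptions; the paper does not state the exact per-pixel definitions it used.

# Sketch of the evaluation metrics for one predicted/ground-truth frame pair,
# with pixel values scaled to [0, 1]. Threshold and FAR form are assumptions.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def frame_metrics(pred, truth, thr=0.5):
    s = ssim(truth, pred, data_range=1.0)   # structural similarity of raw frames

    p = pred >= thr            # predicted edge pixels
    t = truth >= thr           # ground-truth edge pixels
    tp = np.sum(p & t)         # hits
    fp = np.sum(p & ~t)        # false alarms
    fn = np.sum(~p & t)        # misses
    tn = np.sum(~p & ~t)       # correct rejections

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    pod = tp / (tp + fn)                     # probability of detection
    far = fp / (tp + fp)                     # false alarm ratio (cf. Barnes et al. 2009)
    return s, accuracy, pod, far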
                                                                  estimates with more content information change.
The SSIM value in Table 1 shows that the predicted images have 83.88 percent similarity with the ground truth images, which reflects the quality of the predicted images. POD shows the detection accuracy of the test, which is 98.10 percent for our test prediction, and FAR, which counts the false positives that occurred, is 3.89. Overall, the metrics used to measure the performance of the predicted images (frames) are favorable.

Accuracy alone is not an indicative metric for evaluating the performance of a machine learning model, and additional metrics should be considered as well (Gaur 2020). We can perceptually confirm this point by comparing the relatively high accuracy result with the visualized predictions, considering the imperfect prediction for the last frame (fifth frame) in the sequence. For future work, we will consider using a larger dataset, adjusting the length of the sequences (increasing the number of frames in a sequence), and tuning model parameters to train a more generalized model for the task of PBLH prediction.

Conclusions and Future Work

In the presence of unfavorable conditions, PBLH is challenging to estimate. We described a convolutional LSTM that can supplement existing edge detection methods in a partially observable environment. The LSTM provides a prediction of the estimated PBLH in the absence of a reliable estimation. In this work, we described a way to apply a convolutional LSTM to edge-detected PBLH backscatter output and showed how our augmentation methodology can extend existing methods for predicting small changes in the estimated PBL across frames. We showed how we overcame training deficiencies when the images carry a significant amount of information and when new information is present in each frame. We described how we developed an image sequence dataset. The PBLH edge detection images exhibit a large amount of information change due to the turbulent nature of the PBL, and predicting the next set of frames in such a dataset is still very challenging. Our future work includes extending the model architecture, the augmentation and image transformation, and adjusting the length and size of the input sequences, to be able to predict small temporal changes in PBLH estimates with more content information change.

Acknowledgments

This work has been funded by the following grants: NASA grant NNH16ZDA001-AIST16-0091 and NSF CARTA grant 17747724.
References

Agrawal, S.; Barrington, L.; Bromberg, C.; Burge, J.; Gazen, C.; and Hickey, J. 2019. Machine learning for precipitation nowcasting from radar images. arXiv preprint arXiv:1912.12132.

Barnes, L. R.; Schultz, D. M.; Gruntfest, E. C.; Hayden, M. H.; and Benight, C. C. 2009. Corrigendum: False alarm rate or false alarm ratio? Weather and Forecasting 24(5): 1452–1454.

Caicedo, V.; Rappenglück, B.; Lefer, B.; Morris, G.; Toledo, D.; and Delgado, R. 2017. Comparison of aerosol lidar retrieval methods for boundary layer height detection using ceilometer aerosol backscatter data. Atmospheric Measurement Techniques 10(4).

Cheung, P.; and Yeung, H. 2012. Application of optical-flow technique to significant convection nowcast for terminal areas in Hong Kong. In The 3rd WMO International Symposium on Nowcasting and Very Short-Range Forecasting (WSN12), 6–10.

Cho, K.; Merrienboer, B. V.; Gülçehre, Ç.; Bahdanau, D.; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. arXiv abs/1406.1078.

Compton, J. C.; Delgado, R.; Berkoff, T. A.; and Hoff, R. M. 2013. Determination of planetary boundary layer height on short spatial and temporal scales: A demonstration of the covariance wavelet transform in ground-based wind profiler and lidar measurements. Journal of Atmospheric and Oceanic Technology 30(7): 1566–1575.

Danchovski, V.; Dimitrova, R.; Vladimirov, E.; Egova, E.; and Ivanov, D. 2019. Comparison of urban mixing layer height from ceilometer, radiosonde and WRF model. In AIP Conference Proceedings, volume 2075, 120005. AIP Publishing.

Delgado, R.; Caicedo, V.; Demoz, B.; Szykman, J.; Sakai, R.; Hicks, M.; Posey, J.; Atkinson, D.; and Kironji, I. 2018. Ad-Hoc Ceilometer Evaluation Study (ACES): Lidar/Ceilometer Mixing Layer Heights and Network. In AGU Fall Meeting Abstracts.

Donahue, J.; Hendricks, L. A.; Rohrbach, M.; Venugopalan, S.; Guadarrama, S.; Saenko, K.; and Darrell, T. 2015. Long-term recurrent convolutional networks for visual recognition and description. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2625–2634.

Gaur, Y. 2020. Precipitation Nowcasting using Deep Learning Techniques.

Karpathy, A.; and Fei-Fei, L. 2017. Deep Visual-Semantic Alignments for Generating Image Descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence 39: 664–676.

Larkin, K. G. 2015. Structural Similarity Index SSIMplified: Is there really a simpler concept at the heart of image quality measurement? arXiv preprint arXiv:1503.06680.

Reyniers, M. 2008. Quantitative precipitation forecasts based on radar observations: Principles, algorithms and operational systems. Institut Royal Météorologique de Belgique, Brussels, Belgium.

Sawyer, V.; and Li, Z. 2013. Detection, variations and intercomparison of the planetary boundary layer depth from radiosonde, lidar and infrared spectrometer. Atmospheric Environment 79: 518–528.

ShenzhenMeteorological; and AlibabaGroup. 2017. CIKM AnalytiCup. URL http://www.cikmconference.org/CIKM2017/CIKM AnalytiCup task1.html.

Shi, X.; Chen, Z.; Wang, H.; Yeung, D.-Y.; Wong, W.-K.; and Woo, W.-c. 2015. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Advances in Neural Information Processing Systems 28: 802–810.

Sleeman, J.; Halem, M.; Caicedo, V.; Demoz, B.; Delgado, R. M.; et al. 2020. A Deep Machine Learning Approach for LIDAR Based Boundary Layer Height Detection. In IEEE International Geoscience and Remote Sensing Symposium.

Srivastava, N.; Mansimov, E.; and Salakhutdinov, R. 2015. Unsupervised Learning of Video Representations using LSTMs. In International Conference on Machine Learning (ICML), 843–852.

Stull, R. B. 1988. Mean boundary layer characteristics. In An Introduction to Boundary Layer Meteorology, 1–27. Springer.

Sun, J.; Xue, M.; Wilson, J. W.; Zawadzki, I.; Ballard, S. P.; Onvlee-Hooimeyer, J.; Joe, P.; Barker, D. M.; Li, P.-W.; Golding, B.; et al. 2014. Use of NWP for nowcasting convective precipitation: Recent progress and challenges. Bulletin of the American Meteorological Society 95(3): 409–426.

Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to Sequence Learning with Neural Networks. In NIPS.

Talianu, C.; Nicolae, D.; Ciuciu, J.; Ciobanu, M.; and Babin, V. 2006. Planetary boundary layer height detection from LIDAR measurements. Journal of Optoelectronics and Advanced Materials 8(1): 243.

Wehling, P.; LaBudde, R. A.; Brunelle, S. L.; and Nelson, M. T. 2011. Probability of detection (POD) as a statistical model for the validation of qualitative methods. Journal of AOAC International 94(1): 335–347.

Xu, K.; Ba, J.; Kiros, R.; Cho, K.; Courville, A. C.; Salakhutdinov, R.; Zemel, R.; and Bengio, Y. 2015. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In ICML.

Yao, Y.; and Li, Z. 2017. CIKM AnalytiCup 2017: Short-Term Precipitation Forecasting Based on Radar Reflectivity Images. In Proceedings of the Conference on Information and Knowledge Management, Short-Term Quantitative Precipitation Forecasting Challenge, Singapore, 6–10.

Zhang, J.; Howard, K.; Langston, C.; Kaney, B.; Qi, Y.; Tang, L.; Grams, H. M.; Wang, Y.; Cocks, S. B.; Martinaitis, S. M.; Arthur, A.; Cooper, K.; Brogden, J. W.; and Kitzmiller, D. 2016. Multi-Radar Multi-Sensor (MRMS) Quantitative Precipitation Estimation: Initial Operating Capabilities. Bulletin of the American Meteorological Society 97: 621–638.