<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Workshop on Complex Data Challenges in Earth
Observation, November</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Machine Learning Model Development for Space Weather Forecasting in the Ionosphere</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Randa Natras</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael Schmidt</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Deutsches Geodätisches Forschungsinstitut der Technischen Universität München (DGFI-TUM)</institution>
          ,
<addr-line>Arcisstrasse 21, Munich, 80333</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>1</volume>
      <issue>2021</issue>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
<p>In this paper, the workflow of machine learning model development for space weather forecasting in the Earth's ionosphere is presented as part of an ongoing project. The limitations of traditional approaches to space weather forecasting are discussed, as well as the advantages of using machine learning instead. In addition, the methods and approaches for building a machine learning model are presented, together with challenges related to data and algorithms. The machine learning workflow for space weather forecasting is discussed from problem formulation and data acquisition, through data preparation and feature engineering, learning algorithms and model training, to evaluation and deployment. This paper provides an overview of a machine learning project for space weather forecasting and discusses challenges and open issues.</p>
      </abstract>
      <kwd-group>
<kwd>Machine Learning</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>Model Development</kwd>
        <kwd>Ionosphere</kwd>
        <kwd>Space Weather Forecast</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Space weather describes conditions caused by the Sun in near-Earth space, i.e. the magnetosphere, ionosphere and thermosphere, that can influence the performance and reliability of space-borne and ground-based technological systems. It can produce major disturbances of Earth's magnetosphere known as geomagnetic storms. Numerous effects of strong space weather on satellites, power grids, aviation, communication and navigation systems have already been observed and documented, with considerable economic losses [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3, 4</xref>
        ]. As society increasingly relies on the services that these infrastructures provide, there is an urgent need to develop advanced forecasting capabilities in order to mitigate catastrophic failures of space- and ground-based technological systems associated with this type of hazard [5]. The impact of space weather on the Earth's ionosphere and on GNSS-based (Global Navigation Satellite System) applications can be modelled by quantifying the Vertical Total Electron Content (VTEC) within the ionosphere. The ionosphere is the ionized region of the upper atmosphere (from about 50 km up to 1,000 km or more above the Earth's surface) that contains free electrons and ions produced by solar radiation [6]. Free electrons in the ionosphere affect the propagation of a microwave signal and induce a delay or advance of the signal. VTEC is proportional to the relative ionospheric delay of GNSS signals, measured in TEC units (TECU), where 1 TECU = 10<sup>16</sup> electrons/m<sup>2</sup> [7]. During severe space weather conditions, the variability in the ionosphere can increase drastically in time and space. These sudden, intense variabilities are often difficult to model with traditional mathematical approaches and to properly minimize in positioning solutions, leading to degradation of positioning and navigation performance [8, 9, 10]. Besides space weather disturbances, the Earth's ionosphere exhibits considerable geographical variations, which can be divided into high-latitude (beyond ±60° geomagnetic latitude), low-latitude (within ±20° of each side of the geomagnetic equator) and mid-latitude zones (between the boundaries of the other two zones) [11]. Other important variations in the Earth's ionosphere depend on local time (daily variations), longitude, season and the 11-year sunspot cycle. In order to model and predict space weather, a complex chain of physical processes between the Sun, the interplanetary space, the Earth's magnetic field and the ionosphere has to be taken into account. However, we have limited understanding of these coupled processes and often do not know the physical and/or mathematical relationships to describe them properly. On the other hand, there are numerous data from satellites and observatories that monitor space weather processes between the Sun and the Earth. Artificial Intelligence and Machine Learning (ML) offer a new possibility of learning from data, in contrast to traditional programming, where programs with detailed rules need to be written that explicitly instruct a computer how to execute steps (Figure 1). In the case of space weather forecasting, the traditional programming approach would be to analyze space weather properties, as well as typical and space weather-induced variations in the ionosphere, to detect patterns, then write a forecasting algorithm consisting of a list of rules for each of the noticed patterns, test the program, and iterate these steps until the model performance is satisfactory. However, the problem of space weather is highly complex, and our understanding of the underlying processes is still too limited to properly describe the physical and/or mathematical relations using traditional methods. The ML approach, on the other hand, offers the possibility of automatically learning rules from the data that map inputs to outputs. Learning directly from the data can lead to discovering hidden relationships within the data and to deepening our physical understanding [12]. This can be achieved, for instance, by estimating the importance of the input variables to the output(s) in an ML model, or by finding structures and relationships within the data through unsupervised learning. Furthermore, ML can be used to estimate the nonlinear functions that describe the underlying space weather processes [12] based on the data provided. Recently, there has been increasing interest in ML applications for forecasting space weather in the ionosphere using Deep Learning (DL) methods such as the Feed-forward Neural Network [13], the Autoregressive Neural Network [14] and Long Short-Term Memory (LSTM) [15]. Some studies used other ML algorithms such as Extreme Gradient Boosting (XGBoost) [16] and Gaussian Process Regression [17]. The DL models proved to be more accurate than traditional modelling approaches such as AutoRegressive Integrated Moving Average (ARIMA) and Empirical Orthogonal Functions (EOF) [15, 13].
      </p>
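<p>The proportionality between VTEC and the ionospheric delay noted above can be made concrete with the standard first-order approximation (assumed here, not stated in this paper): the group delay in meters is roughly 40.3 · TEC / f², with TEC in electrons/m² and the signal frequency f in Hz. A minimal sketch:</p>

```python
# First-order ionospheric group delay from VTEC.
# Standard approximation (an assumption of this sketch, not from the paper):
# delay_m = 40.3 * TEC / f**2, with TEC in electrons/m^2 and f in Hz.
def iono_delay_m(vtec_tecu: float, freq_hz: float) -> float:
    tec_el_per_m2 = vtec_tecu * 1.0e16   # 1 TECU = 10^16 electrons/m^2
    return 40.3 * tec_el_per_m2 / freq_hz**2

GPS_L1_HZ = 1575.42e6                    # GPS L1 carrier frequency
# 10 TECU maps to roughly 1.6 m of delay on GPS L1
delay = iono_delay_m(10.0, GPS_L1_HZ)
```

<p>This is why unmodelled storm-time VTEC variability translates directly into positioning error.</p>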
    </sec>
    <sec id="sec-2">
      <title>2. Machine learning model development</title>
      <p>This section presents the learning algorithms, data challenges, planned methodologies and approaches to be used in this study. The ML model development workflow for space weather forecasting can be summarized into four main phases (Figure 2): (1) problem formulation and data acquisition; (2) data exploration and feature engineering; (3) model training and cross-validation; (4) final model evaluation (test phase) and deployment of the model.</p>
      <sec id="sec-2-1">
        <title>2.1. Problem formulation and data acquisition</title>
        <p>Tom Mitchell [18] proposed a definition of a well-posed ML problem: "A computer program is said to learn from experience E with respect to task T and a performance measure P, if its performance on T, as measured by P, improves with experience E". In the context of this study, the ML problem is defined with the task of VTEC forecasting, where the experience is provided in the form of training data, and the performance measures are chosen as the root mean square error (RMSE) and the correlation coefficient. The task of predicting the space weather manifestation in the ionosphere is defined via forecasting VTEC in the ionosphere. The problem is formulated in such a way that it can be solved with supervised learning. Supervised learning can be seen as a function approximation or predictive learning problem [19]. Using a training sample of input (predictors, features or independent variables) and output (response or dependent variable) vectors, the goal is to obtain an approximation of the function that optimally describes the relationship between input and output. This is achieved by minimizing a certain loss function over the joint distribution of all values. For this study, data on solar activity, solar wind and the geomagnetic field are collected from NASA/GSFC through OMNIWeb (https://omniweb.gsfc.nasa.gov/form/dx1.html) [20]. The VTEC values are extracted for high-, mid- and low-latitude points from the GIM (Global Ionosphere Map) provided by CODE (https://cddis.nasa.gov/archive/gnss/products/ionex) [21].</p>
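<p>The two performance measures named above can be sketched as follows; the VTEC numbers here are illustrative placeholders, not the study's data:</p>

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between observed and forecast values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def corr_coef(y_true, y_pred):
    """Pearson correlation coefficient between observed and forecast values."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])

vtec_obs      = [12.1, 15.4, 18.0, 22.5, 19.3]  # TECU, illustrative only
vtec_forecast = [11.8, 16.0, 17.2, 21.9, 20.1]
score_rmse = rmse(vtec_obs, vtec_forecast)
score_corr = corr_coef(vtec_obs, vtec_forecast)
```

<p>A low RMSE together with a high correlation coefficient indicates that the forecast tracks both the magnitude and the temporal variation of the observed VTEC.</p>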
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Data exploration and feature engineering</title>
        <p>The goal of this step is to identify the relevant predictors and to prepare a dataset (preprocessed and cleaned up) that will be useful for the learning task. It is crucial that the training set represents the ultimate task of the model, contains multiple cases and accurately represents the operational data that may be encountered in practice. Feature engineering [22] refers to the process of transforming raw data into suitable features that can better represent the underlying problem, here space weather forecasting. It includes various steps such as feature selection, feature extraction, feature scaling, feature transformation etc. However, it can be a challenging task to select a representative dataset and features that describe all the cases that can occur in practice. This is an iterative process, where an initial dataset is used in the first attempt and, according to the performance of the model, the data are iteratively improved (Figure 2). Events of the highest interest are geomagnetic storms. The occurrence of space weather events during solar cycle 24 (from 2009 to 2019) was analyzed using the geomagnetic activity index Kp (Figure 3). As can be seen, the state of the geomagnetic field is quiet to moderate most of the time. Most of the storm events occurred in the years after the maximum of the solar cycle (in April 2014). Thus, the initial periods for training and cross-validation are selected to be 2015 and 2016, while the test year is 2017. The selection of useful features for the model is currently in progress. Exploratory data analysis is applied to understand the data by inspecting its distribution, statistical properties, relationships, correlations etc. Figure 4 shows the distribution of VTEC from the training data (2015–2016) for three ionospheric points corresponding to the high-, mid- and low-latitude ionosphere (Gaussian kernel density plots, where the black curve corresponds to the normal distribution; top left: VTEC 10°E 70°N, top right: VTEC 10°E 40°N, bottom: VTEC 10°E 10°N). The distribution peak is around 10 TECU for all three studied regions, but the distribution extends further into the higher values than into the lower values. Based on the results of the analysis, an important aspect to consider is how to deal with an imbalanced dataset, where cases of space weather appear rarely compared to the quiet period. However, these cases are important, as they can produce irregular variations in the ionosphere. An imbalanced dataset can present a difficulty for a learning algorithm, which can lead to a model biased towards the majority of cases [23]. Possible solutions for dealing with imbalanced cases may include analyses of the individual properties of rare examples in order to distinguish between minority samples and noisy samples, selection of appropriate learning algorithms and optimal features to enhance learning of rare VTEC signatures, training on the entire and under-sampled datasets, as well as development of cost-sensitive solutions that are able to adapt the penalty with respect to the degree of importance assigned to the minority case.</p>
        <p>Another issue with the data is the different temporal and spatial resolution. The temporal resolution of the data covers daily samples (for the solar activity indices R and F10.7), 3-hour data for the geomagnetic index Kp, and hourly to 1-minute samples for other data describing space weather and climate. Data describing solar and geomagnetic activity are given as a function of time, while ionospheric VTEC data are temporally and spatially dependent. VTEC data from the CODE GIM are provided with a temporal resolution of 2 hours until the year 2015 and 1 hour onwards, while the spatial sampling is 2.5° x 5° in latitude and longitude. It is important to take into account that the GNSS stations used to estimate the GIM are unevenly distributed globally, with a current lack of GNSS ground receivers particularly over the oceans and in the southern hemisphere, among other regions, where most of the provided VTEC information is based on interpolation, which may result in lower accuracy in these regions.</p>
      </sec>
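<p>One of the cost-sensitive options mentioned for imbalanced data can be sketched by weighting rare storm-time samples by inverse class frequency, so that a learner does not simply favor the quiet majority. The Kp threshold of 5 and the series below are illustrative assumptions, not choices made in this paper:</p>

```python
import numpy as np

def inverse_frequency_weights(kp, storm_threshold=5.0):
    """Per-sample weights so each class (storm / quiet) gets total weight n/2."""
    kp = np.asarray(kp, dtype=float)
    is_storm = kp >= storm_threshold      # illustrative storm criterion
    n = len(kp)
    n_storm = int(is_storm.sum())
    n_quiet = n - n_storm
    w_storm = n / (2.0 * max(n_storm, 1))
    w_quiet = n / (2.0 * max(n_quiet, 1))
    return np.where(is_storm, w_storm, w_quiet)

kp_series = [1, 2, 3, 2, 6, 7, 1, 2]            # illustrative Kp values
weights = inverse_frequency_weights(kp_series)  # rare storm samples get w = 2.0
```

<p>Such weights can typically be passed to a learning algorithm through a sample-weight argument, e.g. in tree-based regressors.</p>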
      <sec id="sec-2-3">
        <title>2.3. Model training</title>
        <p>The next step in the ML model development for VTEC forecasting is to decide which algorithms, hyperparameters (parameters of the learning algorithm) and model architecture should be used. This is done by training an initial model and performing model diagnostics through error analysis. This approach provides insights into the models' performance and gives guidance on how to improve the model. Time series cross-validation [17] is used to evaluate model performance while preserving the temporal structure of the time series, and to diagnose the bias/variance problem. Overfitting (high variance) leads to very low training errors but high validation and test errors, while underfitting (high bias) leads to high errors on all the datasets. The aim of this step is to identify the right complexity for a model in order to avoid both underfitting and overfitting. The complexity of the model is changed by altering various hyperparameters of the learning algorithm, increasing/decreasing regularization, getting more training data, adding or removing features, etc. [18]. Which step should be taken depends on what we want to fix: bias (increase complexity) or variance (decrease complexity) of the model. One way to diagnose this problem is to plot the learning curves for the training and cross-validation datasets. Other important issues to address are interpretability and explainability. ML models based on ensemble learning, such as Random Forest [24] and Boosting [25], provide a possibility to inspect which predictors have been used most often by a learning algorithm. This information can be useful in interpreting the model and understanding the problem of space weather forecasting. In addition, ensemble learning is recognized as a method that can provide a significant improvement in robustness to skewed distributions and good predictive power [23]. On the other hand, Artificial Neural Networks (ANN), a core of DL and state-of-the-art techniques for many applications nowadays, are often difficult to explain. They require many parameters, which consequently need careful design in order not to overfit the data [26]. Modelling of spatial-temporal dependencies is another important task in space weather forecasting. DL provides the opportunity to automatically extract features in the spatial domain (e.g. Convolutional Neural Networks) and in the temporal domain (e.g. Recurrent Neural Networks); therefore, some researchers propose their combination to learn spatial-temporal features [27, 28, 29]. Other approaches, such as decomposing time series into components that capture trend and seasonality [30] and detrending the time series [31], may be a suitable adaptation for other ML/DL algorithms. In Figure 5, time information consisting of the hour of day and day of year is used as input to be able to model daily and seasonal VTEC variations. The next important issue to focus on is the estimation of the uncertainty associated with predictions. Uncertainty can be quantified, for instance, by learning the probability distribution over the weights in the ANN [32].</p>
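<p>The time series cross-validation mentioned above can be sketched with expanding-window splits that always train on the past and validate on the immediately following block, preserving temporal order (split sizes here are illustrative):</p>

```python
def time_series_splits(n_samples, n_splits):
    """Expanding-window splits: train on [0, k*fold), validate on the next fold."""
    fold = n_samples // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train_idx = list(range(0, k * fold))
        val_idx = list(range(k * fold, (k + 1) * fold))
        yield train_idx, val_idx

# e.g. 24 time steps, 3 folds: validation data never precedes its training data
for train_idx, val_idx in time_series_splits(24, 3):
    assert max(train_idx) < min(val_idx)
```

<p>scikit-learn's <monospace>TimeSeriesSplit</monospace> offers comparable behavior out of the box; plotting training and validation errors across such folds is one way to read off the bias/variance trade-off discussed above.</p>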
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Conclusion</title>
      <p>The generalization error is estimated using a test dataset to show how accurately the model can predict outcome values for previously unseen data. Figure 5 shows the experimental results for the 24-hour forecast with the decision tree and ensemble learning for the quiet, moderate and storm periods of the year 2017, based on data of solar activity, solar wind, the geomagnetic field, time information (hour of day and day of year) and ionospheric VTEC [33]. The RMSE for the storm period is up to two times higher than for the quiet period, while the correlation coefficient decreases as the Kp index increases. Ensemble learning methods achieve better accuracy than a single decision tree, with the combination of multiple ensembles giving the best results.</p>
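<p>The per-period evaluation described above can be sketched by binning the test samples by Kp before computing the error; the thresholds and numbers below are illustrative assumptions, not the paper's results:</p>

```python
import numpy as np

def rmse_by_activity(kp, y_true, y_pred):
    """RMSE of a forecast, computed separately for quiet/moderate/storm bins."""
    kp = np.asarray(kp, float)
    err2 = (np.asarray(y_true, float) - np.asarray(y_pred, float)) ** 2
    bins = {
        "quiet": kp < 3.0,                     # illustrative Kp thresholds
        "moderate": (kp >= 3.0) & (kp < 5.0),
        "storm": kp >= 5.0,
    }
    return {name: float(np.sqrt(err2[mask].mean()))
            for name, mask in bins.items() if mask.any()}
```

<p>Reporting the metrics per activity level, rather than over the whole test year, exposes exactly the storm-time degradation discussed above.</p>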
      <p>Further exploration of the data and algorithms will be carried out to improve learning during storms and to address the questions raised in Sections 2.2 and 2.3. The ultimate goal is to build a model that generalizes well and fits the data reasonably well. In the model deployment phase (application of the model in practice), important components will be monitoring and maintaining the model by tracking various metrics.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>This work is funded by Research Grants - Doctoral Programmes in Germany from the German Academic Exchange Service (Deutscher Akademischer Austauschdienst, DAAD).</p>
      <sec id="sec-4-1">
        <title>References</title>
        <p>[4] J. P. Eastwood, E. Biffis, M. A. Hapgood, L. Green, M. M. Bisi, R. D. Bentley, R. Wicks, L.-A. McKinnell, M. Gibbs, C. Burnett, The economic impact of space weather: Where do we stand?, Risk Analysis 37 (2017) 206-218. doi:10.1111/risa.12765.</p>
        <p>[5] D. J. Knipp, J. L. Gannon, The 2019 national space weather strategy and action plan and beyond, Space Weather 17 (2019) 794-795. doi:10.1029/2019SW002254.</p>
        <p>[6] A. D. Richmond, Ionosphere, Springer Netherlands, Dordrecht, 2007, pp. 452-454. doi:10.1007/978-1-4020-4423-6_159.</p>
        <p>[7] B. Hofmann-Wellenhof, H. Lichtenegger, J. Collins, Global Positioning System: Theory and Practice, Springer, Berlin, 2001. doi:10.1007/978-3-7091-6199-9.</p>
        <p>[8] M. Poniatowski, G. Nykiel, Degradation of kinematic PPP of GNSS stations in central Europe caused by medium-scale traveling ionospheric disturbances during the St. Patrick's Day 2015 geomagnetic storm, Remote Sensing 12 (2020). URL: https://www.mdpi.com/2072-4292/12/21/3582. doi:10.3390/rs12213582.</p>
        <p>[9] I. Zakharenkova, I. Cherniak, Effects of storm-induced equatorial plasma bubbles on GPS-based kinematic positioning at equatorial and middle latitudes during the September 7-8, 2017, geomagnetic storm, GPS Solutions 25 (2021). doi:10.1007/s10291-021-01166-3.</p>
        <p>[10] X. Luo, S. Gu, Y. Lou, C. Xiong, B. Chen, X. Jin, Assessing the performance of GPS precise point positioning under different geomagnetic storm conditions during solar cycle 24, Sensors 18 (2018). URL: https://www.mdpi.com/1424-8220/18/6/1784. doi:10.3390/s18061784.</p>
        <p>[11] L. R. Cander, Ionospheric variability, in: Ionospheric space weather, Springer, 2019, pp. 59-93.</p>
        <p>[12] E. Camporeale, S. Wing, J. Johnson, Machine learning techniques for space weather, Elsevier, Amsterdam, Netherlands, 2018.</p>
        <p>[13] J. C. Uwamahoro, J. B. Habarulema, Modelling total electron content during geomagnetic storm conditions using empirical orthogonal functions and neural networks, Journal of Geophysical Research: Space Physics 120 (2015) 11,000-11,012. doi:10.1002/2015JA021961.</p>
        <p>[14] C. Cesaroni, L. Spogli, A. Aragon-Angel, M. Fiocca, V. Dear, G. De Franceschi, V. Romano, Neural network based model for global total electron content forecasting, J. Space Weather Space Clim. 10 (2020) 11. doi:10.1051/swsc/2020013.</p>
        <p>[15] R. Tang, F. Zeng, Z. Chen, J.-S. Wang, C.-M. Huang, Z. Wu, The comparison of predicting storm-time ionospheric TEC by three methods: ARIMA, LSTM, and Seq2Seq, Atmosphere 11 (2020). doi:10.3390/atmos11040316.</p>
        <p>[16] A. V. Zhukov, Y. V. Yasyukevich, A. E. Bykov, Correction to: Gimli: Global ionospheric total electron content model based on machine learning, GPS Solutions 25 (2021) 21. doi:10.1007/s10291-020-01063-1.</p>
        <p>[17] L. Mallika I, D. V. Ratnam, S. Raman, G. Sivavaraprasad, Machine learning algorithm to forecast ionospheric time delays using global navigation satellite system observations, Acta Astronautica 173 (2020) 221-231. doi:10.1016/j.actaastro.2020.04.048.</p>
        <p>[18] T. Mitchell, Machine learning (1997).</p>
        <p>[19] T. Hastie, R. Tibshirani, J. Friedman, The elements of statistical learning: data mining, inference and prediction, 2 ed., Springer, New York, NY, 2009. doi:10.1007/978-0-387-84858-7_3.</p>
        <p>[20] J. H. King, N. E. Papitashvili, Solar wind spatial scales in and comparisons of hourly Wind and ACE plasma and magnetic field data, Journal of Geophysical Research: Space Physics 110 (2005). doi:10.1029/2004JA010649.</p>
        <p>[21] R. Dach, S. Schaer, D. Arnold, E. Orliac, L. Prange, A. Susnik, A. Villiger, A. Jaeggi, CODE final product series for the IGS (2016).</p>
        <p>[22] A. Zheng, A. Casari, Feature engineering for machine learning: principles and techniques for data scientists, O'Reilly Media, Inc., 2018.</p>
        <p>[23] B. Krawczyk, Learning from imbalanced data: open challenges and future directions, Progress in Artificial Intelligence 5 (2016) 221-232. doi:10.1007/s13748-016-0094-0.</p>
        <p>[24] L. Breiman, Random forests, Machine Learning 45 (2001) 5-32. doi:10.1023/A:1010933404324.</p>
        <p>[25] J. H. Friedman, Greedy function approximation: A gradient boosting machine, The Annals of Statistics 29 (2001) 1189-1232. doi:10.1214/aos/1013203451.</p>
        <p>[26] C. C. Aggarwal, Teaching deep learners to generalize, in: Neural Networks and Deep Learning, Springer, 2018, pp. 169-216. doi:10.1007/978-3-319-94463-0_4.</p>
        <p>[27] M. Kaselimi, A. Voulodimos, N. Doulamis, A. Doulamis, D. Delikaraoglou, Deep recurrent neural networks for ionospheric variations estimation using GNSS measurements, IEEE Transactions on Geoscience and Remote Sensing (2021) 1-15. doi:10.1109/TGRS.2021.3090856.</p>
        <p>[28] H. Wilms, M. Cupelli, A. Monti, T. Gross, Exploiting spatio-temporal dependencies for RNN-based wind power forecasts, in: 2019 IEEE PES GTD Grand International Conference and Exposition Asia (GTD Asia), 2019, pp. 921-926. doi:10.1109/GTDAsia.2019.8715887.</p>
        <p>[29] L. Mou, L. Bruzzone, X. X. Zhu, Learning spectral-spatial-temporal features via a recurrent convolutional neural network for change detection in multispectral imagery, IEEE Transactions on Geoscience and Remote Sensing 57 (2019) 924-935. doi:10.1109/TGRS.2018.2863224.</p>
        <p>[30] R. Asadi, A. C. Regan, A spatio-temporal decomposition based deep neural network for time series forecasting, Applied Soft Computing 87 (2020) 105963. doi:10.1016/j.asoc.2019.105963.</p>
        <p>[31] F. Montesino Pouzols, A. Lendasse, Effect of different detrending approaches on computational intelligence models of time series, in: The 2010 International Joint Conference on Neural Networks (IJCNN), 2010, pp. 1-8. doi:10.1109/IJCNN.2010.5596314.</p>
        <p>[32] M. Abdar, F. Pourpanah, S. Hussain, D. Rezazadegan, L. Liu, M. Ghavamzadeh, P. Fieguth, X. Cao, A. Khosravi, U. R. Acharya, V. Makarenkov, S. Nahavandi, A review of uncertainty quantification in deep learning: Techniques, applications and challenges, Information Fusion 76 (2021) 243-297. doi:10.1016/j.inffus.2021.05.008.</p>
        <p>[33] R. Natras, M. Schmidt, Ionospheric VTEC forecasting using machine learning, in: EGU General Assembly Conference Abstracts, 2021, pp. EGU21-8907. doi:10.5194/egusphere-egu21-8907.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Krausmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Andersson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gibbs</surname>
          </string-name>
          , W. Murtagh,
          <article-title>Space weather and critical infrastructures: Findings and outlook</article-title>
          ,
          <source>EUR 28237 EN</source>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L. J.</given-names>
            <surname>Lanzerotti</surname>
          </string-name>
          ,
          <article-title>Space weather: Historical and contemporary perspectives</article-title>
          ,
          <source>Space Science Reviews</source>
          <volume>212</volume>
          (
          <year>2017</year>
          )
          <fpage>1253</fpage>
          -
          <lpage>1270</lpage>
          . doi:10.1007/s11214-017-0408-y.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D. N.</given-names>
            <surname>Baker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Daly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Daglis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. G.</given-names>
            <surname>Kappenman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Panasyuk</surname>
          </string-name>
          ,
          <article-title>Effects of space weather on technology infrastructure</article-title>
          ,
          <source>Space Weather</source>
          <volume>2</volume>
          (
          <year>2004</year>
          ). doi:10.1029/2003SW000044.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>