<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>ORCID:</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Games with Nature</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Pavlo Pyrohov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ievgen Meniailov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National Aerospace University “Kharkiv Aviation Institute”</institution>
          ,
          <addr-line>Chkalow str., 17, Kharkiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>This paper considers games with nature. The theory of games with nature and its basic concepts are described. Artificial intelligence methods are applied to develop methods of decision-making under uncertainty. The Wald criterion, the optimism criterion, the pessimism criterion, and the Savage criterion are described, and the developed methods are compared. The monotone behavior of the game function with respect to the payoff a corresponds to the first word in the name of each criterion. Keywords: games with nature, artificial intelligence, decision-making, conditions of uncertainty.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>[15], the level of inflation [16], tax policy [17], changing purchasing demand, etc. With the global
pandemic of COVID-19 [18], games with nature can be used for decision-making on prevention and
control measures to eliminate the epidemic dynamics [19]. In such cases, nature is not malicious and
acts passively, sometimes to the detriment of man and sometimes to his benefit, but its state and
manifestation can significantly affect the result of the activity [20].</p>
      <p>In such games, a person tries to act prudently, for example, by using a strategy that minimizes
loss. The second player (nature) acts unintentionally, completely at random, and its possible
states (nature's strategies) are known. Such situations are investigated using the theory of
statistical decisions [21]. There may, however, be situations in which nature really does act as a
player, for example, circumstances associated with weather conditions or with natural elemental
forces. Man's play with nature also reflects a conflict situation that arises when interests clash in
choosing a solution.</p>
      <p>2021 Copyright for this paper by its authors.</p>
      <p>But “the elemental forces of nature” cannot be attributed to reasonable actions
directed against a person, still less to any “malicious intent” [22]. Thus, it is more correct to
talk about a conflict situation caused by a clash of human interests and the uncertainty of nature's
actions, but without an obviously antagonistic character [23]. Situations in which the risk is associated
not with the conscious opposition of the other side (the environment) but with the decision-maker's
insufficient awareness of its behavior or state are investigated using the theory of statistical
decisions.</p>
      <p>The aim of this research is to investigate methods of decision-making under uncertainty in
games with nature.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Materials and methods</title>
      <sec id="sec-2-1">
        <title>Matrix of playing with nature:</title>
        <p>A = || aij ||,
where aij is the payoff of player 1 in the implementation of his pure strategy i and pure strategy j of
player 2 (nature) (i = 1, ..., m; j = 1, ..., n).</p>
        <p>All possible states P1, P2, ..., Pn of nature P are considered; nature assumes them randomly,
regardless of the actions of player A and without malicious opposition to his strategies. Nature can be
in only one of the noted states, but it is unknown in which one, although in some cases the
probabilities of these states may be known.</p>
        <p>The bottom row of the matrix shows the probabilities qj of the states of nature Pj, j = 1, ..., n.</p>
        <p>Imagine that player A, not knowing the state of nature, chose strategy Ai. If nature has assumed the
state Pj, then the payoff of player A will be aij. But if player A knew in advance that nature would take
the state Pj, then he would choose the strategy Ai0 that achieves the greatest payoff ai0j.</p>
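<p>As a concrete illustration of this setup, the sketch below builds a small payoff matrix and finds the best response to a known state of nature. The matrix values are hypothetical, chosen only for the example.</p>

```python
# Hypothetical 3x4 payoff matrix a[i][j]: pure strategies A1..A3 of player A
# against states P1..P4 of nature.  The numbers are illustrative only.
a = [
    [2, 5, 1, 3],   # strategy A1
    [4, 2, 2, 2],   # strategy A2
    [3, 3, 3, 1],   # strategy A3
]

def best_response(a, j):
    """If player A knew nature takes state Pj, he would pick the strategy i0
    with the greatest payoff in column j."""
    column = [row[j] for row in a]
    i0 = column.index(max(column))
    return i0, column[i0]

print(best_response(a, 1))  # state P2: strategy A1 (index 0) wins 5 -> (0, 5)
```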
      </sec>
      <sec id="sec-2-2">
        <title>Difference</title>
        <p>Possible strategies A1, A2, ..., Am of player A and his payoffs aij ≥ 0 for each of the strategies and each
of the states of nature Pj are also known. These payoffs can be shown in the form of a payoff matrix
(Table 1).</p>
        <p>The difference between the greatest payoff maxi aij of player A under the known state of nature Pj
and the payoff aij when player A does not know the state of nature is called the risk under the strategy
Ai and the state of nature Pj. Thus, the risk rij is that part of the greatest payoff in the state of nature Pj
which player A did not win by applying strategy Ai through ignorance of the state of nature.</p>
        <p>
          The last line shows the probabilities of the states of nature qj, j = 1, …, n. Since 0 ≤ aij ≤ maxi aij (the
right inequality follows from (
          <xref ref-type="bibr" rid="ref4">4</xref>
          )), then from (
          <xref ref-type="bibr" rid="ref5">5</xref>
          ) we obtain that 0 ≤ rij ≤ maxi aij.
        </p>
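<p>The risk definition above can be sketched directly in code: each risk is the shortfall of a payoff against the best payoff in its column. The payoff matrix is hypothetical, chosen only for illustration.</p>

```python
# Risk matrix r[i][j] = max_i a[i][j] - a[i][j] for a hypothetical payoff
# matrix (illustrative numbers only).
a = [
    [2, 5, 1, 3],
    [4, 2, 2, 2],
    [3, 3, 3, 1],
]

def risk_matrix(a):
    n = len(a[0])
    col_max = [max(row[j] for row in a) for j in range(n)]  # best payoff per state
    return [[col_max[j] - row[j] for j in range(n)] for row in a]

r = risk_matrix(a)
# Each risk satisfies 0 <= r[i][j] <= max_i a[i][j], as in the text.
print(r)  # [[2, 0, 2, 0], [0, 3, 1, 1], [1, 2, 0, 2]]
```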
        <p>The probability qj of the state of nature Pj is obviously also the probability of the payoff aij and of
the risk rij for each strategy Ai, i = 1, …, m. Therefore, each strategy Ai can be interpreted as a discrete
random variable taking values equal to the payoffs ai1, …, ain or the risks ri1, …, rin with the
corresponding probabilities q1, …, qn.</p>
        <p>Player A's task is to choose the optimal strategy from the possible strategies A1, ..., Am. The
optimality of a strategy is understood in various senses and is chosen according to various criteria.</p>
        <p>The result of the game generally depends on three numerical parameters: the payoff a of player A,
the risk r that arises when player A chooses a particular strategy, and the probability q of the states of
nature. The desire to “fold” these three parameters into one indicator leads to some numerical function
of them, which we denote G (a, r, q) and call the game function. The nature of the dependence of the
game function G on a, r and q is motivated by the logic of the applied criterion. The values of the
game function will be called the indicators of the game. These indicators form the matrix of the game
(Table 3).</p>
        <p>
          Table 3 (the matrix of the game) lists, for each state of nature Pj with its probability qj, the risks
r1j, ..., rmj and the game indicators G1j, ..., Gmj, where Gij = G (aij, rij, qj) (
          <xref ref-type="bibr" rid="ref5">5</xref>
          ).
        </p>
        <p>The vector-argument criterion assumes the assignment of some numerical function of the row of
game indicators (Gi1, ..., Gin), whose value Gi will be called the indicator of the strategy Ai.</p>
        <p>Then, among the indicators Gi of strategies Ai, an extreme one is selected. For some criteria, this is
the maximum value: Ext = max, and for others, the minimum: Ext = min. If Ext = max, then the
indicator Gi is called the indicator of the optimality of the strategy Ai; if Ext = min, then Gi is called
the non-optimality indicator of the strategy Ai.</p>
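<p>The scheme just described can be sketched generically: aggregate each strategy's row of game indicators into a single indicator Gi, then select the extreme one (Ext = max for optimality indicators, Ext = min for non-optimality indicators). The indicator matrix below is hypothetical.</p>

```python
def choose(G, row_aggregate, ext):
    """Aggregate each row of game indicators G[i][j] into a strategy
    indicator, then pick the strategy whose indicator is extreme (ext)."""
    indicators = [row_aggregate(row) for row in G]
    target = ext(indicators)
    return indicators.index(target), indicators

G = [[1, 4], [3, 2]]        # hypothetical indicator matrix, 2 strategies x 2 states
print(choose(G, min, max))  # maximin-style selection: (1, [1, 2])
```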
        <p>Applying the described scheme, we will form some classes of criteria.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <sec id="sec-3-1">
        <title>For maximin criteria (extreme pessimism).</title>
        <p>The indicators of strategies Ai are determined as Gi = minj Gij, and these (10) are the indicators
of the optimality of the strategies.</p>
        <p>Thus, Gi is the worst indicator of the game under the strategy Ai. Hence it follows that the function
of the game G (a, r, q) should be non-decreasing in the payoff a and non-increasing in the risk r.</p>
        <p>The game indicators are also influenced by the probabilities of the states of nature q. For
example, if the worst (smallest) payoff aij for strategy Ai has a sufficiently small probability qj, then it is
no longer advisable to consider it the smallest. For this payoff to remain practically the smallest,
it should have a sufficiently high probability. With risks the opposite is true: for the worst (greatest)
risk rij under strategy Ai to remain practically the greatest, its probability should also be large enough.
This suggests that the game function should not increase in the probability q.</p>
        <p>So, the logic of the maximin criterion determines the behavior of the game function depending on
the payoff a, risk r and probability q:</p>
        <p>For convenience, in what follows, for the maximin criterion, we denote the game function G by W,
the indicators of the game Gij by Wij, and the optimality indicators Gi of strategies Ai by Wi.</p>
        <p>Thus, for the maximin criterion, the game function</p>
        <p>G (a, r, q): nondecreasing in a, nonincreasing in r, nonincreasing in q;
W (a, r, q): nondecreasing in a, nonincreasing in r, nonincreasing in q. (13)</p>
      </sec>
      <sec id="sec-3-2">
        <title>Game performance is:</title>
        <p>Wij = W (aij, rij, qj), i = 1, ..., m; j = 1, ..., n.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Strategy optimality indicators are</title>
        <p>Wi = minj Wij, i = 1, ..., m.</p>
        <p>Optimal according to the maximin criterion is the strategy Ai0 for which Wi0 = maxi Wi.
The maximin criterion is a criterion of extreme pessimism of the person choosing a strategy,
since it orients him to the worst manifestation of the state of nature for him and, as a consequence, to
very cautious behavior in making a decision.</p>
        <p>The specific function of the game W (a, r, q) can be chosen in different ways, but with the
indispensable requirement of possessing properties (13).</p>
        <p>Examples of maximin criteria with specific functions of the game W (a, r, q) are the following
criteria:</p>
        <p>W(a, r, q) = a; (17)
W(a, r, q) = (1 - q)a; (18)
W(a, r, q) = a - r; (19)
W(a, r, q) = (1 - q)a - qr. (20)</p>
        <p>Each of these functions possesses properties (13), which can be checked by the sign of the partial
derivatives.</p>
        <p>In criterion (17), the indicators of the game are the payoffs: Wij = aij, and therefore it takes into
account neither the risks nor the probabilities of the states of nature. Criterion (17) is Wald's
criterion, which allows justifying the choice of a solution in conditions of complete uncertainty,
when the probabilities of the states of nature are unknown [24]. Criterion (18) takes into account
the payoffs and the probabilities of states of nature, but does not take into account the risks. Criterion (19)
takes into account the payoffs and risks without considering the probabilities of states of nature.
Criterion (20) takes into account the payoffs, risks, and probabilities of states of nature.</p>
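<p>The four maximin criteria above can be sketched together: for each game function W, compute the indicators Wij, take the worst indicator Wi = minj Wij per strategy, and pick the strategy with the greatest Wi. The payoff matrix and probabilities q are hypothetical, chosen only for illustration.</p>

```python
# Hedged sketch of the maximin scheme with the game functions (17)-(20).
a = [[2, 5, 1, 3], [4, 2, 2, 2], [3, 3, 3, 1]]   # hypothetical payoffs
q = [0.1, 0.4, 0.3, 0.2]                          # hypothetical probabilities
col_max = [max(row[j] for row in a) for j in range(len(a[0]))]
r = [[col_max[j] - row[j] for j in range(len(row))] for row in a]

game_functions = {
    "(17) Wald": lambda a_, r_, q_: a_,
    "(18)":      lambda a_, r_, q_: (1 - q_) * a_,
    "(19)":      lambda a_, r_, q_: a_ - r_,
    "(20)":      lambda a_, r_, q_: (1 - q_) * a_ - q_ * r_,
}

def maximin(W):
    """Indicators W_i = min_j W(a_ij, r_ij, q_j); the optimal i0 maximizes W_i."""
    Wi = [min(W(a[i][j], r[i][j], q[j]) for j in range(len(q)))
          for i in range(len(a))]
    return Wi.index(max(Wi)), Wi

for name, W in game_functions.items():
    print(name, maximin(W))
```

For the Wald criterion the row minima are [1, 2, 1], so strategy A2 is chosen; the other criteria may select differently because they weigh risks and probabilities.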
        <p>For the minimax criterion (extreme pessimism), we denote the game function by S (a, r, q). It
should be non-increasing in the payoff a and non-decreasing in the risk r and the probability q of the
states of nature:</p>
        <p>S (a, r, q): nonincreasing in a, nondecreasing in r, nondecreasing in q. (21)
Then Sij = S (aij, rij, qj) are the indicators of the game, and the strategy indicators are defined as
Si = maxj Sij.</p>
        <p>Thus, Si is minimal for the number i for which Wi is maximal, and the equivalence (19) ⇔ (26) is
proved. The equivalence (20) ⇔ (27) is proved similarly.</p>
        <p>In the case of maximax criteria (extreme optimism), the game function, which we denote by M (a, r,
q), should not decrease with respect to the payoff a and the probability q of states of nature and
should not increase with respect to the risk r.</p>
        <p>By virtue of (23), the indicators Si are indicators of the non-optimality of the strategies Ai. The
game function S (a, r, q) should have properties (21) in view of (22) and (23).</p>
        <p>Let us present some minimax criteria with specific functions of the game S (a, r, q) satisfying
conditions (21):</p>
        <p>Criterion (24), in which the indicators of the game are risks, takes into account neither the
payoffs nor the probabilities of the states of nature. This is the Savage criterion.</p>
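<p>The Savage criterion can be sketched in a few lines: the indicators of the game are the risks, each strategy indicator is its worst-case risk Si = maxj rij, and the strategy with the smallest Si is chosen. The payoff matrix is hypothetical.</p>

```python
# Sketch of the Savage criterion (24): S_ij = r_ij, S_i = max_j S_ij,
# and the optimal strategy minimizes S_i.  Illustrative numbers only.
a = [[2, 5, 1, 3], [4, 2, 2, 2], [3, 3, 3, 1]]
col_max = [max(row[j] for row in a) for j in range(len(a[0]))]
r = [[col_max[j] - row[j] for j in range(len(row))] for row in a]

S = [max(row) for row in r]   # worst-case risk of each strategy
i0 = S.index(min(S))          # Savage-optimal strategy
print(S, i0)  # [2, 3, 2] 0
```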
        <p>Comparing the maximin and minimax criteria, we can say the following.</p>
        <p>Statement 1. The maximin criteria (19) and (20) are equivalent to the minimax criteria (26) and
(27), respectively.</p>
        <p>The first of these equivalences means that strategy Ai is optimal according to criterion (19) if and
only if it is optimal according to criterion (26). A similar explanation applies to the second equivalence.</p>
        <p>Proof. Let us first prove the equivalence (19) ⇔ (26). Since the game functions W and S of
criteria (19) and (26), respectively, satisfy the equality S = -W, the game indicators satisfy
the analogous equality Sij = -Wij. Then Si = maxj Sij = maxj (-Wij) = -minj Wij = -Wi.</p>
        <p>S(a, r, q) = r; (24)
S(a, r, q) = qr; (25)
S(a, r, q) = r - a; (26)
S(a, r, q) = qr - (1 - q)a. (27)</p>
        <p>M (a, r, q): nondecreasing in a, nonincreasing in r, nondecreasing in q. (30)</p>
        <p>Indicators of the game are Mij = M (aij, rij, qj), and the optimality indicators of the strategies are
Mi = maxj Mij.</p>
      </sec>
      <sec id="sec-3-4">
        <title>An optimal strategy is a strategy Ai0 for which Mi0 = maxi Mi</title>
        <p>In criterion (33), the indicators of the game are winnings Mij = aij.</p>
        <p>For the minimin criteria (extreme optimism), we denote the game function by E (a, r, q); it is
chosen nonincreasing in the payoff a and in the probability q of the states of nature and
nondecreasing in the risk r:</p>
        <p>E (a, r, q): nonincreasing in a, nondecreasing in r, nonincreasing in q. (37)</p>
      </sec>
      <sec id="sec-3-5">
        <title>As indicators of non-optimality of strategies Ai, we take Ei = minj Eij</title>
        <p>M(a, r, q) = a; (33)
M(a, r, q) = qa; (34)
M(a, r, q) = a - r; (35)
M(a, r, q) = qa - (1 - q)r. (36)</p>
        <p>E(a, r, q) = r; (40)
E(a, r, q) = (1 - q)r; (41)
E(a, r, q) = r - a; (42)
E(a, r, q) = (1 - q)r - qa. (43)</p>
        <p>The maximax criteria are criteria of extreme optimism, since they assume that nature will be in the
most favorable state for player A; therefore, the strategy whose optimality indicator (the maximum
indicator of the game) is the greatest among the maximum indicators of all strategies is chosen as
optimal.</p>
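<p>With the game function (33), M = a, the maximax rule reduces to a few lines: take the best payoff of each strategy and pick the strategy whose best payoff is greatest. The payoff matrix is hypothetical.</p>

```python
# Sketch of the maximax criterion with game function (33) M = a:
# M_i = max_j a_ij, and the optimal strategy maximizes M_i (extreme optimism).
a = [[2, 5, 1, 3], [4, 2, 2, 2], [3, 3, 3, 1]]   # illustrative numbers only

M = [max(row) for row in a]   # best-case payoff of each strategy
i0 = M.index(max(M))          # maximax-optimal strategy
print(M, i0)  # [5, 4, 3] 0
```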
        <p>As maximax criteria with specific functions of the game M (a, r, q) possessing properties (30), we
can take, for example, criteria (33)-(36) listed above.</p>
        <p>In the minimin criteria, Eij = E (aij, rij, qj) are the indicators of the game.</p>
        <p>The strategy Ai0 that minimizes the non-optimality indicator Ei is taken as optimal.</p>
        <p>Minimin criteria are also criteria of extreme optimism, since an optimal strategy is understood as
a strategy whose non-optimality indicator is the minimum among the non-optimality indicators of all
strategies.</p>
        <p>Examples of minimin criteria with game functions E (a, r, q) possessing properties (37) are
criteria (40)-(43) listed above. The indicators of the game in criterion (40) are the risks, and thus it
becomes a minimin criterion for risks.</p>
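<p>The minimin risk criterion (40) can be sketched analogously to the Savage criterion, except that each strategy is judged by its best-case risk Ei = minj rij. The payoff matrix is hypothetical, chosen so that the strategies are distinguishable.</p>

```python
# Sketch of the minimin risk criterion (40), E = r: non-optimality
# indicators E_i = min_j r_ij; the smallest E_i marks the optimal strategy.
a = [[4, 5, 1, 3], [3, 2, 2, 2], [3, 3, 3, 1]]   # illustrative numbers only
col_max = [max(row[j] for row in a) for j in range(len(a[0]))]
r = [[col_max[j] - row[j] for j in range(len(row))] for row in a]

E = [min(row) for row in r]   # best-case risk of each strategy
i0 = E.index(min(E))          # minimin-optimal strategy
print(E, i0)  # [0, 1, 0] 0
```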
        <p>Statement 2. The maximax criteria (35) and (36) are equivalent to the minimin criteria (42) and
(43), respectively.</p>
        <p>The proof is similar to that of Statement 1: for criteria (35) and (42) we have E = -M and,
therefore, Eij = -Mij, whence Ei = minj Eij = -maxj Mij = -Mi, so Ei is minimal exactly for the i for
which Mi is maximal.</p>
        <p>For better visibility of requirements (13), (21), (30), and (37) on the non-increase or non-decrease
of the game functions depending on the payoffs a, risks r, and probabilities q of the states of nature,
let us summarize them in Table 4.</p>
        <p>It can be seen from this table that the arrows denoting the behavior of the game functions
depending on the payoffs a correspond to the first part of the name of the criterion: maximin and
maximax are nondecreasing, while minimax and minimin are nonincreasing. The arrows in the second
line, indicating the behavior of the game functions depending on the risks r, are opposite to the arrows
in the first line.</p>
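<p>To compare the described methods end to end, the sketch below applies the Wald, Savage, maximax, and minimin risk criteria to one hypothetical game, illustrating that the criteria can disagree on the optimal strategy.</p>

```python
# Hedged comparison of four criteria on one hypothetical game.
a = [[2, 5, 1, 3], [4, 2, 2, 2], [3, 3, 3, 1]]   # illustrative payoffs
col_max = [max(row[j] for row in a) for j in range(len(a[0]))]
r = [[col_max[j] - row[j] for j in range(len(row))] for row in a]

choices = {
    "Wald (17)":    max(range(len(a)), key=lambda i: min(a[i])),  # best worst payoff
    "Savage (24)":  min(range(len(a)), key=lambda i: max(r[i])),  # least worst risk
    "maximax (33)": max(range(len(a)), key=lambda i: max(a[i])),  # best best payoff
    "minimin (40)": min(range(len(a)), key=lambda i: min(r[i])),  # least best risk
}
print(choices)
```

Here the pessimistic Wald criterion selects strategy A2, while the other three select A1, which matches the intuition that optimistic and risk-based criteria reward the strategy with the highest attainable payoff.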
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>In this paper, the theory of games and its types were described, and the concept of games with
nature was presented. Methods of decision-making in conditions of uncertainty were described,
namely the Wald criterion, the optimism criterion, the pessimism criterion, and the Savage criterion.
For each criterion, its distinctive feature was described.</p>
    </sec>
    <sec id="sec-5">
      <title>5. References</title>
      <p>
[7] N. Dotsenko, et al.: Project-oriented management of adaptive teams' formation resources in
multi-project environment. CEUR Workshop Proceedings 2353 (2019) 911-923.
[8] N. Dotsenko, et al.: Modeling of the processes of stakeholder involvement in command
management in a multi-project environment. 2018 IEEE 13th International Scientific and
Technical Conference on Computer Sciences and Information Technologies, CSIT 2018 –
Proceedings. 1 (2018) 29-32. doi: 10.1109/STC-CSIT.2018.8526613
[9] S. M. Lucas: Game AI Research with Fast Planet Wars Variants. 2018 IEEE Conference on</p>
      <p>
        Computational Intelligence and Games (CIG) (2018) 1-4, doi: 10.1109/CIG.2018.8490377.
[10] F. Yu, F. Chengcheng, S. Yuqiang: Data Analysis between Numerical Simulation and High
Frequency Ground Wave Radar during a Gale Weather Process. 2019 International Conference
on Meteorology Observations (ICMO) (2019) 1-4. doi: 10.1109/ICMO49322.2019.9025979.
[11] F. Xue, G. Yan, X. Zhou, S. Xu: Evolutionary Game Analysis of Green Building Promotion
Mechanism Based on SD. 2019 International Conference on Economic Management and Model
Engineering (ICEMME) (2019) 356-359. doi: 10.1109/ICEMME49371.2019.00077.
[12] M. Mazorchuck, et al.: Web-application development for tasks of prediction in medical domain.
2018 IEEE 13th International Scientific and Technical Conference on Computer Sciences and
Information Technologies, CSIT 2018 – Proceedings. 1 (2018) 5-8. doi:
10.1109/STCCSIT.2018.8526684
[13] D. Chumachenko, O. Sokolov, S. Yakovlev: Fuzzy recurrent mappings in multiagent simulation
of population dynamics systems. International Journal of Computing 19 (
        <xref ref-type="bibr" rid="ref2">2</xref>
        ) (2020) 290-297.
[14] L. Huang, H. Chen: Simulation and Analysis of Centralized Bidding Market Clearing Method
Based on Intelligent Algorithm. 2019 IEEE Innovative Smart Grid Technologies - Asia (ISGT
Asia) (2019) 2963-2967. doi: 10.1109/ISGT-Asia.2019.8881105.
[15] F. Wang, X. Feng, Lu Tang: Microeconomic Modeling and Simulation of Exchange Rate with
Heterogeneous Strategies. 2007 International Conference on Machine Learning and Cybernetics.
(2007) 2351-2356. doi: 10.1109/ICMLC.2007.4370538.
[16] K. Bazilevych, et al.: Stochastic modelling of cash flow for personal insurance fund using the
cloud data storage. International Journal of Computing 17 (
        <xref ref-type="bibr" rid="ref3">3</xref>
        ) (2018) 153-162.
[17] B. Pittl, W. Mach, E. Schikuta: CloudTax: A CloudSim-Extension for Simulating Tax Systems
on Cloud Markets. 2016 IEEE International Conference on Cloud Computing Technology and
Science (CloudCom) (2016) 35-42. doi: 10.1109/CloudCom.2016.0021.
[18] D. Chumachenko, et al.: On-Line Data Processing, Simulation and Forecasting of the
Coronavirus Disease (COVID-19) Propagation in Ukraine Based on Machine Learning
Approach, Communications in Computer and Information Science 1158 (2020) 372-382. doi:
10.1007/978-3-030-61656-4_25
[19] S. Yakovlev, et al.: The concept of developing a decision support system for the epidemic
morbidity control, CEUR Workshop Proceedings 2753 (2020) 265–274.
[20] J. Tomalá-Gonzáles, et al.: Serious Games: Review of methodologies and Games engines for
their development. 2020 15th Iberian Conference on Information Systems and Technologies
(CISTI) (2020) 1-6. doi: 10.23919/CISTI49556.2020.9140827.
[21] A. A. Rafik: Decision making theory with imprecise probabilities. 2009 Fifth International
Conference on Soft Computing, Computing with Words and Perceptions in System Analysis,
Decision and Control (2009) 1-1. doi: 10.1109/ICSCCW.2009.5379425.
[22] A. Alothman, A. Alqahtani: Analyzing Competitive Firms In An Oligopoly Market Structure
Using Game Theory. 2020 Industrial &amp; Systems Engineering Conference (ISEC). (2020) 1-5.
doi: 10.1109/ISEC49495.2020.9230335.
[23] Y. Pan: Optimization of Investment, Consumption and Proportional Reinsurance with Model
Uncertainty. 2020 Chinese Control And Decision Conference (CCDC) (2020) 826-831. doi:
10.1109/CCDC49329.2020.9164859.
[24] J. Liu, M. Gao, J. Zheng, J. Wang: Model-Based Wald Test for Adaptive Range-Spread Target
Detection. IEEE Access, vol. 8, pp. 73259-73267, 2020, doi: 10.1109/ACCESS.2020.2988066.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Jin-yu</surname>
          </string-name>
          ,
          <string-name>
            <surname>F.</surname>
          </string-name>
          <article-title>Zhi-geng: Maximum entropy grey game model between human and nature</article-title>
          .
          <source>Proceedings of 2013 IEEE International Conference on Grey systems and Intelligent Services (GSIS)</source>
          (
          <year>2013</year>
          )
          <fpage>436</fpage>
          -
          <lpage>439</lpage>
          . doi:
          <volume>10</volume>
          .1109/GSIS.
          <year>2013</year>
          .
          <volume>6714821</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          <article-title>Gopal: Markov game based control: Worst case design strategies for games against nature</article-title>
          .
          <source>2010 IEEE International Conference on Intelligent Computing and Intelligent Systems</source>
          (
          <year>2010</year>
          )
          <fpage>339</fpage>
          -
          <lpage>343</lpage>
          . doi:
          <volume>10</volume>
          .1109/ICICISYS.
          <year>2010</year>
          .
          <volume>5658687</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Laskowski</surname>
          </string-name>
          <article-title>: Criteria of Choosing Strategy in Games Against Nature</article-title>
          .
          <article-title>EUROCON 2007 - The International Conference on "Computer as a Tool" (</article-title>
          <year>2007</year>
          )
          <fpage>2323</fpage>
          -
          <lpage>2328</lpage>
          , doi: 10.1109/EURCON.
          <year>2007</year>
          .
          <volume>4400384</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Grappiolo</surname>
          </string-name>
          , et. al.:
          <article-title>Towards Player Adaptivity in a Serious Game for Conflict Resolution</article-title>
          .
          <source>2011 Third International Conference on Games and Virtual Worlds for Serious Applications</source>
          . (
          <year>2011</year>
          )
          <fpage>192</fpage>
          -
          <lpage>198</lpage>
          . doi:
          <volume>10</volume>
          .1109/VS-GAMES.
          <year>2011</year>
          .
          <volume>39</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Chumachenko</surname>
          </string-name>
          , I. Meniailov,
          <string-name>
            <given-names>K.</given-names>
            <surname>Bazilevych</surname>
          </string-name>
          ,
          <source>T. Chumachenko: On Intelligent Decision Making in Multiagent Systems in Conditions of Uncertainty. 2019 11th International Scientific and Practical Conference on Electronics and Information Technologies, ELIT 2019 - Proceedings</source>
          . (
          <year>2019</year>
          )
          <fpage>150</fpage>
          -
          <lpage>153</lpage>
          . doi:
          <volume>10</volume>
          .1109/ELIT.
          <year>2019</year>
          .8892307
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>P. Piletskiy P.</surname>
          </string-name>
          , et. al.:
          <article-title>Development and Analysis of Intelligent Recommendation System Using Machine Learning Approach</article-title>
          .
          <source>Advances in Intelligent Systems and Computing</source>
          <volume>1113</volume>
          (
          <year>2020</year>
          )
          <fpage>186</fpage>
          -
          <lpage>197</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>030</fpage>
          -37618-5_
          <fpage>17</fpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>