<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The Search for Cognitive Models: Standards and Challenges</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marco Ragni</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nicolas Riesterer</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Cognitive Computation Lab, Albert-Ludwigs-Universität Freiburg</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Cognitive modeling is the distinguishing factor of cognitive science and the method of choice for formalizing human cognition. In order to bridge the gap between logic and human reasoning, a number of foundational research questions need to be rigorously answered. The objective of this paper is to present relevant concepts and to introduce possible modeling standards as well as key discussion points for cognitive models of human reasoning.</p>
      </abstract>
      <kwd-group>
        <kwd>Cognitive Modeling</kwd>
        <kwd>Human Reasoning</kwd>
        <kwd>Logic</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>All sciences are defined by their respective research objectives and methods.
Cognitive science in particular is special in this regard, because it is an
interdisciplinary field located at the boundaries of many other domains of research
such as artificial intelligence, psychology, linguistics, computer science, and
neuroscience. As a result, its goals and methods are diverse mixtures with influences
from the neighboring fields.</p>
      <p>The core research question of cognitive science focuses on investigating
information processing in the human mind in order to gain an understanding of
human cognition as a whole. To this end, it primarily employs the method of
cognitive modeling as a means of capturing the latent natural processes of the mind
by well-defined mathematical formalizations. The challenge of cognitive
modeling is to develop models which are capable of representing highly complex and
potentially unobservable processes in a computational manner while still
guaranteeing their interpretability in order to advance the level of understanding of
cognition.</p>
      <p>This paper discusses high-level cognitive models of reasoning. In particular,
it gives a brief introduction to the following three core research questions:
1. What characterizes a cognitive model?
2. What is a “good” cognitive model?
3. What are current challenges?</p>
    </sec>
    <sec id="sec-2">
      <title>What is a cognitive model?</title>
    </sec>
    <sec id="sec-3">
      <title>Step 1: Model Generation</title>
      <p>A theory of reasoning is defined as cognitively adequate [22] with respect to a
reasoning task T and a human reasoner R, if the theory is (i) representationally
adequate, i.e., it uses the same mental representation as the human reasoner does,
(ii) operationally adequate, i.e., it specifies the same operations the
reasoner employs, and (iii) inferentially adequate, i.e., it draws the same
conclusions based on the operations and mental representation as the human
reasoner. While the inferential adequacy of a theory can be determined from
the given responses of a reasoner for a given task, it is impossible to directly
observe the operations and mental representations a reasoner applies. They can
only be determined by means of reverse engineering, i.e., the identification of
functionally equivalent representations and operations leading to the generation
of a given reasoner’s output.</p>
      <p>
        A mental representation is localized within the specific cognitive architecture
of the human mind, which the reasoning process operates on. Hence, we need to
distinguish between cognitive architectures and cognitive models. A cognitive
architecture is a tuple ⟨D, O⟩ consisting of a data structure D (which can contain
an arbitrary number of substructures) and a set of operations O specified in
any formal language to manipulate the data structure. The goal of a cognitive
architecture is to specify the often type-dependent flow of information (e.g.,
visual or auditory) between different memory-related cognitive structures in the
human mind. This imposes constraints on the data structures of the reasoner and
the corresponding mental operations. An example of a cognitive architecture is
ACT-R, which uses so-called modules, i.e., data structures for specific types of
information, and production rules as a set of general operations [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>A cognitive computational model for a cognitive task T in a given cognitive
architecture specifies algorithms based on (a subset of) operations defined on
the data structure of the underlying cognitive architecture. The application of
those algorithms results in the computation of an input-output mapping for the
cognitive task T with the goal of representing human cognition.</p>
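      <p>To make this distinction concrete, the following minimal sketch separates an architecture (a pair of a data structure and operations on it) from a model (an algorithm that uses a subset of those operations to map a task input to a response). All names here are hypothetical illustrations of the definition above, not constructs of ACT-R or any other existing architecture.</p>
      <preformat>
```python
# Hypothetical sketch of the architecture/model distinction; names are illustrative.

class Architecture:
    """A cognitive architecture as a pair (D, O): a data structure D
    and a set of operations O that manipulate it."""
    def __init__(self):
        self.memory = []  # D: a minimal declarative store
        self.operations = {  # O: the only ways to touch D
            "store": lambda fact: self.memory.append(fact),
            "recall": lambda pred: [f for f in self.memory if pred(f)],
        }

def conditional_inference_model(arch, premises):
    """A cognitive model for a toy conditional-inference task: an algorithm
    defined purely over the architecture's operations that computes an
    input-output mapping (premises to conclusion), here simple modus ponens."""
    for p in premises:
        arch.operations["store"](p)
    # Rules are encoded as (antecedent, consequent) pairs, facts as strings.
    rules = arch.operations["recall"](lambda f: isinstance(f, tuple))
    facts = arch.operations["recall"](lambda f: isinstance(f, str))
    for antecedent, consequent in rules:
        if antecedent in facts:
            return consequent
    return "no conclusion"
```
</preformat>
      <p>Running the model on the premises "if rain then wet street" and "rain" yields "wet street"; the architecture constrains which operations the algorithm may use.</p>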
    </sec>
    <sec id="sec-4">
      <title>What is a “good” cognitive model?</title>
    </sec>
    <sec id="sec-5">
      <title>Step 2: Model Evaluation</title>
      <p>The definition of a cognitive computational model (cognitive model for short) is
rather general and allows for a large space of possible model candidates. Driven
by the motivation that a cognitive theory should be explanatory for human
performance, a “good” cognitive model is never just a simulation model, i.e., a
model that solely reproduces existing experimental data. Instead, it must always
make explicit assumptions about the latent workings of the mind.</p>
      <p>Based on several criteria from the literature [18] the following list can serve
as a starting point for defining principles of “good” cognitive modeling:
1. The model has transparent assumptions. All operations and parameters are
transparent and the model’s responses can be explained by model operations.
2. The model’s principles are independent from the test data. A model cannot
be developed on the same data it is tested on. To avoid overfitting to a
specific dataset, fine-tuning on the test data is not allowed.
3. The model generates quantitative predictions. The model computes the same
type of answers a human reasoner gives based on the input she receives.
Model predictions can be compared with the test data by mathematical
discrepancy functions often applied in mathematical psychology and AI, such
as the Root-Mean-Square Error (RMSE), statistical information criteria, or
others (see below).
4. The model predicts the behavior of an individual human reasoner. Often, models
predict an average reasoner. However, aggregating data increases the noise
and eliminates individual differences.
5. The model covers several relevant reasoning phenomena and predicts new
phenomena. The goal of a cognitive model is not just to fit data perfectly,
but to explain latent cognitive reasoning processes in accordance with the
results obtained from psychological studies. Ultimately, models are supposed
to offer an alternative view on cognition allowing for the derivation of new
phenomena that can be validated or falsified by conducting studies on human
reasoners.</p>
      <p>These points also introduce an ordering based on the importance of the
modeling principles. Points 1 and 2 are general requirements we consider to be
mandatory for any serious modeling attempt. Points 4 and 5 are important for
general cognitive models which are supposed to shed light on the inner workings
of the mind. For the reverse engineering process and a comparison of different
models that satisfy points 1-5, criterion 3 is the most important one.</p>
      <p>
        There are different methods for assessing the quality of models. At their
core, they all share the idea of defining a discrepancy metric that can
be used to quantify the value of a specific model in comparison with others.
Most fundamentally, the RMSE defines the discrepancy based on the distance
between the model predictions and outcomes observed in real-world experiments.
More sophisticated statistical approaches based on the likelihood of the data, such
as the χ2 or G2 metrics, can be interpreted as test statistics with significant
results indicating large differences from the data [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. However, since models do
not only differ with respect to the goodness of fit, but also with respect to
their complexity, further information must often be integrated into the model
comparison process. Akaike’s Information Criterion (AIC) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and the Bayesian
Information Criterion (BIC) [21] are metrics based on G2 that incorporate the
number of free parameters as an indication of complexity. FIA is an
information-theoretic approach that quantifies complexity based on the minimum description
length principle [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Furthermore, there are purely Bayesian approaches to the
problem of model comparison. Relying on Bayes’ Theorem, the Bayes Factor
(BF) measures the relative fit of two models by integrating out uncertainty about
the data and parameters under each model. It quantifies whether the data provide
more evidence for one model or for a competing alternative [15].
      </p>
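      <p>The simplest of these metrics can be written down in a few lines. The following is an illustrative sketch of RMSE, AIC, and BIC as described above; the function names and signatures are our own, not taken from a particular library.</p>
      <preformat>
```python
import math

# Discrepancy and information-criterion sketches; names are illustrative.

def rmse(predictions, observations):
    """Root-Mean-Square Error: distance between model predictions and data."""
    n = len(predictions)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predictions, observations)) / n)

def aic(log_likelihood, k):
    """Akaike's Information Criterion: -2 log L plus a penalty of 2 per free parameter k."""
    return -2.0 * log_likelihood + 2.0 * k

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: the parameter penalty grows with sample size n."""
    return -2.0 * log_likelihood + k * math.log(n)
```
</preformat>
      <p>Given two models with identical likelihood on the same data, both criteria prefer the one with fewer free parameters; BIC penalizes each additional parameter more heavily than AIC for sample sizes of eight or more.</p>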
    </sec>
    <sec id="sec-6">
      <title>Challenges</title>
      <p>The field of cognitive science can benefit greatly from interdisciplinary work and
results. This ranges from the application of recent advances in modeling methods
from computer science and statistics, and extends all the way to exploiting the
knowledge gained in the field of theoretical psychology.</p>
      <p>However, in order to foster this collaborative approach that could potentially
result in faster and more goal-oriented progress, the field needs to address several
open questions and relevant challenges:</p>
      <sec id="sec-6-1">
        <title>1. What are relevant benchmark problems?</title>
        <p>In computer science and AI, well-defined benchmark problems have been great aids to the field. By
organizing annual competitions and generally maintaining low barriers for
entry, progress could be boosted in various domains, such as planning or
satisfiability solving for logic formulae. Additionally, the rigorous definition
of benchmarks allowed for a fair comparison between different approaches
based on well-defined criteria triggering a competitive spirit for improving
the state-of-the-art of the respective domains.</p>
        <p>
          We see the necessity to introduce the field of cognitive science, and
especially the domain of human reasoning, to the concept of competition as well.
Without defining benchmark problems and providing the data to approach
them in a clear and direct manner, the field risks drowning in the
continuously increasing stream of cognitive theories claiming to explain parts of
human reasoning. To guarantee progress, we consider the
definition of explicit criteria for model comparison and their application based on
commonly accepted and publicly available datasets mandatory.
While psychological experiments can provide benchmark problems, they
need to be differentiated with respect to priority. So far, no criteria for the
identification of relevant problems have been introduced in the literature.
However, they are necessary for the development of a generally accepted
benchmark. The following list compiles experiments and phenomena as well
as general remarks that should be taken into account when formalizing a
benchmark problem:
(a) Phenomena/experiments that have often been modeled and/or cited are:
– Conditional and propositional reasoning:
• Simple conditional inferences [17]
• Counterfactual reasoning [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]
• Rule testing: The Wason Selection Task [?]
• Illusions in propositional reasoning [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]
• Suppression effect [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]
– Relational reasoning:
• Preference effect [19]
• Pseudo-transitive relations [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]
• Complex relations [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]
• Indeterminacy effect [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]
• Visual impedance effect [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]
– Syllogistic reasoning:
• Reasoning patterns on the 64 syllogisms [
          <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
          ]
• Belief-bias effect [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]
• Generalized quantifiers [16]
(b) Data from the literature that can be included in a benchmark needs
to include a description of the information a reasoner received as well
as her response. Aggregated data of individual reasoners can help to
formulate an intuition or can give an indication of an effect. However,
for developing a profound model, answers of the individual reasoners are
necessary.
        </p>
      </sec>
      <sec id="sec-6-2">
        <title>2. How to translate existing descriptive theories into computational cognitive models?</title>
        <p>Most cognitive theories are not defined algorithmically. Instead they are often
based on verbal descriptions alone. However, for purposes of fair
mathematical comparison, a formalization of these theories is required. The challenge
here is to develop a model implementation of the theory that is as close
to the original theory as possible while making all additional assumptions
made by the modeler explicit. There currently is no accepted methodology
for general theory implementation.</p>
      </sec>
      <sec id="sec-6-3">
        <title>3. How could a general cognitive modeling language be specified?</title>
        <p>The field of action planning greatly benefits from having a general Planning
Domain Definition Language (PDDL). On one hand, PDDL allows for the
easy definition and introduction of new problems. On the other hand, it
forces planners to be defined generally without exploiting domain-dependent
shortcuts and heuristics.</p>
        <p>Especially when considering the goal to construct a model for unified
cognition, finding a common cognitive modeling language might be beneficial.
However, the task of defining a language which is accepted by most modelers
is not an easy endeavor as the list of potential reasoning domains is quite
extensive, and each has its own specific set of requirements. Additionally, there
are very different modeling approaches beyond the purely symbolic methods
commonly found in planning, which introduce even more complexity into
the desired language. Examples include models based on artificial neurons,
hybrid approaches, Bayesian models, and abstract description based models
such as Multinomial Processing Trees (MPTs).</p>
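        <p>As an illustration of the abstract description based family, the following sketch implements the classic one-high-threshold multinomial processing tree for old/new recognition. This is a standard textbook MPT used here only as an example of the model class, not a model proposed in this paper.</p>
        <preformat>
```python
import math

# A classic one-high-threshold MPT for old/new recognition -- a standard
# textbook example of the model class, not a model from this paper.

def one_high_threshold(r, g):
    """An old item is detected with probability r; otherwise the reasoner
    guesses 'old' with probability g (new items are always guessed on)."""
    p_hit = r + (1.0 - r) * g   # P('old' | old item)
    p_false_alarm = g           # P('old' | new item)
    return p_hit, p_false_alarm

def log_likelihood(r, g, hits, misses, false_alarms, correct_rejections):
    """Multinomial log-likelihood of observed response counts under the tree,
    the quantity that likelihood-based fit statistics such as G2 build on."""
    p_hit, p_fa = one_high_threshold(r, g)
    return (hits * math.log(p_hit) + misses * math.log(1.0 - p_hit)
            + false_alarms * math.log(p_fa)
            + correct_rejections * math.log(1.0 - p_fa))
```
</preformat>
        <p>The tree's parameters (r, g) are interpretable latent processes (detection and guessing), which is precisely what makes MPTs attractive as abstract cognitive models.</p>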
      </sec>
      <sec id="sec-6-4">
        <title>4. What are properties of the human data structures that influence the reasoning process?</title>
        <p>
          While working memory is resource-bounded, long-term memory is not. But
there are additional cognitive features that can influence
reasoning, such as background knowledge, cognitive bottlenecks, parallel
processing, etc. These limitations are often not represented in cognitive theories
but crystallized in cognitive architectures [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. However, general approaches
for developing and comparing these architectures have yet to be identified.
        </p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Desirables: Standards, Networks, and Competitions</title>
      <sec id="sec-7-1">
        <title>Cognitive Modeling Standards</title>
        <p>Cognitive models are usually developed in a post-hoc fashion with the goal
of fitting an existing set of experimental data. Alternatively, cognitive models can
be created following a mixture of data- and theory-based approaches with
undefined overlap. Irrespective of the motivation and development process, a fair
comparison of models must be based on well-defined criteria (such as those
introduced in Section 3). Generally, the research community of the field needs
to settle on which criteria are mandatory, which are desirable, and which are
not worthwhile to pursue further. In order to develop and maintain this set of
modeling standards, close communication between researchers is necessary.</p>
      </sec>
      <sec id="sec-7-2">
        <title>Cognitive Modeling Network</title>
        <p>Researchers dealing with similar tasks are scattered across many diverse
disciplines and research communities with little to no overlap. Amongst others,
researchers developing cognitive models for reasoning can be found in the
– MathPsych community (MathPsych conference and a mailing list),
– Cognitive modeling community (ICCM conference),
– Knowledge representation and reasoning community (AI conferences such as
IJCAI, AAAI, and KR), and
– Reasoning community (with the Thinking conference, the annual London
Reasoning Workshop, and a mailing list).</p>
        <p>However, there often is no overlap between the individual communities. A
joint effort to combine these approaches is necessary.</p>
      </sec>
      <sec id="sec-7-3">
        <title>Competitions</title>
        <p>As introduced in Section 4, competitions make it possible to compare different
approaches and to test ideas. Additionally, the test data serves as a benchmark for future
cognitive models and aids the development of comprehensive models of unified
cognition. One way is to embrace a more competitive perspective on model
development. By introducing challenges on comprehensive benchmarks, models
that perform best according to a predefined list of criteria (connecting strictly
quantitative requirements with theoretical profoundness) can be selected.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>Conclusion</title>
      <p>This paper introduced challenges and research questions the fields of cognitive
science and cognitive modeling in particular need to address. In order to ensure
progress in the understanding of the mind, models have to transcend the state
of simulations focusing on fitting experimental data. The goal for modeling is
to construct model candidates that account for prominent phenomena
discovered in cognitive psychology. By comparing these models on fair grounds and
extracting new phenomena from the computational formalizations that can in
turn be validated or falsified on experimental data, the field can advance towards
a unified model of cognition.</p>
      <p>One aim of this paper is to make general cognitive modeling principles
available to the diverse communities, to open the discussion of standards, to foster
interdisciplinary research, and to tackle one of the core problems of high-level
cognition: human reasoning.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>H.</given-names>
            <surname>Akaike</surname>
          </string-name>
          .
          <article-title>A new look at the statistical model identification</article-title>
          .
          <source>IEEE Transactions on Automatic Control</source>
          ,
          <volume>19</volume>
          (
          <issue>6</issue>
          ):
          <fpage>716</fpage>
          -
          <lpage>723</lpage>
          ,
          <year>Dec 1974</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Anderson</surname>
          </string-name>
          .
          <article-title>How can the human mind occur in the physical universe</article-title>
          ? Oxford University Press, New York,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>William H Batchelder and David M Riefer.</surname>
          </string-name>
          <article-title>Theoretical and empirical review of multinomial process tree modeling</article-title>
          .
          <source>Psychonomic Bulletin &amp; Review</source>
          ,
          <volume>6</volume>
          (
          <issue>1</issue>
          ):
          <fpage>57</fpage>
          -
          <lpage>86</lpage>
          ,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>R. M. J. Byrne</surname>
          </string-name>
          .
          <article-title>Suppressing valid inferences with conditionals</article-title>
          .
          <source>Cognition</source>
          ,
          <volume>31</volume>
          :
          <fpage>61</fpage>
          -
          <lpage>83</lpage>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>R. M. J. Byrne</surname>
          </string-name>
          .
          <article-title>Mental models and counterfactual thoughts about what might have been</article-title>
          .
          <source>Trends in cognitive sciences</source>
          ,
          <volume>6</volume>
          (
          <issue>10</issue>
          ):
          <fpage>426</fpage>
          -
          <lpage>431</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>G. P.</given-names>
            <surname>Goodwin</surname>
          </string-name>
          and
          <string-name>
            <given-names>P. N.</given-names>
            <surname>Johnson-Laird</surname>
          </string-name>
          .
          <article-title>Reasoning about relations</article-title>
          .
          <source>Psychological Review</source>
          ,
          <volume>112</volume>
          :
          <fpage>468</fpage>
          -
          <lpage>493</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>G. P.</given-names>
            <surname>Goodwin</surname>
          </string-name>
          and
          <string-name>
            <given-names>P. N.</given-names>
            <surname>Johnson-Laird</surname>
          </string-name>
          .
          <article-title>Transitive and pseudo-transitive inferences</article-title>
          .
          <source>Cognition</source>
          ,
          <volume>108</volume>
          :
          <fpage>320</fpage>
          -
          <lpage>352</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Peter D. Grünwald</surname>
          </string-name>
          .
          <article-title>The minimum description length principle</article-title>
          . MIT press,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>S.</given-names>
            <surname>Khemlani</surname>
          </string-name>
          and
          <string-name>
            <given-names>P. N.</given-names>
            <surname>Johnson-Laird</surname>
          </string-name>
          .
          <article-title>Disjunctive illusory inferences and how to eliminate them</article-title>
          .
          <source>Memory &amp; Cognition</source>
          ,
          <volume>37</volume>
          (
          <issue>5</issue>
          ):
          <fpage>615</fpage>
          -
          <lpage>623</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>S.</given-names>
            <surname>Khemlani</surname>
          </string-name>
          and
          <string-name>
            <given-names>P. N.</given-names>
            <surname>Johnson-Laird</surname>
          </string-name>
          .
          <article-title>Theories of the syllogism: A meta-analysis</article-title>
          .
          <source>Psychological Bulletin</source>
          ,
          <year>January 2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>S.</given-names>
            <surname>Khemlani</surname>
          </string-name>
          and
          <string-name>
            <given-names>P. N.</given-names>
            <surname>Johnson-Laird</surname>
          </string-name>
          .
          <article-title>How people differ in syllogistic reasoning</article-title>
          .
          <source>In Proceedings of the 36th Annual Conference of the Cognitive Science Society</source>
          . Austin,
          <source>TX: Cognitive Science Society</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>K. C. Klauer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Musch</surname>
            , and
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Naumer</surname>
          </string-name>
          .
          <article-title>On belief bias in syllogistic reasoning</article-title>
          .
          <source>Psychological Review</source>
          ,
          <volume>107</volume>
          (
          <issue>4</issue>
          ):
          <fpage>852</fpage>
          -
          <lpage>884</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>M. Knauff</surname>
            and
            <given-names>P. N.</given-names>
          </string-name>
          <string-name>
            <surname>Johnson-Laird</surname>
          </string-name>
          .
          <article-title>Visual imagery can impede reasoning</article-title>
          .
          <source>Memory &amp; Cognition</source>
          ,
          <volume>30</volume>
          :
          <fpage>363</fpage>
          -
          <lpage>71</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14. I.
          <string-name>
            <surname>Kotseruba</surname>
            ,
            <given-names>O. J. A.</given-names>
          </string-name>
          <string-name>
            <surname>Gonzalez</surname>
            , and
            <given-names>J. K.</given-names>
          </string-name>
          <string-name>
            <surname>Tsotsos</surname>
          </string-name>
          .
          <article-title>A review of 40 years of cognitive architecture research: Focus on perception, attention, learning and applications</article-title>
          .
          <source>arXiv preprint arXiv:1610.08602</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>15. Michael D. Lee and Eric-Jan Wagenmakers. <source>Bayesian cognitive modeling: A practical course</source>. Cambridge University Press, <year>2014</year>.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>16. M. Oaksford and N. Chater. <article-title>A rational analysis of the selection task as optimal data selection</article-title>. <source>Psychological Review</source>, 101(4):608–631, <year>1994</year>.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>17. K. Oberauer. <article-title>Reasoning with conditionals: A test of formal models of four theories</article-title>. <source>Cognitive Psychology</source>, 53:238–283, <year>2006</year>.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>18. M. Ragni. <source>Räumliche Repräsentation, Komplexität und Deduktion: Eine kognitive Komplexitätstheorie [Spatial representation, complexity and deduction: A cognitive theory of complexity]</source>. PhD thesis, Albert-Ludwigs-Universität Freiburg, <year>2008</year>.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>19. M. Ragni and M. Knauff. <article-title>A theory and a computational model of spatial reasoning with preferred mental models</article-title>. <source>Psychological Review</source>, 120(3):561–588, <year>2013</year>.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>20. David M. Riefer and William H. Batchelder. <article-title>Multinomial modeling and the measurement of cognitive processes</article-title>. <source>Psychological Review</source>, 95(3):318–339, <year>1988</year>.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>21. Gideon Schwarz. <article-title>Estimating the dimension of a model</article-title>. <source>Annals of Statistics</source>, 6(2):461–464, <year>1978</year>.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>22. Gerhard Strube. <article-title>The role of cognitive science in knowledge engineering</article-title>. <source>Contemporary Knowledge Engineering and Cognition</source>, pages 159–174, <year>1992</year>.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>