<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>Evaluation Metrics and Protocols for eDiscovery and Systematic Review Systems</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Towards Explainable Total Recall in TAR for eDiscovery Using Retraining of the Underlying Neural Network Model</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Charles Courchaine</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Corey Wade</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tasnova Tabassum</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stetson Daisy</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ricky J. Sethi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Fitchburg State University</institution>
          ,
          <country country="US">United States</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National University</institution>
          ,
          <country country="US">United States</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>28</volume>
      <issue>2024</issue>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Previous work has suggested that Fuzzy ARTMAP (FAM)-based Technology-Assisted Review (TAR) achieves viable levels of recall for eDiscovery (&gt;75%), and produces models that are explainable via graphical and textual interpretation. However, these results also indicated room for improvement in recall performance, as FAM is sensitive to its training input. We evaluated the viability of improving recall performance through retraining the model based on documents evaluated as relevant from all prior review iterations. Retraining improved recall significantly, resulting in 72.9% of topic-vectorizer pairs being over the 95% recall threshold as compared to 42.2% without retraining. In addition, the FAM-based model continued to demonstrate self-stopping behavior even with retraining.</p>
      </abstract>
      <kwd-group>
<kwd>Technology Assisted Review</kwd>
        <kwd>Fuzzy ARTMAP</kwd>
        <kwd>e-discovery</kwd>
        <kwd>Stopping Problem</kwd>
        <kwd>XAI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>
        Fuzzy ARTMAP is a supervised classifier, one of many neural network algorithms derived from Adaptive
Resonance Theory (ART), which maps inputs to category labels [
        <xref ref-type="bibr" rid="ref10 ref9">10, 9</xref>
        ]. ART describes how the brain
learns and predicts in a non-stationary world [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. One of the key features of models produced from
this theory is that they can quickly incorporate new information without suffering from catastrophic
forgetting [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. In particular, Fuzzy ARTMAP utilizes the fuzzy AND operator from fuzzy set theory,
instead of the crisp (binary) set intersection operator, to work with values on the interval [0, 1] [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. An important
feature of the model produced by Fuzzy ARTMAP is that it can be presented as a set of fuzzy If-Then
rules or graphically through a geometric interpretation [
        <xref ref-type="bibr" rid="ref10 ref13">10, 13</xref>
        ]. To employ the geometric
interpretation, the input must be complement encoded. Complement encoding normalizes the input vector
a by concatenating it with its complement a<sup>c</sup> (or 1 − a), yielding an input of I = [a, a<sup>c</sup>] [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. With
complement encoding, the categories in the Fuzzy ARTMAP model can be interpreted as n-dimensional
hyper-rectangles [
        <xref ref-type="bibr" rid="ref10 ref14">10, 14</xref>
        ].
      </p>
      <table-wrap id="tbl1">
        <label>Table 1</label>
        <caption>
          <p>Initial training source and review iterations for each model in the retraining procedure.</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Model</th>
              <th>Initial Training Source</th>
              <th>Review Iterations</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Model 0</td>
              <td>Randomly chosen 10 Relevant, 90 Non-Relevant documents</td>
              <td>0 to i</td>
            </tr>
            <tr>
              <td>Model 1</td>
              <td>Random order of all evaluated Relevant documents in iterations 0 to i</td>
              <td>i+1 to j</td>
            </tr>
            <tr>
              <td>Model 2</td>
              <td>Random order of all evaluated Relevant documents in iterations 0 to j</td>
              <td>j+1 to k</td>
            </tr>
            <tr>
              <td>Model 3</td>
              <td>Random order of all evaluated Relevant documents in iterations 0 to k</td>
              <td>k+1 to stop</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
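      <p>To make the encoding concrete, the following is a minimal illustrative sketch (not code from the implementation evaluated here) of complement encoding and the fuzzy AND (element-wise minimum) operator; the variable names are hypothetical.</p>
      <preformat>
# Minimal sketch of complement encoding and the fuzzy AND operator;
# variable names are illustrative only, not from the paper's code.
import numpy as np

def complement_encode(a):
    """Concatenate a feature vector a (scaled to [0, 1]) with its complement 1 - a."""
    return np.concatenate([a, 1.0 - a])

def fuzzy_and(x, w):
    """Fuzzy AND: element-wise minimum of an input and a category weight vector."""
    return np.minimum(x, w)

a = np.array([0.2, 0.7, 0.5])   # document features scaled to [0, 1]
i_vec = complement_encode(a)    # I = [a, 1 - a], a 2n-dimensional input
print(i_vec)                    # [0.2 0.7 0.5 0.8 0.3 0.5]
      </preformat>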
      <p>
        While Fuzzy ARTMAP has significant benefits in terms of online learning and interpretability, it is
sensitive to the ordering of training examples [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ]. One way to mitigate this limitation is through a
voting strategy in which three or five models are trained with the inputs presented in different orders; however,
that strategy applies when Fuzzy ARTMAP is used as a traditional classifier [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. In TAR, new training examples are available
after each review iteration. These new examples, including documents judged as relevant and not
relevant, are used to update the Fuzzy ARTMAP model without full retraining of the model [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ].
      </p>
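      <p>For illustration, a hedged sketch of the voting strategy described above might train several classifiers on differently ordered copies of the same training data and combine their predictions by majority vote; make_model is a placeholder for any classifier constructor with a fit/predict interface, not an API from the cited work.</p>
      <preformat>
# Hypothetical sketch of the order-shuffling voting strategy for a
# traditional Fuzzy ARTMAP classifier; make_model() is a placeholder
# returning any classifier with fit/predict methods.
import numpy as np

def train_voting_ensemble(make_model, X, y, n_voters=5, seed=0):
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_voters):
        order = rng.permutation(len(X))   # a different presentation order per voter
        model = make_model()
        model.fit(X[order], y[order])
        models.append(model)
    return models

def majority_vote(models, X):
    preds = np.stack([m.predict(X) for m in models])
    return (preds.mean(axis=0) &gt;= 0.5).astype(int)   # majority vote on binary labels
      </preformat>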
    </sec>
    <sec id="sec-3">
      <title>3. Procedure</title>
      <p>
        In previous work, we evaluated Fuzzy ARTMAP performance in TAR and found robust recall
performance; however, there were several instances where Fuzzy ARTMAP failed to achieve 75% or better
recall [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. In this earlier work, the Fuzzy ARTMAP implementation stopped when it predicted no more
relevant documents. As Fuzzy ARTMAP is sensitive to the input order of the training examples, we
extended our previous work by clearing and retraining the model with documents previously evaluated
as relevant. The clearing and retraining of the model occurred when the model predicted no more
relevant documents, up to a set number of retraining events.
      </p>
      <p>We set the number of retrainings at three, a limit established through a mix of cost balancing and small
empirical tests. In general, baseline runs with no retraining took anywhere from under a second to
about 13 minutes; with retraining, run times increased substantially, more than tripling for most runs,
with some taking 4 to 9 hours.</p>
      <p>
        Ultimately, this resulted in up to four models being produced. The first (Model 0 in Table 1) was trained with
ten relevant documents and ninety non-relevant documents. This model was used and updated with
the relevance judgements in each iteration until no more relevant documents were predicted in the ith
review iteration. A new model was then created from all the documents evaluated as relevant in the
prior review iterations, presented in a random order to the model (e.g. Model 1 in Table 1). The process
was repeated up to the limit of three additional models, and ended when the final model predicted no
more relevant documents. The rest of the model implementation was kept the same as the baseline in
[
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. Briefly, the baseline implementation is recapitulated below.
      </p>
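      <p>To make the retraining procedure concrete, the outline below sketches the stop-and-retrain loop under our assumptions about a fit/update/predict-style model interface; make_model, select_batch, and review are hypothetical helpers, not the actual implementation.</p>
      <preformat>
# Hypothetical outline of the stop-and-retrain TAR loop summarized in Table 1;
# make_model(), select_batch(), and review() are illustrative placeholders.
import random

MAX_RETRAININGS = 3   # Models 1 to 3 in Table 1
BATCH_SIZE = 100      # up to 100 documents reviewed per iteration

def run_tar(make_model, seed_docs, seed_labels, unreviewed):
    model = make_model()
    model.fit(seed_docs, seed_labels)   # Model 0: 10 relevant, 90 non-relevant seed documents
    relevant_so_far = [d for d, y in zip(seed_docs, seed_labels) if y == 1]
    retrainings = 0
    while True:
        batch = select_batch(model, unreviewed, BATCH_SIZE)   # ranked by fuzzy subsethood
        if not batch:                        # model predicts no more relevant documents
            if retrainings &gt;= MAX_RETRAININGS:
                break                        # self-stopping behavior
            retrainings += 1
            model = make_model()             # clear the model
            random.shuffle(relevant_so_far)  # random presentation order
            model.fit(relevant_so_far, [1] * len(relevant_so_far))  # retrain on all relevant so far
            continue
        judgments = review(batch)            # human relevance judgments for this iteration
        relevant_so_far += [d for d, rel in judgments if rel]
        model.update(judgments)              # incremental update, as in the baseline
        unreviewed = [d for d in unreviewed if d not in batch]
    return model, relevant_so_far
      </preformat>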
      <p>
        We maintained the same hyperparameter values from our previous work: a baseline vigilance (ρ) of
0.95 and a fast learning rate (β) of 1.0. For these early results, we evaluated the retraining modifications
with the 20 Newsgroups and Reuters-21578 corpora, vectorized with tf-idf, 300-dimensional GloVe, and
300-dimensional Word2Vec. All topics were used from 20 Newsgroups, and the 119 topics with relevant
documents were used from the Reuters-21578 corpus, yielding 417 samples overall. Word vectors
for each document were averaged to produce document representations [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. For all vectorizations
the representations were scaled to the [0, 1] interval using the scikit-learn MinMaxScaler. Finally, the
features were complement encoded prior to processing via Fuzzy ARTMAP [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The Fuzzy ARTMAP
TAR implementation takes a continuous active learning (CAL) approach [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], and the model is updated
after each review iteration. For each review iteration, up to 100 documents were reviewed, based on the
number of documents the model predicted as relevant. To rank the documents for active learning, the
degree of fuzzy subsethood was used [
        <xref ref-type="bibr" rid="ref17 ref7 ref8">7, 8, 17</xref>
        ].
      </p>
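      <p>As a hedged illustration of this preparation pipeline (not the exact code used here), averaged word-vector representations could be scaled and complement encoded roughly as follows; load_word_vectors and documents are hypothetical placeholders.</p>
      <preformat>
# Sketch of document preparation: averaged word vectors, [0, 1] scaling
# with MinMaxScaler, then complement encoding; load_word_vectors() and
# `documents` (a list of token lists) are hypothetical placeholders.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

word_vectors = load_word_vectors()   # e.g., a 300-dimensional GloVe or Word2Vec lookup

def doc_vector(tokens, dim=300):
    """Average the word vectors of a document's tokens."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

X = np.vstack([doc_vector(doc) for doc in documents])
X = MinMaxScaler().fit_transform(X)      # scale features to the [0, 1] interval
X = np.hstack([X, 1.0 - X])              # complement encode: I = [a, 1 - a]
      </preformat>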
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>The effect of retraining on recall performance was substantial. The average difference in recall, and other
metrics, between retraining and no retraining is shown in Table 2. Statistical significance was calculated
using a one-tailed paired t-test, with p &lt; .001 across all corpora and vectorizers for recall, indicating a
statistically significant difference with retraining. The smallest improvement was six percentage points
of recall, with most improvements in the 15 to 19 percentage point range, up to a maximum improvement of 39
percentage points in recall. This improvement in recall is further illustrated in Table
3, where 72.9% of topic-vectorizer pairs (304 of 417) achieved 95% recall or better with retraining, as
compared with 42.2% without retraining. Commensurate with this improvement in recall performance is
a decrease in precision, mostly between 13 and 21 percentage points. This was a statistically significant
decrease in precision across corpora and vectorizers (p &lt; .001), as calculated by a one-tailed paired t-test.</p>
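      <p>For instance, the per-topic recall comparison can be tested along these lines; recall_baseline and recall_retrained are hypothetical paired per-topic scores used purely for illustration.</p>
      <preformat>
# Sketch of a paired, one-tailed t-test on per-topic recall scores;
# the two arrays are hypothetical paired measurements for illustration.
import numpy as np
from scipy import stats

recall_baseline  = np.array([0.62, 0.81, 0.74, 0.90, 0.55])   # without retraining
recall_retrained = np.array([0.88, 0.97, 0.93, 0.96, 0.79])   # with retraining

# H1: retraining increases recall (paired, one-tailed)
t_stat, p_value = stats.ttest_rel(recall_retrained, recall_baseline, alternative="greater")
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
      </preformat>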
      <p>In general, while F1 performance did decrease by a practical and statistically significant amount (p &lt;
.001, except for 20 Newsgroups-GloVe), the drop was generally more modest, with the improvement in
recall offsetting the drop in precision. Also interesting was the general increase in the average difference
in the number of documents required for review to reach 75% recall, illustrated in Table 2 under the
RE-75 metric. This is partly due to the number of un-retrained topic-vectorizer pairs that never
reached 75% recall, compared with the retrained instances (24.46% vs. 3.36%). The increase is relatively
modest for the Reuters-21578 corpus, around 300 documents or about 1.5% of the 19,044 documents
with bodies. For GloVe and Word2Vec the difference is more substantial in 20 Newsgroups, around 1,400
documents or 8.5% of the 16,330 distinct posts. The effect of the retraining and number of documents
reviewed is illustrated in the difference between Figure 1 and Figure 2, where more documents are
reviewed but higher recall is ultimately achieved in Figure 2 than in Figure 1.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>Utilizing full retraining of the Fuzzy ARTMAP model, even a modest number of times, improved
recall significantly, attaining 95% recall in 72.9% of topic-vectorizer pairs with retraining
as compared to 42.2% without retraining. Currently, the improvement in recall comes at the cost
of precision (Table 2, Precision and RE-75); however, some of the increase in the number of
documents to reach 75% recall is due to the model without retraining reaching 75% recall only 75% of the
time, compared with 96% of the time with retraining (Table 3). This improvement in recall is achieved
while retaining the characteristic that the algorithm eventually predicts no more relevant documents.
Additional research is required to further characterize this stopping behavior: evaluating it across more
data sets, attempting to define the statistical and theoretical basis for the predictions of no more relevant
documents, and tuning the number of retraining iterations to potentially target recall levels. There is
an additional research opportunity in improving precision alongside the improvement in recall.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Yang</surname>
          </string-name>
          , S. MacAvaney, D. D.
          <string-name>
            <surname>Lewis</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          <string-name>
            <surname>Frieder</surname>
          </string-name>
          , Goldilocks:
          <article-title>Just-right tuning of bert for technologyassisted review</article-title>
          ,
          <source>arXiv:2105</source>
          .01044 [cs] (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Kanoulas</surname>
          </string-name>
          ,
          <article-title>When to stop reviewing in technology-assisted reviews: Sampling from an adaptive distribution to estimate residual relevant documents</article-title>
          ,
          <source>ACM TRANSACTIONS ON INFORMATION SYSTEMS 38</source>
          (
          <year>2020</year>
          ). doi:
          <volume>10</volume>
          .1145/3411755.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <article-title>Heuristic stopping rules for technology-assisted review (</article-title>
          <year>2021</year>
          ). doi:
          <volume>10</volume>
          .1145/ 3469096.3469873.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Chhatwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gronvall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Huber-Fliflet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Keeling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <article-title>Explainable text classification in legal document review a case study of explainable predictive coding</article-title>
          ,
          <year>2018</year>
          . doi:
          <volume>10</volume>
          .1109/BigData.
          <year>2018</year>
          .
          <volume>8622073</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Mahoney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Huber-Fliflet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gronvall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <article-title>A framework for explainable text classification in legal document review</article-title>
          , IEEE,
          <year>2019</year>
          , pp.
          <fpage>1858</fpage>
          -
          <lpage>1867</lpage>
          . doi:
          <volume>10</volume>
          .1109/BigData47090.
          <year>2019</year>
          .
          <volume>9005659</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>C.</given-names>
            <surname>Mahoney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gronvall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Huber-Fliflet</surname>
          </string-name>
          ,
          <string-name>
            <surname>J. Zhang,</surname>
          </string-name>
          <article-title>Explainable Text Classification Techniques in Legal Document Review: Locating Rationales without Using Human Annotated Training Text Snippets</article-title>
          , in: 2022
          <source>IEEE International Conference on Big Data (Big Data)</source>
          , IEEE, Osaka, Japan,
          <year>2022</year>
          , pp.
          <fpage>2044</fpage>
          -
          <lpage>2051</lpage>
          . doi:
          <volume>10</volume>
          .1109/BigData55660.
          <year>2022</year>
          .
          <volume>10020626</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>C.</given-names>
            <surname>Courchaine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Sethi</surname>
          </string-name>
          ,
          <article-title>Fuzzy law: Towards creating a novel explainable technology-assisted review system for e-discovery,</article-title>
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          ,
          <year>2022</year>
          , pp.
          <fpage>1218</fpage>
          -
          <lpage>1223</lpage>
          . URL: https://ieeexplore.ieee.org/document/ 10020503/. doi:
          <volume>10</volume>
          .1109/BigData55660.
          <year>2022</year>
          .
          <volume>10020503</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C.</given-names>
            <surname>Courchaine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Tabassum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Sethi</surname>
          </string-name>
          ,
          <article-title>Explainable e-discovery (xed) using an interpretable fuzzy artmap neural network for technology-assisted review</article-title>
          ,
          <year>2023</year>
          , pp.
          <fpage>2761</fpage>
          -
          <lpage>2766</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>L. E. B. da Silva</surname>
          </string-name>
          , I. Elnabarawy,
          <string-name>
            <given-names>D. C.</given-names>
            <surname>Wunsch</surname>
          </string-name>
          ,
          <article-title>A survey of adaptive resonance theory neural network models for engineering applications</article-title>
          ,
          <source>Neural Networks</source>
          <volume>120</volume>
          (
          <year>2019</year>
          )
          <fpage>167</fpage>
          -
          <lpage>203</lpage>
          . URL: https: //doi.org/10.1016/j.neunet.
          <year>2019</year>
          .
          <volume>09</volume>
          .012. doi:
          <volume>10</volume>
          .1016/j.neunet.
          <year>2019</year>
          .
          <volume>09</volume>
          .012.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Carpenter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Grossberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Markuzon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Reynolds</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. B.</given-names>
            <surname>Rosen</surname>
          </string-name>
          ,
          <article-title>Fuzzy artmap: A neural network architecture for incremental supervised learning of analog multidimensional maps</article-title>
          ,
          <source>IEEE Transactions on Neural Networks</source>
          <volume>3</volume>
          (
          <year>1992</year>
          )
          <fpage>698</fpage>
          -
          <lpage>713</lpage>
          . URL: http://ieeexplore.ieee.org/document/ 159059/. doi:
          <volume>10</volume>
          .1109/72.159059.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Grossberg</surname>
          </string-name>
          ,
          <article-title>Toward autonomous adaptive intelligence: Building upon neural models of how brains make minds</article-title>
          ,
          <source>IEEE Transactions on Systems, Man, and Cybernetics: Systems</source>
          <volume>51</volume>
          (
          <year>2021</year>
          ). doi:
          <volume>10</volume>
          .1109/TSMC.
          <year>2020</year>
          .
          <volume>3041476</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Grossberg</surname>
          </string-name>
          ,
          <article-title>Competitive learning: From interactive activation to adaptive resonance, Cognitive Science (</article-title>
          <year>1987</year>
          ). doi:
          <volume>10</volume>
          .1111/j.1551-
          <fpage>6708</fpage>
          .
          <year>1987</year>
          .tb00862.x.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Grossberg</surname>
          </string-name>
          ,
          <article-title>A path toward explainable ai and autonomous adaptive intelligence: Deep learning, adaptive resonance, and models of perception, emotion</article-title>
          , and action,
          <year>2020</year>
          . doi:
          <volume>10</volume>
          .3389/fnbot.
          <year>2020</year>
          .
          <volume>00036</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>L.</given-names>
            <surname>Meng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.-H.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <surname>D. C. W. II</surname>
          </string-name>
          ,
          <article-title>Adaptive Resonance Theory (ART) for Social Media Analytics</article-title>
          , Springer International Publishing,
          <year>2019</year>
          , pp.
          <fpage>45</fpage>
          -
          <lpage>89</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>030</fpage>
          -02985-
          <issue>2</issue>
          _
          <fpage>3</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Carvallo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Parra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lobel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Soto</surname>
          </string-name>
          ,
          <article-title>Automatic document screening of medical literature using word and text embeddings in an active learning setting</article-title>
          ,
          <source>Scientometrics</source>
          <volume>125</volume>
          (
          <year>2020</year>
          ).
          <source>doi:10.1007/ s11192-020-03648-6.</source>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>G. F.</given-names>
            <surname>Cormack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Grossman</surname>
          </string-name>
          ,
          <article-title>Autonomy and reliability of continuous active learning for technology-assisted review</article-title>
          ,
          <source>arXiv</source>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>B.</given-names>
            <surname>Kosko</surname>
          </string-name>
          ,
          <article-title>Fuzzy entropy and conditioning</article-title>
          ,
          <source>Information Sciences 40</source>
          (
          <year>1986</year>
          )
          <fpage>165</fpage>
          -
          <lpage>174</lpage>
          . doi:
          <volume>10</volume>
          .1016/
          <fpage>0020</fpage>
          -
          <lpage>0255</lpage>
          (
          <issue>86</issue>
          )
          <fpage>90006</fpage>
          -
          <lpage>X</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>