<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>UTMN at HAHA@IberLEF2019: Recognizing Humor in Spanish Tweets using Hard Parameter Sharing for Neural Networks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Anna Glazkova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nadezhda Ganzherli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elena Mikhalkova</string-name>
          <email>e.v.mikhalkovag@utmn.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Tyumen</institution>
          ,
          <addr-line>Tyumen</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <fpage>222</fpage>
      <lpage>228</lpage>
      <abstract>
        <p>Automatic humor detection is a non-trivial and challenging task. For the HAHA competition at IberLEF 2019 we built a neural network classifier that uses different types of neural networks for specific sets of features. After being trained separately, the layers are concatenated to give the general output. On the binary detection of humorous tweets, the performance of our system reaches an F-score of 0.76, which is considerably higher than the results of baseline machine learning classifiers and earns us ninth place in the ranking table. As for Task 2, where the system has to guess how funny a tweet is based on the number of stars it received, our result is similarly good: RMSE = 0.945. However, much remains to be done to evaluate the contribution of each feature set and our choice of neural network type.</p>
      </abstract>
      <kwd-group>
        <kwd>Humor detection</kwd>
        <kwd>Neural networks</kwd>
        <kwd>Hard parameter sharing</kwd>
        <kwd>Feature engineering</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Humor detection is a non-trivial task that was considered by (
        <xref ref-type="bibr" rid="ref16">16</xref>
        ) to be of the AI-complete kind, as humor is "one of the most sophisticated forms of human intelligence". However, with the rise of neural networks and semantic vector algorithms, e.g. the one suggested by (
        <xref ref-type="bibr" rid="ref7">7</xref>
        ), it has recently gained much attention from researchers and organizers of competitions: SemEval by (11; 8) and HAHA at IberLEF by (2; 4). Many systems at these competitions, including the winners, apply machine learning and semantic vectors:
1. INGEOTEC by (
        <xref ref-type="bibr" rid="ref10">10</xref>
        ) uses a combination of machine learning methods and word embeddings.
2. UO UPV by (
        <xref ref-type="bibr" rid="ref9">9</xref>
        ) is based on a Bidirectional LSTM neural network and word embeddings.
3. JU-CSE-NLP by (
        <xref ref-type="bibr" rid="ref12">12</xref>
        ) "is a rule-based implementation of a dependency network and a hidden Markov model".
4. Idiom Savant by (
        <xref ref-type="bibr" rid="ref5">5</xref>
        ) "consists of two probabilistic models... using Google n-grams and Word2Vec".
5. PunFields by (
        <xref ref-type="bibr" rid="ref6">6</xref>
        ) applies a linear SVM classifier to a manually built thesaurus of English words.
      </p>
      <p>Our approach at HAHA@IberLEF2019 is no exception to this trend.</p>
      <p>
        Dataset and Preprocessing
"HUMOR: A Crowd-Annotated Spanish Corpus for Humor Analysis" by (
        <xref ref-type="bibr" rid="ref3">3</xref>
        ) was created in 2017. At HAHA@IberLEF2019 the training set consisted of 16,000 tweets, manually annotated as humorous or not humorous and with a funniness score calculated as the average number of "stars" (5 maximum) given to a tweet by several independent readers. 1,200 tweets from the training dataset were used for validation. The test set included 4,000 tweets. In addition, we used several external resources, such as pre-trained word embeddings and a sentiment dictionary. They are described in the next section.
      </p>
      <p>We first preprocess tweets with the help of our own software, which includes the following steps:
1. Convert some markers of emotions into words: :( to tristeza and :) :D xD XD to reir.
2. Convert repetitive sequences of jaja... and JAJA... to a simple ja.
3. Pre-tokenize: add a space before and after punctuation symbols except # and @.
4. Convert repetitions of the same letter of length 3 or more into a single letter and add a lemma EMPHASIS to the tweet so that it lexically denotes the sentiment implied by letter repetitions: sooooomos to somos EMPHASIS.
5. Tokenize hashtags (#) and mentions (@):
(a) De-capitalize capitalized sequences of more than one letter, leaving the first letter capitalized: NUET to Nuet.
(b) Add a space before and after every non-letter character in the sequence: PP#CiU to PP # CiU.
(c) Add a space before every capitalized character in the sequence: DaviniaBono to Davinia Bono.
6. Convert emoticons into their word representations using unicode tables in Spanish1.</p>
      <p>Our Python script for tweet preprocessing (except emoticons) and the system we used at the Competition will soon be available at https://github.com/evrog/Spanish_Humor. As concerns the choice of steps, it is basically a trade-off between not ruining words with traditional orthography and extracting as many</p>
    </sec>
    <sec id="sec-2">
      <title>1 We used tables from https://unicode-table.com/es.</title>
      <p>lemmas from Internet-specific speech as possible. The final stage of preprocessing is lemmatization with the help of SpaCy2: the output is a list of lemmas and special characters in a tweet.</p>
      <sec id="sec-2-1">
        <title>System Architecture</title>
        <p>
          As mentioned above, our model is based on a neural network. It processes several types of features in parallel, then concatenates the layers' outputs and passes them to a Dense layer. (
          <xref ref-type="bibr" rid="ref14">14</xref>
          ) calls this approach "hard parameter sharing" and traces it back to the work on Multitask Learning by (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ).
        </p>
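        <p>Schematically, hard parameter sharing amounts to concatenating the outputs of parallel branches before a shared output layer. The following toy NumPy sketch of the forward pass is illustrative only: the branch sizes and random weights are stand-ins, not our actual layer widths or trained parameters.</p>
        <preformat>
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy outputs of four parallel branches for one tweet: a CNN branch over
# word embeddings and three Dense branches over other feature sets.
branch_dims = [64, 32, 16, 8]
branch_outputs = [rng.standard_normal(d) for d in branch_dims]

# Hard parameter sharing: concatenate the branch outputs and pass them
# through one shared output layer with a softmax over two classes.
shared = np.concatenate(branch_outputs)        # shape (120,)
w = rng.standard_normal((shared.size, 2))
b = np.zeros(2)
logits = shared @ w + b
exp = np.exp(logits - logits.max())            # numerically stable softmax
probs = exp / exp.sum()                        # probabilities sum to 1
```
        </preformat>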
        <p>We used a subset of 1,200 tweets from the training dataset for validation, with accuracy as the validation measure. To avoid overfitting, we used the early stopping strategy (with a patience value of 5) and dropout regularization for the output layer (the fraction of input units to drop is 0.8).</p>
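        <p>The early stopping rule with patience can be sketched in plain Python; this is a simplified stand-in for the framework callback, with an illustrative function name:</p>
        <preformat>
```python
from operator import gt  # gt(a, b) tests whether a is strictly greater than b

def early_stop_epoch(val_scores, patience=5):
    # Epoch at which training halts: stop once the validation score has
    # failed to improve for `patience` consecutive epochs.
    best, wait = float("-inf"), 0
    for epoch, score in enumerate(val_scores):
        if gt(score, best):
            best, wait = score, 0   # improvement: reset the patience counter
        else:
            wait = wait + 1
            if wait == patience:
                return epoch        # patience exhausted: stop here
    return len(val_scores) - 1      # never triggered: train to the end
```
        </preformat>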
        <p>The model learns from four sets of features in parallel:
1. Tweets represented as sequences. The length of the word embeddings is 300. The weight matrix is built from pretrained word embeddings for Spanish3. In our experiments, the Convolutional neural network learned better from these features than the Recurrent one, so these features are fed to a CNN followed by a combination of MaxPooling and a flattening layer. The Convolutional layer contains 64 filters with a kernel size of 5. The size of the max pooling window is 20.
2. Tweets represented as a Bag-of-Words and smoothed with TF-IDF. These features are restricted to the 5,000 most frequent words, due to the computational complexity of a larger vocabulary, and passed to a Dense layer.
3. Features of sentiment and topic modelling, extracted from tweets.</p>
        <p>
          To represent sentiment in every tweet, we used "affective norms" calculated by (
          <xref ref-type="bibr" rid="ref15">15</xref>
          ).4 For each word in a tweet we collected its six norms from the dictionary, obtaining a word vector. We summed these vectors to create a vector of a sentence and applied MinMax normalization to scale its values between 0 and 1:
y_i = (x_i - x_min) / (x_max - x_min)    (1)
As concerns topics, we used LDA from Gensim (
          <xref ref-type="bibr" rid="ref13">13</xref>
          ) to extract the 20 most general topics from the collection (topic distribution) and calculate the distance from each tweet to each topic. The features are passed to a Dense layer.
4. Additional features. These features include:
(a) presence (0) or absence (1) of emoji, lists, word repetitions, special characters (e.g. !, ?);
        </p>
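        <p>The sentence-level sentiment vector and Equation (1) can be sketched as follows; this is a minimal illustration in which the per-word norm vectors are assumed to come from the dictionary of affective norms:</p>
        <preformat>
```python
def sentence_vector(word_norms):
    # Sum the per-word norm vectors (six affective norms per word)
    # into a single vector for the sentence.
    return [sum(col) for col in zip(*word_norms)]

def minmax(vec):
    # MinMax normalization, Equation (1): y_i = (x_i - x_min) / (x_max - x_min).
    lo, hi = min(vec), max(vec)
    if hi == lo:
        return [0.0 for _ in vec]   # constant vector: map everything to 0
    return [(x - lo) / (hi - lo) for x in vec]
```
        </preformat>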
      </sec>
    </sec>
    <sec id="sec-3">
      <title>2 https://spacy.io/api/lemmatizer</title>
      <p>3 https://www.kaggle.com/rtatman/pretrained-word-vectors-for-spanish/
4 The dictionary of norms can be downloaded here: http://crr.ugent.be/archives/
1844.
(b) normalized with MinMax quantitative features: number of words, lines,
minimum and maximum distance between embeddings, minimum
Levenstein distance between a pair of words (to detect puns) applied to all
possible pairs of words in a tweet.</p>
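      <p>The pun-oriented minimum edit distance feature can be sketched as follows; this uses the standard dynamic-programming Levenshtein algorithm, and the function names are illustrative:</p>
      <preformat>
```python
from itertools import combinations

def levenshtein(a, b):
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost))
        prev = cur
    return prev[-1]

def min_pair_distance(words):
    # Minimum edit distance over all word pairs in a tweet: a small value
    # can signal a near-homograph pair, i.e. a potential pun.
    return min(levenshtein(a, b) for a, b in combinations(words, 2))
```
      </preformat>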
      <p>The features are also passed to a Dense layer.</p>
      <p>Scheme 1 demonstrates the general outline of our best performing neural network used for the task of binary classification. The scheme includes the main parameters, e.g. the window size of 20 in 20: MaxPooling. The optimizer is adam (adaptive moment estimation); the loss function is binary crossentropy; the activation function at the hidden layers is ReLU, and for the output layer it is softmax. The last layer includes Dropout regularization with probability 0.8. For the second task the architecture is similar to that of the first task, except that the input values are separated into classes according to the average number of stars that the tweets earned. Also, in the second task, we use the mean squared error for validation.</p>
      <sec id="sec-3-1">
        <title>Test Results</title>
        <p>Table 1 demonstrates the results of our system compared to the winners of the two tasks. The first four measures are for Task 1, and the last column presents Task 2. As concerns Task 1, the performance of our system is average compared to other teams' results. However, it is well above the chance value and considerably higher than the results of baseline machine learning classifiers.
Application of computational methods, in particular word embeddings and neural networks, to the analysis of figurative speech and its varieties, such as humor, has recently proved to be very effective for the annotation of large corpora. It also gives a new perspective on the analysis of language features that are important in humor production and appreciation. Our approach included testing sets of different features in a growing combination: at each step we added a feature set and a subnetwork to the architecture and checked whether they improved our result. For example, including the sentiment dictionary (see above: "affective norms") improved our F-score by 0.015. The features we chose are usual in the systems we mentioned in Section 1: vocabulary represented as word embeddings, a TF-IDF weighted Bag-of-Words, a sentiment dictionary, and special characters that represent emotions on Twitter (e.g. :)).</p>
        <p>As for the system architecture, we tested so-called hard parameter sharing. Our system uses different neural networks that we empirically found to be more capable of dealing with each specific set of features. In general, we use a CNN for embeddings and Dense layers for the other feature sets. The result of our system is much higher than that of the baseline and is average compared to other participants. However, the value of each of the feature sets and the choice of neural network model have yet to be evaluated more closely. We plan to combine our features with other types of neural networks. Also, the model might have given a better result in Task 2 if we had used a regression model instead of multi-class classification. However, this is yet to be tested.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Acknowledgements</title>
        <p>The reported study was funded by RFBR according to the research project No.
18-37-00272.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Caruana</surname>
            ,
            <given-names>R.A.</given-names>
          </string-name>
          :
          <article-title>Multitask learning: A knowledge-based source of inductive bias</article-title>
          .
          <source>In: Machine Learning Proceedings</source>
          <year>1993</year>
          , pp.
          <volume>41</volume>
          –
          <fpage>48</fpage>
          . Morgan Kaufmann, San Francisco (CA) (
          <year>1993</year>
          ). https://doi.org/10.1016/B978-1-55860-307-3.50012-5, http://www.sciencedirect.com/science/article/pii/B9781558603073500125
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Castro</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chiruzzo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Overview of the haha task: Humor analysis based on human annotation at ibereval 2018</article-title>
          .
          <source>In: CEUR Workshop Proceedings</source>
          . vol.
          <volume>2150</volume>
          , pp.
          <volume>187</volume>
          –
          <issue>194</issue>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Castro</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chiruzzo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garat</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moncecchi</surname>
          </string-name>
          , G.:
          <article-title>A crowd-annotated Spanish corpus for humor analysis</article-title>
          .
          <source>In: Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media</source>
          . pp.
          <volume>7</volume>
          –
          <issue>11</issue>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Chiruzzo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Castro</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Etcheverry</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garat</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Prada</surname>
            ,
            <given-names>J.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          : Overview of HAHA at IberLEF 2019:
          <article-title>Humor Analysis based on Human Annotation</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2019</year>
          ).
          <source>CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao,
          <source>Spain (9</source>
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Doogan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ghosh</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Veale</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Idiom savant at semeval-2017 task 7: Detection and interpretation of english puns</article-title>
          .
          <source>In: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)</source>
          . pp.
          <volume>103</volume>
          –
          <issue>108</issue>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Mikhalkova</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karyakin</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>PunFields at semeval-2017 task 7: Employing Roget's thesaurus in automatic pun recognition and interpretation</article-title>
          .
          <source>In: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)</source>
          . pp.
          <volume>426</volume>
          –
          <issue>431</issue>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Mikolov</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sutskever</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Corrado</surname>
            ,
            <given-names>G.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dean</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Distributed representations of words and phrases and their compositionality</article-title>
          .
          <source>In: Advances in neural information processing systems</source>
          . pp.
          <volume>3111</volume>
          –
          <issue>3119</issue>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hempelmann</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurevych</surname>
          </string-name>
          , I.:
          <article-title>SemEval-2017 Task 7: Detection and interpretation of English puns</article-title>
          .
          <source>In: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)</source>
          . pp.
          <volume>58</volume>
          –
          <issue>68</issue>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Ortega-Bueno</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muniz-Cuza</surname>
            ,
            <given-names>C.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pagola</surname>
            ,
            <given-names>J.E.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Uo upv: Deep linguistic humor detection in spanish social media</article-title>
          .
          <source>In: Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval</source>
          <year>2018</year>
          )
          <article-title>co-located with 34th Conference of the Spanish Society for Natural Language Processing (SEPLN</article-title>
          <year>2018</year>
          )
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Ortiz-Bejar</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salgado</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Graff</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moctezuma</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miranda-Jimenez</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tellez</surname>
            ,
            <given-names>E.S.:</given-names>
          </string-name>
          <article-title>Ingeotec at ibereval 2018 task haha: tc and evomsa to detect and score humor in texts</article-title>
          .
          <source>In: Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval</source>
          <year>2018</year>
          )
          <article-title>co-located with 34th Conference of the Spanish Society for Natural Language Processing (SEPLN</article-title>
          <year>2018</year>
          )
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Potash</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Romanov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rumshisky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Semeval-2017 task 6:# hashtagwars: Learning a sense of humor</article-title>
          .
          <source>In: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)</source>
          . pp.
          <volume>49</volume>
          –
          <issue>57</issue>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Pramanick</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Das</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Ju cse nlp @ semeval 2017 task 7: Employing rules to detect and interpret english puns</article-title>
          .
          <source>In: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)</source>
          . pp.
          <volume>432</volume>
          –
          <issue>435</issue>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Rehurek</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sojka</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Software Framework for Topic Modelling with Large Corpora</article-title>
          .
          <source>In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks</source>
          . pp.
          <volume>45</volume>
          –
          <fpage>50</fpage>
          . ELRA, Valletta, Malta (May
          <year>2010</year>
          ), http: //is.muni.cz/publication/884893/en
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Ruder</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>An overview of multi-task learning in deep neural networks</article-title>
          .
          <source>arXiv preprint arXiv:1706.05098</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Stadthagen-Gonzalez</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Imbault</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sanchez</surname>
            ,
            <given-names>M.A.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brysbaert</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Norms of valence and arousal for 14,031 spanish words</article-title>
          .
          <source>Behavior research methods 49(1)</source>
          ,
          <volume>111</volume>
          –
          <fpage>123</fpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Stock</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Strapparava</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Hahacronym: Humorous agents for humorous acronyms</article-title>
          . In: Stock, O., Strapparava, C., Nijholt, A. (eds.), pp.
          <volume>125</volume>
          –
          <issue>135</issue>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>