<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>OPI-JSA at CLEF 2017: Author Clustering and Style Breach Detection</article-title>
      </title-group>
      <contrib-group>
<contrib contrib-type="author">
  <string-name>Daniel Karaś</string-name>
</contrib>
<contrib contrib-type="author">
  <string-name>Martyna S</string-name>
</contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>National Information Processing Institute</institution>
          ,
          <country country="PL">Poland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <abstract>
<p>In this paper, we propose methods for the author identification task, divided into author clustering and style breach detection. Our solution to the first problem consists of locality-sensitive-hashing-based clustering of real-valued vectors, which are mixtures of stylometric features and bags of n-grams. For the second problem, we propose a statistical approach based on several different tf-idf features that characterize documents. Applying the Wilcoxon Signed Rank test to these features, we determine the style breaches.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>Introduction</title>
<p>The Author Clustering task consists of two distinct problems: author clustering and
authorship link ranking. Solving the first scenario means assigning each of the m given
documents to one of k clusters, where k is unknown and has to be approximated, and each
of the k clusters corresponds to a single author. Authorship link ranking, on the other hand,
can be understood as assigning intra-cluster confidence scores to document pairs,
where a higher score indicates greater similarity between documents.</p>
      <p>
Both problems have to be solved for multiple collections of up to 50 documents.
An additional difficulty lies in the fact that the document batches were created in 3 different
languages: English, Dutch, and Greek. This property makes it much harder to
implement typical language-dependent solutions such as Word2Vec [
        <xref ref-type="bibr" rid="ref3">3</xref>
] or WordNet [
        <xref ref-type="bibr" rid="ref4">4</xref>
], since such resources are not readily available for languages other than English. At its core, our
solution to the Author Clustering task consists of two main components: locality-sensitive
hashing (LSH) and stylometric measures that are not language-specific.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Locality-sensitive hashing</title>
<p>The goal of locality-sensitive hashing (LSH) is to cluster items into "buckets" by
approximating similarities between those items. This group of algorithms is widely
used in tasks such as clustering and near-duplicate detection.</p>
      <p>
There are multiple LSH algorithms. During our research, we tested two of them:
MinHash [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and SuperBit [
        <xref ref-type="bibr" rid="ref2">2</xref>
]. After multiple evaluations, SuperBit proved to be
better suited for the described task. This algorithm approximates the cosine similarity between
real-valued vectors and clusters them into a given number of clusters. The logic behind
choosing this family of algorithms is twofold: these algorithms have a reputation of
being well suited for clustering, and we also wanted to test the trade-off between
their speed and their effectiveness.
      </p>
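<p>The core idea behind cosine-approximating LSH schemes such as SuperBit can be illustrated with plain sign random projections: each random hyperplane contributes one signature bit, and the fraction of agreeing bits between two signatures encodes the angle between the vectors. The sketch below is a simplified illustration of this principle, not the orthogonalized SuperBit variant itself; all names in it are illustrative.</p>
<preformat>
```python
import math
import random

def signature(vec, hyperplanes):
    # one bit per hyperplane: which side of the hyperplane the vector lies on
    return [1 if sum(h * v for h, v in zip(plane, vec)) >= 0 else 0
            for plane in hyperplanes]

def estimated_cosine(sig_a, sig_b):
    # P(bits agree) = 1 - theta/pi, so recover theta and take its cosine
    agree = sum(a == b for a, b in zip(sig_a, sig_b))
    theta = math.pi * (1 - agree / len(sig_a))
    return math.cos(theta)

random.seed(0)
dim, n_planes = 10, 4096
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
u = [1.0] * dim                 # true cosine(u, v) = 5 / sqrt(50), about 0.707
v = [1.0] * 5 + [0.0] * 5
est = estimated_cosine(signature(u, planes), signature(v, planes))
```
</preformat>
<p>More hyperplanes give a tighter approximation; the signatures themselves are what get bucketed during clustering.</p>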
      <p>
        One of the main challenges of Author Clustering lies in establishing an optimal
number of clusters since the count of clusters is not given a priori. Multiple solutions to
this problem exist. Our final algorithm uses a process called silhouetting [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
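<p>Silhouetting scores a candidate clustering by how much closer each point is to its own cluster than to the nearest other cluster, so the number of clusters can be approximated by picking the candidate with the highest mean silhouette. A minimal illustration of the silhouette coefficient, assuming Euclidean distance, a precomputed labeling, and clusters with at least two points (function names are illustrative, not the routine used in the system):</p>
<preformat>
```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean_silhouette(points, labels):
    # s(i) = (b(i) - a(i)) / max(a(i), b(i)), averaged over all points
    scores = []
    for i, p in enumerate(points):
        own = [euclid(p, q) for j, q in enumerate(points)
               if labels[j] == labels[i] and j != i]
        a = sum(own) / len(own)            # mean intra-cluster distance
        b = min(                            # mean distance to nearest other cluster
            sum(euclid(p, q) for j, q in enumerate(points) if labels[j] == lab)
            / sum(1 for j in range(len(points)) if labels[j] == lab)
            for lab in set(labels) if lab != labels[i])
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
good = mean_silhouette(points, [0, 0, 0, 1, 1, 1])   # two tight clusters
bad = mean_silhouette(points, [0, 1, 0, 1, 0, 1])    # clusters mixed up
```
</preformat>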
    </sec>
    <sec id="sec-4">
      <title>Stylometric Measures</title>
      <p>
Due to the lack of language-dependent resources such as Word2Vec and WordNet for
languages other than English, we decided to use well-known language-agnostic
stylometric measures [
        <xref ref-type="bibr" rid="ref5">5</xref>
] as well as a typical bag of word n-grams representation. For the
same reason, no stemming or lemmatization is performed on the documents.
      </p>
<p>Each document is represented as a fixed-size, real-valued vector. The first part of the
vector is a bag of word 3-grams, where each coordinate corresponds to a unique word
3-gram present in the whole document collection for a given problem.</p>
<p>For the rest of the vector, a mixture of multiple lexical word- and character-based
measures is used. During the research, multiple different measures were evaluated; in
the end, we decided to use: special character frequency, average word length, average
sentence length in characters, average sentence length in words, and vocabulary richness
(the number of unique words divided by the number of words).</p>
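<p>The listed measures translate directly into code; a minimal sketch, assuming whitespace tokenization, a naive sentence split on sentence-final punctuation, and punctuation characters as the "special" set (the system's exact preprocessing may differ):</p>
<preformat>
```python
import re
import string

def stylometric_features(text):
    words = text.split()
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_special = sum(1 for ch in text if ch in string.punctuation)
    return {
        "special_char_freq": n_special / len(text),
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "avg_sent_len_chars": sum(len(s) for s in sentences) / len(sentences),
        "avg_sent_len_words": sum(len(s.split()) for s in sentences) / len(sentences),
        "vocab_richness": len(set(w.lower() for w in words)) / len(words),
    }

feats = stylometric_features("The cat sat. The cat ran! A dog barked?")
```
</preformat>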
    </sec>
    <sec id="sec-5">
      <title>Results</title>
<table-wrap id="tab-clustering">
  <caption><p>Author clustering results per problem (English training collections).</p></caption>
  <table>
    <thead><tr><th>Problem</th><th>Language</th><th>Genre</th><th>F-Bcubed</th><th>R-Bcubed</th><th>P-Bcubed</th><th>Av-Precision</th></tr></thead>
    <tbody>
      <tr><td>problem001</td><td>en</td><td>articles</td><td>0.645930</td><td>0.696670</td><td>0.602080</td><td>0.400580</td></tr>
      <tr><td>problem002</td><td>en</td><td>articles</td><td>0.463950</td><td>0.383330</td><td>0.587500</td><td>0.081134</td></tr>
      <tr><td>problem003</td><td>en</td><td>articles</td><td>0.418680</td><td>0.461900</td><td>0.382860</td><td>0.124740</td></tr>
      <tr><td>problem004</td><td>en</td><td>articles</td><td>0.412690</td><td>0.543330</td><td>0.332690</td><td>0.083299</td></tr>
      <tr><td>problem005</td><td>en</td><td>articles</td><td>0.628290</td><td>0.623330</td><td>0.633330</td><td>0.282090</td></tr>
      <tr><td>problem006</td><td>en</td><td>articles</td><td>0.418510</td><td>0.398330</td><td>0.440830</td><td>0.060129</td></tr>
      <tr><td>problem007</td><td>en</td><td>articles</td><td>0.423770</td><td>0.348720</td><td>0.540000</td><td>0.072016</td></tr>
      <tr><td>problem008</td><td>en</td><td>articles</td><td>0.482420</td><td>0.460000</td><td>0.507140</td><td>0.079461</td></tr>
      <tr><td>problem009</td><td>en</td><td>articles</td><td>0.776280</td><td>0.738890</td><td>0.817650</td><td>0.474400</td></tr>
      <tr><td>problem010</td><td>en</td><td>articles</td><td>0.572720</td><td>0.516670</td><td>0.642420</td><td>0.165370</td></tr>
      <tr><td>problem011</td><td>en</td><td>articles</td><td>0.462030</td><td>0.424290</td><td>0.507140</td><td>0.014544</td></tr>
      <tr><td>problem012</td><td>en</td><td>articles</td><td>0.528660</td><td>0.575000</td><td>0.489230</td><td>0.123790</td></tr>
      <tr><td>problem013</td><td>en</td><td>articles</td><td>0.450820</td><td>0.644440</td><td>0.346670</td><td>0.092703</td></tr>
      <tr><td>problem014</td><td>en</td><td>articles</td><td>0.621250</td><td>0.633330</td><td>0.609620</td><td>0.205350</td></tr>
      <tr><td>problem015</td><td>en</td><td>articles</td><td>0.424140</td><td>0.552380</td><td>0.344230</td><td>0.027974</td></tr>
      <tr><td>problem016</td><td>en</td><td>articles</td><td>0.479660</td><td>0.658330</td><td>0.377270</td><td>0.154390</td></tr>
      <tr><td>problem017</td><td>en</td><td>articles</td><td>0.487220</td><td>0.458330</td><td>0.520000</td><td>0.029075</td></tr>
      <tr><td>problem018</td><td>en</td><td>articles</td><td>0.520000</td><td>0.433330</td><td>0.650000</td><td>0.022727</td></tr>
      <tr><td>problem019</td><td>en</td><td>articles</td><td>0.446230</td><td>0.543330</td><td>0.378570</td><td>0.072511</td></tr>
      <tr><td>problem020</td><td>en</td><td>articles</td><td>0.490040</td><td>0.485710</td><td>0.494440</td><td>0.100070</td></tr>
      <tr><td>problem021</td><td>en</td><td>reviews</td><td>0.345450</td><td>0.950000</td><td>0.211110</td><td>0.221300</td></tr>
      <tr><td>problem022</td><td>en</td><td>reviews</td><td>0.350800</td><td>0.512500</td><td>0.266670</td><td>0.066592</td></tr>
      <tr><td>problem023</td><td>en</td><td>reviews</td><td>0.353910</td><td>1.000000</td><td>0.215000</td><td>0.272140</td></tr>
      <tr><td>problem024</td><td>en</td><td>reviews</td><td>0.400190</td><td>0.600830</td><td>0.300000</td><td>0.116170</td></tr>
      <tr><td>problem025</td><td>en</td><td>reviews</td><td>0.337180</td><td>0.875000</td><td>0.208820</td><td>0.065738</td></tr>
      <tr><td>problem026</td><td>en</td><td>reviews</td><td>0.469780</td><td>0.508330</td><td>0.436670</td><td>0.034419</td></tr>
      <tr><td>problem027</td><td>en</td><td>reviews</td><td>0.402840</td><td>0.522220</td><td>0.327880</td><td>0.053262</td></tr>
      <tr><td>problem028</td><td>en</td><td>reviews</td><td>0.494430</td><td>0.600000</td><td>0.420450</td><td>0.040009</td></tr>
      <tr><td>problem029</td><td>en</td><td>reviews</td><td>0.501390</td><td>0.720000</td><td>0.384620</td><td>0.061785</td></tr>
      <tr><td>problem030</td><td>en</td><td>reviews</td><td>0.380680</td><td>0.860000</td><td>0.244440</td><td>0.078936</td></tr>
      <tr><td>problem031</td><td>en</td><td>reviews</td><td>0.321360</td><td>0.218570</td><td>0.606670</td><td>0.021624</td></tr>
      <tr><td>problem032</td><td>en</td><td>reviews</td><td>0.492580</td><td>0.673330</td><td>0.388330</td><td>0.227330</td></tr>
      <tr><td>problem033</td><td>en</td><td>reviews</td><td>0.516360</td><td>0.708330</td><td>0.406250</td><td>0.153760</td></tr>
    </tbody>
  </table>
</table-wrap>
<p>Our solution to author clustering and authorship link ranking can be summarized in the following steps.
First, we approximate the desired number of clusters using silhouetting; then we
represent every document in a collection as a real-valued vector consisting of a bag of word
3-grams and multiple stylometric measures; finally, the SuperBit LSH algorithm is used for
the actual clustering procedure. The authorship link score is calculated using cosine similarity.</p>
</sec>
<sec id="sec-5-1">
  <title>Style Breach Detection</title>
</sec>
    <sec id="sec-6">
<title>Introduction</title>
<p>The Style Breach Detection task consists in detecting borders where authorship may change
within a document. Unlike the text segmentation problem, which mainly focuses on
finding switches of topics, the point of the style breach detection task lies in
discovering borders using writing style features, ignoring analysis of the content of the text.</p>
      <p>
We propose a statistical approach based on tf-idf features that characterize
documents from widely different points of view: word n-grams (we consider only n = 1 and
n = 3), punctuation, part-of-speech (PoS) tags obtained with the Penn Treebank POS Tagger [
        <xref ref-type="bibr" rid="ref12">12</xref>
], and stopwords, in order to determine the borders of changing style within a document.
      </p>
    </sec>
    <sec id="sec-7">
      <title>The Wilcoxon Signed Rank Test</title>
      <p>
        The paired samples Wilcoxon signed-rank test is a nonparametric test which is used to
verify the null hypothesis that two samples come from the same distribution [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
<p>Suppose we have a random sample of $N$ pairs $(X_1, Y_1), \ldots, (X_N, Y_N)$, where
$X_1, \ldots, X_N$ and $Y_1, \ldots, Y_N$ correspond to the block/object effect before and
after some activity, respectively. For each pair the difference is formed as
$D_i = X_i - Y_i$. We assume the observations $D_1, \ldots, D_N$ are independent draws from a
population which is continuous and symmetric with median $M_D$. We verify the null
hypothesis $H_0 : M_D = 0$ against the two-sided alternative $H_1 : M_D \neq 0$.</p>
<p>The algorithm to determine the test statistic is as follows: we order
the absolute differences $|D_1|, \ldots, |D_N|$ from the smallest to the largest and assign them
$N$ integer ranks (from 1 to $N$), noting the original signs of the differences $D_i$. We
consider the sum of the ranks of the positive differences as the test criterion, because the sum
of all the ranks is a constant. If we denote by $r$ the rank of a random variable, then the
test statistic can be written as</p>
<p>$$T = \sum_{i=1}^{N} r(|D_i|)\, I(D_i &gt; 0), \qquad (1)$$
where $I(\cdot) = 1$ if the condition is true and $I(\cdot) = 0$ otherwise.</p>
<p>We denote $Z_i = I(D_i &gt; 0)$ for each $i = 1, \ldots, N$. Under the null hypothesis, the $Z_i$
are independent and identically distributed from a Bernoulli population with probability
$P(Z_i = 1) = \tfrac{1}{2}$. The test statistic is a linear combination of the $Z_i$ variables, so we can
determine its expected value and variance as follows:</p>
<p>$$E(T) = \frac{N(N + 1)}{4}, \qquad (2)$$</p>
<p>$$\operatorname{Var}(T) = \frac{N(N + 1)(2N + 1)}{24}. \qquad (3)$$</p>
<p>We apply an approximation based on the asymptotic normality of $T$, due to the lack of
knowledge of the exact distribution of this statistic. The statistic
$$T^{*} = \frac{T - E(T)}{\sqrt{\operatorname{Var}(T)}} \qquad (4)$$
is asymptotically normal under $H_0$.</p>
<p>Let $\alpha$ denote an accepted significance level. We reject the null hypothesis against
the two-sided alternative if $|T^{*}| \ge z_{1-\alpha/2}$, where $z_{1-\alpha/2}$ is the $(1 - \alpha/2)$th quantile
of the normal distribution with mean 0 and standard deviation 1.</p>
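<p>The test described above can be sketched in a few lines (a simplified implementation with tie-averaged ranks and the normal approximation of equations (1)-(4); it is an illustration, not the exact routine used in the system):</p>
<preformat>
```python
import math

def wilcoxon_signed_rank(x, y):
    # paired differences; zero differences are dropped, as is customary
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    # rank the absolute differences, averaging ranks over ties
    abs_d = [abs(v) for v in d]
    order = sorted(range(n), key=lambda i: abs_d[i])
    ranks = [0.0] * n
    i = 0
    while n > i:
        j = i
        while n > j + 1 and abs_d[order[j + 1]] == abs_d[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1               # average of 1-based positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    t = sum(r for r, di in zip(ranks, d) if di > 0)   # eq. (1)
    e = n * (n + 1) / 4                                # eq. (2)
    var = n * (n + 1) * (2 * n + 1) / 24               # eq. (3)
    z = (t - e) / math.sqrt(var)                       # eq. (4)
    p = math.erfc(abs(z) / math.sqrt(2))               # two-sided normal approx.
    return t, p

t, p = wilcoxon_signed_rank([2, 1, 4, 1, 6], [1, 3, 1, 5, 1])
# differences are (1, -2, 3, -4, 5), so T = 1 + 3 + 5 = 9
```
</preformat>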
    </sec>
    <sec id="sec-8">
      <title>Tf-idf: Term frequency–inverse document frequency</title>
      <p>
        Originally, tf-idf calculates values for each word in a document through an inverse
proportion of the frequency of the word in a particular document to the percentage of
documents the word appears in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
<p>Formally, tf-idf is the product of term frequency and inverse document frequency.
The term frequency of the $i$-th word in the $j$-th document may be written as</p>
<p>$$\mathrm{tf}_{i,j} = \frac{n_{i,j}}{\sum_{k} n_{k,j}}, \qquad (5)$$
where $n_{i,j}$ is the number of occurrences of the $i$-th word in the $j$-th document and the
denominator is the sum of the numbers of occurrences of all words in the $j$-th
document. The inverse document frequency is the logarithm of the inverse fraction of the
documents that contain the $i$-th word:
$$\mathrm{idf}_{i} = \log \frac{|D|}{|\{d : w_{i} \in d\}|}, \qquad (6)$$
where $|D|$ is the number of all documents in the given corpus and the denominator is
equal to the number of documents in which the $i$-th word occurs at least once. Then, the tf-idf
for the $i$-th word and the $j$-th document is:
$$\text{tf-idf}_{i,j} = \mathrm{tf}_{i,j} \cdot \mathrm{idf}_{i}. \qquad (7)$$</p>
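<p>Equations (5)-(7) translate directly into code; a small sketch over already-tokenized documents (names are illustrative):</p>
<preformat>
```python
import math

def tf_idf(docs):
    # docs: list of token lists; returns one {term: tf-idf} dict per document
    n_docs = len(docs)
    df = {}
    for doc in docs:
        for w in set(doc):                    # document frequency, eq. (6) denominator
            df[w] = df.get(w, 0) + 1
    out = []
    for doc in docs:
        total = len(doc)                      # eq. (5) denominator
        counts = {}
        for w in doc:
            counts[w] = counts.get(w, 0) + 1
        out.append({w: (c / total) * math.log(n_docs / df[w])   # eqs. (5)-(7)
                    for w, c in counts.items()})
    return out

vecs = tf_idf([["a", "b", "a"], ["b", "c"]])
```
</preformat>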
    </sec>
    <sec id="sec-9">
      <title>The paired samples Wilcoxon Signed Rank test with tf-idf features to detect style breaches</title>
<p>The corpus used to construct our approach consists only of documents provided
in English; each document may contain either zero or many style breaches, which occur at the
ends of sentences. Further, we noticed that paragraphs are natural borders for style breaches. On
this account, we split each document into sections, assuming that no fewer than two blank
lines determine the boundary between two paragraphs. If there are no blank lines
within a document, then every m sentences are organized into a section, where m is a fixed
natural number.</p>
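<p>The splitting rule can be sketched as follows (the precise blank-line and sentence-boundary conventions are assumptions of this sketch, not a specification of the system's preprocessing):</p>
<preformat>
```python
import re

def split_into_sections(text, m=10):
    # paragraphs separated by two or more blank lines act as sections
    parts = [p.strip() for p in re.split(r"\n[ \t]*\n[ \t]*\n+", text) if p.strip()]
    if len(parts) > 1:
        return parts
    # no blank lines: group every m sentences into one section
    sentences = [s.strip() for s in re.findall(r"[^.!?]+[.!?]?", text) if s.strip()]
    return [" ".join(sentences[i:i + m]) for i in range(0, len(sentences), m)]

sections = split_into_sections("Para one.\n\n\nPara two.\n\n\nPara three.")
grouped = split_into_sections("One. Two. Three. Four. Five.", m=2)
```
</preformat>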
      <p>
        Customarily, tf–idf is a numerical statistic that is intended to reflect how important
a word is to a document in a corpus [
        <xref ref-type="bibr" rid="ref9">9</xref>
]. In our approach, we use tf-idf to determine
how important a particular term is to a paragraph in a document. For each document
and each type of term mentioned above, we determine the tf-idf matrix $X_i$, where
$X_1, X_2, X_3, X_4, X_5$ denote the tf-idf matrices for words, punctuation, PoS tags, stopwords,
and word 3-grams, respectively. The number of rows of $X_i$ is equal to the number of paragraphs
in a document, and the number of columns of this matrix is equal to the number of all
unique terms in this document.
      </p>
<p>We compute vectors representing paragraphs by concatenating the tf-idf vectors of
the selected terms:
$$x_{k} = (x_{k,j_1}, \ldots, x_{k,j_s}), \qquad (j_1, \ldots, j_s) \subseteq \{1, \ldots, 5\}, \qquad (8)$$
where $x_{k}$ denotes the combined tf-idf vector for the $k$-th paragraph, obtained by concatenating
the $s$ tf-idf vectors of the above-mentioned terms ($x_{k,j}$ is the tf-idf vector of the $j$-th term type
for the $k$-th paragraph).</p>
<p>The primary aim of this approach is to test whether one author or multiple authors wrote two
consecutive paragraphs. For this purpose, we use the paired samples Wilcoxon Signed
Rank test, which verifies whether two samples come from the same distribution. We
assume that if the same author wrote two paragraphs, they should have the same distribution,
and analogously, if two paragraphs are not written by the same author, they come from
different distributions. In other words, if the same author has drafted two sections, the
result of the test should not be statistically significant (the null hypothesis is accepted;
the style does not change between two consecutive paragraphs). On the other hand, if
multiple authors wrote the two paragraphs, then the null hypothesis should be rejected (the style
difference between the two sections is statistically significant).</p>
<p>For each two consecutive paragraphs in a document, we test whether these paragraphs
have the same style. As the result of these tests, we obtain p-values. Next, we sort the
p-values from smallest to largest, and we determine the S lowest p-values, where
S is defined as:</p>
<p>$$S = \lfloor p \cdot |P| \rfloor + 1, \qquad (9)$$
where $p$ is a fixed value in $[0, 1]$ and $|P|$ is the number of paragraphs in a
document.</p>
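<p>Selecting the breach candidates from the per-border p-values follows directly from eq. (9); a minimal sketch, assuming one p-value per border between consecutive paragraphs (so the number of paragraphs is one more than the number of borders):</p>
<preformat>
```python
import math

def select_breach_borders(p_values, p=0.3):
    # p_values[i] is the test p-value for the border after paragraph i
    n_paragraphs = len(p_values) + 1
    s = math.floor(p * n_paragraphs) + 1                 # eq. (9)
    by_p = sorted(range(len(p_values)), key=lambda i: p_values[i])
    return sorted(by_p[:s])                              # indices of the S lowest p-values

borders = select_breach_borders([0.9, 0.01, 0.5, 0.02, 0.8, 0.7, 0.6, 0.95, 0.3])
```
</preformat>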
      <p>
The borders between paragraphs corresponding to the selected p-values imply the
style breaches.
</p>
<p>
The main goal of the training evaluations was to choose the values of the parameters
used in our submitted solution. Keeping in mind the previous PAN task, the Intrinsic
Plagiarism Detection task [
        <xref ref-type="bibr" rid="ref8">8</xref>
], we assumed that at least 70% of each document was
written by one primary author, and the remaining 30% of a text could possibly be written by
other authors. Hence we fixed p as 0.3. Additionally, our initial experiments showed that the
best results were obtained for m = 10.
      </p>
<p>Therefore, we performed the principal evaluation to determine the optimal set of tf-idf features
for the parameter values mentioned above. In Table 3, we show the detailed
results for each subset of tf-idf features. It is worth noticing that our primary
intention was to optimize the F-score of WinPR. Due to the similar results obtained on
the training dataset, we selected the subset of tf-idf features that, based on our previous
experience, also gives good results on other datasets. For the final submission, we chose the
tf-idf of words, PoS tags, and stopwords.</p>
<table-wrap id="tab4">
  <label>Table 4</label>
  <caption><p>Official results of the Style Breach Detection task.</p></caption>
  <table>
    <thead><tr><th>Team</th><th>winF</th><th>winP</th><th>winR</th><th>windowDiff</th><th>Runtime</th></tr></thead>
    <tbody>
      <tr><td>OPI-JSA</td><td>0.322601</td><td>0.314656</td><td>0.585617</td><td>0.545648</td><td>00:01:19</td></tr>
      <tr><td>khan17</td><td>0.288795</td><td>0.399004</td><td>0.487075</td><td>0.479990</td><td>00:02:23</td></tr>
      <tr><td>kuznetsova17</td><td>0.277264</td><td>0.371108</td><td>0.542527</td><td>0.529496</td><td>00:20:25</td></tr>
    </tbody>
  </table>
</table-wrap>
      <p>
In Table 4, the official results are shown [
        <xref ref-type="bibr" rid="ref6">6</xref>
]. Our submitted solution took first
place according to winF, winR, and runtime. The proposed approach optimizes recall at
the sacrifice of precision and windowDiff (which was the main intention of our system).
      </p>
      <sec id="sec-9-1">
        <title>Conclusion</title>
        <p>
We have presented methods for the author identification task [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] that we submitted to
the 2017 PAN competition [
          <xref ref-type="bibr" rid="ref7">7</xref>
]. This year, the author identification task was divided
into author clustering and style breach detection tasks. We proposed solutions for these two
tasks independently.
        </p>
<p>The submitted system for the style breach detection task obtained the best result
according to the F-score of WinPR, which is used for the final ranking of all participating teams.
Additionally, it is worth noticing that we built both of our algorithms with execution time
in mind. Both systems had the shortest runtimes of all submitted solutions.
Our implementation for the author clustering task achieved the fastest running time, which
could be further improved if the number of clusters were known a priori for each problem,
since the routine of optimizing the number of clusters for each problem is the most
time-consuming step of the algorithm. While exhibiting remarkable running time, our
algorithm did not perform substantially worse than the other contestants. For the kinds of use
cases in which we intend to employ this algorithm, the trade-off between running time and
performance proved satisfying, which means we may use it in real-world scenarios after a few
improvements, such as using language-specific tools like WordNet.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Gibbons</surname>
            ,
            <given-names>J.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chakraborti</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Nonparametric statistical inference</article-title>
          . In: Lovric, M. (ed.)
          <source>International Encyclopedia of Statistical Science</source>
          , pp.
          <fpage>977</fpage>
          -
          <lpage>979</lpage>
          . Springer (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Ji</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
,
<string-name>
  <surname>Zhang</surname>
  ,
  <given-names>B.</given-names>
</string-name>
,
<string-name>
  <surname>Tian</surname>
  ,
  <given-names>Q.</given-names>
</string-name>
          :
          <article-title>Super-bit locality-sensitive hashing</article-title>
. In:
<string-name><surname>Pereira</surname>, <given-names>F.</given-names></string-name>
,
<string-name><surname>Burges</surname>, <given-names>C.J.C.</given-names></string-name>
,
<string-name><surname>Bottou</surname>, <given-names>L.</given-names></string-name>
,
<string-name><surname>Weinberger</surname>, <given-names>K.Q.</given-names></string-name>
(eds.)
          <source>Advances in Neural Information Processing Systems</source>
          <volume>25</volume>
          , pp.
          <fpage>108</fpage>
          -
          <lpage>116</lpage>
          . Curran Associates, Inc. (
          <year>2012</year>
), http://papers.nips.cc/paper/4847-super-bit-locality-sensitive-hashing.pdf
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Mikolov</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Corrado</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dean</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Efficient estimation of word representations in vector space</article-title>
          .
<source>CoRR abs/1301.3781</source>
(
          <year>2013</year>
          ), http://arxiv.org/abs/1301.3781
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>G.A.</given-names>
          </string-name>
          :
          <article-title>Wordnet: A lexical database for english</article-title>
          .
          <source>Commun. ACM</source>
          <volume>38</volume>
          (
          <issue>11</issue>
          ),
          <fpage>39</fpage>
          -
          <lpage>41</lpage>
          (
          <year>Nov 1995</year>
), http://doi.acm.org/10.1145/219717.219748
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Pervaz</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ameer</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sittar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nawab</surname>
            ,
            <given-names>R.M.A.</given-names>
          </string-name>
          :
          <article-title>Identification of author personality traits using stylistic features: Notebook for pan at clef 2015</article-title>
. In:
<string-name><surname>Cappellato</surname>, <given-names>L.</given-names></string-name>
,
<string-name><surname>Ferro</surname>, <given-names>N.</given-names></string-name>
,
<string-name><surname>Jones</surname>, <given-names>G.J.F.</given-names></string-name>
,
<string-name><surname>SanJuan</surname>, <given-names>E.</given-names></string-name>
(eds.)
          <source>CLEF (Working Notes)</source>
          .
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>1391</volume>
          .
CEUR-WS.org
          (
          <year>2015</year>
), http://dblp.uni-trier.de/db/conf/clef/clef2015w.html#PervazASN15
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Potthast</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gollub</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rangel</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stamatatos</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stein</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Improving the Reproducibility of PAN's Shared Tasks: Plagiarism Detection, Author Identification, and Author Profiling</article-title>
. In:
<string-name><surname>Kanoulas</surname>, <given-names>E.</given-names></string-name>
,
<string-name><surname>Lupu</surname>, <given-names>M.</given-names></string-name>
,
<string-name><surname>Clough</surname>, <given-names>P.</given-names></string-name>
,
<string-name><surname>Sanderson</surname>, <given-names>M.</given-names></string-name>
,
<string-name><surname>Hall</surname>, <given-names>M.</given-names></string-name>
,
<string-name><surname>Hanbury</surname>, <given-names>A.</given-names></string-name>
,
<string-name><surname>Toms</surname>, <given-names>E.</given-names></string-name>
(eds.)
<source>Information Access Evaluation meets Multilinguality, Multimodality, and Visualization</source>
          .
          <source>5th International Conference of the CLEF Initiative (CLEF 14)</source>
          . pp.
          <fpage>268</fpage>
          -
          <lpage>299</lpage>
          . Springer, Berlin Heidelberg New York (
          <year>Sep 2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Potthast</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rangel</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tschuggnall</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stamatatos</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stein</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
:
<article-title>Overview of PAN'17: Author Identification, Author Profiling, and Author Obfuscation</article-title>
. In:
<string-name><surname>Jones</surname>, <given-names>G.</given-names></string-name>
,
<string-name><surname>Lawless</surname>, <given-names>S.</given-names></string-name>
,
<string-name><surname>Gonzalo</surname>, <given-names>J.</given-names></string-name>
,
<string-name><surname>Kelly</surname>, <given-names>L.</given-names></string-name>
,
<string-name><surname>Goeuriot</surname>, <given-names>L.</given-names></string-name>
,
<string-name><surname>Mandl</surname>, <given-names>T.</given-names></string-name>
,
<string-name><surname>Cappellato</surname>, <given-names>L.</given-names></string-name>
,
<string-name><surname>Ferro</surname>, <given-names>N.</given-names></string-name>
(eds.)
<source>Experimental IR Meets Multilinguality, Multimodality, and Interaction</source>
          .
          <source>8th International Conference of the CLEF Initiative (CLEF 17)</source>
          . Springer, Berlin Heidelberg New York (
          <year>Sep 2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Potthast</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stein</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eiselt</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barrón-Cedeño</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Overview of the 1st International Competition on Plagiarism Detection</article-title>
. In:
<string-name><surname>Stein</surname>, <given-names>B.</given-names></string-name>
,
<string-name><surname>Rosso</surname>, <given-names>P.</given-names></string-name>
,
<string-name><surname>Stamatatos</surname>, <given-names>E.</given-names></string-name>
,
<string-name><surname>Koppel</surname>, <given-names>M.</given-names></string-name>
,
<string-name><surname>Agirre</surname>, <given-names>E.</given-names></string-name>
(eds.) SEPLN 09 Workshop on Uncovering Plagiarism, Authorship, and
          <source>Social Software Misuse (PAN 09)</source>
          . pp.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          . CEUR-WS.org (Sep
          <year>2009</year>
          ), http://ceur-ws.org/Vol-502
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Rajaraman</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ullman</surname>
            ,
            <given-names>J.D.</given-names>
          </string-name>
          :
          <source>Mining of Massive Datasets</source>
          . Cambridge University Press (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Ramos</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Using tf-idf to determine word relevance in document queries</article-title>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Rousseeuw</surname>
            ,
            <given-names>P.J.</given-names>
          </string-name>
          :
          <article-title>Silhouettes: A graphical aid to the interpretation and validation of cluster analysis</article-title>
          .
          <source>Journal of Computational and Applied Mathematics</source>
          <volume>20</volume>
          ,
          <fpage>53</fpage>
          -
          <lpage>65</lpage>
          (
          <year>1987</year>
          ), http://www.sciencedirect.com/science/article/pii/0377042787901257
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Santorini</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Part-of-speech tagging guidelines for the Penn Treebank Project</article-title>
          .
          <source>Tech. Rep. MS-CIS-90-47</source>
          , Department of Computer and Information Science, University of Pennsylvania (
          <year>1990</year>
          ), ftp://ftp.cis.upenn.edu/pub/treebank/doc/tagguide.ps.gz
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Tschuggnall</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stamatatos</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Verhoeven</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Daelemans</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Specht</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stein</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Potthast</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          : In:
          <string-name>
            <surname>Cappellato</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ferro</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goeuriot</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mandl</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (eds.)
          <source>Working Notes Papers of the CLEF 2017 Evaluation Labs</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>