    A Generic Authorship Verification Scheme Based on
                   Equal Error Rates
                        Notebook for PAN at CLEF 2015

                            Oren Halvani and Christian Winter

                 Fraunhofer Institute for Secure Information Technology SIT
                       Rheinstrasse 75, 64295 Darmstadt, Germany
                        {FirstName.LastName}@SIT.Fraunhofer.de



       Abstract We present a generic authorship verification scheme for the PAN-2015
       identification task. Our scheme uses a two-step training phase on the training
       corpora. The first step learns individual feature category parameters as well as
       decision thresholds based on equal error rates. The second step builds feature
       category ensembles, which are used for majority vote decisions because ensembles
       can outperform single feature categories. All feature categories used in our
       method are deliberately simple, which yields several advantages: our method is
       entirely independent of any external linguistic resources (even word lists), and
       hence it can easily be applied to many languages. Moreover, the classification is
       very fast due to the simple features. Additionally, we make use of parallelization.
       The evaluation of our scheme on a 40% split of the official PAN-2015 training
       corpus (which we did not use for training) led to an average corpus accuracy of
       68.12%; in detail, 60% for the Dutch, 67.5% for the English, 60% for the Greek and
       85% for the Spanish subcorpus. The overall computation runtime was approximately
       27 seconds.


1   Introduction

Authorship analysis forms a sub-discipline of digital text forensics (a branch of digital
forensics). Authorship attribution (AA) and authorship verification (AV) can be seen as
the two major disciplines of authorship analysis [4].
     AA deals with the problem of attributing a given anonymous text to exactly one
author, given a training set of authors for which text samples of undisputed authorship
are available [4]. AV, on the other hand, aims to determine whether a given text was
written by a certain author A [4]. The training set in this case comprises only text
samples from the supposed author A, which can be analyzed for distinct features that
make up the writing style of A. If this writing style differs significantly from the
writing style of the given text, the authorship is considered false, otherwise true. AV
can be understood as a specialized task of AA with an open set of candidate authors.
Conversely, AA can be treated as a sequence of verification problems regarding the given
candidate authors [2] and should be solvable if the verification problem is solved [3].
     In this work we propose a new intrinsic AV method that offers a number of ben-
efits. The method provides a universal threshold per language and feature category,
used to accept or reject the alleged authorship of a document. Here, universal refers
to the ability to generalize across different genres and topics of the texts. In addition,
the model generated by the method is highly flexible and can be extended incrementally
in order to handle new languages, genres, topics or features. Besides, the method
requires neither complex machine learning algorithms nor error-prone natural language
processing techniques nor external linguistic resources. Furthermore, it features a very
low computational complexity compared to other existing approaches (including our
previous PAN approaches).


2   Features

Features are the core of any AV method. Without features it is not possible to model in-
dividual writing styles of authors. Many researchers across different fields (linguistics,
computer science, mathematics, etc.) have attempted to distinguish features in various
ways. As a result, features have been classified into topic features, non-topic features,
stylistic features, lexical features, syntactic features, semantic features, etc. In order to
avoid using these terms, which can be very misleading, we decided to use the generic
term feature, which we sub-categorize into so-called feature categories. In our
approach, we make use of nine feature categories, which are listed in Table 1.


Fi  Name                      Feature description                                        Parameter range
F1  Punctuation n-grams       A sequence of n consecutive punctuation marks (commas,     n ∈ {1, 2, . . . , 10}
                              hyphens, etc.) taken from D after reduction to
                              punctuation characters.
F2  Character n-grams         A sequence of n consecutive characters in D.               n ∈ {1, 2, . . . , 10}
F3  n% frequent tokens        The n% most frequently occurring tokens in D (for n ≈ 30   n ∈ {5, 10, . . . , 50}
                              that will be mostly function words).
F4  Token k-prefixes          The first k characters of a token.                         k ∈ {1, 2, 3, 4}
F5  Token k-suffixes          The last k characters of a token.                          k ∈ {1, 2, 3, 4}
F6  Token k-prefix n-grams    The first k characters of each token within a token        n ∈ {2, 3, 4}, k ∈ {1, 2, 3, 4}
                              n-gram.
F7  Token k-suffix n-grams    The last k characters of each token within a token         n ∈ {2, 3, 4}, k ∈ {1, 2, 3, 4}
                              n-gram.
F8  n-prefixes–k-suffixes     The first n and the last k characters of a token.          n, k ∈ {1, 2, 3, 4}
F9  n-suffixes–k-prefixes     The last n characters of a token and the first k           n, k ∈ {1, 2, 3, 4}
                              characters of the next token.
              Table 1: The nine feature categories used in our AV scheme.
     The extraction of features from a text can be performed on any linguistic layer (e.g.,
character layer, morphological layer, lexical layer, syntactic layer or semantic layer).
Depending on the layer, different tools are required for the extraction process. For
example, extracting features from the lexical layer involves tokenizers, stemmers, or
lemmatizers, while extracting features from the semantic layer requires part-of-speech
taggers, parsers, or thesauri [4].
     A feature can be explicit, i.e., extracted directly from the text, or implicit, i.e.,
extracted only after the text has been preprocessed. Simple letters, for example, can be
seen as explicit features, as they can be looked up in the text itself. In contrast, more
complex features like part-of-speech tags (adjectives, articles, pronouns, etc.) are
implicit, since they are not annotated in the text and thus can only be extracted after
the text has been preprocessed. In our method we focus only on explicit features,
extracted from the first four of the layers mentioned above (a minimal extraction sketch
is given after the following list). There are a number of reasons for this decision:

 1. A previous observation [1] has shown that implicit features do not necessarily lead
    to more promising results than explicit features (for example character n-grams).
 2. Implicit features often rely on natural language processing models that are bound
    to a single language and thus do not generalize.
 3. Implicit features often cause a high computation runtime, since various types of
    natural language processing techniques have to be applied.
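
To illustrate how simple these explicit feature categories are, the following Python
sketch extracts three of them (F1, F2 and F4) as frequency counts. The function names,
the punctuation set and the whitespace tokenization are our own illustrative assumptions,
not a transcript of our implementation.

    import re
    from collections import Counter

    # Punctuation characters considered for F1 (the exact set is an assumption).
    PUNCTUATION = set(".,;:!?\"'()[]-")

    def char_ngrams(text, n):
        """F2: all overlapping character n-grams of a text."""
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    def punctuation_ngrams(text, n):
        """F1: character n-grams after reducing the text to its punctuation marks."""
        reduced = "".join(c for c in text if c in PUNCTUATION)
        return char_ngrams(reduced, n)

    def token_k_prefixes(text, k):
        """F4: the first k characters of every whitespace-separated token."""
        return Counter(token[:k] for token in re.split(r"\s+", text) if token)

    doc = "Hello, world! Hello again - and again..."
    print(punctuation_ngrams(doc, 2))  # Counter({'..': 2, ',!': 1, '!-': 1, '-.': 1})
    print(char_ngrams(doc, 3).most_common(3))
    print(token_k_prefixes(doc, 3))    # Counter({'Hel': 2, 'aga': 2, 'wor': 1, ...})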


3      Used corpora

In this section we present the training and test corpora used by our method and describe
their inner structure.


3.1     Training and test corpora

In order to construct and evaluate our AV model, we used the publicly released PAN-
2015 training data set1, denoted by CPan15 . This corpus is divided into four subcorpora
CDutch , CEnglish , CGreek and CSpanish , where each subcorpus consists of 100 problems
{ρ1 , ρ2 , . . . , ρ100 }.
    At the time our method was created and trained, the official test corpus had not
been released. Hence, we used 60% of each subcorpus as a training corpus and the
remaining 40% as a test corpus.
    We denote2 the training corpus as CTrain = {CTr-Dutch , CTr-English , CTr-Greek , CTr-Spanish }
and the test corpus as CTest = {CTe-Dutch , CTe-English , CTe-Greek , CTe-Spanish }. CTrain serves
exclusively to construct the models (parameters, thresholds and ensembles), while CTest
is used to test our AV scheme.
 1
     The original file name is pan15-authorship-verification-training-dataset-2015-04-19.
 2
     The prefixes Tr and Te denote whether a corpus C ∈ (CTrain ∪ CTest ) is a training or a test corpus.
3.2     Corpus inner structure

The inner structure of the corpora used in our method (both training and test) is as
follows. A corpus C comprises a set of problems {ρ1 , ρ2 , . . .}, where a
problem ρ = (DA? , DA ) consists of a questioned document DA? and a set of known
documents DA . The number of known documents in each problem ρ varies from one to
five documents. The lengths of the texts in these corpora vary between a few hundred
and a few thousand words. The distribution of true and false authorships in each corpus
is uniform3 . We denote a true authorship by Y and a false authorship by N.
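
For clarity, a verification problem ρ = (DA? , DA ) together with its truth label can be
represented by a small data structure such as the following Python sketch; the class and
field names are our own and only serve as an illustration.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Problem:
        """One verification problem rho = (D_A?, D_A) with an optional truth label."""
        unknown: str                  # the questioned document D_A?
        known: List[str]              # one to five known documents D_A
        truth: Optional[str] = None   # "Y" (true authorship) or "N" (false authorship)

    corpus = [
        Problem(unknown="... questioned text ...",
                known=["... known text 1 ...", "... known text 2 ..."],
                truth="Y"),
    ]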


4      Our method

In this section we first explain what kind of preprocessing is applied to the training
and test corpora. Next, we introduce the two models our method relies on, and finally
we describe the treatment of verification problems in the test corpora, which involves
(among other things) similarity calculation and classification.


4.1     Preprocessing

Given a corpus C, for each document D within C we replace newlines, tabs and
multiple spaces by a single space. This enables a better extraction of several features,
for instance character n-grams. Furthermore, we concatenate all known documents
within each problem ρ into one document, such that (from now on) each ρ consists only
of the two documents DA and DA? . Besides this, no other preprocessing techniques are
applied to the data.
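
The whitespace normalization and the concatenation of the known documents could, for
instance, be realized as in the following Python sketch (reusing the illustrative
Problem class from Section 3.2; the helper names are ours):

    import re

    def normalize_whitespace(text):
        """Replace newlines, tabs and runs of spaces by a single space."""
        return re.sub(r"\s+", " ", text).strip()

    def preprocess_problem(problem):
        """Normalize whitespace and merge all known documents into a single D_A."""
        merged_known = normalize_whitespace(" ".join(problem.known))
        return Problem(unknown=normalize_whitespace(problem.unknown),
                       known=[merged_known],
                       truth=problem.truth)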


4.2     Model training

Our AV scheme depends on two models that are learned on each training corpus C ∈
CTrain mentioned in Section 3. The first model, denoted as MF , includes for each
feature category (listed in Table 1) those parameters that led to the highest accuracy on
C. Moreover, MF includes for each feature category Fi ∈ F a threshold θFi , which is
determined as the threshold at the equal error rate (EER). The second model, denoted as
ME , builds on MF and includes for each corpus C the ensemble of feature categories
F1 , F2 , . . . , F9 that led to the highest accuracy.


Constructing MF In order to construct the feature category model MF , we use
Algorithms 1−6. Note that some auxiliary methods are not listed, as for instance
Length() or LanguageOfCorpus(C), since the semantics behind these methods are obvious.


Constructing ME In order to construct the ensemble model ME , we use the Algo-
rithms 7−8.
 3
     Note that the uniform Y / N distribution is important for the training phase of our method.
Input: F = {F1 , F2 , . . . , F9 }, CTrain = {CTr-Dutch , CTr-English , CTr-Greek , CTr-Spanish }
Output: MF

for F ∈ F do
    // Semantic: Language → (n, k), (θF , CAccuracy )
    ThresholdDictionary ← {⟨ , ⟨( , ), ( , )⟩⟩}
    for C ∈ CTrain do
        L ← LanguageOfCorpus(C)
        // The first empty tuple denotes the (n, k) parameters.
        // The second empty tuple denotes (θF , CAccuracy ).
        ThresholdDictionary.Add(⟨L, ⟨( , ), ( , )⟩⟩)
        ThresholdLanguageDictionary ← ThresholdDictionary[L]
       for k ∈ F ’s k-interval do
           for n ∈ F ’s n-interval do
               θF ← DetermineThreshold(C, F, n, k)
               CAccuracy ← CalculateCorpusAccuracy(C, θF , F, n, k)
                ThresholdLanguageDictionary.Add(⟨(n, k), (θF , CAccuracy )⟩)
           end
       end
    end
    /* Here we construct the model MF , which holds for
       each language (represented through C) those (n, k)
       parameters and θF that lead to the highest
       accuracy on C.                                  */
    MF ← {⟨(), ( , , , )⟩} // Semantic: Language → (n, k, θF , CAccuracy ).

    for entry ∈ ThresholdDictionary do
        L ← LanguageOfCorpus(entry.Key)
         promisingTuple ← Select the tuple (n, k, θF , CAccuracy ) from entry where
         CAccuracy is the maximum among all tuples within entry.
      MF .Add((L, promisingTuple))
   end
end
return MF
                      Algorithm 1: Construct the MF model.
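
Conceptually, Algorithm 1 performs a grid search: for each feature category and each
language it tries all admissible (n, k) parameters, computes the EER threshold on the
training subcorpus and keeps the combination with the highest accuracy. The following
Python sketch captures this loop; determine_threshold and calculate_corpus_accuracy
stand for Algorithms 2 and 6, and the attributes n_range/k_range are assumed to hold
the parameter ranges of Table 1 (a singleton range if a category uses only one
parameter).

    def construct_MF(feature_categories, training_corpora):
        """Keep, per feature category and language, the (n, k, threshold)
        combination with the highest training accuracy (cf. Algorithm 1)."""
        MF = {}
        for F in feature_categories:                    # F1, ..., F9
            for language, corpus in training_corpora.items():
                best = None                             # (n, k, theta, accuracy)
                for k in F.k_range:
                    for n in F.n_range:
                        theta = determine_threshold(corpus, F, n, k)                  # cf. Algorithm 2
                        accuracy = calculate_corpus_accuracy(corpus, theta, F, n, k)  # cf. Algorithm 6
                        if best is None or accuracy > best[3]:
                            best = (n, k, theta, accuracy)
                MF[(F.name, language)] = best
        return MF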
Input: C, F, n, k
Output: θF

Problem2SimilarityDictionary ← ComputeProblemsSimilarities(C, F, n, k)
psaList ← Select all tuples ⟨ρ, sim, trueAuthor⟩ from Problem2SimilarityDictionary
Y ← psaList.Filter(trueAuthor == "Y").Select(sim).OrderByAscending()
N ← psaList.Filter(trueAuthor == "N").Select(sim).OrderByAscending()
θF ← DetermineThresholdViaEER(Y, N )
return θF
               Algorithm 2: DetermineThreshold(C, F, n, k).




Input: C, F, n, k
Output: Problem2SimilarityDictionary

Problem2SimilarityDictionary ← { }
for ρ ∈ C do
    sim ← ComputeProblemSimilarity(DA? , DA , F, n, k)
    trueAuthor ← GetTruth(ρ)
    Problem2SimilarityDictionary.Add(⟨ρ, (F, sim, trueAuthor)⟩)
end
return Problem2SimilarityDictionary
          Algorithm 3: ComputeProblemsSimilarities(C, F, n, k).
Input: DA? , DA , F, n, k
Output: sim

DA -Features ← ExtractFeatures(DA , F, n, k)
DA? -Features ← ExtractFeatures(DA? , F, n, k)
DA -NormalizedFeatureDictionary ← NormalizeFeaturesByRelativeFrequency(DA -Features)
DA? -NormalizedFeatureDictionary ← NormalizeFeaturesByRelativeFrequency(DA? -Features)
Vocabulary ← DA -NormalizedFeatureDictionary.Keys ∪ DA? -NormalizedFeatureDictionary.Keys
DA -VocabularyDictionary ← Restrict2Vocabulary(DA -NormalizedFeatureDictionary, Vocabulary)
DA? -VocabularyDictionary ← Restrict2Vocabulary(DA? -NormalizedFeatureDictionary, Vocabulary)
X ← DA -VocabularyDictionary.Values
Y ← DA? -VocabularyDictionary.Values
dist ← Σi=1..m |X[i] − Y [i]|   // Manhattan distance (m = |Vocabulary|)
sim ← 1 / (1 + dist)   // Linear similarity transformation
return sim
        Algorithm 4: ComputeProblemSimilarity(DA? , DA , F, n, k).
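
In Python, the core of Algorithm 4 (relative frequencies over the joint vocabulary,
Manhattan distance, and the transformation sim = 1 / (1 + dist)) can be sketched as
follows; the inputs are feature frequency counts as produced, for example, by the
extraction sketch in Section 2.

    def problem_similarity(known_features, unknown_features):
        """Manhattan distance between relative-frequency vectors over the joint
        feature vocabulary, turned into a similarity via sim = 1 / (1 + dist)."""
        def relative_frequencies(counts):
            total = sum(counts.values())
            return {f: c / total for f, c in counts.items()} if total else {}

        x = relative_frequencies(known_features)    # D_A
        y = relative_frequencies(unknown_features)  # D_A?
        vocabulary = set(x) | set(y)
        dist = sum(abs(x.get(f, 0.0) - y.get(f, 0.0)) for f in vocabulary)
        return 1.0 / (1.0 + dist)

For instance, problem_similarity(char_ngrams(d_known, 3), char_ngrams(d_unknown, 3))
yields the character 3-gram similarity of a problem.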
Input: Y, N (ascending similarity lists of the true and false authorship cases)
Output: θF

if Length(Y ) ≠ Length(N ) then
     Exception("The number of Y -cases does not equal the number of N -cases!")
len ← Length(Y )
i←0
j ← len −1
θF ← 0.5
for k ← 0, k < len, k++ do
     if Y [i] < N [j] then
         i++
         j--
         continue
     if Y [i] == N [j] then
         θF ← Y [i]
         break
     if Y [i] > N [j] then
         if i == 0 && j == len − 1 then
              θF ← (Y [i] + N [j])/2
         else
              min ← Min(Y [i], N [j + 1])
              max ← Max(Y [i − 1], N [j])
              θF ← (min + max)/2
         break
end
if i == len then
     θF = (Y [i − 1] + N [j + 1])/2
return θF
             Algorithm 5: DetermineThresholdViaEER(Y, N ).
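
Algorithm 5 derives the threshold directly from the two equally long, ascending
similarity lists Y and N. A closely related and perhaps easier to follow formulation is
a scan over all candidate thresholds that minimizes the gap between false rejection rate
and false acceptance rate; the following Python sketch implements this alternative
formulation and is not a line-by-line transcription of Algorithm 5.

    def eer_threshold(y_sims, n_sims):
        """Pick the threshold where the false rejection rate (Y-cases not above
        the threshold) and the false acceptance rate (N-cases above it) are as
        close as possible, i.e. an approximation of the equal error rate point."""
        best_theta, best_gap = 0.5, float("inf")
        for theta in sorted(set(y_sims) | set(n_sims)):
            frr = sum(1 for s in y_sims if s <= theta) / len(y_sims)
            far = sum(1 for s in n_sims if s > theta) / len(n_sims)
            if abs(frr - far) < best_gap:
                best_theta, best_gap = theta, abs(frr - far)
        return best_theta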
      Input: C, θF , F, n, k
      Output: CAccuracy

      correctPredictions ← 0
      Problem2SimilarityDictionary ← ComputeProblemsSimilarities(C, F, n, k)
      for ⟨ρ, (F, sim, trueAuthor)⟩ in Problem2SimilarityDictionary do
          predictedAuthor ← "Undefined"
          if sim > θF then
               predictedAuthor ← "Y"
          else if sim < θF then
               predictedAuthor ← "N"
          if predictedAuthor == trueAuthor then
               correctPredictions++
      end
      CAccuracy ← (100 / |C|) · correctPredictions
      return CAccuracy
                 Algorithm 6: CalculateCorpusAccuracy(C, θF , F, n, k).
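
Given a learned threshold θF , the decision rule and corpus accuracy of Algorithm 6
reduce to a few lines of Python. In the sketch below a similarity equal to the threshold
counts as "N", which slightly simplifies the "Undefined" case of Algorithm 6, and
similarity_of is assumed to compute the problem similarity as in Algorithm 4.

    def corpus_accuracy(problems, theta, similarity_of):
        """Accuracy in percent: predict "Y" iff the problem similarity exceeds theta."""
        correct = sum(
            1 for p in problems
            if ("Y" if similarity_of(p) > theta else "N") == p.truth
        )
        return 100.0 * correct / len(problems)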


4.3     Model testing

Now that we have constructed the models, we also need to test them. This is done in
Algorithm 9.


5     Evaluation

In this section we describe the evaluation results. First, we present the models MF and
ME learned from the training corpora CTrain = {CTr-Dutch , CTr-English , CTr-Greek , CTr-Spanish }.
Afterwards, we show (given both pre-trained models) the evaluation results on the test
corpora CTest = {CTe-Dutch , CTe-English , CTe-Greek , CTe-Spanish }.


5.1     Determining MF

The training on CTrain led to the model MF = {MF1 , MF2 , . . . , MF9 }, which is given
in Tables 2−10.


5.2     Determining ME

Given MF , CTrain and F as an input, we applied ConstructME (MF , CTrain , F) on all
training subcorpora. The resulting model ME is given in Table 11.
Input: MF , CTrain , F
Output: ME

ME ← { }
// SuitableEnsembles includes only ensembles of odd size, so that a
//  majority vote cannot end in a tie.
SuitableEnsembles ← all odd-sized ensembles in GetFeatureCategoriesPowerSet(F)
for C ∈ CTrain do
    Fpairings ← ConstructFpairings (MF , C, F)
    // Semantic: ensemble → corpusAccuracy
    featureCategoryComboAccuracy ← {⟨(), ()⟩}
   for ensemble ∈ SuitableEnsembles do
       predictions ← CreateFeatureCategoryEnsembleResults(ensemble, C)
       corpusAccuracy ← EvaluatePredictionsViaTruthFile(predictions, C)
       featureCategoryComboAccuracy.Add((ensemble, corpusAccuracy))
   end
   featureCategoryComboAccuracy ← Sort all entries (ensemble, corpusAccuracy) ∈
   featureCategoryComboAccuracy in descending order of corpusAccuracy.
   E ← Select the most promising ensemble from featureCategoryComboAccuracy,
   i.e. the ensemble that led to the highest corpusAccuracy
   ME .Add(⟨C, E⟩)
end

return ME
                Algorithm 7: ConstructME (MF , CTrain , F).
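
The helper CreateFeatureCategoryEnsembleResults used in Algorithms 7 and 9 is not listed
explicitly; it amounts to a majority vote of the per-feature-category decisions, which
cannot end in a tie because only odd-sized ensembles are considered. A possible Python
rendering (model_for and similarity_of are assumed accessors for MF and for the
similarity of Algorithm 4):

    from itertools import combinations

    def odd_sized_ensembles(feature_categories):
        """All subsets of odd size, i.e. the suitable ensembles of Algorithm 7."""
        for size in range(1, len(feature_categories) + 1, 2):
            yield from combinations(feature_categories, size)

    def ensemble_prediction(problem, ensemble, model_for, similarity_of):
        """Majority vote over the per-feature-category decisions of one problem.
        model_for(F) returns (n, k, theta, accuracy); similarity_of(problem, F, n, k)
        returns the similarity of Algorithm 4 for that parametrization."""
        votes = 0
        for F in ensemble:
            n, k, theta, _ = model_for(F)
            votes += 1 if similarity_of(problem, F, n, k) > theta else -1
        return "Y" if votes > 0 else "N"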
 Input: MF , C, F
 Output: Fpairings

 // Semantics: F → {⟨ρ, (θF , sim)⟩}.
 Fpairings ← {⟨(), ( , )⟩}
 for F ∈ F do
     FproblemPairings ← {⟨(), ( , )⟩}
     Fpairings .Add(⟨F, FproblemPairings ⟩)
     MF,L ← LoadModelFromMF (F, LanguageOfCorpus(C))
     n ← LoadNFrom(MF,L )
     k ← LoadKFrom(MF,L )
     θF ← LoadThresholdFrom(MF,L )
     for ρ ∈ C do
         sim ← ComputeProblemSimilarity(DA? , DA , F, n, k)
         FproblemPairings .Add(⟨ρ, (θF , sim)⟩)
     end
 end

 return Fpairings
                    Algorithm 8: ConstructFpairings (MF , C, F).




 Input: MF , ME , CTest , F
 Output: predictions and CAccuracy for each test corpus C ∈ CTest

 for C ∈ CTest do
     Fpairings ← ConstructFpairings (MF , C, F)
     L ← LanguageOfCorpus(C)
     E ← Select the most promising ensemble from ME , for C with respect to L
     predictions ← CreateFeatureCategoryEnsembleResults(E, C)
     CAccuracy ← EvaluatePredictionsViaTruthFile(predictions, C)
     Print(predictions, CAccuracy )
 end

Algorithm 9: Apply the AV scheme, based on the pre-trained models, to the test corpora
CTest .
 C            n  θF1    CAccuracy
 CTr-Dutch    6  0.34   63.33 %
 CTr-English  1  0.66   76.67 %
 CTr-Greek    2  0.64   76.67 %
 CTr-Spanish  1  0.78   83.33 %
 Table 2: The MF1 model.

 C            n  θF2    CAccuracy
 CTr-Dutch    1  0.81   70 %
 CTr-English  1  0.79   70 %
 CTr-Greek    7  0.34   80 %
 CTr-Spanish  1  0.89   70 %
 Table 3: The MF2 model.

 C            n  θF3    CAccuracy
 CTr-Dutch    10 0.46   66.67 %
 CTr-English  5  0.43   56.67 %
 CTr-Greek    45 0.45   76.67 %
 CTr-Spanish  10 0.57   70 %
 Table 4: The MF3 model.

 C            k  θF4    CAccuracy
 CTr-Dutch    1  0.65   66.67 %
 CTr-English  4  0.37   63.33 %
 CTr-Greek    2  0.56   70 %
 CTr-Spanish  1  0.79   83.33 %
 Table 5: The MF4 model.

 C            k  θF5    CAccuracy
 CTr-Dutch    3  0.44   63.33 %
 CTr-English  2  0.51   60 %
 CTr-Greek    1  0.77   76.67 %
 CTr-Spanish  3  0.51   66.67 %
 Table 6: The MF5 model.

 C            n  k  θF6    CAccuracy
 CTr-Dutch    1  2  0.40   63.33 %
 CTr-English  3  2  0.34   56.67 %
 CTr-Greek    4  2  0.33   70 %
 CTr-Spanish  4  2  0.33   71.67 %
 Table 7: The MF6 model.

 C            n  k  θF7    CAccuracy
 CTr-Dutch    1  2  0.50   66.67 %
 CTr-English  2  2  0.35   66.67 %
 CTr-Greek    1  2  0.58   80 %
 CTr-Spanish  4  3  0.33   66.67 %
 Table 8: The MF7 model.

 C            n  k  θF8    CAccuracy
 CTr-Dutch    2  3  0.34   73.33 %
 CTr-English  3  2  0.34   66.67 %
 CTr-Greek    2  1  0.43   76.67 %
 CTr-Spanish  1  1  0.62   80 %
 Table 9: The MF8 model.



                         Table 10: The MF9 model.
                        C            n  k  θF9    CAccuracy
                        CTr-Dutch    1  1  0.45   70 %
                        CTr-English  3  1  0.35   60 %
                        CTr-Greek    4  1  0.35   76.67 %
                        CTr-Spanish  1  1  0.60   80 %
                          Table 11: The ME model for CTrain .
                             C           E (Ensemble)
                             CTr-Dutch {F1 , F8 , F9 }
                             CTr-English {F1 }
                             CTr-Greek {F1 , F3 , F4 , F6 , F7 }
                             CTr-Spanish {F1 , F4 , F6 }


5.3   Evaluation of the models

The evaluation of our approach (based on the pre-trained models MF and ME ) on the
test corpora CTest led to the results listed in Table 12. The runtime needed for the
entire evaluation was approximately 27 seconds.



          Table 12: Results after applying the MF and ME models on CTest .
                                C              Accuracy
                                CTe-Dutch          60 %
                                CTe-English      67.5 %
                                CTe-Greek          60 %
                                CTe-Spanish        85 %
                                CTest (average)  68.12 %




6     Conclusion and future work

We presented a generic and fast authorship verification scheme for the Author Identifi-
cation task within the PAN-2015 competition. Our method provides a number of bene-
fits. The most important benefit (in comparison to our previous PAN approaches) is that
our method now makes use of trainable models. These models not only contribute to
the low runtime, since thresholds do not need to be computed over and over again, but
can also be extended incrementally and thus enable a better generalization in terms of
languages, genres or topics. Another benefit that comes directly from the (transparent)
models is better interpretability: one is able to inspect the effects of the different
feature category parameters as well as the decision thresholds. In future work we will
focus on this aspect. A further benefit of our method is that there is no need for deep
linguistic processing (e.g., POS tagging, chunking or parsing) or linguistic resources
such as word lists, WordNets, thesauri, etc. This not only keeps the runtime low but
also makes the method portable. Besides this, the method can easily be modified
regarding its underlying components. For example, the distance function can be replaced
by an ensemble of distance functions with almost no effort.
    However, there are also many open issues in our approach that leave room for
improvement. The most important issue is that, at the moment, our method is difficult
to comprehend, which makes it complicated for others to implement. Therefore, we plan
to refactor the algorithms in future work in order to make the scheme more compact.


Acknowledgments. This work was supported by the Center for Advanced Security Re-
search Darmstadt CASED (www.CASED.de), funded by the German state government
of Hesse under the LOEWE program and by the German Federal Ministry of Education
and Research (BMBF) in the funded projects ALPhA and EWV.


References
1. Juola, P., Stamatatos, E.: Overview of the Author Identification Task at PAN 2013. In:
   Forner, P., Navigli, R., Tufis, D., Ferro, N. (eds.) Working Notes for CLEF 2013 Conference,
   Valencia, Spain, September 23-26, 2013. CEUR Workshop Proceedings, vol. 1179.
   CEUR-WS.org (2013),
   http://ceur-ws.org/Vol-1179/CLEF2013wn-PAN-JuolaEt2013.pdf
2. Koppel, M., Schler, J., Argamon, S., Winter, Y.: The “Fundamental Problem” of Authorship
   Attribution. English Studies 93(3), 284–291 (2012),
   http://dx.doi.org/10.1080/0013838X.2012.668794
3. Koppel, M., Winter, Y.: Determining if Two Documents are by the Same Author. JASIST
   65(1), 178–187 (2014)
4. Stamatatos, E.: A Survey of Modern Authorship Attribution Methods. J. Am. Soc. Inf. Sci.
   Technol. 60(3), 538–556 (Mar 2009)