<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Natural Text Anonymization Using Universal Transformer with a Self-attention</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aleksandr Romanov</string-name>
          <email>alexx.romanov@gmail.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anastasia Fedotova</string-name>
          <email>afedotowaa@icloud.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anna Kurtukova</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roman Meshcheryakov</string-name>
        </contrib>
        <aff id="aff0">
          <institution>Tomsk State University of Control Systems and Radioelectronics</institution>, Tomsk, <addr-line>Russian Federation</addr-line>
        </aff>
        <aff id="aff1">
          <institution>V. A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences</institution>, Moscow, <addr-line>Russian Federation</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>The paper focuses on the anonymization of natural language texts in Russian. Anonymization is a topical problem in connection with the need for studies assessing the effectiveness and robustness of authorship attribution methods against intentional distortion of a text by various anonymization techniques. The paper presents a technique for anonymizing Russian text based on a fast correlation filter, dictionary synonymization, and a universal transformer model with a self-attention mechanism. The automated system developed on its basis is tested on an experimental corpus of Russian texts, and the texts it produces are analyzed by an authorship identification system. The attribution accuracy of a specialized software system on the anonymized texts was reduced to the level of random guessing, which allows the proposed methodology to be considered effective.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Every year there are more and more software products that allow users to communicate on the
Internet through text messages while maintaining anonymity. The development of such
technologies leads to an increase in the number of offenses in cyberspace. However, technical
means are not always sufficient to identify a subject who has intentionally committed an illegal
action. In such cases, it becomes necessary to conduct an expert examination using various
techniques and tools for identifying authorship. This makes the task of attributing the
author of a natural language text an important aspect of information security.</p>
      <p>Existing software for authorship attribution takes into account various
linguistic and statistical parameters and is quite effective in most cases. However, it is not
resistant even to the most trivial anonymization techniques. That is why additional research
assessing the effectiveness and robustness of existing methods of attribution of
a natural language text against intentional distortion by various anonymization techniques is
highly relevant. The goal of this study is therefore to create a technique for anonymizing
Russian text and a software system that implements it.</p>
      <p>The problem of anonymization of natural language texts is frequently discussed in the
work of foreign researchers, and the proposed approaches demonstrate positive results.
However, the lack of techniques and software for Russian-language texts should be noted, which
makes this study all the more relevant.</p>
      <p>An automated anonymization system for text documents is presented in [Mamede et al., 2016].
The system was tested on different styles and types of texts using different anonymization
methods, such as suppression, tagging, random substitution, and generalization. The authors
found that all methods have their drawbacks, but generalization was recognized as the most
acceptable solution for text anonymization, because it preserves the natural appearance of
the text and its readability. The evaluation showed that the tagging method facilitates the
reading of anonymized text, preventing some semantic deviations caused by the substitution
of words in the original text. The advantage of the system is the possibility of easily replacing
its modules in order to support new anonymization methods or other languages, or to improve
module performance. The system was evaluated by three experts using a data set of 75
documents from two different corpora. The experts evaluated texts anonymized using the
suppression method. The results show that readers were able to match 67% of the anonymized
texts with the originals.</p>
      <p>The article [Sardina et al., 2018] has two main objectives: to compile a new corpus in
Spanish with annotated anonymized spontaneous dialogue data, and to investigate techniques
for automating the identification of sensitive data in a setting where no
annotated data from the target domain are initially available. Several methods that can successfully
anonymize data were investigated by the authors. Randomization, which alters data without
loss of utility by adding noise, and aggregation, a data-reduction method, have proved
effective only in structured settings such as graphs or tables. Techniques better suited to the
anonymization of unstructured textual data are: suppression, where a neutral placeholder
replaces the item to be anonymized, e.g. "XXXX", "ANON"; tagging, where a label indicating
the item's category or an identifier replaces it, e.g. "LOC", "LOCATION453"; and generalization,
where the item to be anonymized is substituted by a more general one of the same category. It is
noted that the best results are achieved by a combination of these methods. The experiment
was carried out on the ESPort corpus, which includes a selection of 1170 spontaneous spoken
human-human dialogues from phone calls. The corpus was anonymized using the substitution
technique, which implies that the result is a readable natural text, and it contains annotations
of some linguistic and extra-linguistic phenomena such as laughter, repetitions, and
mispronunciations.</p>
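The three replacement strategies surveyed above (suppression, tagging, generalization) can be sketched as follows; this is a toy illustration on plain string replacement, not the tooling used with the ESPort corpus, and the category and placeholder names are assumptions.

```python
# Toy sketch of the three anonymization strategies for unstructured text:
# suppression, tagging, and generalization. Names and placeholders are
# illustrative assumptions, not taken from the cited systems.
def suppress(text, item):
    # Replace the sensitive item with a neutral placeholder.
    return text.replace(item, "XXXX")

def tag(text, item, category):
    # Replace the sensitive item with a label naming its category.
    return text.replace(item, category.upper())

def generalize(text, item, general_term):
    # Replace the sensitive item with a more general term of the same category.
    return text.replace(item, general_term)

sentence = "Maria lives in Bilbao."
```

Generalization keeps the text readable ("Maria lives in a city."), which is why the surveyed papers favor it or a combination of the three.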
      <p>The technique [Nguyen-son et al., 2015] is based on the assessment of a frequency-metric
threshold to improve the naturalness of fingerprinted messages. As a metric, a
combination of precision and distribution estimates was used, calculated from the number of degrees of
generalization of sensitive phrases and the loss of information they incur. Based on
the proposed technique and the generalization method, a web application was created to anonymize
personal information in messages before they are posted on Facebook. In addition, the authors
used synonymization to create fingerprints, identifiers for each message, so that if personal
information is disclosed, the identity of the person who provided it can be established. The
approach was tested on personal messages: the corpus included more than 55,000 samples of
identifying phrases distributed among the groups hometown, education, work, religion,
politics, sports, and personal interests. The accuracy of the technique was 92%.</p>
      <p>The paper [Kacmarcik et al., 2006] explores techniques for reducing the effectiveness of
standard authorship attribution methods. The authors consider two levels of anonymization:
shallow and deep. On the test set, the authors show that shallow anonymization can be achieved
by making 14 changes per 1000 words, reducing the likelihood of identifying the author by 17%.
For deep anonymization, the unmasking work of Koppel and Schler is adapted. The possibility
of creating a tool to support document anonymization has been explored on the assumption
that the author has undertaken basic preventative measures (such as spellchecking and grammar
checking). For the experiments, the authors chose a standard data set, the Federalist
Papers. A support vector machine (SVM) is used for each feature set. However, modifying
the document to increase or decrease the frequency of a term necessarily impacts the
frequencies of other terms and thus affects the document's stylometric signature. One limitation
of this approach is that it applies primarily to authors who have a reasonably sized text
corpus. Finally, simple SVMs are less resilient to obfuscation attempts than Koppel and Schler's
unmasking approach: classifiers with a minimum number of features are susceptible even to
trivial obfuscation methods. The accuracy of the technique is 86.86%.</p>
      <p>The system presented in [McDonald et al., 2012] defines the steps necessary to anonymize
documents and implements them. The system has been implemented in the JStylo/Anonymouth
tool [Authorship Attribution], which has been released under an open-source license
(GPL 3). The software allows attribution of authorship, calculation of the features most conducive
to the identification process, and offers ways to change feature vectors to ensure anonymity.
The authors use the K-means clustering method. The results show that 80% of the study
participants were able to anonymize their documents with respect to the fixed corpus and limited
feature set used. However, it was found to be difficult to make changes to pre-written
documents, which is a serious shortcoming of this approach.</p>
      <p>The research in [Simi et al., 2017] is devoted to the prevention of attacks on a person's
privacy based on confidential information from social networks. To prevent such attacks,
k-anonymization is generally used; the technique is, however, ineffective for authors who have a
reasonably sized corpus. The authors of [Simi et al., 2017] examine three effective algorithms,
those most often mentioned in scientific papers, which allow various anonymization strategies
to be assessed more completely. The authors conducted tests for the Incognito, Samarati, and
Sweeney algorithms. Data sets obtained from UC Irvine were used for the study. In the course
of the tests, the value of k and the size of the dataset were varied. A dependency was
established: the greater the value of k, the greater the time spent on anonymization.
According to the test results, the authors concluded that among the three algorithms,
the Samarati algorithm has the advantage, since it provides effective anonymization even on
a large amount of data.</p>
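The k-anonymity property those algorithms enforce can be stated compactly: every combination of quasi-identifier values must be shared by at least k records. A minimal sketch of checking this property, with an assumed record layout for illustration:

```python
from collections import Counter

# Minimal sketch of a k-anonymity check: a table is k-anonymous when every
# combination of quasi-identifier values occurs in at least k records.
# The record layout below is an assumption for the example.
def is_k_anonymous(records, quasi_identifiers, k):
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

records = [
    {"age": "20-30", "city": "Tomsk"},
    {"age": "20-30", "city": "Tomsk"},
    {"age": "30-40", "city": "Moscow"},
]
```

Generalizing values (e.g. widening age ranges) merges groups and raises the achievable k, at the cost of anonymization time, which matches the dependency the tests established.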
      <p>In the article [Maeda et al., 2016], a method for anonymizing unstructured texts is
proposed using an anonymization dictionary and quasi-identifiers (information identifying
a set of connected objects, for example, "nation - place of birth"). The system replaces
parts of quasi-identifiers with substitute characters, for example, "", in order to prevent the
re-identification of private information. The anonymization dictionary is created from a list
of quasi-identifiers. An accelerated anonymization process based on heuristics and
set theory is also proposed. The advantage of this method is the maximum preservation of the
author's text.</p>
      <p>The authors of the article [Brennan et al., 2009] investigated adversarial attacks and their
devastating effect on the robustness of existing statistical methods of analysis in authorship
recognition. The results of the study are based on the participation of 15 individual authors.
Each author had to submit approximately 5000 words of sample writing, taken from some
sort of formal source; this was intended to eliminate slang and abbreviations. The authors of
the texts then carried out an obfuscation attack to hide their own style, and also attempted
to imitate the style of another author. Three attribution methods were then applied to the
resulting corpus: a statistical technique using the writeprint (accuracy 95%), an approach
using neural networks (accuracy 78.5%), and a classification based on synonyms (accuracy
91.6%). Based on the results, it was concluded that all three methods were not effective
enough against such attacks. The obfuscation attack reduces the effectiveness of the
techniques to the level of random guessing, and the imitation attack succeeds with 68-91%
probability depending on the stylometric technique used. The authors highlight the following
caveats about these negative results: the test subjects were unfamiliar with stylometric
techniques, had no specialized knowledge of linguistics, and spent little time on the attacks.</p>
      <p>
        The article
        <xref ref-type="bibr" rid="ref14">[Quiring et al., 2019]</xref>
        is devoted to the related topic of anonymization of
program source code. The paper presents a new machine-learning-based method of "attack
on authorship" of source code. The essence of the approach is to perform a number of
semantic code transformations that mislead deanonymization algorithms but look plausible to the
developer. The attack is guided by Monte Carlo tree search, which enables it to operate in the
discrete domain of source code. The "black box" strategy allows creating untargeted attacks
that prevent correct identification, as well as targeted attacks that mimic the style of another
developer. To verify the result, a series of experiments was conducted using the source code of
204 programmers. The experiments showed that the authors' technique significantly affects
attribution methods: their accuracy is reduced from 88% to 1%. Another experiment
investigated the effect of targeted attacks. The result showed that in a group
of programmers, each person can impersonate another developer in 77% of cases.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Anonymization Technique Based on the Fast Correlation Filter, Dictionary Synonymization and Universal Transformer</title>
      <p>As a rule, approaches to the anonymization of natural language texts are based on
classical mathematical algorithms and statistics. Modern methods seldom use machine
learning (ML) algorithms, despite their high efficiency in related text mining problems
[Kurtukova et al., 2019a]-[Romanov et al., 2018]. This is due to the specifics of the
anonymization process: the text must be modified in such a way that its meaning is not ultimately
distorted, which requires the researcher to examine every stage of the technique.</p>
      <p>The technique presented in Fig. 2 is based on the assumption that deep NN
architectures intended for text generation can improve the "obfuscation" of the text
by adding new words and figures of speech that do not affect the general meaning in any
way. It consists of the following steps: 1. Extraction of text features and calculation of their
average values based on the corpus of Russian-language texts.</p>
      <p>2. Filtering the calculated features with a fast correlation filter and selecting the most
informative features for further smoothing.</p>
      <p>3. Text correction by smoothing the identifying features.</p>
      <p>4. Generation of the anonymized text by the "universal transformer" model [Dehghani et al., 2019]
from the dictionary-smoothed input text.</p>
      <p>For identification of the author, about a thousand statistical characteristics from different
groups are used [Romanov et al., 2011]:
- lexical (punctuation, special symbols, lexicon, jargon, dialectisms, archaisms);
- morphological (lemmas, morphemes, grammar classes);
- syntactic (complexity, position of words, prevalence, sentiment analysis);
- structural (headings, fragmentation, citation, links, design, placement parameters);
- content-specific (keywords, emoticons, acronyms and abbreviations, foreign words);
- idiosyncratic stylistic features (spelling and grammatical errors, anomalies);
- document metadata (steganography, data structures).</p>
      <p>However, the five most informative features of the author's style, those which most affect
the authorship identification process, have been identified:
- unigrams (frequencies of letters of the Russian alphabet);
- trigrams (frequencies of triples of letters of the Russian alphabet);
- Sharov words (frequencies of all words from the dictionary of S. Sharov [Sharov]);
- punctuation (frequencies of punctuation marks);
- parts of speech (distribution of words among parts of speech).</p>
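Extracting such frequency features from a text is straightforward; a minimal sketch for the character n-gram and punctuation groups follows. The exact normalization the authors use is not specified, so relative frequencies are an assumption here.

```python
from collections import Counter

# Sketch of computing two of the feature groups listed above: character
# n-gram (unigram/trigram) frequencies and punctuation-mark frequencies.
# Relative frequencies are an assumed normalization.
def char_ngram_freqs(text, n):
    letters = [c for c in text.lower() if c.isalpha()]
    grams = ["".join(letters[i:i + n]) for i in range(len(letters) - n + 1)]
    total = len(grams)
    return {g: c / total for g, c in Counter(grams).items()} if total else {}

def punctuation_freqs(text):
    marks = [c for c in text if c in ".,;:!?-"]
    total = len(marks)
    return {m: c / total for m, c in Counter(marks).items()} if total else {}
```

The same functions apply unchanged to Russian text, since `str.isalpha` accepts Cyrillic letters.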
      <p>Based on the features, the average frequencies of occurrence in the training corpus and
in the anonymized text are calculated. The resulting identifying values are passed to the fast
correlation filter. It accepts as input the full set of features available for analysis and uses a
measure of symmetrical uncertainty to determine the dependencies between the features:</p>
      <p>SU(X; Y) = 2 [H(X) - H(X|Y)] / [H(X) + H(Y)] = SU(Y; X),</p>
      <p>where H(X), H(Y) are the entropies of random variables having, respectively, i and j
states:</p>
      <p>H(X) = - Σ_i P(x_i) log2 P(x_i),</p>
      <p>and H(X|Y) is the conditional entropy:</p>
      <p>H(X|Y) = - Σ_j P(y_j) Σ_i P(x_i|y_j) log2 P(x_i|y_j),</p>
      <p>where P(x_i), P(y_j) are the prior probabilities for all values of X and Y, and P(x_i|y_j) is the
posterior probability of X given Y.</p>
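The symmetrical uncertainty above can be computed directly from paired samples, using the identity H(X|Y) = H(X, Y) - H(Y); a minimal sketch:

```python
import math
from collections import Counter

# Direct implementation of SU(X; Y) = 2 (H(X) - H(X|Y)) / (H(X) + H(Y))
# from paired samples of two discrete variables.
def entropy(xs):
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def symmetrical_uncertainty(xs, ys):
    hx, hy = entropy(xs), entropy(ys)
    if hx + hy == 0:
        return 0.0
    # Conditional entropy via the chain rule: H(X|Y) = H(X, Y) - H(Y).
    h_x_given_y = entropy(list(zip(xs, ys))) - hy
    return 2 * (hx - h_x_given_y) / (hx + hy)
```

Identical variables give SU = 1 and independent ones give SU = 0, matching the interpretation in the text.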
      <p>The closer the SU value is to one, the stronger the dependence between the features. Thus,
a search is made for the subset that best describes the author's style, and
the remaining uninformative features are excluded from the further anonymization process.</p>
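The selection loop of a fast correlation-based filter (FCBF) can be sketched as follows: features are ranked by their SU with the target, and a feature is dropped when it is at least as correlated with an already-kept feature as with the target (i.e. redundant). The `su` argument is assumed to be a function `su(a, b) -> float` such as the symmetrical uncertainty defined above.

```python
# Sketch of FCBF-style feature selection. `features` maps names to value
# vectors, `target` is the class vector, and `su` computes symmetrical
# uncertainty between two vectors; all names are illustrative assumptions.
def fcbf_select(features, target, su, threshold=0.0):
    # Rank candidate features by relevance to the target.
    ranked = sorted(
        (f for f in features if su(features[f], target) > threshold),
        key=lambda f: su(features[f], target),
        reverse=True,
    )
    selected = []
    for f in ranked:
        # Keep f only if no already-selected feature makes it redundant.
        if all(su(features[f], features[g]) < su(features[f], target) for g in selected):
            selected.append(f)
    return selected
```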
      <p>The obtained informative features are then calculated for the text being anonymized and
smoothed in accordance with the following principles:</p>
      <p>- Word frequencies are compared with Sharov's dictionary, and words are replaced
with synonyms that have the lowest frequency according to the dictionary.</p>
      <p>- For character unigrams and trigrams, words with a high frequency of occurrence of
specific n-grams are detected and replaced with synonyms that preferentially contain other
sets of n-grams.</p>
      <p>- Punctuation marks are divided into functional groups: isolating (for a text), and separating
and emphasizing (for a sentence). For punctuation-based text anonymization,
punctuation marks are replaced within their functional group according to the average statistics.</p>
      <p>- When the frequencies of occurrence of different parts of speech are considered, words are
replaced by equivalent structures according to the part of speech being replaced. Trigram
replacements indirectly affect the frequency of unigrams in the text, which in turn
brings this indicator closer to the average value.</p>
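The first smoothing rule, replacing a word by its lowest-frequency dictionary synonym, can be sketched as follows. Both dictionaries here are toy assumptions, not Sharov's frequency data or Abramov's synonym dictionary.

```python
# Sketch of the dictionary-smoothing rule: each word is replaced by the
# candidate (itself or a synonym) with the lowest dictionary frequency.
# The synonym and frequency dictionaries below are toy assumptions.
def smooth_word(word, synonyms, word_freqs):
    candidates = synonyms.get(word, []) + [word]
    # Unknown candidates default to frequency 0.0, i.e. maximally rare.
    return min(candidates, key=lambda w: word_freqs.get(w, 0.0))

synonyms = {"big": ["large", "sizable"]}
word_freqs = {"big": 0.009, "large": 0.005, "sizable": 0.0001}
```

Applying this word by word lowers the text's average word frequency toward the corpus average, which is the effect the smoothing module aims for.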
      <p>
        The final step is to submit the corrected text to the input of the deep learning model. For
this purpose, a transformer model was chosen
        <xref ref-type="bibr" rid="ref14">[Wang et al., 2019]</xref>
        , [Vaswani et al., 2019].
This choice is due to the particular popularity of this architecture among modern deep
learning models for solving related text mining problems [Sun et al., 2019]-[Zihang et al., 2019],
where it demonstrates results superior to simpler architectures.
      </p>
      <p>The transformer processes the input text sequence at the level of words and characters,
and uses the self-attention mechanism to capture context. The main advantage of a
transformer over simple recurrent neural networks (RNN) and hybrid neural networks (HNN)
is training speed. This is achieved by processing words in parallel and establishing
correspondences between them (each word is related to the other words in the sentence, forming
a context).</p>
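The self-attention step that relates each word to every other word can be sketched in a few lines of NumPy. This minimal version omits the learned query/key/value projections and multi-head machinery, so it illustrates the mechanism rather than reproducing the authors' model.

```python
import numpy as np

# Minimal scaled dot-product self-attention with Q = K = V = x.
# x has shape (sequence_length, model_dim).
def self_attention(x):
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ x                              # context-mixed output
```

Each output row is a convex combination of all input positions, which is exactly the "one word correlates with the other words in a sentence" behavior described above.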
      <p>A modification of the classic transformer called the "Universal Transformer" (see Fig. 2) is used
in this article. The characteristic feature of this model is a more computationally efficient
recurrence apparatus: several modules of uniform, parallel-in-time recurrent
transformation functions. The universal transformer also uses an adaptive algorithm
that regulates the amount of computing resources spent on processing each element of the
sequence. If the element is a word with several different meanings, the algorithm increases
the number of iterations, which are designed to improve the model's understanding of the
context; conversely, it reduces the number of iterations when processing simple,
unambiguous elements.</p>
      <p>At each stage, the transformer does not process the text sequentially but simultaneously,
after which it checks the obtained interpretation of each character or word using the
self-attention mechanism.</p>
      <p>The model repeatedly refines a set of first-rank tensor representations (indicated in the
figure as h), sequentially for each position, combining information from different
positions using the self-attention mechanism and recurrent transformation
functions.</p>
      <p>In this work, a universal transformer trained on a corpus of Russian-language texts
and their brief descriptions, which reflect the main meaning and play the role of labels in this
task, is able to generate a new, expanded text based on the input sample, thus obfuscating the
source text and distorting the identifying features that indirectly indicate the author's style.</p>
    </sec>
    <sec id="sec-4">
      <title>Experiment Setting and Results</title>
      <p>An automated system for anonymization of natural language text in Russian based on
the presented technique was developed. Python was chosen as the programming language
[Chollet, 2017], as it is especially popular for text analysis. To perform morphological
analysis, the pymorphy2 library [Korobov] is used by default; it has a dependent module,
pymorphy2-dicts, which contains a collection of Russian-language OpenCorpora dictionaries.
An electronic version of N. Abramov's synonym dictionary [Synonyms dictionary] was used in
the feature-smoothing module. The automated system also provides the ability to connect
and use other morphological analyzers. The universal transformer model was implemented
on the basis of the architecture offered by the modern tensor2tensor deep learning library
[Tensor2Tensor Documentation] and was modified with additional embeddings in
accordance with the specifics of the text anonymization task for Russian. For the
experiments, a corpus of Russian-language texts collected from the M. Moshkov
electronic library [Moshkov Library] was used, containing texts by 23 writers with a total volume of 115
samples.</p>
      <p>The software system starts its work by calculating the average values of the identifying
features found in the corpus of Russian texts. The calculation results (see Table 1) were
ranked by the frequency of occurrence of the grams for each identifying feature.
Then, the feature frequencies for the sample text to be anonymized are calculated.
Tables 2 - 5 show the frequency of occurrence of features over the whole corpus and for a
specific text, together with their deviation from the average. After the features were smoothed
to medium frequencies using the synonym dictionary, the frequency analysis was repeated;
its results are also presented in the tables.</p>
      <p>Based on the results, it can be concluded that the module for smoothing the informative
text features performs its functions correctly and has a significant
impact on the text anonymization process. The next step is to feed the corrected text
to the universal transformer and start generating the final text. For a new
corpus, the process begins with training, whose speed directly depends on such factors as the
number of samples in the training corpus, the input sequence length, and the complexity of
the occurring words.</p>
      <p>To tune the weights and hyperparameters of the model, the cross-entropy and
Kullback-Leibler divergence metrics, traditional for complex text analysis tasks, were calculated.
The final value of the loss function was 3.28 on the test part of the original corpus, which is
a positive result.</p>
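For discrete distributions p (reference) and q (model), the two metrics mentioned above are related by CE(p, q) = H(p) + KL(p||q); a minimal sketch:

```python
import math

# Cross-entropy and Kullback-Leibler divergence for discrete distributions,
# with p the reference distribution and q the model distribution (base 2).
def cross_entropy(p, q):
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

def kl_divergence(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

KL divergence is zero exactly when the model matches the reference, which is why falling values of both metrics indicate training progress.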
      <p>After training, the obtained model is used to anonymize the user's sample. The
transformer, which processes incoming sentences one at a time, forms a new text that preserves
the originally intended meaning by generating new phrases and figures of speech. The text
anonymized by the model is written line by line to the output file. The user receives reference
information about the changes made, with recommendations in case they want to anonymize
the text manually rather than use the automatically generated version.</p>
      <p>To assess the effectiveness and robustness of authorship attribution of
a natural language text against intentional distortion, it was decided to conduct a series of
additional experiments with the automated authorship identification system "Avtoroved"
[Kurtukova et al., 2019a]. This system uses classical NNs, SVM, and the QSUM method and
demonstrates a high attribution accuracy of 95-98% for texts written in the Russian
language.</p>
      <p>Texts of various styles and lengths by the following authors were involved in the experiment:
Agafonov V., Grossman V. S., Bykov V., Bulgakov M., Knorre F., Druzhnikov Yu., Koval Yu.,
Krivin F., Kaledin S., Degen I. The results of the analysis with "Avtoroved" of different-sized
corpora of the original, unchanged samples and of the samples anonymized by the developed
software system are presented in Table 6.</p>
      <p>Thus, the developed technique is resistant even to multi-stage analysis and the
identification of informative features, and it allows an objective evaluation of the robustness of
attribution techniques, and of the software systems based on them, against intentional
distortions of the source text. The proposed technique reduces the accuracy of authorship
identification of a Russian-language text by half, which is a substantial result for the study.</p>
      <p>An example of text anonymization is presented for an extract from E. I. Zamyatin's novel
"The Scourge of God" (Fig. 3 and Fig. 4).</p>
      <p>Below is an extract of the novel anonymized by the proposed technique, where uppercase
marks words smoothed with the synonym dictionary, italics mark the context generated by
the transformer, and highlighting marks the work of the transformer's self-attention mechanism.</p>
      <p>This example was demonstrated to 10 linguistic experts. Expert ratings were given
on a ten-point scale. The maximum score was assigned if the meaning of the anonymized
text does not differ from the original and the text is completely clear to the expert; the
minimum, if the meaning of the anonymized text differs from the original and the text is not
clear to the expert, because most of the changes in the text do not correspond to the semantics.</p>
      <p>For these estimates, the concordance and Pearson coefficients were calculated. The
experts' concordance coefficient was 0.76, which indicates a high degree of consistency
of expert opinions. Its significance was confirmed by a Pearson statistic of 116.9. This means
that the obtained results are meaningful and can be taken into account in the study. The results
of the expert evaluation suggest that satisfactory smoothing was achieved for all the considered
text features, and the text generated by the transformer does not affect the semantics;
therefore, the automated system functions correctly.</p>
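The concordance coefficient used here is Kendall's W, computed from the experts' rank vectors, with the chi-square statistic m(n - 1)W for its significance; a minimal sketch with toy data (the paper's actual expert scores are not reproduced):

```python
# Sketch of Kendall's coefficient of concordance W for m experts ranking
# n items, plus the chi-square statistic used to test its significance.
# The example rankings are illustrative assumptions.
def kendalls_w(rankings):
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean = sum(totals) / n
    s = sum((t - mean) ** 2 for t in totals)       # spread of rank sums
    w = 12 * s / (m ** 2 * (n ** 3 - n))
    chi_square = m * (n - 1) * w
    return w, chi_square
```

W = 1 means perfect agreement and W = 0 means none; a value of 0.76, as reported above, indicates strong agreement among the experts.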
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>As part of the study, a technique for text anonymization based on smoothing selected
informative features, a fast correlation filter, and a universal transformer with self-attention
was proposed. The software system developed on its basis was tested on a corpus of
Russian-language texts and showed a positive result.</p>
      <p>The obtained results allow the following conclusions:
- The features extracted by the fast correlation filter are quite informative.
- Smoothing with a synonym dictionary is correct and brings the frequencies of
occurrence to the average values for the Russian language.</p>
      <p>- The text generated by the universal transformer model is readable and meaningful,
despite the changes made.
- The anonymized text is recognized by the authorship identification system with an
accuracy not exceeding 50%, and it can therefore be used for anonymization.</p>
      <p>The uniqueness of the developed system stems from the lack of similar solutions for the
Russian language on the international market, the limited number of studies on the
problem under consideration, and the possibility of adapting the system to any other language.</p>
      <p>In the future, we plan to continue the study, in particular by changing the technique
through the introduction of new ML algorithms for filtering the informative features of an
author's text. It is assumed that an ensemble of two or more NN architectures will show
better results than those presented in this work.</p>
      <p>[Authorship Attribution] Authorship Attribution and Authorship Anonymization Framework.
URL: https://github.com/psal/jstylo.</p>
      <p>[Kurtukova et al., 2019a] Kurtukova A. V., Romanov A. S. (2019). The technique of
deanonymization of the author of the source code based on the SVM and automatic filtering
of features. In Proceedings of the XVI Conference Prospects for the Development of Basic
Sciences. Vol. 7. Pp. 92-94. (In Russian.)</p>
      <p>[Wang et al., 2019] Wang C. et al. (2019). Language Models with Transformers.
ArXiv:1904.09408.</p>
      <p>[Vaswani et al., 2019] Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A.
Attention Is All You Need.</p>
      <p>[Synonyms dictionary] Synonyms dictionary. URL: http://slovonline.ru/slovarsinonimov/.</p>
      <p>[Moshkov Library] Moshkov Library. URL: http://www.lib.ru.</p>
      <p>[Tensor2Tensor Documentation] Tensor2Tensor Documentation.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[Mamede et al., <year>2016</year>] Mamede N., Baptista J., Dias F. (2016). <article-title>Automated anonymization of text documents</article-title>. <source>2016 IEEE Congress on Evolutionary Computation (CEC)</source>. Pp. <fpage>1287</fpage>-<lpage>1294</lpage>.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[Sardina et al., <year>2018</year>] Sardina L. G., del Pozo A., Aldezabal I. (2018). <article-title>Automating the Anonymisation of Textual Corpora</article-title>. 78 p.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>[</surname>
          </string-name>
          Nguyen-son et al.,
          <year>2015</year>
          ]
          <article-title>Nguyen-son</article-title>
          <string-name>
            <given-names>H. Q.</given-names>
            ,
            <surname>Tran</surname>
          </string-name>
          <string-name>
            <given-names>M. T.</given-names>
            ,
            <surname>Yoshiura</surname>
          </string-name>
          <string-name>
            <given-names>H.</given-names>
            ,
            <surname>Sohenara</surname>
          </string-name>
          <string-name>
            <given-names>A. N.</given-names>
            ,
            <surname>Echizen</surname>
          </string-name>
          <string-name>
            <surname>I.</surname>
          </string-name>
          (
          <year>2015</year>
          )
          <article-title>Anonymizing Personal Text Messages Posted in Online Social Networks and Detecting Disclosures of Personal Information</article-title>
          .
          <source>IEICE Transactions on Information and Systems</source>
          . E98. Pp.
          <volume>78</volume>
          -
          <fpage>88</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[Kacmarcik et al., <year>2006</year>] Kacmarcik G., Gamon M. (2006). <article-title>Obfuscating Document Stylometry to Preserve Author Anonymity</article-title>. <source>Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions</source>. Pp. <fpage>444</fpage>-<lpage>451</lpage>.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[McDonald et al., <year>2012</year>] McDonald A. W., Afroz S., Caliskan A., Stolerman A., Greenstadt R. (2012). <article-title>Use Fewer Instances of the Letter “i”: Toward Writing Style Anonymization</article-title>. <source>PETS’12 Proceedings of the 12th International Conference on Privacy Enhancing Technologies</source>. Pp. <fpage>299</fpage>-<lpage>318</lpage>.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [Simi et al.,
          <year>2017</year>
          ]
          <string-name>
            <surname>Simi</surname>
            <given-names>M. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nayaki</surname>
            <given-names>K. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Elayidom</surname>
            <given-names>M. S.</given-names>
          </string-name>
          <article-title>An Extensive Study on Data Anonymization Algorithms Based on K-Anonymity</article-title>
          .
          <source>IOP Conference Series: Materials Science and Engineering</source>
          .
          <year>2017</year>
          . 225 p.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[Maeda et al., <year>2016</year>] Maeda W., Suzuki Y., Nakamura S. (2016). <article-title>Fast text anonymization using k-anonymity</article-title>. <source>Proceedings of the 18th International Conference on Information Integration and Web-Based Applications and Services</source>.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [Brennan et al.,
          <year>2009</year>
          ]
          <string-name>
            <surname>Brennan</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Greenstadt</surname>
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2009</year>
          )
          <article-title>Practical attacks against authorship recognition techniques</article-title>
          .
          <source>Proceedings of the Twenty-First Innovative Applications of Artificial Intelligence Conference</source>
          . 7 p.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[Quiring et al., <year>2019</year>] Quiring E., Maier A., Rieck K. (2019). <article-title>Misleading Authorship Attribution of Source Code using Adversarial Learning</article-title>. ArXiv:1905.12386.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[Kurtukova et al., <year>2019</year>b] Kurtukova A. V., Romanov A. S. (2019). <article-title>Identification author of source code by machine learning methods</article-title>. <source>SPIIRAS Proceedings</source>. Vol. 18 (3). Pp. <fpage>741</fpage>-<lpage>765</lpage>. (In Rus.) = Kurtukova A. V., Romanov A. S. Identifikaciya avtora ishodnogo koda metodami mashinnogo obucheniya. Trudy SPIIRAN, 2019. № 18 (3). S. 741-765.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[Romanov et al., <year>2018</year>] Romanov A. S., Vasileva M. I., Kurtukova A. V., Meshheryakov R. V. (2018). <article-title>Sentiment Analysis of Text Using Machine Learning Techniques</article-title>. <source>Proceedings of the R. Piotrowski’s Readings in Language Engineering and Applied Linguistics</source>. Saint Petersburg, Russia, November 2017. Pp. <fpage>86</fpage>-<lpage>95</lpage>. (In Rus.) = Romanov A. S., Vasileva M. I., Kurtukova A. V., Meshheryakov R. V. Analiz tonal’nosti teksta s ispolzovaniem metodov mashinnogo obucheniya. Sbornik trudov konferencii “The II international conference R. Piotrowski’s Readings LE AL’2017”: M. Jeusfeld c/o Redaktion Sun SITE, Informatik V, 2018. S. 86-95.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[Dehghani et al., <year>2019</year>] Dehghani M., Gouws S., Vinyals O., Uszkoreit J., Kaiser L. (2019). <article-title>Universal Transformers</article-title>. ArXiv:1807.03819.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[Romanov et al., <year>2011</year>] Romanov A. S., Shelupanov A. A., Meshheryakov R. V. (2011). <article-title>Development and research of mathematical models, techniques and software of information processes in identifying the author of the text</article-title>. 188 p. (In Rus.) = Romanov A. S., Shelupanov A. A., Meshheryakov R. V. Razrabotka i issledovanie matematicheskih modelej, metodik i programmnyh sredstv informacionnyh processov pri identifikacii avtora teksta. Tomsk: V-Spektr, 2011. 188 s.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[Wang et al., <year>2019</year>] Wang C., Li M. (2019). <article-title>Language Models with Transformers</article-title>. ArXiv:1904.09408.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[Vaswani et al., <year>2019</year>] Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A. N., Kaiser L., Polosukhin I. (2019). <article-title>Attention Is All You Need</article-title>. ArXiv:1706.03762.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[Sun et al., <year>2019</year>] Sun C., Qiu X., Xu Y., Huang X. (2019). <article-title>How to Fine-Tune BERT for Text Classification?</article-title> ArXiv:1905.05583.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[Devlin et al., <year>2018</year>] Devlin J., Chang M.-W., Lee K., Toutanova K. (2018). <article-title>BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</article-title>. ArXiv:1810.04805.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[Zihang et al., <year>2019</year>] Zihang D., Yang Z., Yang Y., Carbonell J. G., Le Q. V., Salakhutdinov R. (2019). <article-title>Transformer-XL: Attentive Language Models beyond a Fixed-Length Context</article-title>. ArXiv:1901.02860.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[Chollet, <year>2017</year>] Chollet F. (2017). <source>Deep Learning with Python</source>. Manning Publications. 386 p.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[Korobov] Korobov M. pymorphy2. URL: https://pymorphy2.readthedocs.io/en/latest/misc/citing.html.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>