<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Anomaly detection in text documents using HTM networks</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Zoltán</forename><surname>Szoplák</surname></persName>
							<email>zoltan.szoplak@student.upjs.sk</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Institute of Computer Science</orgName>
								<orgName type="department" key="dep2">Faculty of Science</orgName>
								<orgName type="institution">P. J. Šafárik University in Košice</orgName>
								<address>
									<addrLine>Jesenná 5</addrLine>
									<postCode>04001</postCode>
									<settlement>Košice</settlement>
									<country key="SK">Slovakia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Gabriela</forename><surname>Andrejková</surname></persName>
							<email>gabriela.andrejkova@upjs.sk</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Institute of Computer Science</orgName>
								<orgName type="department" key="dep2">Faculty of Science</orgName>
								<orgName type="institution">P. J. Šafárik University in Košice</orgName>
								<address>
									<addrLine>Jesenná 5</addrLine>
									<postCode>04001</postCode>
									<settlement>Košice</settlement>
									<country key="SK">Slovakia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Anomaly detection in text documents using HTM networks</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">1200CF6DF33EDC811FBE07DA10D3F435</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T12:30+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>HTM network</term>
					<term>Semantic folding</term>
					<term>Text anomaly detection</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Anomalies in texts are caused mainly by various interventions in a text, such as supplementing parts of the text from different authors. Anomalies of this type can disrupt a text that would otherwise be consistent. In order to find them we have combined multiple algorithms, including a non-traditional neural network model, the Hierarchical Temporal Memory (HTM) network. HTM networks are spatiotemporal predictors inspired by the neocortex that combine the ability of recurrent neural networks to retain memories of time sequences with the spatial representations of convolutional neural networks.</p><p>To represent the text inputs for the HTM algorithm we use semantic folding, which encodes words differently from other embedding methods: as a collection of the contexts they occur in. Alongside this predictor we use numerous other, better-known metrics, and combine them into a two-step algorithm. In the first step we find the division points between the anomalous and non-anomalous parts.</p><p>In the second step we determine which sections located between two division points are actually anomalous. The algorithm was tested on 40 benchmark texts from the PAN plagiarism corpus PAN-PC-11; the rate of correctly determining whether a text contains anomalies is 100 % and the percentage of fully detected anomalies is 70.15 %.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Anomalies in texts are outliers: parts of a text that differ from the rest of it. Anomaly detection is therefore the task of identifying parts of a text that deviate from the rest to a suspicious degree. In this paper, we are concerned with creating a method capable of detecting anomalous or plagiarized sentences in English-language texts.</p><p>We have formulated two problems:</p><p>T1 Determining whether the text itself is anomalous (i.e., the text contains an anomalous part)</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>T2 Determining the number and location of anomalies in text</head><p>Although solving the second problem provides a solution to the first, the first problem is solvable in a shorter time, since it is sufficient to detect the first anomaly. For recommender systems, it is often enough to point out that a text contains anomalous parts and leave the rest to manual analysis. Such an approach regarding the dataset in <ref type="bibr" target="#b0">[1]</ref> has already been attempted in <ref type="bibr" target="#b1">[2]</ref>, where the task was merely to determine whether a given text contained any anomalies. The second task is self-explanatory: we aim to detect precisely the chunks of text that are anomalous. It is important to note that while the algorithm presented does contain a measurement of the degree of anomalousness, we consider sections either fully anomalous or fully anomaly-free, as the dataset suggests. The first problem classifies a text as anomalous if even a single anomaly is present, while in the second task we label individual sections as anomalous or not.</p><p>We proposed to experiment with streaming anomaly detection that takes into account context and narrative progression by analyzing texts sentence by sentence. To implement such analysis, multiple encoding methods were considered. Embedding methods such as Doc2Vec, described in <ref type="bibr" target="#b2">[3]</ref>, encode the exact word-for-word composition of the sentence. While useful for many tasks, we wanted a method able to extract and compare the context and topic of a sentence, rather than its exact wording. We therefore chose to use the Semantic Folding Theory, described in <ref type="bibr" target="#b3">[4]</ref>, and combined it with the HTM algorithm <ref type="bibr" target="#b4">[5]</ref>, a neural network designed specifically to work with the kinds of representations that Semantic Folding creates. Since there can be anomalies that are not semantic, or cases where a semantic change is not indicative of an anomaly, we decided to implement further metrics designed to capture syntactic and statistical information as well, supplementing the predictions made by the HTM network and allowing us to compare their usefulness to the aforementioned method.</p><p>The article is organized as follows: In the second section we discuss the current state of the art. In the third section we describe the "Semantic Folding (SF)" method. In Section 4, we provide a brief description of HTM networks. Section 5 is devoted to the description of our new algorithm for finding anomalies. The results are presented in the sixth section, while the seventh section provides a conclusion.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related works</head><p>Anomalies can be considered a kind of outlier. General methods for finding outliers and anomalies, such as those in Aggarwal <ref type="bibr" target="#b5">[6]</ref> and Chandola <ref type="bibr" target="#b6">[7]</ref>, are also useful for finding anomalies in texts, but texts are a special type of data for which specialized methods can be used.</p><p>Zhuang et al. <ref type="bibr" target="#b7">[8]</ref> developed a generative model to identify frequent and characteristic semantic regions in the word embedding space to represent the given corpus, and a robust outlierness measure which is resistant to noisy content in documents. Experiments conducted on two real-world textual data sets showed that the method achieves a very strong improvement in outlier ranking.</p><p>In Kannan et al. <ref type="bibr" target="#b8">[9]</ref> a matrix factorization method is presented which is naturally able to distinguish the anomalies using low-rank approximations of the underlying texts.</p><p>Young et al. <ref type="bibr" target="#b15">[16]</ref> review significant deep learning related models and methods that have been used for numerous NLP tasks. They also summarize, compare and contrast the various models and put forward a detailed understanding of the past, present and future of deep learning in NLP.</p><p>Recently, several articles have been published on the search for anomalies using HTM networks, described in Hawkins et al. <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>. Important publications on the use of HTM networks in finding anomalies include the paper of Ahmad and Purdy <ref type="bibr" target="#b11">[12]</ref>. They presented a novel HTM-based on-line sequence memory anomaly detection technique for time-series data. They demonstrated impressive results from a live application that detects anomalies in financial metrics in real time.</p><p>In another article, Ahmad et al. <ref type="bibr" target="#b12">[13]</ref> propose a novel anomaly detection algorithm that works on streaming data. The technique is based on an online sequence memory algorithm based on HTM. They presented results using the Numenta Anomaly Benchmark (NAB), a benchmark containing real-world data streams with labeled anomalies.</p><p>Cui et al. <ref type="bibr" target="#b13">[14]</ref> presented a comparative study of HTM networks, a neurally-inspired model, and other feedforward and recurrent artificial neural network models on both artificial and real-world sequence prediction problems. They reported that HTM and long short-term memory (LSTM) networks gave the best prediction accuracy. HTM has many other beneficial properties and features that are desirable for real-world sequence learning.</p><p>Hole <ref type="bibr" target="#b14">[15]</ref> concentrated on understanding how the HTM learning algorithms can detect anomalies in complex adaptive information and communications technology (ICT) systems. HTM finds anomalies in real-time streaming data, and there is no need to store huge amounts of data since HTM builds models representing the properties of the data. They examined anomalies in Amazon Web Services (AWS) streaming data and then studied how HTM detects rogue human behavior.</p><p>The aforementioned research has inspired us to make use of these networks. To satisfy the input requirements of HTM networks, we needed to find a way to encode text data as Sparse Distributed Representations (SDRs).</p><p>In natural language processing (NLP), a method called "Semantic Folding (SF)" <ref type="bibr" target="#b3">[4]</ref> is important, which makes it possible to store text data as sparse distributed representations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Semantic Folding</head><p>Semantic folding theory creates Sparse Distributed Representations (SDRs) of text data, called semantic fingerprints, to emulate the structure of the neocortex, the area of the brain that is responsible for several high-level cognitive functions, such as vision, hearing, touch, movement and, most relevantly for our case, language. A single fingerprint ideally represents the contexts, and clusters of contexts, that are present in a given text. Such a representation is achieved by first gathering a corpus representative of the text that we aim to encode, slicing it into snippets (sequences of words, usually paragraphs) and arranging them into a 2D array using self-organizing maps. As a consequence, similar snippets (those that share many words) end up close together, forming clusters.</p><p>After creating the array of our representation, we can obtain the semantic fingerprint of a word by checking, for every single context in the array, whether it contains the input word or not. We set the given index to 1 if the word is present in the snippets of the context and to 0 otherwise. The result is a sparse vector, since most words only occur in a handful of contexts; it is therefore preferable to store only the indices that have a value of 1 to save memory.</p><p>Since looking through every single context for each input word is very time-consuming, it is preferable to simply create a vocabulary of words from our corpus, calculate the encoding of each word and store the representations in a database. If we want to encode collections or sequences of words, we simply take the fingerprint of each individual word, sum the active bits at every index and then activate only the indices where the number of active bits across all the fingerprints exceeds a certain threshold.</p><p>Merging the fingerprints in such a way allows us to retain sparsity and to prune from the representation all contexts that may be relevant to the individual words but are irrelevant in the context in which they occur. Such a representation has a few perks of note. First, every individual bit in our representation has its own specific meaning, unlike word encodings such as bag-of-words or ASCII code. Furthermore, comparing representations can be as easy as calculating the number of overlapping active bits. We can also take advantage of the fact that similar contexts are clustered, and compare representations using metrics reliant on geometry, such as Euclidean and cosine distance. While such a representation does not take word order into account and is thus unsuitable for language generation, it is very much suitable for topic matching and preventing semantic drift.</p><p>The human neocortex learns by recognizing patterns in sequences of sensory inputs and predicts likely following values based on previous observations. Hierarchical Temporal Memory (HTM) is a type of neural network that tries to reproduce the structure and processes of the neocortex. A more detailed description of HTM is given in <ref type="bibr" target="#b4">[5]</ref>.</p></div>
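The fingerprint merging and comparison operations just described can be sketched in a few lines. This is a minimal illustration, assuming 1-based active indices over a 16384-bit (128 × 128) fingerprint as used later in this paper; the threshold value here is an arbitrary example, not a value fixed by the Semantic Folding tooling:

```python
import numpy as np

N = 16384  # a 128 x 128 fingerprint grid, flattened; indices are 1-based

def merge_fingerprints(fingerprints, threshold=2):
    """Merge word fingerprints (lists of active indices) by summing the
    active bits at every index and keeping only the indices whose count
    reaches the threshold, which preserves sparsity."""
    counts = np.zeros(N + 1, dtype=int)
    for fp in fingerprints:
        counts[fp] += 1
    return np.flatnonzero(counts >= threshold)

def overlap(fp_a, fp_b):
    """Similarity as the number of shared active bits."""
    return len(set(fp_a) & set(fp_b))

def jaccard(fp_a, fp_b):
    """Jaccard index of two fingerprints seen as sets of active bits."""
    a, b = set(fp_a), set(fp_b)
    return len(a & b) / len(a | b) if a | b else 0.0
```

Storing only active indices keeps the representation sparse, and the overlap and Jaccard comparisons operate directly on those index sets.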
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">HTM Network Structure</head><p>An HTM network has an inherently bottom-up hierarchical structure, as seen in Figure <ref type="figure" target="#fig_0">1</ref>, composed of multiple layers of 3-dimensional arrays of bits, referred to as cells. Interconnected layers form a hierarchy where each layer is connected to the one below it, with the one at the bottom connecting to the input of the network itself. A layer itself is a 3-dimensional array of bits composed of columns arranged in a 2D array topology, where a column is made up of one or more cells. Each column is connected to a subset of the input/previous layer, and cells in a single column are also connected to the cells above and below. The hierarchy of layers is inspired by biology, with the neocortex consisting of multiple regions that either receive their input directly from sensory organs or from other regions connected to them.</p><p>The learning algorithm has two main components:</p><p>• The Spatial Pooler (SP) maps each input to a sparse set of active columns, such that similar inputs activate overlapping sets of columns.</p><p>• The Temporal Memory (TM) forms connections from the cells active in the current step to cells that were active just prior and makes predictions. The algorithm uses Hebb's rule, where connections are formed between cells that were previously active. Through the formation of those connections a sequence may be learned. The TM can then use its learned knowledge of the sequences to form predictions.</p></div>
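As a rough intuition for the Temporal Memory component, the toy sketch below learns first-order transitions between flattened bit patterns with a Hebbian update and predicts the bits supported by the current activity. This is a drastic simplification: the real TM works with per-column cells, distal segments and permanence values, none of which are modelled here.

```python
import numpy as np

class ToySequenceMemory:
    """Toy first-order sequence memory: a Hebbian transition matrix
    between bit patterns, illustrating the 'connect currently active
    cells to previously active cells' idea of HTM Temporal Memory."""

    def __init__(self, n_bits):
        self.w = np.zeros((n_bits, n_bits))
        self.prev = None

    def step(self, pattern):
        x = np.asarray(pattern, dtype=float)
        if self.prev is not None:
            # Hebb's rule: strengthen links from the previously active
            # bits to the currently active bits.
            self.w += np.outer(self.prev, x)
        self.prev = x

    def predict(self):
        # Predict the bits that receive support from the current activity.
        score = self.prev @ self.w
        return (score > 0).astype(int)
```

After observing the alternating sequence A, B, A, the memory predicts B whenever A is the current input, because the learned links point from the bits of A to the bits of B.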
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Using HTM in our Algorithm</head><p>The explicit mathematical description of the computations of the HTM network can be found in Mnatzaganian et al. <ref type="bibr" target="#b16">[17]</ref>. It describes all aspects of the spatial pooler, a critical learning component in HTM, under a single unifying framework, explores the primary learning mechanism and proposes a maximum likelihood estimator for determining the degree of permanence updates.</p><p>The HTM algorithm, in essence, allows us to take an array of bits as input and to predict the inputs of subsequent steps. Such predictions may be enough for some tasks; however, in order to detect anomalies, we need to extract the differences occurring in the patterns presented in the inputs.</p><p>Let us define the vector x_t as the input generated in our system at time t. Then the sequence of inputs to the algorithm can be defined as x_1, . . . , x_{t−1}, x_t, x_{t+1}, . . ., possibly continuing until the detection task is stopped manually.</p><p>In general, the goal of streaming anomaly detection is to find abnormalities in inputs as soon as they occur. Such a real-time constraint means that for the purposes of anomaly detection at time t only the inputs of earlier times (1, . . . , t − 1) are accessible, not the input at time t + 1 or later. The HTM algorithm is capable of detecting such anomalies by using a method described in <ref type="bibr" target="#b11">[12]</ref>.</p><p>Let us define two variables to explain the HTM anomaly detection algorithm: Let a(x_t) be the sparse binary representation of the input vector x_t, defined by the binary 2D matrix of all cells within a region, where a_{i,j}(x_t) is set to 1 if the ith cell of the jth column is in the active state and to 0 otherwise. 
Let π(x_t) be the sparse binary representation of the prediction of the next input, a(x_{t+1}), defined by the binary 2D matrix of all cells within a region, where π_{i,j}(x_t) is set to 1 if the ith cell of the jth column is in the predictive state and to 0 otherwise. The values of the prediction matrix are greatly influenced not just by the input itself, but by the context as well.</p><p>Therefore the accuracy of the algorithm's predictions depends largely on its ability to model the data. These two variables are calculated at each step of the algorithm; however, they do not contain sufficient information to find anomalies by themselves. Instead, we use them to compute a raw anomaly score for each timestamp, labelled s_t. The raw anomaly score essentially measures the deviation between the predicted and the actual input. It is given by:</p><formula xml:id="formula_0">s_t = 1 − \frac{\pi(x_{t−1}) \cdot a(x_t)}{|a(x_t)|}<label>(1)</label></formula><p>Both variables are binary vectors, the multiplication is the inner product, and the result is divided by the number of active bits in the input. The less correct the prediction, the larger the anomaly score. The value of s_t is therefore a scalar between 0 and 1: 0 means the prediction was perfect, 1 means nothing was correctly predicted. A weak prediction would therefore be indicative of an anomaly; however, it does not take into account the predictive capability of our network or the amount of noise present in the text.</p><p>To counteract this, we calculate the distribution of anomaly scores within a certain time window, and thus find an anomaly likelihood instead of simply thresholding the raw anomaly score. The anomaly likelihood metric is designed to measure a change in predictability, rather than a change in the input pattern, and thus it accounts for the beginning of the text where predictability is low. 
The metric is ideal for detecting not only the starting points of the anomalies but their ending points as well (since at an ending point the predictions would suddenly become much more accurate).</p><p>To calculate the anomaly likelihood metric, we use a large moving window W that stores the last k raw anomaly scores. In addition, we use a much smaller window of size j ≪ k to calculate a short moving average of the last few anomaly scores, which makes a more reliable comparison metric than a single score. We calculate the anomaly likelihood using the Q-function (the tail distribution function of the Gaussian normal distribution), where the mean and variance of the raw anomaly scores are recalculated at every step from the scores in the window-sized memory.</p><p>The anomaly likelihood metric at time t is defined as the complement of the tail probability:</p><formula xml:id="formula_1">L_t = 1 − Q\left(\frac{\tilde{\mu}_t − \mu_t}{\sigma_t}\right)<label>(2)</label></formula><p>where:</p><formula xml:id="formula_2">Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} \exp\left(−\frac{u^2}{2}\right) du<label>(3)</label></formula><formula xml:id="formula_4">\mu_t = \frac{\sum_{i=0}^{k−1} s_{t−i}}{k}, \qquad \tilde{\mu}_t = \frac{\sum_{i=0}^{j−1} s_{t−i}}{j}<label>(4)</label></formula><formula xml:id="formula_5">\sigma_t^2 = \frac{\sum_{i=0}^{k−1} (s_{t−i} − \mu_t)^2}{k − 1}<label>(5)</label></formula><p>Naturally, the greater the likelihood score, the more probable it is that we have found the offset of an anomaly.</p></div>
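The raw anomaly score and the anomaly likelihood can be sketched directly from the definitions above. The window sizes k and j below are illustrative placeholders, not the values used in the experiments:

```python
import numpy as np
from math import erf, sqrt

def raw_anomaly_score(predicted, actual):
    """s_t = 1 - (pi(x_{t-1}) . a(x_t)) / |a(x_t)| for binary vectors:
    0 means a perfect prediction, 1 means nothing was predicted."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return 1.0 - (predicted @ actual) / actual.sum()

def q_function(x):
    """Gaussian tail probability Q(x) = P(Z > x) for standard normal Z."""
    return 0.5 * (1.0 - erf(x / sqrt(2.0)))

def anomaly_likelihood(scores, k=50, j=5):
    """L_t = 1 - Q((short-term mean - long-term mean) / sigma), with the
    mean and variance estimated over the last k raw scores and the
    short-term mean over the last j scores."""
    window = np.asarray(scores[-k:])
    mu, sigma = window.mean(), window.std(ddof=1)
    mu_short = np.asarray(scores[-j:]).mean()
    if sigma == 0:
        return 0.5  # no variation observed yet
    return 1.0 - q_function((mu_short - mu) / sigma)
```

A run of low raw scores followed by a few high ones drives the likelihood towards 1, signalling a change in predictability rather than a single noisy prediction.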
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Anomaly Detection in Texts</head><p>The newly developed system combines the previously described methods and works in two steps:</p><p>1. Finding the locations of the text changes. The first step is to create an algorithm that can find the exact locations where the style of the text changes, whether from non-plagiarized to plagiarized or vice versa. We consider sentences the smallest unit of text to be analyzed this way, as we do not expect sentences that have both anomalous and non-anomalous parts within them. Therefore, finding the locations of text changes in practical terms means finding the offsets of the first sentence of the anomalous part and the first sentence of the non-anomalous part.</p><p>2. Filtering out the non-anomalous potential sequences. The second step is to take the offsets from the first step and filter out the sections located between two offsets that are not actually anomalous.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Finding the Locations of Text Changes</head><p>We have created the following features for the purpose of determining where such points of change lie:</p><p>F1. The anomaly likelihood score of each sentence.</p><p>F2. The cosine similarity of the Doc2Vec vectors and their autoencoder predictions.</p><p>F3. The Euclidean distance between the current fingerprint and the fingerprint of preceding sentences.</p><p>F4. The cosine similarity between the current fingerprint and the fingerprint of preceding sentences.</p><p>F5. The Jaccard index between the current fingerprint and the fingerprint of preceding sentences.</p><p>F6. The average relational frequency of the words contained within a sentence.</p><p>F7. The lowest relational frequency of the words contained within a sentence.</p><p>F8. The highest relational frequency of the words contained within a sentence.</p><p>F9. The difference of the mean relational word frequency of the sentence and the text.</p><p>To calculate F1, we use the semantic fingerprint method to create a fingerprint of each sentence, use these as inputs to the HTM network described in Section 4 and then calculate the anomaly likelihood score.</p><p>To calculate F2, we use Doc2Vec, an embedding method described in <ref type="bibr" target="#b2">[3]</ref>, to create a vector representation of each sentence of our text. We then use these as inputs to train an autoencoder to first create an encoding of lesser dimensionality and then reconstruct from the code an output that best matches the input. After training the autoencoder, we calculate the cosine similarity between the input embedding and the reconstructed output embedding. 
Since most of the text is expected to come from a single source with occasional anomalous parts inserted, the network, through training, creates generalized reconstructions that are more successful at reconstructing the inputs from the original text than those from the anomalous parts.</p><p>To calculate F3, F4 and F5, we use semantic fingerprints, comparing the fingerprint of each sentence to the merged fingerprint of the preceding 5 sentences in a window (5 being determined to be the average anomaly length). We use 3 different metrics of comparison: Euclidean distance, cosine similarity and the Jaccard index. The metrics are suited to detecting the first sentence of an anomalous section as well as the first sentence of a non-anomalous section that follows an anomalous one.</p><p>To calculate F6, F7 and F8, we use the relational frequency metrics described in <ref type="bibr" target="#b17">[18]</ref>, which measure how specific a given word is to a given segment. We calculate the frequency of the most and least frequent word as well as the mean frequency of all words in a sentence. Anomalous sentences are presumed to have low relational frequency scores due to containing words atypical for the rest of the text. A low mean relational frequency means that a sentence contains many unique words. A low highest relational frequency means that unique stopwords are used. A low lowest relational frequency means that a word unique to the sentence is present (which by itself might not be indicative of an anomaly).</p><p>To calculate F9, we use the mean relative frequency author style metric described in <ref type="bibr" target="#b18">[19]</ref>. We calculate the frequency of each word in the sentence as well as in the document. 
We average these values across all of the words and compare the mean value of the sentence frequencies and the document frequencies to get the difference in author styles.</p><p>We use a Gradient Boosting Classifier (GBC), described in <ref type="bibr" target="#b19">[20]</ref>, to combine the predictive capability of the aforementioned features as well as to evaluate their relative importance. The GBC is an ensemble of multiple models, specifically decision trees, which has superior prediction capability compared to the individual models. It achieves this by iteratively adding models that minimize the residual loss obtained by the combined model in the previous step.</p><p>To label the dataset, we choose a rather simplistic method: we label all sentences as zeroes except the first sentence of each anomaly and the first sentence after the anomaly ends. Our prediction of these division points is then used to predict where the potential anomalous sections might be.</p></div>
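The feature-combination step can be sketched with scikit-learn's GradientBoostingClassifier. The feature matrix below is a random synthetic stand-in for F1-F9 and the sparse division-point labels, not real data; the estimator count and depth mirror the values reported in Section 6.2:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-ins for the nine per-sentence features F1-F9 and the
# sparse division-point labels; real inputs come from the pipeline above.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 9))           # one row of features per sentence
y = np.zeros(400, dtype=int)
y[[50, 80, 200, 230]] = 1               # first sentences of/after anomalies
X[y == 1] += 2.0                        # make the division points separable

clf = GradientBoostingClassifier(n_estimators=200, max_depth=4)
clf.fit(X, y)

# Positive predictions become candidate division points; the classifier
# also exposes the relative importance of each feature.
division_points = np.flatnonzero(clf.predict(X) == 1)
importances = clf.feature_importances_
```

The `feature_importances_` attribute is what allows the relative usefulness of F1-F9 to be compared after training.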
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Filtering out Non-anomalous Potential Sequences</head><p>Now that we have our division points, we still need to identify which sections lying between two points are anomalous and which are not. If we merely tagged them in an alternating fashion, it would lead to a large number of errors, as a single faulty division point could mean that we misclassify our entire dataset. We need some way of determining the anomalous nature of individual sections. We can assume that most anomalous sections are relatively short compared to non-anomalous sections. We can therefore pair up indices of division points that are located close to one another, specifically within 50 sentences of each other. While we can create possible pairings of division points that are located close to one another, there remain individual division points that cannot be paired. They may be false positives, or they may correspond to a beginning or ending whose counterpart has not been found. For each isolated index, we construct multiple artificial sections at varying distances before or after it. The exact process is described in algorithm 1.</p><p>Such pairings inevitably lead to false positives, therefore it is vital to devise a method that can recognize genuine anomalous parts.</p><p>We use a gradient boosting regressor to predict the anomalousness of each potential section (the percentage of anomalous sentences in the section).</p><p>There are multiple parameters that we use as inputs, some from the previous step and others from modified algorithms that deal with sections instead of sentences. 
Input values for the regressor are the following:</p><p>• The mean value of F2 and F6-F9 over every sentence of the section (the metrics based on Semantic Folding are not used, as they produce high values precisely at the division points).</p><p>• The Euclidean distance between the fingerprint of the section and the fingerprint of the entire document.</p><p>• The cosine similarity between the fingerprint of the section and the fingerprint of the entire document.</p><p>• The Jaccard index between the fingerprint of the section and the fingerprint of the entire document.</p><p>• The average relational frequency of the words contained within the entire section.</p><p>• The lowest relational frequency of the words contained within the entire section.</p><p>• The highest relational frequency of the words contained within the entire section.</p><p>• The difference of the mean relational word frequency of the section and the text.</p><p>After training our regressor, we threshold the anomalousness values to obtain the list of anomalous sections. Due to the artificial creation of sections, overlaps may occur. We describe a way to merge these sections in algorithm 2.</p></div>
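The paper's algorithm 2 is not reproduced here, so the following is only a plausible sketch of how overlapping thresholded sections, given as (start, end) sentence ranges, could be merged into disjoint anomalous sections:

```python
def merge_sections(sections):
    """Merge overlapping (start, end) sentence ranges into disjoint
    sections: sort by start, then extend the last kept section whenever
    the next one overlaps it. A sketch of the role of algorithm 2."""
    merged = []
    for start, end in sorted(sections):
        if merged and start <= merged[-1][1]:
            # Overlap with the previous section: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

The sort makes the merge a single linear pass, so even heavily overlapping artificial sections collapse into a small disjoint list.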
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1">Data preparation</head><p>We worked with the PAN-PC-11 intrinsic plagiarism detection corpus <ref type="bibr" target="#b0">[1]</ref> as experimental text data. For each text, we have two files: a .txt file containing the text itself and an .xml file containing various metadata, such as the source of the base text and the author and, most importantly, the list of anomalous parts defined by their division points and lengths. The number of plagiarized parts in a text is not constant and ranges from 0, with no plagiarism inserted, up to 10 anomalous parts. Anomalous parts are whole sentences or entire paragraphs. Thus we only consider entire sentences or larger collections of sentences as possible anomalies.</p><p>First we remove all stop words from the text, such as a, the, is, at, which, on etc., since these words have little relevance to the meaning or authorial style of the text. We solved two tasks as follows:</p><p>Task 1: To find out whether the suspicious text is anomalous: Here, we are only trying to determine whether a text contains any anomalies or not, ignoring the location or the number of anomalies present.</p><p>Task 2: To identify the individual anomalous parts within the text correctly, taking into consideration their precise locations.</p><p>We first split the sentences using the NLTK (Natural Language Toolkit) tokenizer, described in <ref type="bibr" target="#b20">[21]</ref>, marking certain words ending with a punctuation mark as exceptions when splitting (such as st., mr., mrs., dr. etc.). We also removed all non-text characters, all stop words, and words shorter than 3 letters. Finally, we merged sentences consisting of only a single word with the first non-single-word sentence that follows them. Merging in this way allows us to avoid many false positives that would be caused by sentences that have very little meaning on their own. 
We also store which sentences they were merged from and apply the calculated features to all of those sentences.</p><p>We then calculate the features for each sentence. For the merged fingerprints of the preceding sentences used for F1, F3, F4 and F5, we have chosen to merge the fingerprints of the five sentences that came before the current sentence. In the case of the first sentence, we do not have a merged fingerprint, thus we use the current fingerprint as the input to the Temporal Pooler as is. In cases where fewer than 5 preceding sentences are available, we use a merged fingerprint consisting of the sentences that are available. Then, in the case of F1, as described, we create for each sentence a fingerprint in which only the bits active in both the merged fingerprint and the current fingerprint have a value of 1. The fingerprint arrays have a size of 128 × 128 and use the standard English associative dictionary of cortical.io. As the fingerprints are encoded as a collection of active bits, we have a list of sorted indices between 1 and 16384.</p><p>To increase computational efficiency, we split the list into four parts, each holding a portion of the values, in the following way: the first list contains all of the values between 1 and 4096, the second between 4097 and 8192, the third between 8193 and 12288 and the fourth between 12289 and 16384. We then create sparse vectors from these lists, with a 1 at every index present in the list and 0 everywhere else. Such a division does not pose much of a problem for the prediction capabilities of the HTM algorithm, as it has a feature that allows it to separate individual patterns that represent a sequence of inputs still belonging to a single object.</p></div>
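The splitting of a fingerprint's active-index list into four sparse vectors can be sketched as follows (assuming 1-based indices over the 128 × 128 grid, as described above):

```python
import numpy as np

N, PARTS = 16384, 4        # 128 x 128 fingerprint grid, split into quarters
PART = N // PARTS          # 4096 indices per part

def split_fingerprint(active_indices):
    """Split a fingerprint (sorted 1-based active indices) into four
    sparse 0/1 vectors of length 4096, one per quarter of the index
    range: 1-4096, 4097-8192, 8193-12288 and 12289-16384."""
    vectors = np.zeros((PARTS, PART), dtype=np.uint8)
    for idx in active_indices:
        part, offset = divmod(idx - 1, PART)
        vectors[part, offset] = 1
    return vectors
```

Each quarter of the index range becomes its own sparse vector, so one fingerprint is presented to the network as a short sequence of four smaller inputs.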
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2">HTM Network and Gradient Boosting Classifier Training</head><p>We first fit the HTM network using the entire text once as the training set and then run the same inputs through the network to obtain our predictions. We calculated the anomaly scores and, from them, the likelihood scores for each sentence of the document. For the moving window of mean anomaly scores, we used a window of size 5 that stores the previous 5 anomaly scores. For F2, we used Doc2Vec to create 100-dimensional vectors from each sentence. We chose a five-layer feed-forward perceptron as our autoencoder network, with experimentally chosen layer sizes of 100, 50, 20, 50 and 100, the first and last layers matching the size of the Doc2Vec inputs. We trained the network on the text for 5 epochs (experimentally chosen), then performed the reconstruction for each sentence and calculated the cosine similarity between the input and the reconstruction.</p><p>Features F3-F5 are calculated just as described in Section 5, using the five sentences preceding the current sentence to form our merged fingerprint and comparing it to the fingerprint of the current sentence. Features F6-F9 are calculated just as described in Section 5, computing the relational frequencies and author style separately for every sentence on a text from which short words and stop words had not been removed.</p><p>We used these features as the input parameters to our Gradient Boosting Classifier with 200 estimators and a maximum depth of 4. After training our classifier, we can take the positive examples of sentence offsets and construct the potential sections from them. To filter them, we train a Gradient Boosting regressor with 200 estimators and a depth of 4 to predict their relevance. We then threshold the prediction to obtain the potential anomalous parts. We experimented with multiple thresholds and found a value of 0.7 to be sufficient. 
Finally, we merge the potential sections and compare them with the actual anomalous sections using multiple metrics, such as precision, recall and accuracy, to measure our success. For evaluation, we consider the anomalies as positive examples and label every single character of the full unmodified text, thereby taking the length of sentences into account as well.</p></div>
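The character-level evaluation described above can be sketched as follows; `char_level_metrics` is an illustrative helper (an assumption, not the authors' code) that labels every character of the text and returns precision, recall and accuracy, with `None` standing in for the N/A cases of Table 1:

```python
def char_level_metrics(text_len, true_spans, pred_spans):
    """Character-level precision/recall/accuracy with anomalous characters
    as the positive class. Spans are (start, end) offsets, end-exclusive.
    Returns None for precision/recall when they are undefined (N/A)."""
    truth = [False] * text_len
    pred = [False] * text_len
    for s, e in true_spans:
        for i in range(s, e):
            truth[i] = True
    for s, e in pred_spans:
        for i in range(s, e):
            pred[i] = True
    tp = sum(t and p for t, p in zip(truth, pred))
    fp = sum((not t) and p for t, p in zip(truth, pred))
    fn = sum(t and (not p) for t, p in zip(truth, pred))
    tn = text_len - tp - fp - fn
    precision = tp / (tp + fp) if tp + fp else None  # N/A: nothing predicted
    recall = tp / (tp + fn) if tp + fn else None     # N/A: no true anomalies
    accuracy = (tp + tn) / text_len
    return precision, recall, accuracy
```

Because every character is labelled, a long correctly classified sentence contributes more to the scores than a short one, which is the effect described above.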
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.3">Experimental results</head><p>We have evaluated the predictions of anomalies made by our model and organized the results into Table <ref type="table" target="#tab_1">1</ref>. For comparison, we have implemented the Author style algorithm proposed by Kuznetsov et al. <ref type="bibr" target="#b18">[19]</ref> and evaluated it on our dataset as well. These results have also been organized into Table <ref type="table" target="#tab_1">1</ref>, alongside the results of our own algorithm.</p><p>The columns of the table are arranged in the following manner: the Txt column corresponds to the id number of the text. The Plag det column tells us how many anomalies our algorithm detected fully out of the number of plagiarisms present. The T1 (Task 1) column gives the conclusion our algorithm reached when determining whether the text is anomalous (N for non-anomalous, Y for anomalous). Columns Pre, Rec and Acc refer respectively to the precision, recall and accuracy values achieved by our algorithm when classifying anomalies. Columns Pre A2, Rec A2 and Acc A2 are the results achieved by <ref type="bibr" target="#b18">[19]</ref>. If there are no predicted anomalous parts/sentences, the precision metric has a value of N/A, since calculating the metric becomes meaningless. Likewise, if there are no actual anomalous parts/sentences present in the text, the recall metric has a value of N/A.</p><p>As we can see in Table <ref type="table" target="#tab_1">1</ref>, our algorithm achieved an accuracy of 100 % in T1. Regarding T2, the overall percentage of fully found plagiarisms is 70.15 %. The results show high accuracy due to the far greater number of non-anomalous sections in the actual text, as well as high precision due to the number of false positives being further reduced by filtration. The downside of such extensive filtering can be seen in the recall values, however, where the results vary. 
Such a discrepancy between the precision and recall values can be observed with both algorithms. When it comes to accuracy, the results of the author style algorithm were better only in the case of text 11, equal in several cases and inferior to ours in most.</p><p>Looking at the metrics of precision and recall, the author style algorithm tends to achieve a better, sometimes perfect precision score, but this usually comes at the cost of a significantly worse recall and accuracy score. While precision is important, we believe that for recommender systems it is better to produce a few false positives in order to flag most of the anomalous sections, rather than being more certain about the anomalousness of fewer sections.</p><p>As for the performance of the author style algorithm on T1, it failed to achieve 100 % accuracy, often predicting sentences to be anomalous when there are no anomalies to be found (such as texts 5 or 10 in Table 1) or not finding any anomalies despite the text containing them (such as texts 23 or 28). Our results mostly surpass those generated by the author style algorithm, showing that sequential anomaly detection is worth consideration.</p><p>However, the disparity in precision is still not ideal, and lowering our threshold did not give us much improvement in recall compared to the decrease in precision. We believe this occurs because of short anomalous sections: our artificially constructed anomalous sections are at least 5 sentences long, which might be more than the minimum length of said anomalies. Since a lot depends on the division point detection part of the algorithm, we have decided to plot the feature importances of our classifier, shown in Figure <ref type="figure" target="#fig_4">4</ref>. As we can see, the most successful feature was F9, the mean relational word frequency comparison between the whole text and the sentence. 
The anomaly likelihood comes very close, being designed specifically to detect points of change. We can see that using Semantic Folding without a strong predictor like HTM decreases performance, as evidenced by the relative uselessness of the fingerprint comparison metrics (Euclidean, Cosine and Jaccard). The metrics that determine the most unique sentences rather than division points are also quite important in the prediction, as demonstrated by the usefulness of the cosine similarity of the Doc2Vec vectors as well as the mean relational frequency of words. In the case of the lowest frequency metric, sentences from non-anomalous parts may easily contain sentence-unique words, while sentences from anomalous parts may lack them. As for the highest frequency, while some anomalous sentences might not share the same stop words as the rest of the sentences, others might, making it not all that reliable.</p><p>We have developed a method capable of detecting intrinsic anomalies in natural text based on sequential analysis of the syntactic and semantic patterns exhibited by potentially anomalous text. The algorithm consists of a two-step process: identifying the locations of the starting and ending sentences of anomalies, and determining whether a section located between two such points is anomalous. For both of these steps we use gradient boosting to form a prediction out of various metrics extracted from the text using the Semantic Folding algorithm, the HTM network, Doc2Vec, autoencoders and metrics based on word frequencies.</p><p>The algorithm was tested and evaluated on 40 English texts with artificially inserted plagiarisms from the PAN intrinsic anomaly detection plagiarism corpus 2011. Our objective was twofold: first, to determine whether the text contains any anomalies at all and, second, to determine the exact number and position of the anomalous passages. 
The algorithm achieved an accuracy of 100 % in Task 1 and better-than-expected results in Task 2, with relatively high values of precision and accuracy depending on the text, but varying recall. The overall percentage of fully found plagiarisms within the text was 70.15 %. We found that our algorithm was able to improve on the results of Kuznetsov et al. <ref type="bibr" target="#b18">[19]</ref>. While there is much room for improvement, we believe the paradigm of continuous analysis in plagiarism detection to be an interesting and valid one that provides non-dismissible results and might be a step in the right direction when solving similar problems.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Structure of HTM Network [5].</figDesc><graphic coords="3,63.78,420.57,216.86,153.34" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: SP -creating SDR of layer based on input [5].</figDesc><graphic coords="3,314.65,80.50,216.86,138.84" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: TM -representation of a layer after determining predictive cells (predictive darker cells, white non active cells, grey active cells) in TM. [5].</figDesc><graphic coords="3,326.70,379.54,192.76,122.82" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Algorithm 2 :</head><label>2</label><figDesc>Algorithm for merging overlapping sections Data: S -List of potential anomalous sections that are defined by the index of their starting sentence and the index of their last sentence Result: S -List of potential anomalous sections without overlaps OverlappingSections ← TRUE while OverlappingSections do OverlappingSections ← FALSE for i in range(0, length(S) − 1) do for j in range(i + 1, length(S)) do if overlaps(S[i], S[j]) then OverlappingSections ← TRUE first = min(S[i].first, S[j].first) last = max(S[i].last, S[j].last) S.addSection(first, last) S.removeSection(S[i]) S.removeSection(S[j]) end end end end</figDesc></figure>
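A compact Python equivalent of the section-merging step can be written as a single pass over the sections sorted by starting index, instead of the quadratic fixed-point loop of Algorithm 2; `merge_overlapping_sections` and the `(first, last)` tuple representation are illustrative assumptions:

```python
def merge_overlapping_sections(sections):
    """Merge overlapping (first, last) sentence-index sections, indices
    inclusive, until no overlaps remain. Sorting by the starting index lets
    one forward pass reach the same fixed point as Algorithm 2."""
    merged = []
    for first, last in sorted(sections):
        if merged and first <= merged[-1][1]:
            # Overlaps the previously kept section: extend it in place
            merged[-1] = (merged[-1][0], max(merged[-1][1], last))
        else:
            merged.append((first, last))
    return merged
```

The sorted single pass runs in O(n log n) rather than the repeated O(n²) scans of the pseudocode, while producing the same overlap-free list.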
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Feature importance of the gradient boosting classifier</figDesc><graphic coords="8,63.78,486.97,216.84,124.65" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 :</head><label>1</label><figDesc>Information about 40 English texts from corpus<ref type="bibr" target="#b0">[1]</ref>.</figDesc><table><row><cell>Txt</cell><cell>Plag det</cell><cell>T1</cell><cell>Pre</cell><cell>Rec</cell><cell>Acc</cell><cell>Pre A2</cell><cell>Rec A2</cell><cell>Acc A2</cell></row><row><cell>1</cell><cell>0/0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>0.0</cell><cell>N/A</cell><cell>1.0</cell></row><row><cell>2</cell><cell>0/0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>0.0</cell><cell>N/A</cell><cell>1.0</cell></row><row><cell>3</cell><cell>6/8</cell><cell>Y</cell><cell>0.86</cell><cell>0.64</cell><cell>0.98</cell><cell>0.73</cell><cell>0.66</cell><cell>0.97</cell></row><row><cell>4</cell><cell>3/4</cell><cell>Y</cell><cell>0.97</cell><cell>0.44</cell><cell>0.98</cell><cell>0.70</cell><cell>0.15</cell><cell>0.96</cell></row><row><cell>5</cell><cell>0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>0.0</cell><cell>N/A</cell><cell>0.98</cell></row><row><cell>6</cell><cell>4/5</cell><cell>Y</cell><cell>0.96</cell><cell>0.70</cell><cell>0.98</cell><cell>1.0</cell><cell>0.24</cell><cell>0.96</cell></row><row><cell>7</cell><cell>11/16</cell><cell>Y</cell><cell>0.96</cell><cell>0.57</cell><cell>0.94</cell><cell>0.97</cell><cell>0.32</cell><cell>0.90</cell></row><row><cell>8</cell><cell>0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>0.0</cell><cell>N/A</cell><cell>1.0</cell></row><row><cell>9</cell><cell>7/13</cell><cell>Y</cell><cell>0.97</cell><cell>0.74</cell><cell>0.94</cell><cell>0.85</cell><cell>0.67</cell><cell>0.91</cell></row><row><cell>10</cell><cell>0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>0.0</cell><cell>N/A</cell><cell>0.99</cell></row><row><cell>11</cell><cell>5/11</cell><cell>Y</cell><cell>0.97</cell><ce
ll>0.50</cell><cell>0.90</cell><cell>0.91</cell><cell>0.57</cell><cell>0.91</cell></row><row><cell>12</cell><cell>0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell></row><row><cell>13</cell><cell>6/10</cell><cell>Y</cell><cell>0.97</cell><cell>0.73</cell><cell>0.93</cell><cell>1.0</cell><cell>0.07</cell><cell>0.77</cell></row><row><cell>14</cell><cell>6/6</cell><cell>Y</cell><cell>0.85</cell><cell>1.0</cell><cell>0.99</cell><cell>1.0</cell><cell>0.02</cell><cell>0.96</cell></row><row><cell>15</cell><cell>0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell></row><row><cell>16</cell><cell>8/13</cell><cell>Y</cell><cell>0.87</cell><cell>0.60</cell><cell>0.90</cell><cell>0.75</cell><cell>0.05</cell><cell>0.78</cell></row><row><cell>17</cell><cell>2/2</cell><cell>Y</cell><cell>0.81</cell><cell>1.0</cell><cell>0.99</cell><cell>1.0</cell><cell>0.08</cell><cell>0.96</cell></row><row><cell>18</cell><cell>4/4</cell><cell>Y</cell><cell>0.92</cell><cell>1.0</cell><cell>0.98</cell><cell>1.0</cell><cell>0.87</cell><cell>0.96</cell></row><row><cell>19</cell><cell>6/9</cell><cell>Y</cell><cell>0.97</cell><cell>0.54</cell><cell>0.93</cell><cell>0.98</cell><cell>0.11</cell><cell>0.87</cell></row><row><cell>20</cell><cell>13/16</cell><cell>Y</cell><cell>0.91</cell><cell>0.81</cell><cell>0.97</cell><cell>0.89</cell><cell>0.73</cell><cell>0.95</cell></row><row><cell>21</cell><cell>8/12</cell><cell>Y</cell><cell>0.96</cell><cell>0.65</cell><cell>0.97</cell><cell>0.90</cell><cell>0.69</cell><cell>0.96</cell></row><row><cell>22</cell><cell>6/6</cell><cell>Y</cell><cell>0.83</cell><cell>1.0</cell><cell>0.99</cell><cell>0.88</cell><cell>0.27</cell><cell>0.96</cell></row><row><cell>23</cell><cell>6/7</cell><cell>Y</cell><cell>0.93</cell><cell>0.94</cell><cell>0.99</cell><cell>N/A</cell><cell>0.0</cell><cell>0.95</cell></row><row><cell>24</cell><cell>0/0</cell><
cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell></row><row><cell>25</cell><cell>4/5</cell><cell>Y</cell><cell>0.87</cell><cell>0.98</cell><cell>0.99</cell><cell>1.0</cell><cell>0.21</cell><cell>0.97</cell></row><row><cell>26</cell><cell>5/12</cell><cell>Y</cell><cell>0.98</cell><cell>0.52</cell><cell>0.88</cell><cell>0.94</cell><cell>0.53</cell><cell>0.88</cell></row><row><cell>27</cell><cell>0/0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell></row><row><cell>28</cell><cell>4/5</cell><cell>Y</cell><cell>0.69</cell><cell>1.0</cell><cell>0.98</cell><cell>N/A</cell><cell>0.0</cell><cell>0.95</cell></row><row><cell>29</cell><cell>6/8</cell><cell>Y</cell><cell>0.88</cell><cell>0.72</cell><cell>0.99</cell><cell>1.0</cell><cell>0.07</cell><cell>0.96</cell></row><row><cell>30</cell><cell>3/4</cell><cell>Y</cell><cell>0.94</cell><cell>0.94</cell><cell>0.99</cell><cell>N/A</cell><cell>0.0</cell><cell>0.95</cell></row><row><cell>31</cell><cell>7/7</cell><cell>Y</cell><cell>0.94</cell><cell>1.0</cell><cell>0.99</cell><cell>1.0</cell><cell>0.18</cell><cell>0.90</cell></row><row><cell>32</cell><cell>1/3</cell><cell>Y</cell><cell>0.99</cell><cell>0.15</cell><cell>0.96</cell><cell>0.82</cell><cell>0.09</cell><cell>0.95</cell></row><row><cell>33</cell><cell>0/0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>N/A</cell><cell>0.0</cell><cell>1.0</cell></row><row><cell>34</cell><cell>0/0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>N/A</cell><cell>0.0</cell><cell>1.0</cell></row><row><cell>35</cell><cell>2/6</cell><cell>Y</cell><cell>0.94</cell><cell>0.40</cell><cell>0.86</cell><cell>0.99</cell><cell>0.20</cell><cell>0.82</cell></row><row><cell>36</cell><cell>0/0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell></row><row><cell>37</cell><
cell>8/9</cell><cell>Y</cell><cell>0.88</cell><cell>0.97</cell><cell>0.98</cell><cell>1.0</cell><cell>0.07</cell><cell>0.87</cell></row><row><cell>38</cell><cell>0/0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>0.0</cell><cell>N/A</cell><cell>1.0</cell></row><row><cell>39</cell><cell>0/0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell></row><row><cell>40</cell><cell>0/0</cell><cell>N</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell><cell>N/A</cell><cell>N/A</cell><cell>1.0</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgements. The research is supported by the Slovak Scientific Grant Agency VEGA, Grant No. 1/0177/21 "Descriptional and Computational Complexity of Automata and Algorithms".</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">PAN Plagiarism Corpus</title>
		<author>
			<persName><forename type="first">M</forename><surname>Potthast</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Stein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Eiselt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barrón-Cedeño</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<idno type="DOI">10.5281/zenodo.3250095</idno>
		<ptr target="http://www.uniweimar.de/en/media/chairs/webis/corpora/pan-pc-11/" />
		<imprint>
			<date type="published" when="2011">2011. 2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Almarimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Andrejková</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Salem</surname></persName>
		</author>
		<idno>nbn:de:0074-2046-8</idno>
		<title level="m">Proceedings of the 11th Joint Conference on Mathematics and Computer Science Eger</title>
				<meeting>the 11th Joint Conference on Mathematics and Computer Science Eger<address><addrLine>Hungary</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">May 20-22, 2016</date>
			<biblScope unit="volume">2046</biblScope>
		</imprint>
	</monogr>
	<note>Anomaly Searching in Text Sequences CEUR</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Distributed Representations of Sentences and Documents</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Le</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mikolov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 31st International Conference on Machine Learning</title>
				<meeting>the 31st International Conference on Machine Learning</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="1188" to="1196" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Semantic Folding (Theory and its Application in Semantic Fingerprinting</title>
		<author>
			<persName><forename type="first">De</forename><surname>Sousa Weber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">White paper</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="1" to="59" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Hawkins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ahmad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Purdy</surname></persName>
		</author>
		<author>
			<persName><surname>Lavin</surname></persName>
		</author>
		<ptr target="http://numenta.com/business-strategy-and-ip/.Release0.4" />
		<title level="m">A: Biological and Machine Intelligence (BAMI)</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Outlier analysis</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">C</forename><surname>Aggarwal</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-47578-3</idno>
		<ptr target="https://doi.org/10.1007/978-3-319-47578-3" />
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>Springer Science+Business Media</publisher>
			<pubPlace>New York</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Anomaly detection: A survey</title>
		<author>
			<persName><forename type="first">V</forename><surname>Chandola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Banerjee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kumar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Comput. Surv</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="issue">3</biblScope>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Identifying Semantically Deviating Outlier Documents</title>
		<author>
			<persName><forename type="first">H</forename><surname>Zhuang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ch</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Tao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kaplan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Han</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the conference on Empirical Methods in Natural Language Processing</title>
				<meeting>the conference on Empirical Methods in Natural Language Processing<address><addrLine>Copenhagen, Denmark</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">September 7-11, 2017. c 2017</date>
			<biblScope unit="page" from="2748" to="2757" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Outlier Detection for Text Data: An Extended Version</title>
		<author>
			<persName><forename type="first">R</forename><surname>Kannan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Woo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">C</forename><surname>Aggarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Park</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1701.01325v1</idno>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Why neurons have thousands of synapses, a theory of sequence memory in neocortex</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hawkins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ahmad</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Neural Circuits</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">23</biblScope>
			<biblScope unit="page" from="1" to="13" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A framework for intelligence and cortical function based on grid cells in the neocortex</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hawkins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Klukas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Purdy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ahmad</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Neural Circuits</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">F</forename><surname>Ahmad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Purdy</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1607.02480v1</idno>
		<title level="m">Real-Time Anomaly Detection for Streaming Analytics</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Unsupervised real-time anomaly detection for streaming data</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ahmad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lavina</forename></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Purdy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Agha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">267</biblScope>
			<biblScope unit="page" from="134" to="147" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A comparative study of HTM and other neural network models for online sequence learning with streaming data</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Cui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ch</forename><surname>Surpur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ahmad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hawkins</surname></persName>
		</author>
		<idno type="DOI">10.1109/IJCNN.2016.772738</idno>
	</analytic>
	<monogr>
		<title level="m">International Joint Conference on Neural Networks (IJCNN)</title>
				<imprint>
			<date type="published" when="2016">2016. 2016</date>
			<biblScope unit="page" from="1530" to="1538" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Anomaly Detection with HTM</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">J</forename><surname>Hole</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Chapter 12 in the book: Anti-fragile ICT Systems</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Recent Trends in Deep Learning Based Natural Language Processing</title>
		<author>
			<persName><forename type="first">T</forename><surname>Young</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Hazarika</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Poria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Cambria</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1708.02709v8</idno>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note>cs.CL</note>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">A Mathematical Formalization of Hierarchical Temporal Memory&apos;s Spatial Pooler</title>
		<author>
			<persName><forename type="first">J</forename><surname>Mnatzaganian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Fokoué</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kudithipudi</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1601.06116v3</idno>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Approaches for Intrinsic and External Plagiarism Detection</title>
		<author>
			<persName><forename type="first">G</forename><surname>Oberreuter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>L'huillier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ríos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Velásquez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Notebook for PAN at CLEF</title>
				<imprint>
			<date type="published" when="2011">2011. 2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Methods for intrinsic plagiarism detection and author diarization</title>
		<author>
			<persName><forename type="first">M</forename><surname>Kuznetsov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Motrenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Kuznetsova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Strijov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Notebook for PAN at CLEF</title>
				<imprint>
			<date type="published" when="2016">2016. 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Gradient boosting machines, a tutorial</title>
		<author>
			<persName><forename type="first">A</forename><surname>Natekin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Knoll</surname></persName>
		</author>
		<idno type="DOI">10.3389/fn-bot.2013.00021</idno>
	</analytic>
	<monogr>
		<title level="m">Frontiers in Neurorobotics www.frontiersin</title>
				<imprint>
			<date type="published" when="2013-12">December 2013. 2013. 2019</date>
			<biblScope unit="volume">7</biblScope>
		</imprint>
	</monogr>
	<note>Article 121</note>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">NLTK: The Natural Language Toolkit</title>
		<author>
			<persName><forename type="first">E</forename><surname>Loper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bird</surname></persName>
		</author>
		<idno>CoRR, cs.CL/0205028</idno>
		<imprint>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
