<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Enhancing Interpretability in Multivariate Time Series Classification through Dimension and Feature Selection</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Zed</forename><surname>Lee</surname></persName>
							<email>zed.lee@dsv.su.se</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer and Systems Sciences</orgName>
								<orgName type="institution">Stockholm University</orgName>
								<address>
									<settlement>Stockholm</settlement>
									<country key="SE">Sweden</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Enhancing Interpretability in Multivariate Time Series Classification through Dimension and Feature Selection</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">5782D536A417DA71DA9583488551710E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:48+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Multivariate Time Series</term>
					<term>Interpretability</term>
					<term>Dimension Selection</term>
					<term>Feature Selection</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Interpretability in multivariate time series classification is crucial for understanding model decisions. However, the complexity of these classifiers often results in overwhelming feature spaces, hindering interpretability. To address this issue, we propose two novel methods: 1) Dimension selection based on segmentation of time series (DST) and 2) Feature selection based on discretization similarity (FDS). DST segments time series data and applies dimension selection to each segment, capturing distinct properties across different time ranges. FDS reduces feature redundancy by comparing discretization techniques and eliminating those with similar bin boundaries. Experiments on 24 UEA multivariate datasets demonstrate that our methods can significantly reduce the number of features while maintaining accuracy, offering a practical solution for enhancing interpretability in multivariate time series classification.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Time series datasets involve large quantities of data across multiple dimensions. The complexity of multivariate time series classification can quickly become overwhelming due to the interactions between different dimensions, which may negatively impact the classification outcome. Consequently, multivariate time series classifiers have grown increasingly complex in their model structures and feature spaces to enhance predictive performance. However, these classifiers often lack interpretability, which is both a practical challenge and an increasingly common requirement.</p><p>Interpretable time series classifiers, such as MR-SEQL <ref type="bibr" target="#b0">[1]</ref> and MR-PETSC <ref type="bibr" target="#b1">[2]</ref>, have been developed using symbolic discretization. Although these symbolic features have specific meanings and are linked to an interpretable linear classifier, several issues hinder full interpretability. First, as ensemble-based methods, both classifiers define multiple event sequence patterns for the same time points under various discretization parameter settings to create bag-of-words patterns; as a result, the same discretized pattern can correspond to inconsistent value ranges, which undermines interpretability. Z-Time <ref type="bibr" target="#b2">[3]</ref> addresses this issue by eliminating the ensemble structure and applying various discretization techniques across the time series with unique event labels, ensuring each event label corresponds to a specific value range. However, a second problem remains: the sheer number of features used by all three classifiers makes human interpretation impractical.</p><p>In this paper, we suggest that interpretability should be evaluated not only by the architecture of models and features but also by the number of features used for classification. 
While dimensionality reduction is a common approach in various machine learning tasks <ref type="bibr" target="#b3">[4]</ref>, it is not suitable for interpretability, as it distorts the original values. Initial efforts in dimension selection for multivariate time series often assume that the selected dimensions apply throughout the entire time series <ref type="bibr" target="#b4">[5]</ref>, which might not be optimal. This paper proposes two techniques that leverage previous work <ref type="bibr" target="#b4">[5]</ref>, Z-Time's segmentation properties, and multiple discretization techniques. First, we segment the time series and select different dimensions based on the properties within each segment. Second, we measure the similarity of the bins produced by different discretization techniques and remove those with the highest similarity using the elbow method.</p><p>The main contributions of this paper include:</p><p>• Novelty. We introduce the use of segmentation and discretization similarity to reduce the number of interpretable features in multivariate time series. • Effectiveness and efficiency. Our proposed techniques can reduce the number of features by up to 86% while maintaining accuracy, with an average accuracy drop of at most 9% on the UEA multivariate time series datasets <ref type="bibr" target="#b5">[6]</ref>. • Reproducibility. Our code is publicly available on our GitHub repository<ref type="foot" target="#foot_0">1</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>While many algorithms for multivariate time series classification have leveraged ensembles <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref> and deep learning techniques <ref type="bibr" target="#b8">[9]</ref>, recent attention has been directed towards interpretable time series classification. Most state-of-the-art interpretable time series classifiers utilize symbolic discretization <ref type="bibr" target="#b9">[10]</ref> to create feature spaces, combined with linear classifiers. MR-SEQL <ref type="bibr" target="#b0">[1]</ref> integrates a symbolic sequential learner with two discretization techniques: symbolic aggregate approximation (SAX) <ref type="bibr" target="#b9">[10]</ref> and symbolic Fourier approximation (SFA) <ref type="bibr" target="#b10">[11]</ref>, to form the feature space representation. Similarly, MR-PETSC <ref type="bibr" target="#b1">[2]</ref> employs standard frequent pattern mining with a relative duration constraint, instead of a sequential learner, to capture non-contiguous patterns as well as subsequences. Although both MR-SEQL and MR-PETSC can be applied to multivariate time series classification, their interpretability has been studied primarily for univariate problems, without addressing relationships between variables. The most recent work, Z-Time <ref type="bibr" target="#b2">[3]</ref>, offers the best efficiency (i.e., runtime) and effectiveness (i.e., accuracy) for multivariate time series classification. Unlike MR-PETSC and MR-SEQL, Z-Time is designed to consider the relationships between dimensions by incorporating temporal relations through temporal abstraction. Z-Time enhances interpretability by avoiding ensemble structures with multiple sliding windows and instead applying different discretization techniques, ensuring each event label has a single definition and value range. 
For feature reduction, earlier methods focused on dimension selection based on correlation <ref type="bibr" target="#b11">[12]</ref> or similarity scores <ref type="bibr" target="#b12">[13]</ref>. The most recent approach <ref type="bibr" target="#b4">[5]</ref> selects dimensions based on the prototype distance between classes, which has also been tested in <ref type="bibr" target="#b13">[14]</ref> for HIVE-COTE 2.0 <ref type="bibr" target="#b7">[8]</ref>, the most accurate classifier on the UCR dataset <ref type="bibr" target="#b5">[6]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Proposed Methods</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Background</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.1.">Multivariate time series</head><p>Let t = {𝑡 1 , . . . , 𝑡 𝑚 } represent a time series spanning 𝑚 time points. A collection of such time series forms a time series instance T = {t 1 , . . . , t 𝑑 }, consisting of 𝑑 variables or dimensions. If 𝑑 = 1, T is univariate; if 𝑑 &gt; 1, T is multivariate. Each time series instance T 𝑘 is assigned a class label 𝑦 𝑘 ∈ y, where y is a list of class labels corresponding to each instance. The goal of time series classification models is to predict these class labels correctly.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.2.">Dimension selection techniques</head><p>Recent work <ref type="bibr" target="#b4">[5]</ref> has proposed two supervised dimension selection methods, on which our suggestions build:</p><p>• Elbow class sum (ECS): For each dimension, this method calculates the pairwise distances between class centroid values and sums them into a single score. The elbow method is then applied over the dimension scores to find a cut-off point. • Elbow class pairwise (ECP): This method adds a step to ECS. Instead of summing, it applies the elbow method to the distances of each class pair separately and then takes the union of the dimensions selected for each pair.</p><p>Both methods assume that the selected dimensions span the entire time series. While ECP is regarded as the better of the two, it sometimes fails to select a smaller subset, returning the whole set of dimensions. We address this issue by suggesting a segmentation-based application.</p></div>
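As a loose illustration, ECS can be sketched in a few lines. This is a minimal sketch, assuming Euclidean distance between per-dimension class centroids and a simple maximum-gap elbow heuristic; the exact criteria in [5] may differ.

```python
import numpy as np

def elbow_cut(scores):
    """Keep the items before the 'elbow': sort scores descending and cut at
    the largest gap below the chord from the highest to the lowest score
    (a simple elbow heuristic; the criterion in the original work may differ)."""
    order = np.argsort(scores)[::-1]
    s = np.asarray(scores, dtype=float)[order]
    n = len(s)
    if n <= 2:
        return sorted(order.tolist())
    x = np.arange(n)
    chord = s[0] + (s[-1] - s[0]) * x / (n - 1)  # straight line between endpoints
    knee = max(1, int(np.argmax(chord - s)))     # index of the largest gap
    return sorted(order[:knee].tolist())

def ecs_select(X, y):
    """Elbow Class Sum (ECS) sketch: X has shape (instances, dimensions,
    timepoints). For each dimension, sum the pairwise Euclidean distances
    between class centroids, then apply the elbow cut over the scores."""
    classes = np.unique(y)
    scores = np.zeros(X.shape[1])
    for d in range(X.shape[1]):
        cents = [X[y == c, d, :].mean(axis=0) for c in classes]
        for i in range(len(cents)):
            for j in range(i + 1, len(cents)):
                scores[d] += np.linalg.norm(cents[i] - cents[j])
    return elbow_cut(scores)
```

ECP would differ only in applying `elbow_cut` to each class pair's distances separately and taking the union of the results.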
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.3.">Discretization techniques</head><p>Discretization techniques have been actively used in interpretable time series classifiers to convert time series into sets of symbols. Each time step 𝑡 𝑖 ∈ t is converted into an event 𝑒 𝑖 , creating an event sequence e. Each event value can take a unique event type 𝜖. Z-Time uses the following three techniques:</p><p>• Equal width discretization (EWD): Assuming t follows a uniform distribution, discretization boundaries are defined so that all event labels have value ranges of equal length, i.e., 𝑡 𝜖𝑎 𝑚𝑎𝑥 − 𝑡 𝜖𝑎 𝑚𝑖𝑛 = 𝑡 𝜖 𝑏 𝑚𝑎𝑥 − 𝑡 𝜖 𝑏 𝑚𝑖𝑛 <ref type="bibr" target="#b14">[15]</ref>. • Equal frequency discretization (EFD): Discretization boundaries are defined so that each event label occurs with the same frequency in e, i.e., |𝑒 𝑖 ∈ e : 𝑒 𝑖 = 𝜖 𝑎 | = |𝑒 𝑖 ∈ e : 𝑒 𝑖 = 𝜖 𝑏 | <ref type="bibr" target="#b14">[15]</ref>. • Symbolic aggregate approximation (SAX): SAX uses a window size 𝑤 and an event label size to perform both discretization and summarization. The discretization boundaries are defined assuming t follows a normal distribution <ref type="bibr" target="#b9">[10]</ref>, using the points that produce equi-sized areas under the normal distribution curve.</p></div>
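The three boundary definitions above can be made concrete in a few lines. In this sketch, `n_bins` denotes the number of event labels, and the SAX variant only computes the breakpoints, omitting the PAA windowing step:

```python
import numpy as np
from statistics import NormalDist

def ewd_bounds(t, n_bins):
    """EWD: boundaries evenly spaced between min and max, so every event
    label covers a value range of equal length."""
    edges = np.linspace(np.min(t), np.max(t), n_bins + 1)
    return list(edges[1:-1])  # interior boundaries only

def efd_bounds(t, n_bins):
    """EFD: boundaries at empirical quantiles, so every event label occurs
    with (approximately) the same frequency in the event sequence."""
    return list(np.quantile(t, [k / n_bins for k in range(1, n_bins)]))

def sax_bounds(n_bins):
    """SAX: breakpoints cutting the standard normal curve into equal-area
    regions, applied to a z-normalized series (PAA summarization omitted)."""
    return [NormalDist().inv_cdf(k / n_bins) for k in range(1, n_bins)]
```

For example, on the series 0, 1, …, 100 with four labels, both EWD and EFD place boundaries at 25, 50, and 75, while SAX always uses the normal-distribution breakpoints (±0.6745 and 0 for four labels).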
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Dimension selection based on segmentation of time series (DST)</head><p>Our assumption is that important dimensions may vary across different time ranges, whereas current methods select dimensions by treating the time series as a whole. Selecting dimensions from segments of the time series could potentially enhance performance. This approach might not be feasible for interpretable classifiers that use sliding windows and ensemble structures, but Z-Time applies segmentation to capture the distinct properties of different time periods. This method is effective for time series with many unrelated parts or where the distribution changes over time. Unlike sliding windows, which overlap over time points and discretize values within these windows, segmentation offers a more straightforward approach to interpretability since each time point is discretized only once. First, a time series instance T is divided into 𝑘 equal-length segments {T 1 , . . . , T 𝑘 }. Then, a dimension selection algorithm such as ECP or ECS is applied to each segment T 𝑖 , resulting in different dimensions being selected for each segment. When multiple time series instances are considered, the dimension selection algorithm is applied to a set of instances to ensure consistent dimension selection. Second, after segmentation, Z-Time is applied to each segment individually. This results in 𝑘 different feature sets, which are concatenated to create a single feature set for input to an interpretable linear classifier.</p><p>While segmentation has improved Z-Time's performance, it has the side effect of linearly increasing the number of features, as each feature created from each segment must be distinguishable. This necessitates an additional step to significantly reduce the number of features.</p></div>
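The two-step procedure above can be outlined as follows. This is a minimal sketch: `select_fn` stands for any dimension-selection routine such as ECS or ECP, and the final segment absorbs the remainder when 𝑚 is not divisible by 𝑘:

```python
import numpy as np

def dst_select(X, y, k, select_fn):
    """DST sketch: split every instance into k equal-length segments and run
    the dimension-selection routine on each segment independently, so each
    segment can keep a different subset of dimensions."""
    m = X.shape[2]
    seg_len = m // k
    selections = []
    for s in range(k):
        end = (s + 1) * seg_len if s < k - 1 else m  # last segment takes the remainder
        selections.append(select_fn(X[:, :, s * seg_len:end], y))
    return selections  # one dimension subset per segment
```

Z-Time would then be applied to each segment restricted to its selected dimensions, and the per-segment feature sets concatenated before classification.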
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Feature selection based on discretization similarity (FDS)</head><p>The second proposed strategy to reduce features involves finding similarities among discretization techniques. Z-Time uses three different discretizations, each with and without PAA, creating six different representations for each dimension of a time series instance. Sometimes, different techniques can produce similar bin boundaries, making it redundant to retain all of them. This method compares the bin boundaries created by each pair of discretization techniques by calculating the differences between their boundary values. After computing the average of these boundary differences, the elbow method is used to retain the techniques with significant differences.</p><p>Z-Time uses equal width discretization (EWD), equal frequency discretization (EFD), and SAX. Suppose the set of boundaries of technique 𝑖 is g 𝑖 = {𝑔 𝑖,1 , 𝑔 𝑖,2 , . . . , 𝑔 𝑖,𝑛 }. The average difference between techniques 𝑖 and 𝑗 is calculated as follows:</p><formula xml:id="formula_0">1 𝑛 𝑛 ∑︁ 𝑘=1 (𝑔 𝑖,𝑘 − 𝑔 𝑗,𝑘 ) 2</formula><p>The elbow method is then applied to identify the number of techniques with sufficiently high average differences, resulting in a subset of the original techniques. While this strategy does not reduce the number of dimensions, it significantly reduces the number of features, since in the worst case the number of features is quadratic in the number of discretization techniques. As each technique creates a different set of event labels, removing redundant techniques also enhances the overall interpretability of the classification model.</p></div>
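Under the assumption that all techniques produce the same number of boundaries 𝑛 (which holds when they share the same number of event labels), the pairwise comparison can be sketched as follows; here we only compute the per-technique scores, to which the elbow method would then be applied:

```python
import numpy as np

def fds_scores(boundary_sets):
    """FDS sketch: score each discretization technique by its average squared
    boundary difference to every other technique. Techniques with the lowest
    scores are nearly redundant with some other technique and are candidates
    for removal."""
    G = np.asarray(boundary_sets, dtype=float)   # shape: (techniques, n boundaries)
    t, n = G.shape
    scores = np.zeros(t)
    for i in range(t):
        for j in range(t):
            if i != j:
                scores[i] += np.mean((G[i] - G[j]) ** 2)
    return scores / (t - 1)
```

For instance, if two techniques yield nearly identical boundaries, both receive low scores and one of them can be dropped without losing distinct value ranges.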
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments</head><p>Our experiments aim to evaluate the effectiveness of our proposed methods in reducing the number of features created by Z-Time. We used the 26 UEA multivariate datasets with no missing values for our experiments <ref type="bibr" target="#b5">[6]</ref>. The properties of these datasets can be found in the original repository. We excluded two datasets: FaceDetection, which exceeded our memory limit of 128 GB for the chosen parameters, and PenDigits, whose series were too short to apply segmentation. We compared different combinations of the following options:</p><p>• Setting 1: Dimension selection methods (ECP, ECS) • Setting 2: Segmentation (with/without DST) • Setting 3: Feature reduction (with/without FDS) • Setting 4: Segment size (𝑘 = 2, 𝑘 = 4)</p></div>
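The four binary settings yield 2 × 2 × 2 × 2 = 16 combinations per dataset, which can be enumerated directly:

```python
from itertools import product

# Each of the four settings has two options, giving 16 combinations per dataset.
combinations = list(product(["ECP", "ECS"],   # Setting 1: dimension selection
                            [False, True],    # Setting 2: DST
                            [False, True],    # Setting 3: FDS
                            [2, 4]))          # Setting 4: segment size k
print(len(combinations))  # 16
```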
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>Relative numbers for the number of features, the number of dimensions, accuracy, and the total runtime, compared to the default setting without any feature reduction technique. For accuracy, higher is better; for all the others, lower is better. The best technique is marked in bold, while the second best is underlined. First, we observe that FDS significantly reduces the number of features, achieving an additional average reduction of 24.9% of the original features for the same setting. Without FDS, the minimum number of features is 26% of the original, but it can be further reduced to 12% with FDS. Additionally, while ECP generally shows better accuracy than ECS, ECP reduces the features to 26% of the original, whereas ECS can reduce them to 12% while maintaining the same accuracy with 𝑘 = 2.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Since Table <ref type="table">1</ref> only shows average values, it might obscure the effect of DST, as results with DST always appear inferior on average. While standard ECP and ECS maintain better accuracy on average, ECP and ECS after segmentation show better accuracy in many instances. ECP after segmentation performs better in terms of accuracy on 16 datasets, considering all different settings (FDS and the number of segments). Table <ref type="table" target="#tab_1">2</ref> shows the number of datasets where better accuracy is achieved by using DST. ECS with DST shows better accuracy than ECS without DST on 16 datasets with 𝑘 = 4 and on 13 datasets with 𝑘 = 2, which is more than half in both cases. However, with ECP, there is no meaningful improvement from applying DST. ECS after segmentation shows a significant drop on specific datasets, affecting the overall average, mainly due to an incorrect choice of the number of segments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>In this paper, we introduced two methods to enhance the interpretability of multivariate time series classifiers: 1) dimension selection based on segmentation of time series (DST) and 2) feature selection based on discretization similarity (FDS). Our experiments on 24 UEA multivariate datasets demonstrated that these methods can significantly reduce the number of features, by up to 86%, while maintaining accuracy, with an average accuracy drop of at most 9%. These methods simplify the feature space and enhance interpretability, offering a practical solution for multivariate time series classification without compromising predictive performance. Future work can explore optimizing the segmentation process with dynamic segment lengths and refining the similarity measures in FDS to enhance feature quality.</p></div>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>In total, there are 16 different combinations per dataset. While detailed results are available in our repository, we present the average values in Table 1.</figDesc><table><row><cell>Segments</cell><cell>FDS</cell><cell>Techniques</cell><cell>Features %</cell><cell>Dimension %</cell><cell>Accuracy %</cell><cell>Time %</cell></row><row><cell>2</cell><cell>FALSE</cell><cell>ECP</cell><cell>0.56</cell><cell>0.64</cell><cell>1.05</cell><cell>0.62</cell></row><row><cell>2</cell><cell>FALSE</cell><cell>ECP+DST</cell><cell>0.72</cell><cell>0.78</cell><cell>1.00</cell><cell>0.78</cell></row><row><cell>2</cell><cell>FALSE</cell><cell>ECS</cell><cell>0.26</cell><cell>0.42</cell><cell>1.02</cell><cell>0.37</cell></row><row><cell>2</cell><cell>FALSE</cell><cell>ECS+DST</cell><cell>0.31</cell><cell>0.46</cell><cell>0.95</cell><cell>0.44</cell></row><row><cell>2</cell><cell>TRUE</cell><cell>ECP</cell><cell>0.25</cell><cell>0.64</cell><cell>1.06</cell><cell>0.46</cell></row><row><cell>2</cell><cell>TRUE</cell><cell>ECP+DST</cell><cell>0.32</cell><cell>0.78</cell><cell>0.97</cell><cell>0.54</cell></row><row><cell>2</cell><cell>TRUE</cell><cell>ECS</cell><cell>0.12</cell><cell>0.42</cell><cell>1.00</cell><cell>0.27</cell></row><row><cell>2</cell><cell>TRUE</cell><cell>ECS+DST</cell><cell>0.14</cell><cell>0.46</cell><cell>0.91</cell><cell>0.32</cell></row><row><cell>4</cell><cell>FALSE</cell><cell>ECP</cell><cell>0.56</cell><cell>0.64</cell><cell>1.02</cell><cell>0.61</cell></row><row><cell>4</cell><cell>FALSE</cell><cell>ECP+DST</cell><cell>0.71</cell><cell>0.77</cell><cell>1.00</cell><cell>0.78</cell></row><row><cell>4</cell><cell>FALSE</cell><cell>ECS</cell><cell>0.27</cell><cell>0.42</cell><cell>0.98</cell><cell>0.38</cell></row><row><cell>4</cell><cell>FALSE</cell><cell>ECS+DST</cell><cell>0.30</cell><cell>0.43</cell><cell>0.97</cell><cell>0.42</cell></row><row><cell>4</cell><cell>TRUE</cell><cell>ECP</cell><cell>0.26</cell><cell>0.64</cell><cell>0.96</cell><cell>0.43</cell></row><row><cell>4</cell><cell>TRUE</cell><cell>ECP+DST</cell><cell>0.34</cell><cell>0.77</cell><cell>0.94</cell><cell>0.55</cell></row><row><cell>4</cell><cell>TRUE</cell><cell>ECS</cell><cell>0.13</cell><cell>0.42</cell><cell>0.92</cell><cell>0.26</cell></row><row><cell>4</cell><cell>TRUE</cell><cell>ECS+DST</cell><cell>0.14</cell><cell>0.43</cell><cell>0.93</cell><cell>0.31</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>The win/lose comparisons on accuracy per dataset between ECP/ECS with/without DST. The numbers indicate the number of datasets on which each setting shows the highest accuracy.</figDesc><table><row><cell cols="2">Options</cell><cell cols="3">ECP</cell><cell cols="3">ECS</cell></row><row><cell>Segments</cell><cell>FDS</cell><cell>without DST</cell><cell>tie</cell><cell>with DST</cell><cell>without DST</cell><cell>tie</cell><cell>with DST</cell></row><row><cell>2</cell><cell>TRUE</cell><cell>11</cell><cell>8</cell><cell>5</cell><cell>9</cell><cell>2</cell><cell>13</cell></row><row><cell>2</cell><cell>FALSE</cell><cell>6</cell><cell>11</cell><cell>7</cell><cell>9</cell><cell>2</cell><cell>13</cell></row><row><cell>4</cell><cell>TRUE</cell><cell>7</cell><cell>8</cell><cell>9</cell><cell>6</cell><cell>2</cell><cell>16</cell></row><row><cell>4</cell><cell>FALSE</cell><cell>7</cell><cell>10</cell><cell>7</cell><cell>7</cell><cell>3</cell><cell>14</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://github.com/zedshape/dim-reduce</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Interpretable time series classification using linear models and multi-resolution multi-domain symbolic representations</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">Le</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gsponer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Ilie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>O'Reilly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Ifrim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="1183" to="1222" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Petsc: pattern-based embedding for time series classification</title>
		<author>
			<persName><forename type="first">L</forename><surname>Feremans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Cule</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Goethals</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="1015" to="1061" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Z-time: efficient and effective interpretable multivariate time series classification</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lindgren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Papapetrou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="page" from="206" to="236" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">O S</forename><surname>Sorzano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vargas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Montano</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1403.2877</idno>
		<title level="m">A survey of dimensionality reduction techniques</title>
				<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Fast channel selection for scalable multivariate time series classification</title>
		<author>
			<persName><forename type="first">B</forename><surname>Dhariyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">L</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Ifrim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advanced Analytics and Learning on Temporal Data: 6th ECML PKDD Workshop, AALTD 2021</title>
		<title level="s">Revised Selected Papers</title>
		<meeting><address><addrLine>Bilbao, Spain</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2021-09-13">September 13, 2021. 2021</date>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="36" to="54" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Bagnall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lines</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Vickers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Keogh</surname></persName>
		</author>
		<ptr target="http://www.timeseriesclassification.com" />
		<title level="m">The uea &amp; ucr time series classification repository</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Hive-cote: The hierarchical vote collective of transformationbased ensembles for time series classification</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lines</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Taylor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bagnall</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICDM, IEEE</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1041" to="1046" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Hive-cote 2.0: a new meta ensemble for time series classification</title>
		<author>
			<persName><forename type="first">M</forename><surname>Middlehurst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Large</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Flynn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lines</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bostrom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bagnall</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">110</biblScope>
			<biblScope unit="page" from="3211" to="3243" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Inceptiontime: Finding alexnet for time series classification</title>
		<author>
			<persName><forename type="first">H</forename><surname>Ismail Fawaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lucas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Forestier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Pelletier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">F</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Weber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">I</forename><surname>Webb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Idoumghar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P.-A</forename><surname>Muller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Petitjean</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="page" from="1936" to="1962" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Experiencing sax: a novel symbolic representation of time series</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Keogh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lonardi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="107" to="144" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">The boss is concerned with time series classification in the presence of noise</title>
		<author>
			<persName><forename type="first">P</forename><surname>Schäfer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="1505" to="1530" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A feature selection method for multi-dimension time-series data</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kathirgamanathan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Cunningham</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advanced Analytics and Learning on Temporal Data: 5th ECML PKDD Workshop, AALTD 2020</title>
		<title level="s">Revised Selected Papers</title>
		<meeting><address><addrLine>Ghent, Belgium</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020-09-18">September 18, 2020. 2020</date>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="220" to="231" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Niculescu-Mizil</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2005.00259</idno>
		<title level="m">Supervised feature subset selection and feature ranking for multivariate time series without feature extraction</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Scalable classifier-agnostic channel selection for multivariate time series classification</title>
		<author>
			<persName><forename type="first">B</forename><surname>Dhariyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">Le</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Ifrim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="1010" to="1054" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">A comparative study of discretization methods for naive-bayes classifiers</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">I</forename><surname>Webb</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2002">2002. 2002</date>
			<publisher>PKAW</publisher>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
