<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Generate Explanations for Time-series classification by ChatGPT</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Zhechang</forename><surname>Xue</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Karlsruhe Institute of Technology</orgName>
								<address>
									<addrLine>Kaiserstraße 12</addrLine>
									<postCode>76131</postCode>
									<settlement>Karlsruhe</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Yiran</forename><surname>Huang</surname></persName>
							<email>yhuang@teco.edu</email>
							<affiliation key="aff1">
								<orgName type="department">Telecooperation Office</orgName>
								<orgName type="institution">Karlsruhe Institute of Technology</orgName>
								<address>
									<addrLine>Vincenz-Prießnitz-Straße 1</addrLine>
									<postCode>76131</postCode>
									<settlement>Karlsruhe</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Hongnan</forename><surname>Ma</surname></persName>
							<affiliation key="aff2">
								<orgName type="department" key="dep1">School of Computer Science</orgName>
								<orgName type="department" key="dep2">Merchant Venturers Building</orgName>
								<address>
									<addrLine>75 Woodland Road</addrLine>
									<postCode>BS8 1TH</postCode>
									<settlement>Bristol</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michael</forename><surname>Beigl</surname></persName>
							<email>michael@teco.edu</email>
							<affiliation key="aff3">
								<orgName type="department">Telecooperation Office</orgName>
								<orgName type="institution">Karlsruhe Institute of Technology</orgName>
								<address>
									<addrLine>Vincenz-Prießnitz-Straße 1</addrLine>
									<postCode>76131</postCode>
									<settlement>Karlsruhe</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Generate Explanations for Time-series classification by ChatGPT</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">C74C4C2FAC83DF4F0335FCF71136C797</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:37+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Time-series classification</term>
					<term>ChatGPT</term>
					<term>Explainability</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>With the development of machine learning, the concept of explainability has gained increasing significance. It plays a crucial role in instilling trust among clients regarding the results generated by AI systems. Traditionally, researchers have relied on feature importance to explain why AI produces certain outcomes. However, this method has limitations: even though documentation introduces sample features and describes their formulas, comprehending the implicit meaning of these features remains challenging. As a result, establishing a clear and understandable connection between features and data can be a daunting task. In this paper, we introduce a novel method for explaining time-series classification that leverages the capabilities of ChatGPT to enhance the interpretability of results and foster a deeper understanding of feature contributions within time-series data.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>With the explosion of AI technology in the last decade, the demand for explainability has increased significantly. However, most models suffer from a lack of explainability, meaning that their output cannot be interpreted or understood in a clear and easily comprehensible way. This defect can be highly crippling, as many domains involving critical decision-making (finance, medicine, etc.) need a model that can explain its decisions to human users.</p><p>Classification is one of the most common tasks in time-series data processing. Feature importance ranking (FIR) is one of the most powerful tools to help people understand why models make such decisions. It aims to measure the contributions of individual input features (variables) to the performance of a model <ref type="bibr" target="#b0">[1]</ref>. Because time-series data, unlike many other data types, often lacks variables with direct real-world significance, feature extraction becomes one of the most essential preliminary steps <ref type="bibr" target="#b1">[2]</ref>. Normally, these features are calculated by complex formulas.</p><p>However, these features are not easily understandable. Take fft_coefficient__attr_'abs'__coeff_30 as an example. This feature is generated by TSFRESH <ref type="foot" target="#foot_0">1</ref>, one of the most popular feature extraction packages. The TSFRESH documentation, which explains the meaning of every feature, states that this feature is a Fourier coefficient of the one-dimensional discrete Fourier transform for real input. However, this explanation is purely mathematical and does not easily translate into a physical interpretation in the real world.</p><p>An ideal explanation would be: "This data is classified to class [shake hand], since the feature fft_coefficient__attr_'abs'__coeff_30 has a high value, indicating the presence of a periodic pattern likely caused by a shaking motion." This explanation explicitly links the mathematical definition of the feature to a real-world motion, making it more intuitively understandable.</p><p>Given ChatGPT's ability for causal inference, we aim to utilize this ability to connect classification outcomes to easily understandable explanations. Long et al. <ref type="bibr" target="#b2">[3]</ref> showed that LLMs can accurately generate Directed Acyclic Graphs (DAGs) with proper prompts, indicating their capacity for causal reasoning. Moreover, ChatGPT has demonstrated prowess in data analysis. Xue and Salim <ref type="bibr" target="#b3">[4]</ref> conducted the first time-series forecasting with a prompt-based method and concluded that its prediction performance is better than that of the numerical-based method.</p><p>Building on these works, we find it feasible to use LLMs to analyze time-series datasets. So far, however, all LLM-based experiments on time series have addressed forecasting; there has been no research on time-series classification and its explainability. We therefore conduct the first study on this topic.</p><p>In this paper, we explore a method for generating explanations in time-series classification. Our approach uses ChatGPT to classify data samples and subsequently explain the rationale behind its classifications. It is important to note that GPT's numerical processing capabilities are comparatively inferior to numerical methods. Therefore, our goal is to trade off a small degree of accuracy in exchange for the ability to generate explanations. The code of this method is available on GitHub.
<ref type="foot" target="#foot_1">2</ref></p></div>
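To make the feature from the example above concrete, the following sketch computes it with NumPy instead of TSFRESH; the synthetic signal is our illustrative assumption, chosen so that the 30th coefficient dominates.

```python
# What fft_coefficient__attr_'abs'__coeff_30 computes, sketched with
# NumPy instead of TSFRESH: the magnitude of the 30th coefficient of
# the one-dimensional discrete Fourier transform for real input.
import numpy as np

# Synthetic window with exactly 30 oscillation cycles (illustrative).
x = np.sin(2 * np.pi * 30 * np.arange(256) / 256)
coeff_30 = abs(np.fft.rfft(x)[30])

# A strong 30th coefficient signals a periodic pattern at that
# frequency, e.g. a repeated shaking motion.
print(coeff_30)
```

For a pure sinusoid the matching bin carries magnitude N/2, which is why a periodic gesture produces a high value for this feature.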
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Works</head><p>Several traditional methods for explaining time-series classification have been proposed. Senin et al. pioneered interpretable time-series classification using SAX and vector space models <ref type="bibr" target="#b4">[5]</ref>. This approach enables the ranking of time-series patterns by importance, serving as a precursor to FIR. However, despite its interpretability, its classification performance is not ideal.</p><p>Motivated by LIME <ref type="bibr" target="#b5">[6]</ref>, a well-known approach to explaining classifiers, Guillemé et al. <ref type="bibr" target="#b6">[7]</ref> proposed the first agnostic Local Explainer For TIme Series classificaTion (LEFTIST), which provides explanations for predictions made by any time-series classifier. They showed that explanations generated by LEFTIST can help users understand the classification in easy cases.</p><p>Similarly, Torty et al. <ref type="bibr" target="#b7">[8]</ref> proposed a framework called LIMESegment, which has been demonstrated to produce more faithful and robust explanations than existing state-of-the-art adaptations of LIME for time-series classification (e.g., Neves et al. [9]).</p><p>We have found that local explanation plays a big role in time-series classification: by observing the pattern of the data, reliable explanations can be generated. We therefore combine LLMs and local explanation, making the explanations not only reliable but also understandable.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Components of prompt</head><p>The main idea of this methodology is to translate time-series data into a description of the movement. Based on this description, GPT classifies the data into a specific class and then explains the rationale behind its classification.</p><p>To elicit a comprehensive response, a well-structured prompt is essential. A prompt comprises a context and a query, and an ideal response should encompass both a classification and an explanation.</p><p>The context component contains background information and a description of the classification rules. The background part outlines the data measurement scenario, giving ChatGPT a clear understanding of the mission; this understanding helps ChatGPT grasp the context and objectives of the subsequent tasks. The description of the classification rules acts as a set of instructions derived from the training dataset, guiding ChatGPT in accurately classifying the given description of the test data sample based on its features. In the mission part of the query, a precise inquiry should be formulated requesting both the correct classification and an explanation. Additionally, the desired answer format should be specified to enhance readability; in particular, ChatGPT must be instructed to refrain from code-based responses and to use textual explanations instead.</p><p>In the following, we detail the methodology for generating descriptions of classification rules and test data samples.</p></div>
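The prompt structure described above can be sketched as a simple template; the wording of the background, rule, and mission strings below is illustrative, not the exact prompt used in this work.

```python
def build_prompt(background, rules, sample_description, classes):
    """Assemble the context (background + classification rules) and the
    query (test sample description + mission) into one prompt string."""
    context = (
        f"Background: {background}\n"
        f"Classification rules: {rules}\n"
    )
    mission = (
        f"Classify the sample into one of {classes} and explain your "
        "reasoning in plain text. Answer with [Class: NAME] followed by "
        "the explanation. Do not answer with code."
    )
    return context + f"Test data sample: {sample_description}\n" + mission

# Illustrative values following the running example of this paper.
prompt = build_prompt(
    background="Accelerometer data (x-axis) of hand gestures.",
    rules="If there are few direction changes and a big decreasing trend, "
          "then class 'move to left'.",
    sample_description="There are a lot of changes in the direction and a "
                       "slight decreasing trend.",
    classes=["shake hand", "move to left", "move to right"],
)
print(prompt)
```

The explicit "Do not answer with code" line reflects the formatting instruction discussed above.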
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Extract classification rules</head><p>Utilizing data descriptions in the methodology, rather than directly inputting raw time-series data into ChatGPT, offers several advantages. Firstly, it overcomes the limitation of input length, as raw data often exceeds ChatGPT's input capacity. Secondly, ChatGPT may struggle with understanding multi-digit numbers, which can lead to inaccuracies in analysis and classification <ref type="bibr" target="#b9">[10]</ref>. Therefore, a method to describe time-series data effectively becomes crucial, and generating classification rules from the training dataset serves as a viable approach to address these challenges.</p><p>The process of extracting classification rules is outlined in Fig. <ref type="figure" target="#fig_0">1</ref>. Describing data using features provides a more comprehensive understanding than using only numerical values: features add context and meaning to the data, allowing for a deeper analysis and interpretation of its characteristics. In the initial step, TSFRESH <ref type="bibr" target="#b10">[11]</ref> is utilized to construct a feature table comprising hundreds of features from the training dataset. Since an explanation based on hundreds of features would be perplexing, the features generated by TSFRESH are then ranked using a decision tree, and only the top-k features are retained. Subsequently, the reduced feature table is used to train a decision tree model. Finally, classification rules are extracted from the decision tree and translated into textual form, offering guidance for ChatGPT to accurately classify data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Generate lookup table</head><p>To ensure the generation of accurate descriptions for both classification rules and test data samples, a lookup table is necessary. This table serves as a reference point, providing the meanings of the various features used in the classification rules and test data. With the lookup table in place, translating data into descriptions becomes more precise and consistent, enhancing the overall accuracy of the generated descriptions. The table consists of four columns: 'Feature name', 'Meaning', 'Type', and 'Value'. An example of the lookup table generated from classification rules is presented in Fig. <ref type="figure">2</ref>. "Feature name" is the name assigned to a specific feature. "Meaning" is its physical interpretation, providing a contextual understanding of its role in the explanation. "Type" distinguishes whether the meaning associated with a feature is a noun or an adjective, aiding in structuring the description appropriately. "Value" quantifies the extent or significance of the meaning associated with a feature, providing additional detail about its impact or amount.</p><p>As shown in Fig. <ref type="figure">2</ref>, a single classification rule comprises multiple features, many of which are not readily understandable even with references from TSFRESH's documentation. For example, number_crossing_0 is explained as the number of crossings, which still lacks a clear physical interpretation.</p><p>To address this issue, we ask ChatGPT to explain these features. The question and ChatGPT's answer are detailed in Fig. <ref type="figure">3</ref>. ChatGPT answers that a large value of this feature implies frequent changes in direction or acceleration, demonstrating that ChatGPT can provide concrete and understandable explanations for features.</p><p>As shown in Fig.
<ref type="figure">2</ref>, the meanings of the features are manually selected from the answers provided by ChatGPT.</p><p>Lastly, the feature values are also recorded directly, forming the basis for determining how to describe the features.</p></div>
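In code, the lookup table can be represented as a simple mapping; the number_crossing_0 row mirrors the example in Fig. 2, while the second row is an illustrative assumption using the other feature named in this paper.

```python
# Minimal lookup table with the four columns described above.
lookup_table = {
    "number_crossing_0": {
        "meaning": "changes in the direction",
        "type": "noun",       # counted, so described with "A lot of"/"Few"
        "value": 6.0,         # threshold taken from the classification rule
    },
    'agg_linear_trend__attr_"intercept"_': {
        "meaning": "decreasing trend",   # illustrative assumption
        "type": "adjective",  # graded, so described with "Big"/"Slight"
        "value": 0.4,
    },
}
```

The "value" field is what each feature of a test sample is compared against when the description is generated.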
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Translate data to description</head><p>After constructing the lookup table, data can be translated into descriptions. The "Type" and "Value" columns of the lookup table determine how the meaning associated with each feature is described.</p><p>When the count of a feature is the crucial consideration, it is defined as a noun, and phrases such as "A lot of" or "Few" are used to describe it: if the feature value of the provided data exceeds the corresponding value in the lookup table, the phrase "A lot of" is used; otherwise, "Few" is used. Similarly, when the extent of the feature's impact or significance is emphasized, it is defined as an adjective and described with terms like "Big" or "Slight".</p><p>For classification rules, consider the rule in Fig. <ref type="figure" target="#fig_1">4</ref> as an example: "if number_crossing_0 &lt; 6.0, and agg_linear_trend__attr_"intercept"_<ref type="foot" target="#foot_2">3</ref> &gt; 0.4, then class: 3". The program automatically finds the corresponding meanings of the features in the lookup table and substitutes them. The translated rule becomes: "If there are few direction changes and a big decreasing trend, then class 3."</p><p>For a test data sample, in the example in Fig. <ref type="figure" target="#fig_1">4</ref>, number_crossing_0 = 9.0 is translated into "A lot of changes in the direction", since 9.0 exceeds 6.0. Ultimately, the description of the test sample is assembled from the meanings of its features.</p><p>Finally, the background, the description of the classification rules, the description of the test data sample, and the mission are sent to ChatGPT, which classifies the data and provides explanations for its decision.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 3:</head><p>An example of asking ChatGPT to explain a given feature. ChatGPT: A significant value of "number_crossing_0" in the context of accelerometer measurements on the x-axis typically indicates a pattern of rapid and frequent changes in direction or acceleration. This could suggest that the object or system being measured is undergoing dynamic and possibly complex movements, such as sudden stops and starts, sharp turns, or oscillations. Such patterns are often associated with activities like vehicular motion, sports activities, or machinery operations where rapid changes in movement direction or intensity occur frequently.</p></div>
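The noun/adjective translation just described can be sketched as a small function; describe_feature and the exact phrasing are our illustrative assumptions, not the implementation released with this paper.

```python
def describe_feature(name, value, lookup_table):
    """Translate one feature value into a phrase, following the
    noun/adjective rules described above."""
    entry = lookup_table[name]
    exceeds = value > entry["value"]   # compare against the table threshold
    if entry["type"] == "noun":
        quantifier = "a lot of" if exceeds else "few"
    else:  # adjective
        quantifier = "a big" if exceeds else "a slight"
    return f"{quantifier} {entry['meaning']}"

# Lookup-table rows as in Fig. 2 (the adjective row is illustrative).
lookup_table = {
    "number_crossing_0": {"meaning": "changes in the direction",
                          "type": "noun", "value": 6.0},
    'agg_linear_trend__attr_"intercept"_': {"meaning": "decreasing trend",
                                            "type": "adjective", "value": 0.4},
}
# 9.0 exceeds the threshold 6.0, so the phrase uses "a lot of".
print(describe_feature("number_crossing_0", 9.0, lookup_table))
```

Joining the phrases for all top-k features yields the test-sample description that is inserted into the prompt.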
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Evaluation</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Experiment Setup</head><p>Dataset Benchmark We use human activity recognition datasets because their explanations are intuitively understandable. For example, from a sample with a periodic pattern we can infer a periodic activity, which indicates a shaking motion. In this paper, we use 3 datasets:</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>AllGestureWiimoteX<ref type="foot" target="#foot_3">4</ref>, Basketball motion<ref type="foot" target="#foot_4">5</ref>, and HMP<ref type="foot" target="#foot_5">6</ref>. For each dataset, we randomly selected three classes.</p><p>Evaluation Metric To assess the performance of the classification models, we employ accuracy, which evaluates a model's capability to correctly classify the provided samples. Accuracy is the number of correctly classified test data samples divided by the total number of samples.</p><p>Models We conducted evaluations across three models: GPT-3.5-0301, GPT-3.5-0613, and GPT-4-0613.</p><p>Baseline We select a decision tree as the baseline. We apply the feature extraction package TSFRESH to the training samples, select the top 3 features of each class, and let the decision tree classify the resulting feature table. The baseline accuracy is 68.33% on AllGestureWiimoteX, 66.66% on Basketball, and 100% on HMP.</p></div>
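The accuracy metric used above amounts to a one-line computation; the class names in the example are from this paper, the specific values are illustrative.

```python
def accuracy(predicted, actual):
    """Fraction of test samples whose predicted class matches the label."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Example: 2 of 3 samples classified correctly.
score = accuracy(["shake hand", "move to left", "shake hand"],
                 ["shake hand", "move to left", "move to right"])
print(score)
```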
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Result</head><p>The results are shown in Table <ref type="table" target="#tab_2">1</ref>. The accuracies are all close to the baseline, so we conclude that our method generalizes across human activity recognition datasets.</p><p>Moreover, by comparing the GPT models, we observe that the performance of our method improves as the models advance: on all datasets, GPT-4 achieves higher accuracy than GPT-3.5-turbo-0613, which in turn performs better than GPT-3.5-turbo-0301.</p><p>A representative explanation is presented in Fig. <ref type="figure" target="#fig_2">5</ref>, where we can see that ChatGPT classifies a shaking movement and the explanation is intuitively understandable.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Limitation</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>3-class classification</head><p>Originally, the three datasets used in the experiment contained more classes. However, given our emphasis on ChatGPT's ability to generate explanations, we opted for a lightweight experiment and randomly selected three classes from each dataset for analysis.</p><p>Select meaning manually In Section 3.3, the meanings of the features are manually selected. An alternative approach could involve training a model to automatically extract keywords from ChatGPT's answers regarding the meanings of these features.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion and future work</head><p>In this paper, we have introduced a method to generate explanations for time-series classification, exploring the application of LLMs in the field of human activity recognition. We have shown that explanations can be generated by our method and that the classification achieves high accuracy. Moreover, human evaluation shows that the explanations generated by this method are understandable, meaning they are close to human cognitive habits. Furthermore, our pipeline generalizes to other human activity recognition datasets, since all datasets yield results with high accuracy. We have also shown that the performance of our pipeline improves with the advancement of the models; so far, GPT-4 has proved better than GPT-3.5.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Extract classification rules from the train dataset</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Translate data to description</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Explanation of shake hand. ChatGPT: According to the given descriptions, your data sample seems to fall into two categories - 'shake hand' due to the lots of changes in direction, and 'move to left' due to a big decreasing trend and few increasing values. However, the identification of movement is often prioritized by the most distinctive feature that clearly separates one from the others. The 'shake hand' gesture is most prominently characterized by lots of changes in the direction. This aspect, along with the decreasing trend and few increasing values, may suggest that the hand movement involved turning the hand left and right repeatedly, while gradually dropping it down a bit - a very likely movement pattern for a shaking hand - thus falling into the 'shake hand' category. On the other hand, the 'move to left' gesture is characterized by few changes in the direction and a big decreasing trend. Even though these aspects are present in the data sample, the lots of changes in direction do not match the key characteristic of 'move to left', i.e., 'few changes in the direction'. Hence, it is more accurate to categorize this data sample as 'shake hand'. So, to summarize: [Class: shake hand].</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Raw train dataset Feature table Feature table with top-k features</head><label></label><figDesc></figDesc><table><row><cell></cell><cell></cell><cell>Decision Tree</cell><cell></cell><cell>Classification Rules</cell></row><row><cell>TSFRESH</cell><cell>Rank</cell><cell>Classify</cell><cell>Transform</cell><cell>If feature_1 &gt; 6.0 &amp;</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell>feature_2 &gt; 0.233,</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell>then: Class 2</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell>......</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Figure 2:</head><label></label><figDesc>The pipeline of generating the lookup table. User: number_crossing_0 is a time series feature, what can you infer from this feature if this value is big during a measurement of accelerometer on x-axis. Give me only one most important pattern.</figDesc><table><row><cell>Feature name</cell><cell>Meaning</cell><cell>Type</cell><cell>Value</cell></row><row><cell>number_crossing_0</cell><cell>Direction changes</cell><cell>Noun</cell><cell>6.0</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1</head><label>1</label><figDesc>The accuracy of experiments on 3 datasets</figDesc><table><row><cell></cell><cell>Motion</cell><cell cols="4">Baseline GPT-3.5-0301 GPT-3.5-0613 GPT-4</cell></row><row><cell></cell><cell>Shake hand</cell><cell>75%</cell><cell>66%</cell><cell>67%</cell><cell>70%</cell></row><row><cell>AllGesture</cell><cell>Move to left Move to right</cell><cell>50% 80%</cell><cell>50% 79%</cell><cell>52% 78%</cell><cell>48% 80%</cell></row><row><cell></cell><cell>Total</cell><cell>68.33%</cell><cell>65%</cell><cell>65.67%</cell><cell>66%</cell></row><row><cell></cell><cell>Dribble</cell><cell>100%</cell><cell>100%</cell><cell>100%</cell><cell>100%</cell></row><row><cell>Basketball</cell><cell>Hold Pass</cell><cell>100% 0%</cell><cell>100% 0%</cell><cell>100% 0%</cell><cell>100% 0%</cell></row><row><cell></cell><cell>Total</cell><cell>66.66%</cell><cell>66.66%</cell><cell>66.66%</cell><cell>66.66%</cell></row><row><cell></cell><cell>Comb hair</cell><cell>100%</cell><cell>24%</cell><cell>80%</cell><cell>92%</cell></row><row><cell>HMP</cell><cell>Descend stair Lie down bed</cell><cell>100% 100%</cell><cell>100% 100%</cell><cell>100% 100%</cell><cell>100% 100%</cell></row><row><cell></cell><cell>Total</cell><cell>100%</cell><cell>74.67%</cell><cell>93%</cell><cell>97.33%</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://tsfresh.readthedocs.io/en/latest/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://github.com/lab992/Generate_explanations_for_classification_by_ChatGPT</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">This feature indicates a decreasing trend.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">https://www.timeseriesclassification.com/description.php?Dataset=AllGestureWiimoteX</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">https://archive.ics.uci.edu/dataset/587/basketball+dataset</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5">https://github.com/wchill/HMP_Dataset</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgment</head><p>This work was partially funded by the Ministry of Science, Research and the Arts Baden-Württemberg as part of SDSC-BW and by the German Federal Ministry of Education and Research as part of SDI-C (Grant 01IS19030A).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Explaining anomalies detected by autoencoders using shapley additive explanations</title>
		<author>
			<persName><forename type="first">L</forename><surname>Antwarg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Miller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Shapira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Rokach</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert systems with applications</title>
		<imprint>
			<biblScope unit="volume">186</biblScope>
			<biblScope unit="page">115736</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Tsfel: Time series feature extraction library</title>
		<author>
			<persName><forename type="first">M</forename><surname>Barandas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Folgado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Fernandes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Santos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Abreu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bota</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Schultz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Gamboa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">SoftwareX</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page">100456</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Can large language models build causal graphs?</title>
		<author>
			<persName><forename type="first">S</forename><surname>Long</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Schuster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Piché</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Montreal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Research</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2303.05279</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Promptcast: A new prompt-based learning paradigm for time series forecasting</title>
		<author>
			<persName><forename type="first">H</forename><surname>Xue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">D</forename><surname>Salim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Knowledge and Data Engineering</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Sax-vsm: Interpretable time series classification using sax and vector space model</title>
		<author>
			<persName><forename type="first">P</forename><surname>Senin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Malinchik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2013 IEEE 13th international conference on data mining</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="1175" to="1180" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">&quot;Why should I trust you?&quot; Explaining the predictions of any classifier</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guestrin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</title>
				<meeting>the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1135" to="1144" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Agnostic local explanation for time series classification</title>
		<author>
			<persName><forename type="first">M</forename><surname>Guillemé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Masson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Rozé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Termier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 31st international conference on tools with artificial intelligence (ICTAI)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="432" to="439" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Limesegment: Meaningful, realistic time series explanations</title>
		<author>
			<persName><forename type="first">T</forename><surname>Sivill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Flach</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Artificial Intelligence and Statistics</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="3418" to="3433" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Interpretable heartbeat classification using local model-agnostic explanations on ecgs</title>
		<author>
			<persName><forename type="first">I</forename><surname>Neves</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Folgado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Santos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Barandas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Campagner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ronzio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Cabitza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Gamboa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers in Biology and Medicine</title>
		<imprint>
			<biblScope unit="volume">133</biblScope>
			<biblScope unit="page">104393</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Does chatgpt comprehend the place value in numbers when solving math word problems</title>
		<author>
			<persName><forename type="first">J</forename><surname>An</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Gweon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Workshop &quot;Towards the Future of AI-augmented Human Tutoring in Math Learning&quot; co-located with the 24th International Conference on Artificial Intelligence in Education (AIED 2023)</title>
				<meeting>the Workshop &quot;Towards the Future of AI-augmented Human Tutoring in Math Learning&quot; co-located with the 24th International Conference on Artificial Intelligence in Education (AIED 2023)<address><addrLine>Tokyo, Japan</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">3491</biblScope>
			<biblScope unit="page" from="49" to="58" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Time series feature extraction on basis of scalable hypothesis tests (tsfresh-a python package)</title>
		<author>
			<persName><forename type="first">M</forename><surname>Christ</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Braun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Neuffer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">W</forename><surname>Kempa-Liehr</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">307</biblScope>
			<biblScope unit="page" from="72" to="77" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
