<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Topological Data Analysis for Trustworthy AI</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Victor</forename><forename type="middle">Toscano</forename><surname>Durán</surname></persName>
							<email>vtoscano@us.es</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Applied Mathematics I</orgName>
								<orgName type="institution">University of Sevilla</orgName>
								<address>
									<settlement>Sevilla</settlement>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Topological Data Analysis for Trustworthy AI</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">3A83A715A565FBB768B38FF7F1C32F98</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:38+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Artificial Intelligence</term>
					<term>Neural Networks</term>
					<term>Topological Data Analysis</term>
					<term>Time series</term>
					<term>reliability</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Artificial Intelligence (AI) is transforming industries by analyzing large amounts of data to find patterns and make decisions more efficiently than ever before. Neural networks, which are inspired by the human brain, are a key part of AI but often work in ways that are hard to understand, leading to concerns about their reliability. This doctoral research proposal, titled "Topological Data Analysis for Trustworthy AI," aims to tackle these issues by using Topological Data Analysis (TDA) and Computational Topology. The research will develop a new method to compare piecewise neural networks with ReLU activation functions using topological entropy, which could help make these networks more transparent. It will also apply TDA techniques to improve the analysis of time series data in neural networks, aiming to enhance prediction accuracy and understanding of how these networks work over time. Additionally, the study will look at applying TDA to recurrent neural networks like LSTM and potentially to Transformer models. This research aims to make AI systems more reliable and understandable, with benefits for areas like healthcare and autonomous systems. The proposal also includes plans for attending conferences and publishing research findings.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>As more data is fed into the network, it adjusts its internal parameters to optimize performance, enabling it to recognize complex patterns and make predictions with increasing accuracy.</p><p>However, alongside its tremendous potential, AI also presents significant challenges, chief among them being the issue of reliability, particularly concerning so-called "black box" algorithms <ref type="bibr" target="#b2">[3]</ref>. While neural networks excel at solving complex problems, their inner workings often remain opaque to human understanding. This lack of transparency raises concerns regarding the reliability and trustworthiness of AI systems, especially in high-stakes applications such as autonomous vehicles, medical diagnosis, and financial forecasting.</p><p>The problem of reliability in AI underscores the need for transparency and accountability in algorithmic decision-making. Efforts to address this issue include research into explainable AI, which aims to develop models that not only produce accurate results but also provide insights into the reasoning behind those decisions. By making AI systems more interpretable and understandable to human users, researchers hope to build trust and mitigate the risks associated with opaque algorithms.</p><p>Despite these challenges, the potential benefits of AI are undeniable, with far-reaching implications for virtually every sector of society. From improving healthcare outcomes and optimizing resource allocation to enhancing cybersecurity and mitigating climate change, AI offers solutions to some of humanity's most pressing problems. 
As we continue to harness the power of artificial intelligence, it is essential to remain vigilant, balancing innovation with ethical considerations and ensuring that AI serves the collective good.</p><p>In the ever-evolving landscape of artificial intelligence (AI), where innovation is the norm and breakthroughs are constant, emerging fields like Topological Data Analysis (TDA) <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref> and Computational Topology <ref type="bibr" target="#b5">[6]</ref> are gaining recognition for their potential to augment the efficiency and capabilities of neural networks and AI systems as a whole.</p><p>At its core, TDA is a branch of mathematics that leverages tools from algebraic topology to analyze the shape, structure, and connectivity of complex data sets. By applying topological principles to high-dimensional data, TDA seeks to extract meaningful insights that may be obscured by traditional statistical or geometric methods. This approach allows researchers and practitioners to uncover hidden patterns, identify critical features, and gain a deeper understanding of the underlying structure inherent in the data.</p><p>Computational Topology, on the other hand, focuses on the development and implementation of algorithms and computational techniques for solving topological problems. It bridges the gap between theoretical concepts in topology and practical applications in fields such as computer science, engineering, and data analysis. Through the use of advanced computational tools, Computational Topology enables researchers to tackle complex problems in data analysis, visualization, and machine learning <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b6">7]</ref>.</p><p>One of the most promising aspects of TDA and Computational Topology is their potential to enhance the efficiency and effectiveness of neural networks and AI algorithms. 
By incorporating topological insights into the design and training of neural networks, researchers can develop more robust and adaptive models capable of handling diverse and complex data sets. TDA techniques such as persistent homology have been successfully applied to tasks such as image recognition, natural language processing, and time-series analysis, demonstrating their efficacy in extracting meaningful features and improving classification accuracy.</p><p>In recent years, there has been a growing interest in interdisciplinary research at the intersection of TDA, Computational Topology, and artificial intelligence. Collaborative efforts between mathematicians, computer scientists, and domain experts have yielded novel approaches and techniques for solving complex problems in data analysis and machine learning. This convergence of disciplines holds great promise for advancing the capabilities of AI systems and unlocking new opportunities for innovation across a wide range of applications.</p><p>As we continue to explore the synergies between TDA, Computational Topology, and artificial intelligence, it is clear that these fields will play an increasingly important role in shaping the future of data-driven decision-making, enabling more efficient, reliable, and interpretable AI systems.</p><p>In summary, in this doctoral proposal, titled "Topological Data Analysis for Trustworthy AI", I will focus on the application of Topological Data Analysis and Computational Topology as a fundamental tool for improving the reliability of artificial intelligence in challenging contexts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>In recent years, there has been a surge of interest in integrating artificial intelligence (AI) with topological data analysis (TDA) to enhance the efficiency, robustness, and interpretability of AI systems. This section explores some of the key contributions and advancements in this interdisciplinary research area.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Enhanced Data Analysis with Topological Summaries</head><p>An emerging focus is the application of topological summaries to improve data analysis itself. These summaries are mathematical tools used in topological data analysis (TDA) to capture and characterize the intrinsic structure of complex data sets, rather than relying on traditional geometric methods. Rooted in concepts like homology and persistent homology, they capture the fundamental shapes and features of data, providing stable representations that resist noise and variations. By integrating these topological descriptors, researchers can gain deeper insights into the data, leading to more informed decisions and enhanced analysis outcomes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>TDA for Feature Extraction in AI</head><p>Topological Data Analysis (TDA) has proven to be a powerful method for extracting meaningful features from high-dimensional data, which traditional techniques often overlook. Persistent homology, a key TDA tool, captures stable topological features like connected components and loops across multiple scales, making it effective for AI tasks such as image recognition. By identifying essential features that enhance classification accuracy, TDA has been successfully applied in areas like object detection and texture classification, where the data's inherent shape is critical for distinguishing categories.</p></div>
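As an illustration of the idea described above (and not the method of any cited work), the following is a minimal sketch of 0-dimensional persistent homology for a point cloud: every component is born at scale 0 and dies when it merges into another, which for the Vietoris-Rips filtration coincides with single-linkage clustering over a minimum spanning tree. Function and variable names are mine.

```python
import math

def zero_dim_persistence(points):
    """0-dimensional persistent homology of a point cloud under the
    Vietoris-Rips filtration: each point is born at scale 0, and a
    connected component dies at the scale where it merges into another.
    Implemented as Kruskal-style union-find over sorted pairwise edges."""
    n = len(points)
    # All pairwise distances are the candidate merge scales.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    deaths = []  # one finite death per merge; one component never dies
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # two components merge at scale d
            parent[ri] = rj
            deaths.append(d)  # the merged-away component dies here
    return [(0.0, d) for d in deaths]  # (birth, death) pairs
```

For two well-separated pairs of points, the barcode shows two short bars (within-pair merges) and one long bar (the merge between the clusters), which is exactly the multi-scale feature separation persistent homology is used for.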
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Topological Representations for Neural Networks</head><p>Incorporating topological representations into neural network design and training offers promising improvements in generalization, overfitting reduction, and interpretability. Topological regularization, which imposes topological constraints during learning, helps neural networks capture essential data structures, stabilizes training, and increases resilience against adversarial attacks. Additionally, using topological insights to refine decision boundaries enhances the robustness and reliability of AI models, contributing to the development of more effective and interpretable neural networks.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Interpretable AI with TDA</head><p>The integration of Topological Data Analysis (TDA) into AI models has advanced the field of interpretable AI by making complex systems more transparent and accountable. TDA-based methods provide topological explanations for AI decisions, offering insights into how data features influence predictions, particularly in critical areas like healthcare, finance, and autonomous systems. This aligns with the goals of explainable AI (XAI), where the focus extends beyond performance to include the interpretability and trustworthiness of AI outputs.</p><p>Through these advancements, TDA not only contributes to the development of more efficient and robust AI systems but also addresses the growing demand for interpretability and transparency in AI technologies.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Research Questions, Hypotheses, and Objectives</head><p>In this project, we apply topology as a fundamental tool to enhance the reliability of neural networks in challenging contexts. I aim first to build on previous research done by my advisors, and then to apply Topological Data Analysis (TDA) techniques to analyze time series data in neural networks, aiming to improve prediction accuracy and understand temporal dynamics.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Questions</head><p>1. How can topology be leveraged to improve the reliability of neural networks in challenging contexts?</p><p>2. What role does topological entropy play in measuring similarities between piecewise neural networks using activation functions like ReLU?</p><p>3. How can TDA be extended to analyze time series data in neural networks, and what insights can be gained from this analysis?</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Hypotheses</head><p>1. Piecewise neural networks employing ReLU activation functions can be evaluated for similarity using topological entropy, leading to greater transparency in their operation.</p><p>2. The application of TDA to time series analysis in neural networks will yield valuable insights into the temporal dynamics of network behavior and improve predictive performance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Objectives</head><p>Firstly, I will extend previous research carried out by my advisors in <ref type="bibr" target="#b7">[8]</ref> on piecewise neural networks <ref type="bibr" target="#b8">[9]</ref>, particularly those using ReLU activation functions, by developing a new approach based on topological entropy to measure similarities between these networks. In addition, I will evaluate the effectiveness of the proposed approach in improving the transparency and reliability of piecewise neural networks. In summary, the research conducted by my advisors in <ref type="bibr" target="#b7">[8]</ref> will be the starting point of my thesis.</p><p>Secondly, my research will extend into the application of Topological Data Analysis (TDA) techniques to the analysis of time series data within neural networks. This will involve leveraging existing knowledge in time series analysis and integrating recent advancements in the field. A significant aspect of this part of the research will be to explore how incorporating TDA can improve prediction accuracy and provide deeper insights into temporal dynamics. To this end, I will use topological descriptors, also known as topological summaries: mathematical tools from topological data analysis that capture and characterize the underlying structure of complex data sets. Unlike traditional data analysis methods that rely on specific geometry and Euclidean metrics, topological descriptors focus on the intrinsic properties of the data space, providing a robust and stable representation in the face of noise and variations in the data. 
Additionally, I will evaluate the applicability of TDA techniques to recurrent neural networks, such as Long Short-Term Memory (LSTM) networks <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b13">14,</ref><ref type="bibr" target="#b14">15,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b18">19]</ref>.</p><p>Finally, contingent on achieving positive results and having sufficient time, there may be an opportunity to expand the research to include "Transformers" models <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b20">21]</ref>.</p><p>Moreover, as part of my professional development, I intend to attend numerous conferences to stay abreast of the latest advancements and network with peers in the field. I also plan to write articles related to my thesis and present them at conferences to contribute to the academic community.</p></div>
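A common first step when applying persistent homology to time series, as in much of the work cited above, is to convert the scalar signal into a point cloud via a sliding-window (Takens-style) delay embedding, whose shape (e.g., a loop for a periodic signal) persistent homology can then quantify. A minimal sketch, with function and parameter names of my choosing:

```python
def sliding_window_embedding(series, dim, tau):
    """Takens-style delay embedding: map a scalar time series to a
    point cloud in R^dim. Each point collects `dim` samples spaced
    `tau` steps apart; periodic signals trace closed loops, which
    1-dimensional persistent homology can then detect."""
    span = (dim - 1) * tau          # index span covered by one window
    return [
        tuple(series[i + k * tau] for k in range(dim))
        for i in range(len(series) - span)
    ]
```

The embedding dimension `dim` and delay `tau` control which temporal structure becomes geometric structure; in practice they are tuned to the signal's dominant period.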
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Research Stays</head><p>I have arranged two research stays as part of my thesis. The first will take place in October 2024 at the Institute of Electronics, Informatics and Telecommunications Engineering, National Research Council, in Genoa, under the guidance of Prof. Maurizio Mongelli, who specializes in machine learning applied to bioinformatics and cyber-physical systems. During this stay, I will focus on advancing my understanding of machine learning techniques, particularly in the context of explainable AI. The second stay is planned for Summer 2025 at Bastian Grossenbacher Rieck's laboratory at Helmholtz Munich, a leading center for machine learning research, especially in computational healthcare. There, I will work with the AIDOS Lab, under the guidance of Prof. Bastian Rieck 1 , whose research focuses on geometry and topology in machine learning with a keen interest in biomedical applications, to deepen my knowledge of topological machine learning techniques and their application in healthcare.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Research Approach and Methods</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Approach</head><p>The research approach for this study combines theoretical exploration, algorithm development, and empirical validation to investigate the application of topological data analysis (TDA) in enhancing the reliability and transparency of neural networks, particularly in challenging contexts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Methods</head><p>The methodological approach of this research involves three main phases:</p><p>• First, a theoretical exploration will be conducted through a comprehensive review of existing literature on TDA, neural networks, and topological concepts like persistent homology and topological entropy, aiming to develop new methodologies for assessing neural network reliability and enhancing interpretability. • The second phase focuses on the development of novel algorithms to apply TDA to neural network architectures, designing techniques for measuring similarities using topological summaries like persistent entropy, with particular emphasis on ReLU networks. Moreover, this phase will focus on extending TDA techniques to time series data and integrating them into neural networks for time series, such as recurrent neural networks. • Finally, empirical validation will be carried out by implementing these algorithms on real-world datasets to evaluate their effectiveness in improving the reliability, interpretability, and performance of neural networks, and by comparing the outcomes with traditional approaches.</p><p>The underlying hypothesis is that integrating TDA techniques, which offer a unique perspective on data structure and relationships, can overcome the limitations of traditional neural networks, especially in complex and high-dimensional data domains, thereby enhancing their performance and transparency, and enhancing the analysis of time series.</p></div>
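To make the persistent-entropy summary mentioned in the second phase concrete, here is a minimal sketch: persistent entropy is the Shannon entropy of the normalized bar lengths of a persistence barcode, and a toy similarity score can compare two barcodes by the difference of their entropies. This illustrates the general definition only, not the specific similarity measure developed in the cited work; the function names are mine.

```python
import math

def persistent_entropy(lifetimes):
    """Persistent entropy: Shannon entropy of the normalized bar
    lengths (death - birth) of a persistence barcode. Long bars
    dominate the distribution, so short noisy bars contribute little,
    making the summary robust to perturbations of the data."""
    total = sum(lifetimes)
    return -sum((l / total) * math.log(l / total) for l in lifetimes if l > 0)

def entropy_similarity(lifetimes_a, lifetimes_b):
    """Toy similarity score: absolute difference of persistent
    entropies. 0 means the two barcodes are indistinguishable
    under this (scale-invariant) summary."""
    return abs(persistent_entropy(lifetimes_a) - persistent_entropy(lifetimes_b))
```

Note that persistent entropy depends only on the relative bar lengths: scaling all lifetimes by a constant leaves it unchanged, which is one reason it is useful for comparing objects of different sizes.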
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Preliminary Results and Contributions</head><p>The research is currently in its early stages, focusing on a comprehensive literature review to identify relevant methodologies and theoretical frameworks for integrating Topological Data Analysis (TDA) with neural networks. While concrete results are not yet available, initial findings suggest promising avenues for enhancing AI interpretability and reliability through topological methods, establishing a strong foundation for future empirical research and experimentation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Expected Next Research Steps</head><p>We plan to refine our approach for measuring similarities between piecewise neural networks using topological entropy. This involves enhancing mathematical models that quantify the relationships between different neural network components based on their topological properties, aiming to develop a robust metric that better captures the complexity and behavior of these networks. Additionally, we will explore advanced techniques for applying topological data analysis (TDA) to time series within neural networks, with the goal of improving the predictive power and interpretability of AI models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Expected final contribution to knowledge</head><p>The expected outcome of this research is a significant contribution to the field of AI, particularly in the areas of reliability, transparency, and interpretability. By integrating Topological Data Analysis with neural networks, the research aims to produce AI systems that are not only more accurate but also more understandable to human users. This integration has the potential to revolutionize how neural networks are designed and applied, particularly in high-stakes areas where trust and transparency are paramount. Ultimately, the research aspires to bridge the gap between complex AI models and human interpretability, contributing to the development of AI systems that are both powerful and ethically sound.</p></div>			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://bastian.rieck.me/</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>Thanks to my thesis tutor, Rocío González Díaz, for her invaluable help, advice, and for this incredible opportunity, and to my thesis supervisors, Miguel Ángel Gutiérrez Naranjo and Matteo Rucco, for their guidance and support. This work was supported in part by the European Union HORIZON-CL4-2021-HUMAN-01-01 under grant agreement 101070028 (REXASI-PRO) and by TED2021-129438B-I00 / AEI/10.13039/501100011033 / Unión Europea NextGenerationEU/PRTR.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Winston</surname></persName>
		</author>
		<ptr target="https://books.google.es/books?id=b4owngEACAAJ" />
		<title level="m">Artificial Intelligence, A-W Series in Computerscience</title>
				<imprint>
			<publisher>Addison-Wesley Publishing Company</publisher>
			<date type="published" when="1992">1992</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Deep learning</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Lecun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hinton</surname></persName>
		</author>
		<idno type="DOI">10.1038/nature14539</idno>
	</analytic>
	<monogr>
		<title level="j">Nature</title>
		<imprint>
			<biblScope unit="volume">521</biblScope>
			<biblScope unit="page" from="436" to="444" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead</title>
		<author>
			<persName><forename type="first">C</forename><surname>Rudin</surname></persName>
		</author>
		<idno type="DOI">10.1038/s42256-019-0048-x</idno>
	</analytic>
	<monogr>
		<title level="j">Nature</title>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Topology and data</title>
		<author>
			<persName><forename type="first">G</forename><surname>Carlsson</surname></persName>
		</author>
		<idno type="DOI">10.1090/S0273-0979-09-01249-X</idno>
	</analytic>
	<monogr>
		<title level="j">Bulletin of the American Mathematical Society</title>
		<imprint>
			<biblScope unit="volume">46</biblScope>
			<biblScope unit="page" from="255" to="308" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">An introduction to topological data analysis: Fundamental and practical aspects for data scientists</title>
		<author>
			<persName><forename type="first">F</forename><surname>Chazal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Michel</surname></persName>
		</author>
		<idno type="DOI">10.3389/frai.2021.667963</idno>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Edelsbrunner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Harer</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-540-33259-6_7</idno>
		<title level="m">Computational Topology: An Introduction</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Deep learning: A critical appraisal</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">F</forename><surname>Marcus</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.1801.00631</idno>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A new topological entropy-based approach for measuring similarities among piecewise linear functions</title>
		<author>
			<persName><forename type="first">M</forename><surname>Rucco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Gonzalez-Diaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-J</forename><surname>Jimenez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Atienza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cristalli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Concettoni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ferrante</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Merelli</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.sigpro.2016.12.006</idno>
	</analytic>
	<monogr>
		<title level="j">Signal Processing</title>
		<imprint>
			<biblScope unit="volume">134</biblScope>
			<biblScope unit="page" from="130" to="138" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Piecewise linear neural networks and deep learning</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Tao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Xi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Suykens</surname></persName>
		</author>
		<idno type="DOI">10.1038/s43586-022-00125-7</idno>
	</analytic>
	<monogr>
		<title level="j">Nature Reviews Methods Primers</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page">42</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Persistent homology on time series</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhou</surname></persName>
		</author>
		<idno type="DOI">10.7939/R3K931F13</idno>
		<ptr target="https://doi.org/10.7939/R3K931F13" />
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">An introduction to persistent homology for time series</title>
		<author>
			<persName><forename type="first">N</forename><surname>Ravishanker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chen</surname></persName>
		</author>
		<idno type="DOI">10.1002/wics.1548</idno>
	</analytic>
	<monogr>
		<title level="j">WIREs Computational Statistics</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Dynamically maintaining the persistent homology of time series</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">C</forename><surname>Di Montesano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Edelsbrunner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Henzinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ost</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2311.01115</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Persistent homology for time series and spatial data clustering</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">M</forename><surname>Pereira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">F</forename><surname>De Mello</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.eswa.2015.04.010</idno>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="page" from="6026" to="6038" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Time series analysis using persistent homology of distance matrix</title>
		<author>
			<persName><forename type="first">T</forename><surname>Ichinomiya</surname></persName>
		</author>
		<idno type="DOI">10.1587/nolta.14.79</idno>
	</analytic>
	<monogr>
		<title level="j">Nonlinear Theory and Its Applications</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="79" to="91" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note>IEICE</note>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">A persistent homology approach to time series classification</title>
		<author>
			<persName><forename type="first">Y.-M</forename><surname>Chung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Cruse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lawson</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2003.06462</idno>
		<idno type="arXiv">arXiv:2003.06462</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Analysis of financial time series using tda: theoretical and empirical results</title>
		<author>
			<persName><forename type="first">L</forename><surname>Leaverton</surname></persName>
		</author>
		<ptr target="http://hdl.handle.net/2445/163638" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Using topological data analysis to process time-series data: A persistent homology way</title>
		<author>
			<persName><forename type="first">G</forename><surname>Ma</surname></persName>
		</author>
		<idno type="DOI">10.1088/1742-6596/1550/3/032082</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Physics: Conference Series</title>
		<imprint>
			<biblScope unit="volume">1550</biblScope>
			<biblScope unit="page">032082</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Topological data analysis and its application to time-series data analysis</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Umeda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kaneko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kikuchi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Fujitsu Scientific and Technical Journal</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="65" to="71" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Persistent homology: Functional summaries of persistence diagrams for time series analysis</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Sánchez</surname></persName>
		</author>
		<ptr target="http://hdl.handle.net/2445/181324" />
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Attention is all you need</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vaswani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shazeer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Parmar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Uszkoreit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Gomez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kaiser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Polosukhin</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.1706.03762</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 31st International Conference on Neural Information Processing Systems</title>
		<meeting>the 31st International Conference on Neural Information Processing Systems</meeting>
		<imprint>
			<publisher>Curran Associates Inc</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="6000" to="6010" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Persformer: A transformer architecture for topological machine learning</title>
		<author>
			<persName><forename type="first">R</forename><surname>Reinauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Caorsi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Berkouk</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2112.15210</idno>
		<idno type="arXiv">arXiv:2112.15210</idno>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
