<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">The 2nd Workshop on Deep Models and Artificial Intelligence for Defense Applications: Potentials, Theories, Practices, Tools and Risks November 11-12, 2020</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Ying</forename><surname>Zhao</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Erik</forename><surname>Blasch</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Doug</forename><surname>Lange</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Tony</forename><surname>Kendall</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Arjuna</forename><surname>Flenner</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Bonnie</forename><surname>Johnson</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Bruce</forename><surname>Nagy</surname></persName>
						</author>
						<title level="a" type="main">The 2nd Workshop on Deep Models and Artificial Intelligence for Defense Applications: Potentials, Theories, Practices, Tools and Risks November 11-12, 2020</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">DB245F0CAAAE5992082491AA10C655AD</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T20:12+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Challenges: Advancements in hardware, algorithms, and data collection are enabling unexplored defense applications of AI. Developing these applications requires overcoming several challenges. The first challenge is noisy and unstructured data. The second is that adversaries can deceive, corrupt, and camouflage true data; defense applications therefore need to evaluate bad data, detect fake data, and perform with limited data. A third challenge is mapping AI algorithms at the strategic, operational, and tactical levels to defense applications <ref type="bibr" target="#b0">[1]</ref>. During this mapping, AI applications need to address four factors: data; trust; security; and human-machine teaming. In conjunction with AI, data analytics must address the issues of agility, interoperability, and maintainability. Agility of product development covers five topics: open architectures; signal processing; systems software; autonomy via context awareness; and health monitoring. Interoperability is essential for multi-domain coordinated sensing, modeling, and instrumentation. Maintainability enables disaster operations, cyber sensemaking, and predictive maintenance. These topics were discussed through data strategy, algorithms, trust, and standards.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Data strategy:</head><p>To foster better data collection, the 2020 U.S. DoD data strategy [2] names seven desirable data attributes: visible; accessible; understandable; linked; trustworthy; interoperable; and secure. Collected training data must be secured to prevent hostile takeover and made robust against external attacks. Moreover, because some data collection, such as battle damage assessment, is expensive, the DoD needs high-fidelity 3D modeling to generate synthetic training data. The presence of adversaries and unique data requirements necessitates careful consideration of both collected and synthetic data.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithms and Technologies:</head><p>A wide variety of algorithms and their related technologies were discussed. Presenters covered (co)evolutionary algorithms, game theory, and optimization techniques. Evolutionary algorithms, which do not require gradient computation, can quickly search and evolve to find new battlespace measure/countermeasure configurations and emergent properties. Evolutionary algorithms were also inventively applied to look for tax loopholes and fixes <ref type="bibr">[7]</ref>. Counterfactual regret minimization (CFR) and AlphaZero algorithms were highlighted in four applications: AFSIM-enabled competitive wargaming simulations; Gomoku; Othello; and DARPA SAIL-ON. Lexical link analysis, an unsupervised learning algorithm, was used to improve prediction and readiness for the Navy logistics and supply enterprise. Deep learning was applied to Synthetic Aperture Radar (SAR) images. Interactive machine learning (IML), in a human-machine shared environment, learns human tasks. Lastly, a problem still in need of an algorithmic solution was presented: with implicitly self-similar structures such as fractals, order may emerge from a randomly generated but constrained topology <ref type="bibr" target="#b2">[4]</ref>.</p><p>Many technologies were not tied to any single algorithm, such as: cyber malware detection; the attack-and-defense arms race; multi-segment asymmetrical wargames; strike mission planning; the battlespace readiness engagement matrix; and SoarTech's technology for DARPA's AlphaDogfight Trials. Two important technologies with needed applications were highlighted: trusted AI and complex system theory. The first was used to build warfighter assistants in which trusted AI serves as an automation tool. The second, complex system theory, was used to control a swarm under battlefield conditions. This technology was shown to produce millisecond topological pictures of IoT/edge devices over distributed C2/resilient communications within denied environments.</p><p>Trust: Mission execution requires trust between AI-enabled and human team members. Given this importance, many of the talks addressed trust. The DARPA XAI program discovered that users understand and trust models that match their expectations; they even prefer satisfying models over high-performing models. Also, Lipton et al. discussed ten model interpretability dimensions of trust <ref type="bibr" target="#b1">[3]</ref>. The diversity of discussions on trust demonstrated that the defense community needs teams that include experts on algorithms, design guidance, and best practices to assess measures of trust concepts in AI.</p></div>
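The gradient-free search described above can be illustrated with a minimal sketch (an illustrative toy only, not any presenter's implementation; the bit-string "configuration" and the matching-score fitness are invented stand-ins for a battlespace measure/countermeasure evaluation):

```python
import random

def evolve(fitness, length=20, pop=30, generations=200, seed=0):
    """Minimal gradient-free evolutionary search over bit-string
    configurations: mutate the incumbent, keep any child that scores
    at least as well (accepting ties lets the search keep drifting)."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(length)]
    best_fit = fitness(best)
    for _ in range(generations):
        for _ in range(pop):
            # Flip each bit independently with probability 1/length.
            child = [b ^ (rng.random() < 1.0 / length) for b in best]
            f = fitness(child)
            if f >= best_fit:
                best, best_fit = child, f
    return best, best_fit

# Hypothetical stand-in for scoring a configuration: count of bits
# matching a hidden "effective" configuration.
TARGET = [1, 0] * 10
def score(cfg):
    return sum(c == t for c, t in zip(cfg, TARGET))

best, best_fit = evolve(score)
```

Note that only fitness evaluations are needed; no gradient of `score` exists or is used, which is why such methods suit discontinuous configuration spaces.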
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Standards:</head><p>As proposed by the Joint AI Center (JAIC), AI systems need standards for responsible, equitable, traceable, reliable, and governable AI <ref type="bibr">[5]</ref>. The Multisource AI Scorecard Table (MAST) supports test and evaluation and may be viewed as an initial version of AI application standards. MAST connects governance, explainability, and compliance for AI enterprises [6]: under MAST, AI defense applications need to be resilient to deception/misclassification, to noisy data, and to exploitation of classifiers through known weaknesses and unanticipated attacks.</p><p>In conclusion, defense applications tend to be human-in-the-loop, where defense AI and deep models are a "force multiplier" supporting moral, ethical, and legal human decision making.</p></div>		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">AI Superpowers: China, Silicon Valley, and the New World Order</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">F</forename><surname>Lee</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>Houghton Mifflin Harcourt</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">The Mythos of Model Interpretability</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Lipton</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/1606.03490" />
	</analytic>
	<monogr>
		<title level="m">Presented at 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016)</title>
				<meeting><address><addrLine>New York, NY</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Leveraging deterministic chaos to mitigate combinatorial explosions</title>
		<author>
			<persName><forename type="first">J</forename><surname>Schaff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Engineering emergence: a modeling and simulation approach</title>
				<editor>
			<persName><forename type="first">L</forename><forename type="middle">B</forename><surname>Rainey</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Jamshidi</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note>CRC Press ©2019</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
