<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Method of Differential Measurement to Locate the Sound Event Epicenter</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Hermann</forename><surname>Schloss</surname></persName>
							<email>schloss@syssoft.uni-trier.de</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Trier</orgName>
								<address>
									<settlement>Trier</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Gennadiy</forename><surname>Poryev</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>64/13, Volodymyrs&apos;ka str</addrLine>
									<postCode>01601</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A Method of Differential Measurement to Locate the Sound Event Epicenter</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">55A6B6E5FEA1D248E3FE271A19CD7F8C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:51+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>sound propagation</term>
					<term>emergency services</term>
					<term>spatial sound survey</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The focus of this article is an innovative technique designed to accurately determine the geographical coordinates of the epicenter of loud sound events (LSEs), which can include incidents such as large structural collapses, munition depot explosions, artillery and missile strikes and more. To pinpoint the location, the technique involves strategically positioning a minimum of three sound sensor units in the field. These units must have predetermined or already-known geographical coordinates and should be equipped with precision synchronized clocks. By analyzing the variation in the time it takes for the sound wave to arrive at each of these sensor units, it becomes feasible to compute the exact location of the epicenter without having prior knowledge about its distance or direction.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Historically, locating sound events has been a challenge, especially when direct observation or measurement of the LSE is neither feasible nor practical <ref type="bibr" target="#b2">[3]</ref>. Solutions to this challenge encompass a wide array of techniques, typically leveraging a mix of hardware and software methodologies. In specific contexts, such as military operations, the tools used can be quite specialized and advanced <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. Often, these solutions integrate directional sound sensor units <ref type="bibr" target="#b3">[4]</ref>, advanced signal filtering <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7]</ref>, and even seismic or underwater <ref type="bibr" target="#b7">[8]</ref> activity sensors to provide additional data for analysis and to enhance precision.</p><p>Our research, however, proposes a divergent strategy. We propose a practical system for locating LSEs, one that can be readily deployed in the field using specialized sensor units built from easily accessible hardware. Examples of such hardware include commonly available platforms like the Raspberry Pi and its numerous analogues, or even conventional consumer-grade smartphones and tablets. This approach not only democratizes access to such a system but also significantly reduces the costs associated with development, production, deployment and overall operation.</p><p>It is crucial to clarify a few points at this juncture. First, this paper will not delve into the intricate details of LSE signal detection using digital signal processing and recognition <ref type="bibr" target="#b8">[9]</ref>. The model that we envision operates on the premise that the signal has been efficiently detected, filtered, and extracted. 
Hence, every sensor unit in the system is assumed to be capable of reporting the exact timestamp of the LSE incident to a central control node.</p><p>Additionally, methods for achieving precise time synchronization are not discussed here either, primarily because modern global navigation satellite system (GNSS) sensors typically offer satisfactory timestamping accuracy, a feature intrinsic to their primary function.</p><p>Thirdly, this work should be considered a foray into ideas and thought experiments, not the beginning of a specific mathematical basis for a device-design framework. The final product, should it ever be implemented and deployed, may take a completely different approach to locating the LSE epicenter than the simulation modelling offered here. It may, for instance, utilize purely analytical solutions, should they be demonstrated to be more time-efficient and power-efficient software-wise. Or, in addition to simulation and/or analytical methods, it may employ machine learning techniques <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b11">12]</ref>. Also, only stationary LSEs are considered within the scope of this work, since detecting moving sound sources, especially in real time, involves completely different models <ref type="bibr" target="#b12">[13]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Materials and Methods</head><p>The first key consideration for successful LSE location is the placement of sensor units. We have considered several possibilities as to how this could be achieved. It is vital that these units are positioned at sites with known geographical coordinates. These coordinates must either be pre-set into the control node memory prior to each deployment or automatically reported to it each time a sensor is placed at a new location. An advantage of such a system is its adaptability: the sensors' locations can change between measurement sessions, provided that the control node is promptly updated with their new coordinates. Notably, keeping a GNSS sensor stationary for an extended period of time often refines the accuracy of its locational readings. To sum up, we have in mind the following options:</p><p>• Static, ground-based placement: the simplest method is a stationary setup. Here, on-ground technicians methodically install sensors, subsequently documenting their precise coordinates using readily available consumer-grade GNSS devices. Modern conveniences like the GPS/GALILEO modules commonly found in today's mobile devices make this option exceedingly cost-effective due to the minimal need for supplementary hardware investment. Such modules are also readily available on the market separately.</p><p>• Mobile deployment, whereby sensors are placed on a non-stationary vehicle. This should greatly expand the flexibility of the sensor configuration, providing the model with more reliable data and the ability to adapt to changing tactical situations, thus increasing the responsiveness of the system in general. 
In this scenario, each sensor unit may obtain its own geographical coordinates either through a connected specialized GNSS module or through the GNSS navigator device of the vehicle it is placed on.</p><p>• Overhead aerial deployment via UAVs: for areas that are challenging to access or pose significant security risks, deploying sensors from airborne platforms becomes an attractive proposition. Drones or UAVs can be dispatched to drop sensors over these terrains. This mode, while innovative, requires robust components. The sensors, their enclosures and all the electronics inside need to be structurally strong and durable, equipped with a self-contained GNSS module, long-lasting batteries, and a failsafe mechanism: a security protocol designed to wipe all system-critical data to prevent potential security breaches should a unit fall into the hands of an adversary or a third party. Fall-retardation devices also need to be considered, depending on the projected deployment scenario.</p><p>Another key consideration is the spread and ranging scope of the sensor placement, which is important because of how the model works, as discussed below. The idea is that at least three placed sensors form a triangle on the Earth's surface. Hence, the general area of the anticipated LSEs should be located at a distance comparable in order of magnitude to the dimensions of said triangle, and, whenever possible, the sensors should be placed high enough above the surface to have a direct line of sight to the area where an LSE is anticipated.</p><p>The obvious prerequisite for choosing sensor unit locations is connectivity, specifically the ability of the sensor unit to send its coordinates and LSE data to the control node. 
A variety of approaches may be used to that effect, from equipping the sensor node with a cellular data uplink wherever cellular coverage allows for it to, in the case of stationary placement in an urban environment, utilizing WiFi or even ad-hoc wired Internet connectivity. A dedicated radio trunking channel is also possible for locations far from developed urban infrastructure. In the case of automotive deployment, a satellite data uplink such as Starlink is also a viable solution.</p><p>One important caveat that operators must be wary of is the potential pitfall of a linear sensor arrangement, that is, when sensors are arranged in such a way that a straight line can be drawn with minimal distance from it to each sensor. Such a geometric layout can increase the probability of erroneous readings, leading to false positives and uncertainty in data interpretation. As a rule of thumb, the line connecting the sensors should be as "broken" as possible, with apex angles close to or smaller than a straight angle. It is our belief that this layout potentially gives the most accurate result possible; however, proving this mathematically is a subject for further research.</p><p>In essence, the strategic placement and arrangement of sound sensors are cornerstones of efficient LSE detection. By merging technological finesse with tactical pragmatism, it is possible to devise a system that strikes a balance between precision and real-time adaptability.</p></div>
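The linear-arrangement caveat above can be checked automatically before a measurement session. The following Go sketch flags a near-collinear sensor layout; it is a minimal illustration and not code from our model: the Point type, the flat-Earth projection and the flatness threshold are all assumptions made for this example.

```go
package main

import "math"

// Point holds geographical coordinates in degrees. The type name and
// the flatness threshold used below are assumptions for this example.
type Point struct{ Lat, Lon float64 }

// toMeters projects a point onto a local tangent plane around a
// reference point; adequate for sensor spreads of tens of kilometers.
func toMeters(p, ref Point) (x, y float64) {
	const mPerDegLat = 111320.0 // approx. meters per degree of latitude
	x = (p.Lon - ref.Lon) * mPerDegLat * math.Cos(ref.Lat*math.Pi/180)
	y = (p.Lat - ref.Lat) * mPerDegLat
	return
}

// NearlyCollinear reports whether the triangle formed by three sensors
// is dangerously "flat": twice its area, normalized by the square of
// its longest side, falls below the given threshold.
func NearlyCollinear(s0, s1, s2 Point, threshold float64) bool {
	x1, y1 := toMeters(s1, s0)
	x2, y2 := toMeters(s2, s0)
	area2 := math.Abs(x1*y2 - x2*y1) // twice the triangle area
	longest := math.Max(math.Hypot(x1, y1), math.Hypot(x2, y2))
	longest = math.Max(longest, math.Hypot(x2-x1, y2-y1))
	if area2 > threshold*longest*longest {
		return false
	}
	return true
}
```

A control node could run such a check on the reported coordinates and warn the operators before any LSE measurement is attempted.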
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Modelling</head><p>Figure <ref type="figure" target="#fig_0">1</ref> depicts the general schematics of the sensor placement in relation to the epicenter of the LSE. Note that there is actually no need to maintain relative symmetry of the sensor placement, either among the sensors themselves or with respect to the LSE, and the model is perfectly capable of calculating the LSE coordinates even if the LSE is moved significantly farther out or sideways. As mentioned above, we recognize the need for additional research on how the accuracy of the proposed method is affected by the relative positioning of the LSE and the sensors and by the distance to the LSE, but that too is out of scope for this paper. The model works under the assumption that neither the exact distances S0E, S1E and S2E nor the incident bearing angles of the LSE sound waves are known. What is known, however, are the exact geographical coordinates of S0, S1 and S2, plus the precise time at which the sound wave from the LSE arrived at each sensor unit. The model at present does not take into account any external factors affecting the speed of sound, such as air temperature, pressure, humidity, wind direction and speed, density of obstacles, wave re-reflection, etc., and assumes the speed of sound to be vs=343 meters per second under standard conditions.</p><p>Unless the sensors happen to lie on a circle centered at the LSE epicenter E, and are therefore equidistant from it, there will be detectable variances in the LSE sound wave's arrival timestamps at each sensor. For example, D1 represents the time difference between the arrivals at S1 and S0, calculated as D1=T1-T0. 
Similarly, D2 (the difference between the arrivals at S2 and S0) is D2=T2-T0, where T0, T1, and T2 denote the arrival timestamps at sensors S0, S1, and S2 respectively.</p><p>The simulation model works under another important assumption: D1=(S1E-S0E)/vs and D2=(S2E-S0E)/vs. Since S0E, S1E and S2E are directly dependent on the location of the LSE, the model works by estimating, among the possible LSE locations, the one for which D1 and D2 correspond to the directly measured values of T0, T1 and T2 as closely as possible.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Results</head><p>In theory, this model offers a promising method for detecting LSE epicenters by harnessing the potential of strategically placed sensors and analyzing time differentials. As technology evolves and data collection becomes more sophisticated, refining this model can pave the way for even more accurate and timely LSE detection. To validate the proposed model we have implemented a simplified version of it in the Go language, which has recently gained traction for its performance and efficiency. The core objective was not just to assess the model's potential, but to affirmatively demonstrate its utility in real-world applications. The specifications, shown in Figure <ref type="figure" target="#fig_1">2</ref>, indicate time differences of D1=31.965 seconds and D2=26.918 seconds respectively, along with a trio of sensor coordinates implemented by the ePoint structure, which represents geographical points. Also included was the dimension of the simulation grid discussed below. The choice of positioning for the sensors holds considerable significance too. Sensor S0 is located proximal to Boryspil International Airport. Sensor S1 was placed adjacent to an influential energy hub, the coal power plant in the city of Ukrainka. Lastly, Sensor S2 was stationed towards the southern vicinity of the city of Vasylkiv. A detailed visual representation is shown in Figure <ref type="figure" target="#fig_2">3</ref>. It is important to mention that while these positions were chosen predominantly for their convenience and illustrative purposes in this simulation, their strategic placement underlines the model's flexibility.</p><p>Having received the input parameters, the model first determines the extent of the simulation area. 
At present, it extends a roughly rectangular area to the bounds of the nearest ±1 integer values of latitude and longitude; therefore, the simulation area is calculated as spanning from 29 to 32 degrees of eastern longitude and from 49 to 52 degrees of northern latitude.</p><p>Within this area, a rectangular grid is formed with the dimension specified above, specifically 300×300 evaluation points. For each point in the grid, two floating-point values are calculated: the timestamp differences that would have been measured if the epicenter of the LSE were at that specific point. The next phase of the modelling involves finding the specific point, if any, for which those timestamp differences are both, at the same time, as close as possible to the actually measured ones. This essentially represents a classical optimization task with two independent parameters.</p><p>Given the specifics of the model and the problem at hand, we have developed a simple two-tier algorithm for finding such compound minima. The solution is more evident from the surface graphs built for the values of D1 and D2 across the grid, as seen in figures 4 and 5 respectively. Note that the graphs are built using the absolute values |D1| and |D2|, since the aim of the optimization is to bring the difference between these parameters and the measured values as close to zero as possible. As seen from these figures, both variables exhibit similar behavior, each having a somewhat arc-like "valley" of minima. 
Therefore, the model only needs to find where these arcs from the two variables intersect to locate the spot where both differences are closest to zero, since it was observed that individually they can reach values somewhat lower than those in the vicinity of the LSE.</p><p>To that end, the model scans every line of the grid for |D1| to find a local minimum for that line, if one exists, such that D1[lat,lon]&lt;D1[lat-1,lon] and D1[lat,lon]&lt;D1[lat+1,lon]. If such a point is found, it is then checked whether it is also a local minimum for |D2|, such that D2[lat,lon]&lt;D2[lat,lon-1] and D2[lat,lon]&lt;D2[lat,lon+1]. Note that the scan for |D2| is performed in the orthogonal direction to avoid false positives in cases where both "arcs" intersect at angles close to a right angle, which tends to happen only for LSE epicenters located in relatively close proximity to the sensor units. As soon as the aforementioned conditions for the minima are met, the current grid point is returned as a pair of geographical coordinates. In this case, the coordinates were reported as 30.55 degrees of eastern longitude and 50.47 degrees of northern latitude (with only a 0.01 degree margin of error), which roughly corresponds to the location of the Hydropark in Kyiv, as depicted in figure <ref type="figure" target="#fig_6">6</ref>. Before we move to the discussion, one final consideration should be mentioned. There is a potential discrepancy between the statement that explosions of artillery and MLRS projectiles belong to the scope of the LSEs considered in this work and the fact that the distances used as input parameters for the simulation model are at least dozens of kilometers. One might argue that sound waves originating from these explosions, having traveled such distances, will be barely audible, let alone detectable with a sufficient degree of certainty.</p><p>While this is mostly true, the technique proposed in this work should downscale quite well, allowing deployment and LSE detection at ranges of a few kilometers. 
However, given the nature and the practical experience of contemporary military conflicts, whereby both sides tend to employ relatively modern artillery systems and various electronic countermeasure equipment such as weapon-tracking radars, LSE detection using the technique proposed in this work may have little practical use at such distances, as weapon-tracking systems are more accurate, reveal the adversary's position much more quickly and are not prone to ambient sound pollution. Therefore, from a practical point of view it seems prudent to let the operators in the field decide whether and how to use a system implementing the proposed LSE-location technique, and over what geographical scale.</p></div>
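As a rough sketch of the grid evaluation described in this section, the following Go fragment performs a search over the simulation grid. It deliberately simplifies the two-tier arc-intersection scan into a direct minimization of the larger of the two residuals; the Pt type, the haversine distance and the function signature are assumptions made for this example, not the paper's actual implementation.

```go
package main

import "math"

const vs = 343.0 // assumed speed of sound, m/s

// Pt is an illustrative stand-in for the model's ePoint structure.
type Pt struct{ Lat, Lon float64 }

// dist returns the great-circle distance between two points in meters.
func dist(a, b Pt) float64 {
	const r = 6371000.0 // mean Earth radius, meters
	rad := math.Pi / 180
	dLat := (b.Lat - a.Lat) * rad
	dLon := (b.Lon - a.Lon) * rad
	h := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(a.Lat*rad)*math.Cos(b.Lat*rad)*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * r * math.Asin(math.Sqrt(h))
}

// Locate scans an n-by-n grid over the bounding box and returns the grid
// point whose predicted time differences best match the measured d1m, d2m,
// together with the residual (the larger of the two mismatches, seconds).
// It replaces the two-tier arc-intersection scan with a direct search.
func Locate(s0, s1, s2 Pt, d1m, d2m, latMin, latMax, lonMin, lonMax float64, n int) (Pt, float64) {
	best := Pt{}
	bestErr := math.Inf(1)
	for i := 0; i != n; i++ {
		for j := 0; j != n; j++ {
			p := Pt{
				Lat: latMin + (latMax-latMin)*float64(i)/float64(n-1),
				Lon: lonMin + (lonMax-lonMin)*float64(j)/float64(n-1),
			}
			t0 := dist(s0, p) / vs
			e1 := math.Abs(dist(s1, p)/vs - t0 - d1m)
			e2 := math.Abs(dist(s2, p)/vs - t0 - d2m)
			e := math.Max(e1, e2)
			if bestErr > e {
				bestErr = e
				best = p
			}
		}
	}
	return best, bestErr
}
```

A direct search of this kind evaluates every grid point, whereas the two-tier row scan of the paper prunes candidates by the per-line |D1| minima first; both approaches share the same residual definitions.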
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Discussion</head><p>Through this simulation, we have not only tested our model but underscored its potential in practical scenarios. Even in its nascent stage, the model demonstrated some prowess in triangulating the LSE's epicenter. We consider the initial simulation results to be rather optimistic. The prototype model, in its primary rendition, has shown its potential to identify the LSE's epicenter with appreciable accuracy. However, the path from a promising model to a real-world operational system is strewn with challenges and demands. A meticulous and elaborate strategy encompassing hardware prototyping, alongside rigorous field trials, is required to ensure the robustness and reliability of the entire system.</p><p>In the future, the continuation of this research should be planned as multi-faceted. One prime focus will be on evaluating the fidelity of time and location measurements in devices that are widely available to consumers, identifying the feasibility of leveraging these common devices in our LSE detection framework. Also, devising advanced algorithms and techniques for segregating genuine LSEs from the variety of background noises is deemed important.</p><p>One of the fundamental tenets of signal processing and sensor networks is the principle that increasing the number of observation points (sensors, in this context) can bolster the accuracy of source localization. When it comes to LSE positioning, this principle holds true and is pivotal. In the existing model, we have utilized a triple-sensor configuration to triangulate the LSE's position. However, the question arises: what would be the implications of deploying more than three sensors?</p><p>Given the spatiotemporal nature of the LSE detection problem, it inherently exists in a two-dimensional space. Triangulation using three sensors can determine the position of the LSE by leveraging the time differences of arrival. 
However, when more sensors are integrated, the system can transition from simple triangulation to multilateration.</p><p>Let us denote the additional time difference, when a fourth sensor S3 is introduced, as D3=T3-T0, where T3 is the timestamp of the LSE sound wave arrival at S3. Introducing this fourth sensor provides an additional equation, tightening the constraints on the localization algorithm. Consequently, this can reduce the error ellipse (in a two-dimensional scenario) and provide a unique solution without the need for additional information or assumptions.</p><p>The benefits of increased sensor deployment are as follows:</p><p>• Redundancy: in real-world scenarios, sensor failures or temporary unavailability (due to maintenance or environmental factors) can impede accurate LSE detection. By deploying more sensors, the system gains redundancy. Even if one or multiple sensors become inoperative, the system can still function without significant loss of accuracy.</p><p>• Noise mitigation: a greater number of sensors can assist in mitigating the effects of ambient noise and other non-LSE-related events. By cross-referencing signals across multiple sensors, the system can effectively distinguish between genuine LSE signals and background noise, thereby enhancing signal-to-noise ratios.</p><p>• Enhanced resolution in dense environments: in environments where multiple LSEs might occur in proximity, having more sensors can assist in distinguishing between individual events, providing a more granular understanding of the sound landscape.</p><p>• Optimization potential: with a larger dataset from multiple sensors, advanced optimization techniques such as particle swarm optimization or gradient descent can be deployed more effectively to pinpoint the LSE location.</p><p>It is evident from the above considerations that augmenting the number of deployed sensors can substantially enhance the precision and reliability of LSE positioning. 
However, it is crucial to note that while adding sensors offers numerous advantages, it also introduces complexities in data processing, communication overhead, and potential costs. Future iterations of the model would benefit from a detailed cost-benefit analysis to determine the optimal number of sensors, ensuring a balance between accuracy and system complexity.</p></div>
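The multilateration objective discussed above generalizes naturally to any number of sensors as a least-squares residual. The Go sketch below illustrates this; the Sensor type and the Residual function are assumptions made for this example, not part of the implemented model.

```go
package main

import "math"

const vs = 343.0 // assumed speed of sound, m/s

// Sensor couples a known position (degrees) with the measured arrival
// timestamp T of the LSE sound wave (seconds). Illustrative type.
type Sensor struct {
	Lat, Lon, T float64
}

// greatCircle returns the distance in meters between two lat/lon points.
func greatCircle(aLat, aLon, bLat, bLon float64) float64 {
	const r = 6371000.0 // mean Earth radius, meters
	rad := math.Pi / 180
	dLat := (bLat - aLat) * rad
	dLon := (bLon - aLon) * rad
	h := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(aLat*rad)*math.Cos(bLat*rad)*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * r * math.Asin(math.Sqrt(h))
}

// Residual sums the squared mismatch between the measured offsets
// Di = Ti - T0 and the offsets predicted for a candidate epicenter at
// (lat, lon). With four or more sensors the system is overdetermined,
// and the candidate minimizing this value is the multilateration
// estimate. This objective is an illustrative sketch only.
func Residual(sensors []Sensor, lat, lon float64) float64 {
	t0 := greatCircle(sensors[0].Lat, sensors[0].Lon, lat, lon) / vs
	sum := 0.0
	for _, s := range sensors[1:] {
		predicted := greatCircle(s.Lat, s.Lon, lat, lon)/vs - t0
		measured := s.T - sensors[0].T
		sum += (predicted - measured) * (predicted - measured)
	}
	return sum
}
```

Any of the optimizers mentioned above (grid search, particle swarm, gradient descent) could then be pointed at this single scalar objective, regardless of how many sensors report in.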
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusions</head><p>In this work we have demonstrated the viability of a simulation model for a computational approach to determining the location of the Loud Sound Event epicenter, given only the locations of the fielded sensor units and precisely recorded timestamp differences from them. Unlike its more technically sophisticated counterparts, a system based on this principle may be built, deployed and operated with significantly less budget spending. There are also ways to improve the simulation model in future work, especially in optimizing the search for compound minima.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: General schematics of the sensor placement in relation to the epicenter of the LSE</figDesc><graphic coords="3,188.35,294.17,231.90,201.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Go source code fragment defining the model input parameters</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Sensor layout at strategic locations around Kyiv used for input parameters in the model</figDesc><graphic coords="4,118.60,424.14,371.65,322.59" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Surface graph of the |D1| timestamp differences across the simulation grid</figDesc><graphic coords="5,127.22,343.63,354.72,244.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Surface graph of the |D2| timestamp differences across the simulation grid</figDesc><graphic coords="6,117.85,103.30,373.44,259.95" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Sensor layout and calculated position of the LSE epicenter</figDesc><graphic coords="6,125.35,386.68,358.35,311.05" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A spatial sound localization system for mobile robots</title>
		<author>
			<persName><forename type="first">Huakang</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Takuya</forename><surname>Yosiara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Qunfei</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Teppei</forename><surname>Watanabe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jie</forename><surname>Huang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Instrumentation Measurement Technology Conference IMTC 2007</title>
				<imprint>
<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Spherical microphone array for spatial sound localization for a mobile robot</title>
		<author>
			<persName><forename type="first">Yoko</forename><surname>Sasaki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mitsutaka</forename><surname>Kabasawa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Simon</forename><surname>Thompson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Satoshi</forename><surname>Kagami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kyoichi</forename><surname>Oro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/RSJ International Conference on Intelligent Robots and Systems</title>
				<imprint>
<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="713" to="718" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Spatial sound localization in an augmented reality environment</title>
		<author>
			<persName><forename type="first">Jaka</forename><surname>Sodnik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Saso</forename><surname>Tomazic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Raphael</forename><surname>Grasset</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andreas</forename><surname>Duenser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mark</forename><surname>Billinghurst</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 18th Australia Conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments, OZCHI&apos;06</title>
				<meeting>the 18th Australia Conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments, OZCHI&apos;06<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="111" to="118" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A robust sound source localization method based on acoustic vector sensor arrays</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page">784</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Time-difference-of-arrival estimation for sound source localization in noisy and reverberant environments</title>
		<author>
			<persName><forename type="first">H</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Signal Processing Letters</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="155" to="159" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Machine learning approaches for outdoor sound localization in urban environments</title>
		<author>
			<persName><forename type="first">H</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Moreau</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Sound and Vibration</title>
		<imprint>
			<biblScope unit="volume">512</biblScope>
			<biblScope unit="page">116487</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A review of acoustic event localization in smart city applications</title>
		<author>
			<persName><forename type="first">M</forename><surname>Thompson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">G</forename><surname>Georgiou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="123456" to="123467" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Underwater acoustic source localization based on a hybrid deep learning framework</title>
		<author>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Xia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Ocean Engineering</title>
		<imprint>
			<biblScope unit="volume">264</biblScope>
			<biblScope unit="page">111497</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Sound source localization with sparse acoustic arrays using a coherent signal model</title>
		<author>
			<persName><forename type="first">E</forename><surname>Fernandez-Grande</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Xenaki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Gerstoft</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Journal of the Acoustical Society of America</title>
		<imprint>
			<biblScope unit="volume">145</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="L320" to="L325" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Deep learning for acoustic source localization and tracking in a 3D space</title>
		<author>
			<persName><forename type="first">W</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE/CAA Journal of Automatica Sinica</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="82" to="91" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Acoustic source localization and tracking using deep neural networks</title>
		<author>
			<persName><forename type="first">T</forename><surname>Patel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A P</forename><surname>Habets</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE/ACM Transactions on Audio, Speech, and Language Processing</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="1482" to="1495" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">European Green Deal: Satellite Monitoring in the Implementation of the Concept of Agricultural Development in an Urbanized Environment</title>
		<author>
			<persName><forename type="first">O</forename><surname>Opryshko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Pasichnyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kiktev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Dudnyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hutsol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Mudryk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Herbut</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Łyszczarz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kukharets</surname></persName>
		</author>
		<idno type="DOI">10.3390/su16072649</idno>
		<ptr target="https://doi.org/10.3390/su16072649" />
	</analytic>
	<monogr>
		<title level="j">Sustainability</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page">2649</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Real-time localization of moving sound sources using a distributed microphone array</title>
		<author>
			<persName><forename type="first">J</forename><surname>Murray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Collins</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the Audio Engineering Society</title>
		<imprint>
			<biblScope unit="volume">67</biblScope>
			<biblScope unit="issue">7/8</biblScope>
			<biblScope unit="page" from="526" to="537" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
