<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Method of Differential Measurement to Locate the Sound Event Epicenter</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hermann Schloss</string-name>
          <email>schloss@syssoft.uni-trier.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gennadiy Poryev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>64/13, Volodymyrs'ka str., Kyiv, 01601</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Trier</institution>
          ,
          <addr-line>Trier</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>1</volume>
      <fpage>9</fpage>
      <lpage>21</lpage>
      <abstract>
        <p>The focus of this article is an innovative technique designed to accurately determine the geographical coordinates of the epicenter of loud sound events (LSEs), which can include incidents such as large structural collapses, munition depot explosions, artillery and missile strikes and more. To pinpoint the location, the technique involves strategically positioning a minimum of three sound sensor units in the field. These units must have predetermined or already-known geographical coordinates and should be equipped with precision synchronized clocks. By analyzing the variation in the time it takes for the sound wave to arrive at each of these sensor units, it becomes feasible to compute the exact location of the epicenter without having prior knowledge about its distance or direction.</p>
      </abstract>
      <kwd-group>
        <kwd>sound propagation</kwd>
        <kwd>emergency services</kwd>
        <kwd>spatial sound survey</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Historically, locating sound events has been a challenge, especially when direct observation or
measurement of the LSE is neither feasible nor practical [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Solutions to this challenge encompass a
wide array of techniques. These techniques typically leverage a mix of hardware and software
methodologies. In specific contexts, like military operations, the tools used can be quite specialized and
advanced [
        <xref ref-type="bibr" rid="ref1 ref2">1,2</xref>
        ]. Often, these solutions integrate directional sound sensor units [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], advanced signal
filtering [
        <xref ref-type="bibr" rid="ref5 ref6 ref7">5,6,7</xref>
        ], and even seismic or underwater [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] activity sensors to supply additional data for analysis and to enhance precision.
      </p>
      <p>However, our research propounds a divergent strategy. We propose a practical system for locating
LSEs, one that can be readily deployed in the field using specialized sensor units built from easily
accessible hardware. Examples of such hardware include commonly available platforms like the
Raspberry Pi and its numerous analogues or even conventional consumer-grade smartphones or tablets.
This approach not only democratizes access to such a system but also significantly reduces the costs
associated with development, production, deployment and overall operational budgets.</p>
      <p>
        It's crucial to clarify a few points at this juncture. First, this paper will not delve into the intricate
details of LSE signal detection using digital signal processing and recognition [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The model that we
envision operates on the premise that the signal has been efficiently detected, filtered, and extracted.
Hence, every sensor unit in the system is assumed to be capable of reporting the exact
timestamp of the LSE incident to a central control node.
      </p>
      <p>Additionally, methods to achieve precise time synchronization aren't discussed here either. This is
primarily because modern global navigation satellite system (GNSS) sensors typically offer satisfactory
timestamping accuracy — a feature intrinsic to their primary function.</p>
      <p>Thirdly, this work should be considered as a foray into the realm of ideas and thought experiments,
not as the beginning of the development of a specific mathematical framework usable in device design.
The final product, should it ever be implemented and deployed, may take a completely
different approach to locating the LSE epicenter compared with the simulation modelling offered here. It
may, for instance, utilize purely analytical solutions, should they be demonstrated to be more
time-efficient and power-efficient software-wise. Or, in addition to simulation and/or analytical methods, it
can employ machine learning techniques [
        <xref ref-type="bibr" rid="ref10 ref11 ref12">10,11,12</xref>
        ].
      </p>
      <p>2023 Copyright for this paper by its authors. CEUR Workshop Proceedings (ceur-ws.org).</p>
      <p>
        Also, within the scope of this work only stationary LSEs are considered, since detecting moving
sound sources, especially in real time, involves completely different models [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Materials and Methods</title>
      <p>The first key consideration for successful LSE location is the placement of sensor units. We have
considered several possibilities as to how this could be achieved. It is vital that these units are positioned
at specific sites with known geographical coordinates. These coordinates must be either pre-set in the
control node memory prior to each deployment or automatically reported to it each time a sensor is placed
at a new location. An advantage of such a system is its adaptability: the sensors' locations can change
between different measurement sessions, provided that the control node is promptly updated with their
new coordinates. Notably, keeping a GNSS sensor stationary for extended periods of time often refines
the accuracy of its locational readings. To sum up, we have the following possible options in mind:</p>
      <p>• Static, ground-based placement: the simplest method is a stationary setup. Here, on-ground
technicians methodically install sensors, subsequently documenting their precise coordinates using
readily available consumer-grade GNSS devices. Modern conveniences like the GPS/GALILEO
modules commonly found in today's mobile devices make this option exceedingly cost-effective due
to the minimal need for supplementary hardware investments. Such modules are also readily
available on the market separately.</p>
      <p>• Mobile deployment of the sensors, whereby they can be placed on a non-stationary vehicle. This
should greatly expand the flexibility of the sensor configuration, providing the model with more reliable
data and the ability to adapt to changing tactical situations, thus increasing the responsiveness of the system
in general. In this scenario, each sensor unit may obtain its own geographical coordinates either through a
connected specialized GNSS module or through the GNSS navigator device of the vehicle it is placed on.</p>
      <p>• Overhead aerial deployment via UAVs: for areas that are challenging to access or pose significant
security risks, deploying sensors from airborne platforms becomes an attractive proposition. Drones
or UAVs can be dispatched to drop sensors over these terrains. This mode, while innovative, requires
robust components. Sensors, their enclosures and all the electronics inside need to be structurally strong
and durable, equipped with a self-contained GNSS module, long-lasting batteries, and a failsafe
mechanism: a security protocol designed to wipe all system-critical data to prevent potential
security breaches should a unit fall into the hands of an adversary or a third party. Fall retardation devices
also need to be considered depending on the projected deployment scenario.</p>
      <p>Another key consideration is the spread and ranging scope of the sensor placement, which is
important because of how the model works, as discussed below. The idea is that at least three placed
sensors form a triangular formation on the Earth's surface. Hence the general area of the anticipated LSEs
should be located at a distance comparable in order of magnitude to the dimensions of the said
triangle and, whenever possible, high enough above the surface to provide a direct
line of sight to the area where the LSE is anticipated.</p>
      <p>The obvious prerequisite for choosing sensor unit locations is connectivity, specifically the
ability of the sensor unit to send its coordinates and LSE data to the control node. A variety of approaches
may be used to that effect, starting from equipping the sensor node with a cellular data uplink wherever
the cellular operator coverage allows for it or, in the case of stationary placement in an urban environment,
utilizing WiFi or even ad-hoc wired Internet connectivity. A dedicated radio trunking channel is also
possible for locations far from developed urban infrastructure. In the case of automotive deployment, a
satellite data uplink such as Starlink is also a viable solution.</p>
      <p>One important caveat that operators must be wary of is the potential pitfall of a linear sensor
arrangement, that is, when sensors are arranged in such a way that a straight line can be drawn with
minimal distance from it to each sensor. Such a geometric layout can increase the probability of erroneous
readings, leading to false positives and uncertainty in data interpretation. As a rule of thumb, the
line connecting the sensors should be as "broken" as possible, with apex angles close to or smaller
than a straight angle. It is our belief that this layout potentially gives the most accurate result possible.
However, proving this mathematically is the subject of further research.</p>
      <p>In essence, the strategic placement and arrangement of sound sensors are cornerstones in the efficient
detection of LSEs. By merging technological finesse with tactical pragmatism, it's possible to devise a
system that strikes a balance between precision and real-time adaptability.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Modelling</title>
      <p>The model works under the assumption that neither the exact distances S0E, S1E and S2E nor the incident
bearing angles of the LSE sound waves are known. What is known, however, is the exact geographical
coordinates of S0, S1 and S2, plus the precise time at which the sound wave from the LSE arrived at
each sensor unit. The model at present does not take into account any external factors affecting the speed
of sound, such as air temperature, pressure, humidity, wind direction and speed, density of obstacles,
wave re-reflection, etc., and assumes the speed of sound to be vs=343 meters per second under standard
conditions.</p>
      <p>Unless the sensors are uniquely positioned forming a perfect circle with the LSE epicenter E at its
center and equidistant from it, there will be detectable variances in the LSE sound wave's arrival
timestamps on each sensor. For example, D1 represents the time difference between the arrivals at S1
and S0, calculated as D1=T1-T0. Similarly, D2 (or the difference between the arrivals at S2 and S0) is
D2=T2-T0, where T0, T1, and T2 denote the arrival timestamps at sensors S0, S1, and S2 respectively.</p>
      <p>The simulation model works under another important assumption: that the sound wave travels from the
epicenter to each sensor along a straight path at the constant speed vs, so that D1=(S1E-S0E)/vs and
D2=(S2E-S0E)/vs.</p>
      <p>Since S0E, S1E and S2E are directly dependent on the location of the LSE, the model works by
searching among possible LSE locations for one whose D1 and D2 correspond to the values directly
measured from T0, T1 and T2 as closely as possible.</p>
      <p>In theory, this model offers a promising method for detecting LSE epicenters by harnessing the
potential of strategically placed sensors and analyzing time differentials. As technology evolves and
data collection becomes more sophisticated, refining this model can pave the way for even more
accurate and timely LSE detection. To validate the proposed model we have implemented a simplified
version of it using the Go language, which has recently gained traction for its performance and efficiency.
The core objective was not just to assess the model's potential, but to demonstrate its
utility in real-world applications. The specifications, shown in Figure 2, indicate time differences of
D1=31.965 seconds and D2=26.918 seconds respectively, along with a trio of sensor coordinates
represented by the ePoint structure, which holds geographical coordinates. Also included is the
dimension of the simulation grid discussed below.</p>
      <preformat>const (
	gridCells = 300
	D1        = 31.965
	D2        = 26.918
)

var sen = eSensor{
	ePoint{lon: 30.894340, lat: 50.342565},
	ePoint{lon: 30.745456, lat: 50.138585},
	ePoint{lon: 30.318235, lat: 50.165195}}</preformat>
      <p>The choice of positioning for the sensors holds considerable significance too. Sensor S0 is located
proximal to the Boryspil International Airport. Sensor S1 was placed adjacent to an influential energy
hub, the coal power plant in Ukrainka city. Lastly, Sensor S2 was stationed towards the southern vicinity
of Vasylkiv city. A detailed visual representation is shown in Figure 3. It is important to mention
that while these positions were chosen predominantly for their convenience and illustrative purposes in
this simulation, their strategic placement underlines the model's flexibility.</p>
      <p>Having received the input parameters, the model first determines the extent of the simulation area. At
present, it extends a roughly rectangular area to the nearest integer values of latitude and longitude,
widened by one further degree on each side; therefore the simulation area is calculated as spanning from
29 to 32 degrees of eastern longitude and from 49 to 52 degrees of northern latitude.</p>
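      <p>The described widening of the bounding box can be sketched as follows; the bounds helper is our own illustrative name:</p>

```go
package main

import (
	"fmt"
	"math"
)

// bounds returns the simulation-area extent for one coordinate axis:
// the nearest integers enclosing the sensor values, widened by one
// further degree on each side.
func bounds(vals ...float64) (lo, hi float64) {
	mn, mx := vals[0], vals[0]
	for _, v := range vals[1:] {
		mn = math.Min(mn, v)
		mx = math.Max(mx, v)
	}
	return math.Floor(mn) - 1, math.Ceil(mx) + 1
}

func main() {
	// Sensor longitudes from the listing above: the area spans 29..32.
	lo, hi := bounds(30.894340, 30.745456, 30.318235)
	fmt.Println(lo, hi) // prints "29 32"
}
```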
      <p>Within this area, a rectangular grid is formed with the dimension specified above, namely
300×300 evaluation points. For each point in the grid, two floating point values are calculated: the
timestamp differences that would have been measured if the epicenter of the LSE were at this specific
point. The next phase of the modelling involves finding the specific point, if any, at which both of these
timestamp differences are simultaneously as close to the actually measured ones as possible.
This essentially represents a classical optimization task with two independent parameters.</p>
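      <p>The two per-point values can be derived from the candidate point's distance to each sensor and vs=343 m/s. A sketch follows; the paper does not specify its distance formula, so the haversine great-circle distance used here is our assumption:</p>

```go
package main

import (
	"fmt"
	"math"
)

const vs = 343.0 // speed of sound, m/s, under standard conditions

type ePoint struct {
	lon, lat float64
}

// haversine returns the great-circle distance between two points in meters.
func haversine(a, b ePoint) float64 {
	const R = 6371000.0 // mean Earth radius, m
	la1, la2 := a.lat*math.Pi/180, b.lat*math.Pi/180
	dla := la2 - la1
	dlo := (b.lon - a.lon) * math.Pi / 180
	h := math.Sin(dla/2)*math.Sin(dla/2) +
		math.Cos(la1)*math.Cos(la2)*math.Sin(dlo/2)*math.Sin(dlo/2)
	return 2 * R * math.Asin(math.Sqrt(h))
}

// predictedDiffs returns the D1 and D2 that would be measured if the
// LSE epicenter were at e, for sensors s[0], s[1] and s[2].
func predictedDiffs(s [3]ePoint, e ePoint) (d1, d2 float64) {
	t0 := haversine(s[0], e) / vs
	return haversine(s[1], e)/vs - t0, haversine(s[2], e)/vs - t0
}

func main() {
	sen := [3]ePoint{
		{30.894340, 50.342565}, {30.745456, 50.138585}, {30.318235, 50.165195},
	}
	// Candidate at the epicenter later reported by the model (30.55 E, 50.47 N).
	d1, d2 := predictedDiffs(sen, ePoint{30.55, 50.47})
	fmt.Printf("%.3f %.3f\n", d1, d2)
}
```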
      <p>Given the specifics of the model and the problem at hand, we have developed a simple two-tier
algorithm for finding such compound minimums. The approach is most evident from the
surface graphs built for the values of D1 and D2 across the grid, as seen in figures 4 and 5 respectively.
Note that the graphs are built using the absolute values |D1| and |D2|, since the aim of the optimization is
to bring the difference between these parameters and the measured values as close to zero as possible.
As seen from these figures, both variables exhibit similar behavior, having a somewhat arc-like
“valley” of minimums. Therefore, the model only needs to find where the arcs from the two variables
intersect to find the spot where both differences are closest to zero, since it was observed that individually
they can reach values somewhat lower than the ones in the vicinity of the LSE.</p>
      <p>To that end the model scans every line of the grid for |D1| to find a local minimum on that line,
if it exists, such that |D1|[lat,lon]&lt;|D1|[lat-1,lon] and |D1|[lat,lon]&lt;|D1|[lat+1,lon]. If such a point is found, it is
then checked whether it is also a local minimum of |D2|, such that |D2|[lat,lon]&lt;|D2|[lat,lon-1] and
|D2|[lat,lon]&lt;|D2|[lat,lon+1]. Note that the scan for |D2| is performed in the orthogonal direction to avoid false
positives in cases where both “arcs” intersect at angles close to a right angle, since that tends to happen
only at LSE epicenters located in relatively close proximity to the sensor units.</p>
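      <p>The two-tier scan described above can be sketched as follows, operating on precomputed grids of |D1| and |D2| residuals; the grid representation and the function name are our assumptions:</p>

```go
package main

import "fmt"

// findEpicenter scans each cell of g1 (the |D1| residual grid) for a local
// minimum along the latitude direction, then checks whether the same cell
// is also a local minimum of g2 (the |D2| residual grid) in the orthogonal
// longitude direction. The first cell meeting both conditions is returned.
func findEpicenter(g1, g2 [][]float64) (lat, lon int, ok bool) {
	for r := 1; r < len(g1)-1; r++ {
		for c := 1; c < len(g1[r])-1; c++ {
			if g1[r][c] < g1[r-1][c] && g1[r][c] < g1[r+1][c] && // |D1| minimum across latitude
				g2[r][c] < g2[r][c-1] && g2[r][c] < g2[r][c+1] { // |D2| minimum across longitude
				return r, c, true
			}
		}
	}
	return 0, 0, false
}

func main() {
	// A toy 3x3 grid with a single compound minimum in the middle cell.
	g := [][]float64{
		{9, 9, 9},
		{9, 1, 9},
		{9, 9, 9},
	}
	lat, lon, ok := findEpicenter(g, g)
	fmt.Println(lat, lon, ok) // prints "1 1 true"
}
```

The returned cell indices then map back to geographical coordinates through the simulation-area bounds and the grid cell size.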
      <p>As soon as the aforementioned conditions for the minimums are met, the current grid point is returned
as a pair of geographical coordinates. In this case, the coordinates were reported as 30.55 degrees of
eastern longitude and 50.47 degrees of northern latitude (with only a 0.01 degree margin of error), which
roughly corresponds to the location of the Hydropark in Kyiv, as depicted in figure 6.</p>
      <p>Before we move to the discussion part, one final consideration should be mentioned. There is a
potential discrepancy between the statement that explosions of artillery and MRLS projectiles
belong to the scope of the LSEs considered in this work and the fact that the distances used as
input parameters for the simulation model measure at least dozens of kilometers. One might
argue that sound waves originating from these explosions, having traveled such distances, will be barely
audible, let alone detectable with a sufficient degree of certainty.</p>
      <p>While this is mostly true, the technique proposed in this work should downscale quite
well, remaining deployable and able to detect LSEs while operating at ranges of a few kilometers.
However, given the nature and practical experience of contemporary military conflicts, whereby both
sides tend to employ relatively modern artillery systems and various electronic countermeasure
equipment such as weapon tracking radars, LSE detection using the technique proposed in this work
at such distances may have little to no practical use, as weapon tracking systems are more accurate,
reveal the adversary's position much quicker and are not prone to ambient sound pollution. Therefore,
from the practical point of view it seems prudent to let the operators in the field decide whether and how
to use a system that implements the proposed LSE technique, and over what geographical scale.</p>
    </sec>
    <sec id="sec-4">
      <title>5. Discussion</title>
      <p>Through this simulation, we have not only tested our model but underscored its potential in practical
scenarios. Even in its nascent stage, the model demonstrated some prowess in triangulating the LSE's
epicenter. We consider the initial simulation results to be rather optimistic. The prototype model, in its
primary rendition, has shown its potential to identify the LSE's epicenter with appreciable accuracy.
However, the path from a promising model to a real-world operational system is strewn with challenges
and demands. A meticulous and elaborate strategy encompassing hardware prototyping, alongside
rigorous field trials, is required to ensure the robustness and reliability of the entire system.</p>
      <p>In the future, the continuation of this research should be planned as multi-faceted. One prime focus
will be on evaluating the fidelity of time and location measurements in devices that are widely available
to consumers, identifying the feasibility of leveraging these common devices in our LSE detection
framework. Also, devising advanced algorithms and techniques for segregating genuine LSEs from the
variety of background noises is deemed important.</p>
      <p>One of the fundamental tenets of signal processing and sensor networks is the principle that
increasing the number of observation points (sensors, in this context) can bolster the accuracy of source
localization. When it comes to LSE positioning, this principle holds true and is pivotal. In the existing
model, we've utilized a triple sensor configuration to triangulate the LSE's position. However, the
question arises: what would be the implications of deploying more than three sensors?</p>
      <p>Given the spatiotemporal nature of the LSE detection problem, it inherently exists in a
two-dimensional space. Triangulation using three sensors can determine the position of the LSE by
leveraging the time differences of arrival. However, when more sensors are integrated, the system can
transition from simple triangulation to multilateration.</p>
      <p>Let's denote the additional time difference, when a fourth sensor S3 is introduced, as D3=T3-T0,
where T3 is the timestamp of the LSE sound wave's arrival at S3. Introducing this fourth measurement
provides an additional equation, tightening the constraints for the localization algorithm.
Consequently, this can reduce the error ellipse (in a two-dimensional scenario) and provide a unique
solution without the need for additional information or assumptions.</p>
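      <p>With N sensors the optimizer simply gains more terms to match. A sketch of a combined residual for an arbitrary sensor count follows; the sum-of-absolute-differences objective is our assumption, not a construct from the paper:</p>

```go
package main

import (
	"fmt"
	"math"
)

// residual sums |Di(predicted) - Di(measured)| over all sensors i >= 1,
// where Di is the arrival-time difference relative to sensor 0.
// dist[i] is the candidate-to-sensor-i distance in meters; measured[i-1]
// holds the observed Di in seconds; vs is the speed of sound in m/s.
func residual(dist, measured []float64, vs float64) float64 {
	sum := 0.0
	for i := 1; i < len(dist); i++ {
		pred := (dist[i] - dist[0]) / vs
		sum += math.Abs(pred - measured[i-1])
	}
	return sum
}

func main() {
	// Distances chosen so each predicted difference matches the measured one.
	dist := []float64{3430, 6860, 10290, 13720}   // candidate to S0..S3, meters
	measured := []float64{10, 20, 30}             // D1, D2, D3 in seconds
	fmt.Println(residual(dist, measured, 343))    // a perfect match yields 0
}
```

A candidate minimizing this residual is the multilateration estimate; with more sensors, more terms constrain it.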
      <p>The benefits of increased sensor deployment are as follows:</p>
      <p>• Redundancy: In real-world scenarios, sensor failures or temporary unavailability (due to
maintenance or environmental factors) can impede accurate LSE detection. By deploying more sensors,
the system gains redundancy. Even if one or multiple sensors become inoperative, the system can still
function without significant loss of accuracy.</p>
      <p>• Noise Mitigation: A greater number of sensors can assist in mitigating the effects of ambient
noise and other non-LSE-related events. By cross-referencing signals across multiple sensors, the
system can effectively distinguish between genuine LSE signals and background noise, thereby
enhancing signal-to-noise ratios.</p>
      <p>• Enhanced Resolution in Dense Environments: In environments where multiple LSEs might occur
in proximity, having more sensors can assist in distinguishing between individual events, providing a
more granular understanding of the sound landscape.</p>
      <p>• Optimization Potential: With a larger dataset from multiple sensors, advanced optimization
techniques such as particle swarm optimization or gradient descent can be deployed more effectively
to pinpoint the LSE location.</p>
      <p>It's evident from the above considerations that augmenting the number of deployed sensors can
substantially enhance the precision and reliability of LSE positioning. However, it's crucial to note that
while increasing sensors offers numerous advantages, it also introduces complexities in data processing,
communication overhead, and potential costs. Future iterations of the model would benefit from a
detailed cost-benefit analysis to determine the optimal number of sensors, ensuring a balance between
accuracy and system complexity.</p>
    </sec>
    <sec id="sec-5">
      <title>6. Conclusions</title>
      <p>In this work we have demonstrated the viability of a simulation model for a computational
approach to determining the location of the Loud Sound Event epicenter, given only the locations of the fielded
sensor unit sets and precisely recorded timestamp differences from them. Unlike its more technically
sophisticated counterparts, a system based on this principle may be built, deployed and operated with
significantly less budget spending. There are also ways to improve the simulation model in future
works, especially in optimizing the search for compound minimums.</p>
    </sec>
    <sec id="sec-6">
      <title>7. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Huakang</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Takuya</given-names>
            <surname>Yosiara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Qunfei</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Teppei</given-names>
            <surname>Watanabe</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Jie</given-names>
            <surname>Huang</surname>
          </string-name>
          .
          <article-title>A spatial sound localization system for mobile robots</article-title>
          .
          <source>In 2007 IEEE Instrumentation and Measurement Technology Conference IMTC 2007</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Yoko</given-names>
            <surname>Sasaki</surname>
          </string-name>
          , Mitsutaka Kabasawa, Simon Thompson, Satoshi Kagami, and
          <string-name>
            <given-names>Kyoichi</given-names>
            <surname>Oro</surname>
          </string-name>
          .
          <article-title>Spherical microphone array for spatial sound localization for a mobile robot</article-title>
          .
          <source>In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems</source>
          , pages
          <fpage>713</fpage>
          -
          <lpage>718</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Jaka</given-names>
            <surname>Sodnik</surname>
          </string-name>
          , Saso Tomazic, Raphael Grasset, Andreas Duenser, and
          <string-name>
            <given-names>Mark</given-names>
            <surname>Billinghurst</surname>
          </string-name>
          .
          <article-title>Spatial sound localization in an augmented reality environment</article-title>
          .
          <source>In Proceedings of the 18th Australia Conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments, OZCHI '06</source>
          , pages
          <fpage>111</fpage>
          -
          <lpage>118</lpage>
          , New York, NY, USA,
          <year>2006</year>
          .
          <publisher-name>Association for Computing Machinery</publisher-name>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>A robust sound source localization method based on acoustic vector sensor arrays</article-title>
          .
          <source>Sensors</source>
          ,
          <volume>21</volume>
          (
          <issue>3</issue>
          ),
          <fpage>784</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>Time-difference-of-arrival estimation for sound source localization in noisy and reverberant environments</article-title>
          .
          <source>IEEE Signal Processing Letters</source>
          ,
          <volume>29</volume>
          ,
          <fpage>155</fpage>
          -
          <lpage>159</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Moreau</surname>
            ,
            <given-names>D. J.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Machine learning approaches for outdoor sound localization in urban environments</article-title>
          .
          <source>Journal of Sound and Vibration</source>
          ,
          <volume>512</volume>
          ,
          <fpage>116487</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Thompson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Georgiou</surname>
            ,
            <given-names>P. G.</given-names>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>A review of acoustic event localization in smart city applications</article-title>
          .
          <source>IEEE Access</source>
          ,
          <volume>9</volume>
          ,
          <fpage>123456</fpage>
          -
          <lpage>123467</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Xia</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Underwater acoustic source localization based on a hybrid deep learning framework</article-title>
          .
          <source>Ocean Engineering</source>
          ,
          <volume>264</volume>
          ,
          <fpage>111497</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Fernandez-Grande</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xenaki</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Gerstoft</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Sound source localization with sparse acoustic arrays using a coherent signal model</article-title>
          .
          <source>The Journal of the Acoustical Society of America</source>
          ,
          <volume>145</volume>
          (
          <issue>4</issue>
          ),
          <fpage>EL320</fpage>
          -
          <lpage>EL325</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Deep learning for acoustic source localization and tracking in a 3D space</article-title>
          .
          <source>IEEE/CAA Journal of Automatica Sinica</source>
          ,
          <volume>7</volume>
          (
          <issue>1</issue>
          ),
          <fpage>82</fpage>
          -
          <lpage>91</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Patel</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Habets</surname>
            ,
            <given-names>E. A. P.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Acoustic source localization and tracking using deep neural networks</article-title>
          .
          <source>IEEE/ACM Transactions on Audio, Speech, and Language Processing</source>
          ,
          <volume>28</volume>
          ,
          <fpage>1482</fpage>
          -
          <lpage>1495</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Opryshko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pasichnyk</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kiktev</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dudnyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hutsol</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mudryk</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Herbut</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Łyszczarz</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Kukharets</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>European Green Deal: Satellite Monitoring in the Implementation of the Concept of Agricultural Development in an Urbanized Environment</article-title>
          .
          <source>Sustainability</source>
          ,
          <volume>16</volume>
          ,
          <fpage>2649</fpage>
          . https://doi.org/10.3390/su16072649
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Murray</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Collins</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Real-time localization of moving sound sources using a distributed microphone array</article-title>
          .
          <source>Journal of Audio Engineering Society</source>
          ,
          <volume>67</volume>
          (
          <issue>7/8</issue>
          ),
          <fpage>526</fpage>
          -
          <lpage>537</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>