<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Improvement of Precise Vehicle Location in Urban Areas Using Video-based Photogrammetry</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">André</forename><surname>Pinhal</surname></persName>
							<email>apinhal@utwente.nl</email>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">University of Porto</orgName>
								<orgName type="institution" key="instit2">Observatório Astronómico</orgName>
								<address>
									<postCode>4430-146</postCode>
									<settlement>Vila Nova de Gaia</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">José</forename><surname>Gonçalves</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">University of Porto</orgName>
								<orgName type="institution" key="instit2">Observatório Astronómico</orgName>
								<address>
									<postCode>4430-146</postCode>
									<settlement>Vila Nova de Gaia</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">Interdisciplinary Centre of Marine and Environmental Research</orgName>
								<orgName type="laboratory">CIIMAR</orgName>
								<address>
									<postCode>4450-208</postCode>
									<settlement>Matosinhos</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Improvement of Precise Vehicle Location in Urban Areas Using Video-based Photogrammetry</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">DE5D4A563445B9B75E5E42D05FBD9387</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>RTK</term>
					<term>ambiguity fixing</term>
					<term>action camera</term>
					<term>MMS</term>
					<term>structure from motion</term>
					<term>point cloud</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper presents a system under development at the University of Porto, which integrates a Septentrio Mosaic GO GNSS receiver, with a helical antenna, and a GoPro action camera. Video frames can be processed by a Structure from Motion (SfM) approach to align sequential frames, derive the relative positions of the camera projection centres, and eventually create point clouds of the surrounding environment. The system has the camera and the antenna mounted in a block, which is placed on a rear-view mirror of a car. The camera acquires video in 4K mode, at 60 frames per second. Frames are extracted from the video at a frequency between 2 Hz and 10 Hz, depending on the car speed. The camera collects GNSS data with its own navigation receiver, allowing all extracted video frames to be tagged with GPS time and position. Due to the low accuracy of the camera receiver, its positions are discarded and only the GPS time is kept. This time is used to synchronize with the much more accurate positions obtained by the RTK receiver, connected to a CORS network. Obstacles present in urban areas frequently prevent high-accuracy positioning, so a trajectory made within an urban area will have some frames with very precise positions and others with much larger errors. All frames are photogrammetrically processed. Standard deviations of camera positions are considered in the bundle adjustment, allowing for the improvement of the camera positions of lower accuracy. The SfM processing also generates a point cloud of the surrounding objects, which can be densified. Objects identified on this point cloud can be used to assess the location accuracy of the process. Several experiments carried out with the system in the city of Porto confirmed that a positional accuracy of 10 cm can be achieved.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Precise positioning by GNSS RTK (Real Time Kinematics) is now very common in surveying and in GIS data collection for 3D city models. There are many surveying solutions for use in mobile mapping systems (MMS), integrating GNSS positioning, inertial navigation systems (INS) and data acquisition sensors, such as cameras or laser scanners <ref type="bibr" target="#b0">[1]</ref>. Popular brands, such as Riegl, Leica Geosystems or Trimble, among others, provide solutions that allow for the generation of very dense and accurate point clouds in urban environments, but at costs of several hundred thousand euros, and whose use is restricted to professional markets. Many users will be interested in cheaper systems, making use of the small and low-cost GNSS receivers now available, which can perform well in urban areas <ref type="bibr" target="#b1">[2]</ref>. There are several development devices, costing less than 1000 euros, which have triple frequency and full capacity for high-precision differential positioning, in real time or in post-processing. Regardless of the additional sensors associated with the GNSS receiver, it is currently possible to have high-precision kinematic positioning in a motor vehicle with this type of low-cost receiver.</p><p>As inevitably happens with GNSS positioning in urban environments, or generally in situations of major obstructions to signal propagation, RTK positioning with ambiguity fixing ("fix" solution), with errors of a few centimetres, will not be possible in significant parts of trajectories made in this type of environment. "Float" solutions will often result, with precision on the order of a few decimetres, or even "single point" solutions (SPP), with errors of a few metres. This limitation is normally overcome with inertial navigation systems, significantly increasing the cost of an MMS. 
The system under development aims to solve this problem through the use of sequences of images acquired by a small video camera. Photogrammetric techniques can be applied to the image sequences to determine the relative orientation of the images, thus contributing to a more complete positioning solution.</p><p>The system is intended to use cameras classified as "action cameras". These cameras are compact, rugged, and versatile, capturing both video and still images in outdoor conditions. They are very popular among adventure enthusiasts, who are often also interested in geolocating images and videos. For this reason, there are several models that incorporate a GPS navigation receiver, allowing photographs, and video as well, to be geotagged. That is the case of GoPro cameras, since the launch of the Hero 5 model. These cameras acquire video in MP4 format, which can accommodate GPS data, to be later extracted using specific software <ref type="bibr" target="#b2">[3]</ref> provided by the camera manufacturer <ref type="bibr" target="#b3">[4]</ref>. In this way it is possible to obtain the GPS position and GPS time of every individual video frame.</p><p>Photogrammetry has seen major developments in recent years, through the incorporation of methodologies derived from computer vision. It was mainly an algorithm developed by David Lowe <ref type="bibr" target="#b4">[5]</ref>, SIFT (Scale Invariant Feature Transform), that made the automatic extraction of common points between images much easier, even for images with variations in orientation and scale. This availability of conjugate points is combined with bundle adjustment methods already capable of handling large image blocks. The process is also applicable to cameras for which only approximate internal orientation parameters are known, by integrating a self-calibration process in the bundle adjustment. 
This largely automated methodology of image orientation came to be designated in some communities as "Structure from Motion" (SfM), allowing the processing of aerial or terrestrial images.</p><p>In the present system a video is acquired, and frames are extracted and oriented ("aligned", in the terminology associated with SfM) in relative terms. Provided that the coordinates of the projection centres are known for some of the images, all images in the block will have their positions determined. In this way it is possible to complete or correct the camera trajectory.</p></div>
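The way a subset of absolutely positioned cameras anchors a relatively oriented block can be illustrated, in a simplified form, by a closed-form similarity (Helmert) transformation between the relative SfM projection centres and the known GNSS positions. The sketch below uses Umeyama's SVD-based method; it illustrates the geometric principle only and is not the bundle adjustment actually used in the system.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R and translation t such that
    dst ≈ s * src @ R.T + t (Umeyama, closed form).
    src, dst: (N, 3) arrays of corresponding 3D points, N >= 3."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    H = dst_c.T @ src_c / len(src)              # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:               # guard against a reflection
        D[2, 2] = -1.0
    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / len(src)     # mean squared spread of src
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

def apply_similarity(points, s, R, t):
    """Map relative SfM coordinates into the absolute frame."""
    return s * points @ R.T + t
```

With s, R and t estimated from the cameras that have reliable absolute positions, `apply_similarity` would map every camera of the block into the GNSS frame.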
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Description of the hardware implemented</head><p>The system described in this paper incorporates a Septentrio Mosaic GO X5 GNSS receiver, which has a compact and lightweight design, suitable for integration into platforms such as drones and handheld devices. It supports multiple satellite constellations, including GPS, GLONASS, Galileo and BeiDou, and performs phase measurements on three different frequencies. The receiver is mounted in a box, which is attached to the back of the camera, and a helical antenna is placed on top of the box. A small power bank is mounted next to the receiver, which, once turned on, starts recording a file in the Septentrio Binary Format (SBF). When connected to a CORS (Continuous Operation Reference Station) network, the receiver provides reliable real-time kinematic (RTK) positioning. The receiver was operated at 10 Hz, providing positions with enough density to properly model the curved trajectories of a car moving in a city.</p><p>The operation of the system requires the use of a smartphone connected to the internet. An application runs on the smartphone, allowing the user to control the receiver, receive NTRIP corrections from a CORS network, carry out the differential correction and monitor the receiver's performance. In the present case an Android smartphone was used, running the SW-Maps application<ref type="foot" target="#foot_0">1</ref> , which is free software and includes all those capabilities. Although positions may be recorded in the app, they are retrieved from the receiver memory, in the form of the SBF file and in NMEA format (National Marine Electronics Association), for the RTK positions.</p><p>The camera has a fixing piece to be mounted, together with the receiver, on the side rear-view mirror of a car. Once the camera and the receiver are turned on, both can be controlled from inside the vehicle, with the smartphone. 
Figure <ref type="figure" target="#fig_0">1</ref> shows the block mounted on the right rear-view mirror of a car. The camera optical axis is horizontal, rotated by 15 degrees to the right of the vehicle axis. The camera has a GNSS receiver, which works at a frequency of 18 Hz. After the camera is turned on, the user should check that a GPS position is available and only then start the video recording. The receiver must be connected beforehand and should, preferably, have an RTK fixed position when the camera starts the video. Note that there is no electronic connection between the two devices; synchronization between images and RTK positions is done through the GPS time recorded by both.</p><p>The camera acquires video in 4K resolution, with a dimension of 3840 by 2160 pixels, at a rate of 60 fps (frames per second). Action cameras are known for having a large field of view, at the cost of a significant radial deformation. Although images can be used in this way, preference was given to the image mode called "linear", which consists of the application of a general correction model to produce images with a regular central projection. The resulting image has an equivalent focal distance of approximately 1800 pixels, still a very wide angle. Small additional corrections, in the focal distance, principal point position and radial distortion coefficients, characteristic of each camera unit, may be necessary, but these will be handled in the processing step, through a self-calibration incorporated in the SfM bundle adjustment.</p><p>Videos taken from the car are processed to extract navigation information from the camera, which is carried out by programs from the GPMF library<ref type="foot" target="#foot_1">2</ref> . Information is extracted in groups of 60 frames, that is, every 1.001 seconds of the video, and includes the video time of each group, position, speed and UTC time, in seconds of the day. 
Table <ref type="table" target="#tab_0">1</ref> shows an example of this information. Positions result from the camera navigation receiver and will, in fact, be discarded. The main information is the video time, which is transformed into a frame number, and UTC. Frames are extracted from the video, in JPEG format, at a cadence suitable for the overlaps between consecutive images to be adequate for the alignment process to succeed. For the conditions under which the system was operated, a 4 Hz cadence proved to be adequate. In situations of higher speed, and especially when there is rotation, the cadence may be increased. In cases where the vehicle is stopped, for example at traffic lights, the corresponding frames may be discarded if the GPMF data shows a speed lower than a tolerance, for example 0.5 m/s.</p><p>The positions collected by the GNSS receiver are retrieved in the NMEA format, essentially through the GGA and GST messages <ref type="bibr" target="#b5">[6]</ref>, which provide UTC time, position, and quality of the position (Q), with values of 1 for SPP, 4 for FIX and 5 for FLOAT. Standard deviations are also obtained, which will be of a few centimetres in the case of Q=4. Surveys were carried out with a connection to the Portuguese permanent station network ReNEP ("Rede Nacional de Estações Permanentes"). Although post-processing can be done over the SBF data, in general RTK positioning could be obtained wherever obstructions allowed for it. Table <ref type="table" target="#tab_1">2</ref> shows a sample of data extracted from the NMEA files, with a loss of ambiguity fix and a sudden increase of the estimated precision. A linear interpolation is then carried out in order to calculate camera positions at frame times. This requires a prior calibration of the relation between frame number and UTC, which is described in the next section. 
There is also a small offset between the antenna phase centre and the centre of the camera, of approximately 4 cm in the horizontal component and 4 cm in the vertical. In a first approximation this is not being corrected, since it is smaller than what is initially expected for the system accuracy. The camera GPS receiver has a frequency of 18 Hz, i.e., 3.3 times smaller than the video frame rate. Assuming an error of 1.5 frames in the synchronization, i.e., 0.025 seconds, for a car moving at a speed of 30 km/h, the error corresponds to a distance of 20 cm. For that reason, we put the expectations at the level of 1 or 2 decimetres. The procedure for correction of the small lever arm is nevertheless described in the calibration section.</p><p>Finally, all the selected frames were aligned in a photogrammetric software package that applies the SfM concept, which in our case was Agisoft Metashape <ref type="bibr" target="#b6">[7]</ref>. Interpolated positions are provided only for those images that had Q=4. A standard deviation of 0.2 m was considered for the least squares adjustment. As a result of the bundle adjustment, the positions of all images are obtained; these will be analysed independently.</p></div>
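The interpolation of RTK positions at frame times, keeping only frames bracketed by FIX (Q=4) epochs, can be sketched as follows. The array names, the local (E, N, h) coordinate frame and the gap tolerance are illustrative assumptions, not the exact implementation used in the system.

```python
import numpy as np

def interpolate_fix_positions(frame_utc, rtk_utc, rtk_pos, rtk_q, max_gap=0.3):
    """Linearly interpolate RTK positions at frame times.

    frame_utc : (M,) UTC times (seconds of day) of the extracted frames
    rtk_utc   : (N,) UTC times of the receiver epochs (10 Hz), ascending
    rtk_pos   : (N, 3) positions, e.g. local (E, N, h) in metres
    rtk_q     : (N,) NMEA quality flag (1 = SPP, 4 = FIX, 5 = FLOAT)
    max_gap   : accept only brackets closer than this many seconds

    Returns a dict frame_index -> interpolated (3,) position, containing
    only the frames bracketed by two FIX (Q=4) epochs.
    """
    out = {}
    for m, t in enumerate(frame_utc):
        i = np.searchsorted(rtk_utc, t)
        if i == 0 or i == len(rtk_utc):
            continue                      # frame outside the receiver record
        t0, t1 = rtk_utc[i - 1], rtk_utc[i]
        if rtk_q[i - 1] != 4 or rtk_q[i] != 4:
            continue                      # discard non-FIX brackets
        if t1 - t0 > max_gap:
            continue                      # bracketing epochs too far apart
        w = (t - t0) / (t1 - t0)
        out[m] = (1 - w) * rtk_pos[i - 1] + w * rtk_pos[i]
    return out
```

Frames that fall outside a FIX bracket simply get no observed position and rely on the bundle adjustment, as described above.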
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">System calibration</head><p>In order to process the image data and RTK positions collected, some calibration steps are necessary, especially regarding the times to be assigned to the frames extracted from the video.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Time calibration</head><p>As seen in Table <ref type="table" target="#tab_1">2</ref>, the UTC times do not correspond to the exact moments of the first frame of each block of 60 frames, since they are not equally spaced. A linear relationship was established between frame number, N f , and UTC time, through,</p><formula xml:id="formula_0">UTC = A 0 + A 1 N f ,<label>(1)</label></formula><p>where A 0 is the time of the first frame (starting the count at frame zero) and A 1 is the frame period (the inverse of the frame rate). This allows a more reliable value of the UTC time of the first frame to be determined.</p><p>An independent validation of this assessment of the time of the first frame was done with a small flashlight, connected to the GNSS receiver, fired in front of the camera. The precise time of the flash event is recorded on the GNSS receiver, and with a few flashes along a video the calibration can also be done with the same formula. Very similar results were obtained, with differences in the time of the first frame of around 20 ms, i.e., only slightly more than one frame. Figure <ref type="figure" target="#fig_1">2</ref> shows an image of the flash in a frame. </p></div>
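The linear relation of Eq. (1) amounts to an ordinary least-squares fit over the GPMF time stamps. A minimal sketch, where the sample values are illustrative (in the style of Table 1), not the actual data:

```python
import numpy as np

# Frame number of the first frame of each 60-frame GPMF group, and the
# UTC time (seconds of day) reported for each group.  Illustrative values.
n_f = np.array([0.0, 60.0, 120.0, 180.0])
utc = np.array([52526.179, 52527.169, 52528.159, 52529.204])

# Least-squares fit of Eq. (1): UTC = A0 + A1 * Nf
A1, A0 = np.polyfit(n_f, utc, 1)

print(f"A0 (UTC of frame 0)    = {A0:.3f} s of day")
print(f"A1 (seconds per frame) = {A1:.6f} s  ->  {1.0 / A1:.2f} fps")
```

A0 then dates every frame as A0 + A1·Nf, which is what the flashlight experiment validates independently.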
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Lever arm between receiver and camera</head><p>As mentioned before, there is a lever arm between the antenna and the camera, which is of only a few centimetres and, in a first approach, is not being considered. There is no attitude sensing, so the coordinate transfer must rely on the orientation of the trajectory. As the vehicle moves approximately in a horizontal plane, the azimuth of the trajectory can be estimated and the corresponding rotation applied to the vector between antenna and camera. In the vertical component there is only the need to subtract the height difference between the phase centre and the camera. The actual projection centre position is not known, but since the focal distance of the lens is only 3 mm, it was assumed to be at the centre of the lens, outside the camera body.</p></div>
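The lever-arm reduction described above (a horizontal rotation by the trajectory azimuth plus a vertical height subtraction) could be sketched like this. The 4 cm offsets come from the text, but the assumed direction of the horizontal offset (along the vehicle axis) and the local East/North frame are illustrative assumptions:

```python
import numpy as np

def trajectory_azimuth(p_prev, p_next):
    """Azimuth of motion (radians, clockwise from North) estimated
    from two successive (E, N, ...) positions on the trajectory."""
    de = p_next[0] - p_prev[0]
    dn = p_next[1] - p_prev[1]
    return np.arctan2(de, dn)

def camera_position(antenna_enh, azimuth, lever_forward=0.04, lever_down=0.04):
    """Reduce an antenna phase-centre position (E, N, h) to the camera,
    rotating the body-frame lever arm by the trajectory azimuth.
    Offset values and direction are illustrative."""
    de = lever_forward * np.sin(azimuth)   # horizontal lever arm, rotated
    dn = lever_forward * np.cos(azimuth)
    e, n, h = antenna_enh
    return np.array([e + de, n + dn, h - lever_down])
```

For a vehicle heading North (azimuth 0) the horizontal offset is applied entirely along North; for motion due East it is applied along East.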
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Assessment of system performance</head><p>The system was tested in some urban environments, in a first approach in areas without extremely difficult signal reception. A route was taken in an urban residential area of the city of Porto, without tall buildings, but with trees along the streets. The route began and ended at the same point, included some crossings and repeated some sections; it had a total length of 3.2 km. Images were extracted at a rate of 4 Hz, for a total of 2464 images. Figure <ref type="figure" target="#fig_2">3</ref> shows, on the left side, the path followed, over the Google Maps image base, and on the right, two examples of frames from the video, one in a more unobstructed area and another with more tree cover.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Image alignment</head><p>After interpolation of RTK positions for all images, it was observed that 72% were FIX-type positions. The images were loaded into the Agisoft Metashape program and a first alignment was made, at this step without coordinates of the projection centres. All images were successfully aligned. The coordinates of the images that had FIX were then inserted, with an a priori accuracy value of 0.2 metres in the three coordinates. The bundle adjustment was reprocessed, resulting in adjusted coordinates for all images. The effect of the trajectory quality improvement is observed in places where the positions were not FIX. Figure <ref type="figure" target="#fig_3">4</ref> shows, on the left, an area where there is an interruption in the FIX positions (blue dots), in a total of nearly 40 images. After aligning the images, the positions were regularized, resulting in a much smoother trajectory that was in line with expectations. Although visually there is a qualitative improvement in the trajectory, and a very small change in the sections where there was FIX, this assessment has some subjectivity, so it is preferable to have a numerical evaluation of the error.</p><p>The simplest way to make this assessment is through checkpoints, whose coordinates can be determined photogrammetrically through the images. A total of 14 well-defined points were selected and identified in the final photogrammetric project, on the images where they are observed with the highest quality. This results in coordinates for these points. Subsequently, the points were surveyed on the terrain, with GNSS, and the corresponding three-dimensional errors were evaluated. Figure <ref type="figure" target="#fig_4">5</ref> shows the location of the checkpoints and the location of two of them in the images. Average errors are small, not evidencing systematic trends. 
The RMSEs are of the order of one decimetre, agreeing with the initial expectation. These were the first tests, and they are quite promising. More performance assessments of the system, in diverse conditions, will be carried out in the near future.</p></div>
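The checkpoint statistics of Table 3 (AVG, RMSE and MAX per coordinate) follow the usual definitions and can be computed from the per-point signed errors as below; the sample errors are made up for illustration, not the actual residuals.

```python
import numpy as np

def error_stats(err_cm):
    """AVG, RMSE and MAX (absolute) of a vector of signed errors, in cm."""
    err = np.asarray(err_cm, dtype=float)
    return {
        "AVG":  err.mean(),
        "RMSE": np.sqrt((err ** 2).mean()),
        "MAX":  np.abs(err).max(),
    }

# Illustrative (made-up) longitude errors for a few check points
print(error_stats([-3.0, 4.0, -12.0, 5.0]))
```

A small AVG relative to the RMSE, as observed in Table 3, is what indicates the absence of systematic trends.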
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions and future work</head><p>A positioning system for the precise location of vehicles in urban environments is under development at the University of Porto. The system integrates video imagery processed by SfM in order to contribute to an integrated trajectory solution. This makes it possible to fill the positioning gaps that occur in urban environments. Initial tests point to a possible accuracy at the decimetre level. More tests will be carried out in order to assess the system performance in a diversity of environments with strong obstructions, both in urban areas and in forested environments.</p><p>Several improvements to the system will be developed soon, namely in the temporal synchronization process, for example using new models of action cameras with higher frame rates. It is also intended to improve the reduction of the GNSS position to the camera projection centre.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: GoPro camera and Septentrio Mosaic GO X5 GNSS receiver, mounted in a block on a side rear-view mirror of a car.</figDesc><graphic coords="3,173.54,104.26,248.18,331.17" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Example of a flash light fired in front of the camera, to assess GPS time of a frame.</figDesc><graphic coords="5,150.98,335.43,293.32,165.18" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: On the left: trajectory over the Google Maps image background; on the right: examples of frames, with fewer trees (top) and more trees (bottom).</figDesc><graphic coords="6,105.84,289.86,383.59,189.26" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Example of part of the trajectory where some camera positions were not FIX (Float or SPP), on the left. The right image shows the regularized trajectory.</figDesc><graphic coords="6,150.98,536.47,293.32,197.14" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Location of the check points (left) and examples of two points considered (right).</figDesc><graphic coords="7,105.84,199.49,383.58,189.10" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Sample of GPMF data extracted</figDesc><table><row><cell>Tvideo, s</cell><cell>Latitude, °</cell><cell>Longitude, °</cell><cell>Altitude, m</cell><cell>Speed, m/s</cell><cell>UTC, seconds of day</cell></row><row><cell>0.001</cell><cell>41.1550155</cell><cell>-8.6617334</cell><cell>122.159</cell><cell>0.142</cell><cell>52526.179</cell></row><row><cell>1.001</cell><cell>41.1550151</cell><cell>-8.6617350</cell><cell>122.110</cell><cell>0.242</cell><cell>52527.169</cell></row><row><cell>2.001</cell><cell>41.1550131</cell><cell>-8.6617353</cell><cell>122.226</cell><cell>0.117</cell><cell>52528.159</cell></row><row><cell>3.001</cell><cell>41.1550124</cell><cell>-8.6617346</cell><cell>122.295</cell><cell>0.143</cell><cell>52529.204</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Sample of Septentrio receiver positions, type of position (Q), number of satellites and estimated precision (standard deviations, in metres)</figDesc><table><row><cell>UTC, seconds</cell><cell>Latitude, °</cell><cell>Longitude, °</cell><cell>Altitude, m</cell><cell>Q</cell><cell>N. sat</cell><cell>σLAT, m</cell><cell>σLON, m</cell><cell>σALT, m</cell></row><row><cell>53129.4</cell><cell>41.15580621</cell><cell>-8.66189798</cell><cell>64.228</cell><cell>4</cell><cell>6</cell><cell>0.044</cell><cell>0.016</cell><cell>0.098</cell></row><row><cell>53129.5</cell><cell>41.15580270</cell><cell>-8.66189887</cell><cell>64.314</cell><cell>4</cell><cell>6</cell><cell>0.045</cell><cell>0.016</cell><cell>0.099</cell></row><row><cell>53129.6</cell><cell>41.15579907</cell><cell>-8.66189934</cell><cell>64.177</cell><cell>1</cell><cell>14</cell><cell>6.029</cell><cell>1.598</cell><cell>9.056</cell></row><row><cell>53129.7</cell><cell>41.15579563</cell><cell>-8.66189954</cell><cell>64.188</cell><cell>1</cell><cell>14</cell><cell>6.029</cell><cell>1.598</cell><cell>9.056</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>Statistics of the errors found in the independent check points</figDesc><table><row><cell>Coordinate</cell><cell>No. points</cell><cell>AVG, cm</cell><cell>RMSE, cm</cell><cell>MAX, cm</cell></row><row><cell>Longitude</cell><cell>14</cell><cell>-1.9</cell><cell>9.3</cell><cell>14.6</cell></row><row><cell>Latitude</cell><cell>14</cell><cell>-0.8</cell><cell>11.6</cell><cell>20.8</cell></row><row><cell>Altitude</cell><cell>14</cell><cell>-4.4</cell><cell>5.2</cell><cell>12.6</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">http://softwel.com.np/mobile_products</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://github.com/gopro/gpmf-parser</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work was developed within project 4Map4Health (CHIST-ERA/0006/2019), financed by the Portuguese Foundation for Science and Technology, under the ERA-NET CHIST-ERA programme.</p><p>The GNSS RTK positioning was done with the ReNEP permanent stations of the Directorate General for Territorial Development (DGT).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A Review of Mobile Mapping Systems: From Sensors to Applications</title>
		<author>
			<persName><forename type="first">M</forename><surname>Elhashash</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Albanwan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Qin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page">4262</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Low-Cost Dual-Frequency GNSS Receivers and Antennas for Surveying in Urban Areas</title>
		<author>
			<persName><forename type="first">V</forename><surname>Hamza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Stopar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Sterle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pavlovčič-Prešeren</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page">2861</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Validation of Telemetry Data Acquisition Using GoPro Cameras</title>
		<author>
			<persName><forename type="first">K</forename><surname>Petroskey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Funk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">A</forename><surname>Tibavinsky</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">SAE Technical Paper</note>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><surname>Gopro</surname></persName>
		</author>
	<ptr target="https://github.com/gopro/gpmf-parser" />
		<title level="m">Metadata Format -GPMF, Processing software</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Distinctive image features from scale-invariant keypoints</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">G</forename><surname>Lowe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Vision</title>
		<imprint>
			<biblScope unit="volume">60</biblScope>
			<biblScope unit="page" from="91" to="110" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Compatibility of NMEA GGA with GPS receivers implementation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Ardalan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Awange</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">GPS Solutions</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="1" to="3" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">Agisoft</forename><surname>Metashape</surname></persName>
		</author>
		<ptr target="https://www.agisoft.com/pdf/metashape-pro_2_1_en.pdf" />
		<title level="m">Agisoft Metashape User Manual, Professional Edition</title>
				<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="volume">2</biblScope>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
