<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Traffic-sign Recognition for Visually Impaired Pedestrians in Kyrgyzstan: Two-keypoint SIFT/BRISK Descriptor with CameraX</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Ayman</forename><surname>Aljarbouh</surname></persName>
							<email>ayman.aljarbouh@ucentralasia.org</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Central Asia</orgName>
								<address>
									<addrLine>125/1 Toktogul Street</addrLine>
									<postCode>720001</postCode>
									<settlement>Bishkek</settlement>
									<country key="KG">Kyrgyzstan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Dmytro</forename><surname>Zubov</surname></persName>
							<email>dzubov@ieee.org</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Central Asia</orgName>
								<address>
									<addrLine>125/1 Toktogul Street</addrLine>
									<postCode>720001</postCode>
									<settlement>Bishkek</settlement>
									<country key="KG">Kyrgyzstan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrey</forename><surname>Kupin</surname></persName>
							<email>kupin@knu.edu.ua</email>
							<affiliation key="aff1">
								<orgName type="institution">Kryvyi Rih National University</orgName>
								<address>
									<addrLine>11 Vitaly Matusevich, Kryvyi Rih</addrLine>
									<postCode>50027</postCode>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nurlan</forename><surname>Shaidullaev</surname></persName>
							<email>nurlan.shaidullaev@ucentralasia.org</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Central Asia</orgName>
								<address>
									<addrLine>125/1 Toktogul Street</addrLine>
									<postCode>720001</postCode>
									<settlement>Bishkek</settlement>
									<country key="KG">Kyrgyzstan</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Traffic-sign Recognition for Visually Impaired Pedestrians in Kyrgyzstan: Two-keypoint SIFT/BRISK Descriptor with CameraX</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">1C003E47A0324A58CC9F29E4F1D8553D</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:15+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Two-keypoint descriptor</term>
					<term>visually impaired</term>
					<term>SIFT</term>
					<term>BRISK</term>
					<term>Android CameraX</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The traffic-sign recognition system developed in this study aims to assist the spatial cognition and mobility of visually impaired pedestrians in Kyrgyzstan. The system employs a two-keypoint binary descriptor that implements the BRISK algorithm to find sampling patterns on the image. Pairs of keypoints are localized using the SIFT method. The developed Java Android mobile application implements the SIFT and BRISK approaches in real time on Android CameraX using AdaBoost classifiers and multithreading. With a knowledge base of 86 sampling patterns, the execution time is 0.1 s for an example with the traffic sign "Crosswalk left". In experiments conducted at distances from 1.5 m to 3.5 m in the city of Naryn, Kyrgyzstan, the presented SIFT/BRISK detector demonstrated a true negative rate of 100 % and a true positive rate close to 100 % (the Blackview BV6600 Pro and Doogee S96 Pro smartphones achieved 100 % and 75 %, respectively, at 3.5 m). This pilot project is expected to continue with more precise image descriptors for longer distances.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The visually impaired and blind (VIBs) have made significant progress in social integration over the last five decades. This achievement is mostly based on inclusive smart technologies that create synergies between the community and VIBs <ref type="bibr" target="#b0">[1]</ref>. Despite numerous assistive mobile applications for spatial cognition (e.g., BDS (BeiDou Navigation Satellite System) WeChat and Google Maps), VIB navigation <ref type="bibr" target="#b1">[2]</ref> remains problematic for the last mile, such as finding an entrance or identifying traffic signs <ref type="bibr" target="#b2">[3]</ref><ref type="bibr" target="#b3">[4]</ref><ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref><ref type="bibr" target="#b7">[8]</ref><ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref>.</p><p>In this study, a Java Android mobile application was developed to detect and recognize Kyrgyz traffic signs using the SIFT and BRISK (Scale-Invariant Feature Transform and Binary Robust Invariant Scalable Keypoints) methods <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13]</ref>: SIFT localizes the keypoints, and a two-keypoint binary descriptor based on BRISK finds the sampling patterns. The true positive (i.e., recognition accuracy) and true negative (i.e., avoidance of crucial mistakes) rates <ref type="bibr" target="#b13">[14]</ref> are expected to be near 100 % at distances from 1.5 m to 3.5 m from the traffic sign.</p><p>To support VIBs, a new mobile application was developed to recognize traffic signs in Kyrgyzstan. Two key problems were solved in this study:</p><p>1. A novel image-processing technique was applied. In the preprocessing step, the method Bitmap.createScaledBitmap creates a new bitmap scaled to a maximum dimension of 500 pixels with bilinear filtering. From up to four hundred keypoints detected by the SIFT method, two keypoints are selected. Then, the binary two-keypoint BRISK descriptor is designed employing a 291-point shape. The Hamming distance threshold is 19600 after five AdaBoost weak classifiers, which yields a true positive rate close to 100 % (100 % on the smartphone Blackview BV6600 Pro and 75 % on the Doogee S96 Pro at 3.5 m) and a true negative rate of 100 % at distances from 1.5 m to 3.5 m.</p><p>2. A multithreaded Java Android application was developed using the CameraX library <ref type="bibr" target="#b14">[15]</ref>. The image, the smartphone soft-/hardware, and the number of keypoints affect the execution time. In the experiment with the traffic sign "Crosswalk left", the smartphone Doogee S96 Pro takes 0.1 s to find the sampling pattern using the knowledge base with 86 elements.</p></div>
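The preprocessing step above can be sketched in plain Java: the snippet below computes the target dimensions so that the larger side of the frame does not exceed 500 pixels. The helper class name and the rounding policy are illustrative assumptions; on Android, the resulting size would be passed to Bitmap.createScaledBitmap with bilinear filtering enabled.

```java
public class ScaleToMax {
    // Compute target width/height so the larger side equals maxDim (500 in the paper),
    // preserving the aspect ratio; frames already small enough are left unchanged.
    public static int[] scaledSize(int width, int height, int maxDim) {
        int larger = Math.max(width, height);
        if (larger <= maxDim) {
            return new int[] {width, height};
        }
        double scale = (double) maxDim / larger;
        return new int[] {(int) Math.round(width * scale),
                          (int) Math.round(height * scale)};
    }

    public static void main(String[] args) {
        // A 2-megapixel frame (1600x1200), as in the paper's experiments.
        int[] s = scaledSize(1600, 1200, 500);
        System.out.println(s[0] + "x" + s[1]); // 500x375
    }
}
```

On Android, the call would be Bitmap.createScaledBitmap(src, s[0], s[1], true), where the last argument enables bilinear filtering.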
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Works</head><p>The World Health Organization reported that over 2.2 billion people experienced vision impairment worldwide in 2021 <ref type="bibr" target="#b9">[10]</ref>, and hence assistive tools are in continuous demand. Traffic-sign recognition systems for cars <ref type="bibr" target="#b2">[3]</ref> are widely available on the market. The leading approach is based on convolutional neural networks and specific datasets, e.g., Tunisian traffic signs <ref type="bibr" target="#b3">[4]</ref>.</p><p>The percentage of wrongly recognized signs can reach 25 % <ref type="bibr" target="#b2">[3]</ref>, which is unacceptable for VIBs. Analysis of existing commercial products for VIBs, such as those referenced in <ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref><ref type="bibr" target="#b7">[8]</ref><ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref>, shows that they do not support the recognition of traffic signs related to pedestrians. Hence, developing a mobile application with this functionality is a crucial task for supporting VIB navigation near roads. Moreover, reusing existing technologies is reasonable since it speeds up development, as was done in this study. The distance between the VIB and the traffic sign is assumed to be up to 4 m, the estimated width of the pedestrian path. The analyzed traffic signs are assumed to be of good quality and produced according to state standards. Google's Android platform has held over 70 % of the market share for the last five years. The CameraX Android API (application programming interface) is Google's native approach for working with different cameras on Android smartphones. CameraX is a Jetpack support library, considered the easiest way to build an Android camera application.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methods</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Architecture of the two-keypoint SIFT detector and BRISK descriptor with the CameraX Android API</head><p>The two-keypoint SIFT detector and BRISK descriptor with the CameraX Android API employ the method presented in <ref type="bibr" target="#b15">[16]</ref> and consist of three steps (see Figure <ref type="figure" target="#fig_0">1</ref>):</p><p>1. Keypoint localization: The SIFT method is used to localize keypoints on the template image.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Image descriptor design:</head><p>The BRISK method is employed to design image descriptors using pairs of keypoints that are empirically determined by an expert. This procedure will be automated in the future.</p><p>3. Image matching: Image capturing is performed using the Android CameraX library. The image is then downsampled with bilinear filtering in the method Bitmap.createScaledBitmap and converted from RGB (Red-Green-Blue) format to grayscale using the luminosity function <ref type="bibr" target="#b16">[17]</ref>. The best pairs of keypoints calculated by the SIFT algorithm are selected for the design of the BRISK binary descriptor. Image matching uses AdaBoost classifiers and the Hamming distance <ref type="bibr" target="#b17">[18]</ref> (see Figure <ref type="figure" target="#fig_1">2</ref>). Figure <ref type="figure" target="#fig_2">3</ref> presents two flowcharts: one for keypoint localization with the SIFT method and image descriptor design with the BRISK method (on the left), and another for the image matching algorithm (on the right).</p></div>
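The RGB-to-grayscale conversion used in the matching step can be sketched as a minimal, non-Android Java method. The ARGB bit layout below mirrors Android's Color.red/green/blue helpers, and the 0.21/0.72/0.07 weights are those of the luminosity method cited in the text; the rounding choice is an assumption.

```java
public class Luminosity {
    // Luminosity grayscale conversion: I = 0.21*R + 0.72*G + 0.07*B.
    // Channel extraction mimics Android's Color.red/green/blue on a packed ARGB int.
    public static int gray(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        return (int) Math.round(0.21 * r + 0.72 * g + 0.07 * b);
    }

    public static void main(String[] args) {
        System.out.println(gray(0xFFFFFFFF)); // white -> 255
        System.out.println(gray(0xFF000000)); // black -> 0
    }
}
```

The three weights sum to 1.0, so pure white maps to 255 and pure black to 0 without clipping.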
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Classification of Kyrgyz traffic signs for pedestrians</head><p>As of August 2023, Kyrgyzstan had over 200 road signs <ref type="bibr" target="#b18">[19]</ref>, including 13 related to pedestrians (see Table <ref type="table">1</ref>).</p><p>The crosswalk signs "Crosswalk left", "Crosswalk right", and "Zebra crossing" are combined into the group "Crosswalk", and the signs "Emergency exit left/right" into the group "Emergency exit".</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">SIFT keypoint localization</head><p>For the SIFT keypoint localization, the AdaBoost cascade classifier is employed in this study. In this approach, scale-invariant locations of keypoints are searched across different scales. The convolution of the two-dimensional Gaussian function G(x, y, σ) and the input grayscale image I(x, y) gives a filtered image:</p><formula xml:id="formula_0">L(x, y, σ) = G(x, y, σ) * I(x, y),<label>(1)</label></formula><p>where '*' is the convolution operation, σ is the population standard deviation, x and y are the pixel coordinates, and the Gaussian function is:</p><formula xml:id="formula_1">G(x, y, σ) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²)).<label>(2)</label></formula><p>In this study, the luminosity function <ref type="bibr" target="#b16">[17]</ref> converts the image from RGB to greyscale I(x, y) using the Android class Color (Eq. (3)).</p><p>Unstable extrema are rejected if |D(x̂)| &lt; 0.03. The extremum D(x̂) can be found by combining Eq. ( <ref type="formula" target="#formula_3">6</ref>) and Eq. ( <ref type="formula">5</ref>):</p><formula xml:id="formula_4">D(x̂) = D + (1/2) (∂D/∂x)ᵀ x̂.<label>(7)</label></formula><p>In this study, edges are detected employing a 2×2 Hessian matrix <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b19">20]</ref>:</p><formula xml:id="formula_5">H = [Dxx Dxy; Dxy Dyy],<label>(8)</label></formula><p>where the derivatives Dxx, Dxy, and Dyy are as follows:</p><formula xml:id="formula_6">Dxx = D(x+1, y, σ) + D(x−1, y, σ) − 2 * D(x, y, σ),<label>(9)</label></formula><formula xml:id="formula_7">Dyy = D(x, y+1, σ) + D(x, y−1, σ) − 2 * D(x, y, σ),<label>(10)</label></formula><formula xml:id="formula_8">Dxy = (D(x+1, y+1, σ) − D(x+1, y−1, σ) − D(x−1, y+1, σ) + D(x−1, y−1, σ)) / 4.<label>(11)</label></formula><p>To reduce the number of keypoints, the following inequality should be satisfied <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b19">20]</ref>:</p><formula xml:id="formula_9">0 &lt; Tr(H)² / Det(H) &lt; 12.1.<label>(12)</label></formula></div>
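The edge-rejection test of Eqs. (8)-(12) reduces to a few arithmetic operations per keypoint. A minimal sketch (the method name is an illustrative assumption; 12.1 is the paper's bound, i.e., (r+1)²/r for a principal-curvature ratio r = 10):

```java
public class EdgeResponse {
    // Keep a keypoint only if 0 < Tr(H)^2 / Det(H) < 12.1,
    // where H = [Dxx Dxy; Dxy Dyy] is the 2x2 Hessian of the DoG image.
    public static boolean passesEdgeTest(double dxx, double dyy, double dxy) {
        double tr = dxx + dyy;
        double det = dxx * dyy - dxy * dxy;
        if (det <= 0) {
            return false; // curvatures of opposite sign: reject
        }
        double ratio = tr * tr / det;
        return ratio > 0 && ratio < 12.1;
    }

    public static void main(String[] args) {
        System.out.println(passesEdgeTest(2.0, 2.0, 0.0));  // blob-like: true (ratio 4)
        System.out.println(passesEdgeTest(20.0, 0.1, 0.0)); // edge-like: false
    }
}
```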
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Two-keypoint BRISK descriptor design</head><p>In this study, the BRISK algorithm employs a binary descriptor with 291 points to depict the template image. The orientation and scale of the sampling pattern are calculated using the positions of two keypoints. In the present software version, a human expert chooses the two keypoints by examining keypoints with a consistent location at various octaves.</p><p>In the binary descriptor, the 291 points are split as follows: 0-24 (1st group), 25-82 (2nd group), and 83-290 (3rd group). Figure <ref type="figure" target="#fig_4">4</ref> shows an example of the descriptor for the traffic sign "Above ground pedestrian crossing" (greyscale representation): (A) points 0-24, (B) points 25-82, (C) points 83-290, (D) points 0-290. The spacing of the points is derived from the Euclidean distance between the two keypoints: for a 1st-group point on the line connecting the two keypoints, the distance to the nearest point is one-third of the distance between the two keypoints. Each point n is associated with the mean pixel intensity I(n) in a circle of radius 1/24, 1/16, or 1/12 of the Euclidean distance E between the two keypoints n1 and n2 <ref type="bibr" target="#b15">[16]</ref>.</p></div>
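The geometry above can be sketched as a small Java helper that maps a descriptor point index to its sampling radius. Note a loudly labeled assumption: the paper lists the three radii (E/24, E/16, E/12) without stating which radius belongs to which point group, so the mapping below (smallest radius for the 1st group, largest for the 3rd) is illustrative only.

```java
public class BriskPattern {
    // Sampling radius for a descriptor point (Section 3.4): each point n is assigned
    // the mean intensity inside a circle of radius E/24, E/16, or E/12, where E is
    // the Euclidean distance between the two chosen keypoints.
    // ASSUMPTION: the group-to-radius mapping below is illustrative; the paper does
    // not tie each radius to a specific group.
    public static double samplingRadius(int pointIndex, double keypointDistance) {
        if (pointIndex < 0 || pointIndex > 290) {
            throw new IllegalArgumentException("descriptor has 291 points (0-290)");
        }
        if (pointIndex <= 24) return keypointDistance / 24.0; // 1st group
        if (pointIndex <= 82) return keypointDistance / 16.0; // 2nd group
        return keypointDistance / 12.0;                       // 3rd group
    }

    public static void main(String[] args) {
        double e = Math.hypot(30.0, 40.0); // Euclidean distance between keypoints: 50
        System.out.println(samplingRadius(0, e));   // 1st-group radius
        System.out.println(samplingRadius(100, e)); // 3rd-group radius
    }
}
```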
<div xmlns="http://www.tei-c.org/ns/1.0"><head>The Hamming distance is calculated via the binary string 𝐵𝑆</head><p>, which is based on the comparison of the average pixel intensities I(n1) and I(n2) at points n1 and n2, respectively:</p><formula xml:id="formula_10">BS = 1 if DS &gt; 0, and BS = 0 otherwise,<label>(13)</label></formula><p>where the string DS is as follows: DS = I(n1) − I(n2). (14) For a specific point n1, the absolute differences abs(DS) are sorted in descending order within the appropriate group:</p><p>1. Points 0-24: the image binary descriptor considers the first 12 absolute differences for each point.</p><p>2. Points 25-82: the image binary descriptor considers the first 32 absolute differences for each point.</p><p>3. Points 83-290: the image binary descriptor considers the first 100 absolute differences for each point.</p><p>Therefore, a total of 22956 absolute differences abs(DS) are considered in the image binary descriptor, which can be calculated as follows: 25 * 12 + 58 * 32 + 208 * 100 = 300 + 1856 + 20800 = 22956.</p></div>
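The bit rule of Eqs. (13)-(14) and the comparison count above can be checked with a few lines of Java (the class and method names are illustrative):

```java
public class DescriptorBits {
    // Eqs. (13)-(14): BS = 1 if DS = I(n1) - I(n2) > 0, else BS = 0.
    public static int bit(double i1, double i2) {
        return (i1 - i2) > 0 ? 1 : 0;
    }

    // Total number of absolute differences kept in the descriptor:
    // the 25 points of group 1 keep 12 each, the 58 points of group 2 keep 32 each,
    // and the 208 points of group 3 keep 100 each.
    public static int totalComparisons() {
        return 25 * 12 + 58 * 32 + 208 * 100;
    }

    public static void main(String[] args) {
        System.out.println(bit(180.0, 64.0));   // brighter first point -> 1
        System.out.println(totalComparisons()); // 22956
    }
}
```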
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiment</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Knowledge base design</head><p>The knowledge base is stored on the smartphone's internal storage and includes the following files <ref type="bibr" target="#b15">[16]</ref>: 'root.txt'; N descriptor files "dN.txt"; "Description.txt"; N audio files "NameN.mp3". The knowledge base includes 86 sampling patterns (N=86; see Table <ref type="table">1</ref>) and occupies 44.1 MB of internal storage (106 MB in Random Access Memory (RAM) along with the other data and code of the Android application), which is available on any up-to-date Android smartphone with operating system (OS) version 10 or newer (versions 10 and 11 are discussed in this study). The execution time for processing a test image (taken in sunny weather on a campus in Naryn, University of Central Asia, Kyrgyzstan; see Figure <ref type="figure" target="#fig_5">5</ref>) on a Doogee S96 Pro smartphone is approximately 0.1 s.</p><p>In this study, the Hamming distance measures the similarity between two binary strings BS. The image-matching process employs five AdaBoost weak classifiers <ref type="bibr" target="#b20">[21]</ref> that use binary decision trees.</p></div>
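The Hamming-distance similarity used during matching can be sketched as follows. This assumes the descriptor bits are packed into long words (the paper does not specify the storage layout); the distance is then XOR followed by a population count, as in standard binary-descriptor matching.

```java
public class HammingDistance {
    // Hamming distance between two equal-length binary descriptors packed
    // into 64-bit words: XOR the words, then count the differing bits.
    public static int distance(long[] a, long[] b) {
        if (a.length != b.length) {
            throw new IllegalArgumentException("descriptors must have equal length");
        }
        int d = 0;
        for (int i = 0; i < a.length; i++) {
            d += Long.bitCount(a[i] ^ b[i]);
        }
        return d;
    }

    public static void main(String[] args) {
        long[] a = {0b1011L};
        long[] b = {0b0010L};
        System.out.println(distance(a, b)); // 2
    }
}
```

In the paper's pipeline, this distance (summed over the weak-classifier stages) is compared against the empirical 19600 threshold.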
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Experiment description</head><p>To minimize the execution time of the Java Android application, the maximum number of keypoints and the maximum image dimension are 400 and 500 pixels, respectively, which is compatible with any modern camera smartphone since even a 2-megapixel sensor captures images of 1600×1200 pixels. To avoid optical distortion effects <ref type="bibr" target="#b21">[22]</ref> and keep the binary descriptor within the borders, a new image is created by adding 50-pixel margins to the original image. The method Bitmap.createScaledBitmap is used to downsample the original image with bilinear filtering. Then, a greyscale representation is calculated in eight parallel computational threads. To find keypoints, the image with a maximum dimension of 500 pixels is smoothed five times. This process generates five scales using three groups (the groups are applied sequentially until the target object is detected or the search fails) of square matrix orders r and population standard deviations σ in the Gaussian blur operator:</p><p>1. r=7, σ=1; r=11, σ=1.414214; r=13, σ=2; r=19, σ=2.828427; σ=4.</p><p>2. r=11, σ=1.414214; r=13, σ=2; r=19, σ=2.828427; r=25, σ=4; r=35, σ=5.656854.</p><p>3. r=5, σ=0.5; r=7, σ=0.70711; r=9, σ=1; r=11, σ=1.414214; r=13, σ=2.</p><p>To reduce the execution time, four DoG images are calculated employing Eq. ( <ref type="formula">4</ref>) in four parallel computational threads.</p><p>Keypoints are identified in two parallel computational threads for the third/second/first and fourth/third/second scales. Only the 400 keypoints closest to the center of the image according to the Euclidean distance are considered. Figure <ref type="figure" target="#fig_6">6</ref> presents the scheme of how points are analyzed: the number of side points at the current level is two larger than at the previous one (1, 3, 5, 7, 9, etc.).</p><p>The AdaBoost classifier is applied after discarding keypoint pairs whose descriptor points fall beyond the image boundaries:</p><formula xml:id="formula_11">F(BS) = Σ f_t(BS), t = 1, …, T,<label>(15)</label></formula><p>where f_t(BS) is an AdaBoost weak classifier and T=5 <ref type="bibr" target="#b15">[16]</ref>. If the Hamming distance surpasses the threshold 19600, the descriptor is of the template class. The threshold value was selected empirically based on the sum of the outcomes of the five weak classifiers mentioned above. Table <ref type="table" target="#tab_1">2</ref> summarizes the speedup techniques used in this study.</p></div>
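The classifier combination of Eq. (15) can be sketched as a sum of weak-classifier scores compared against the empirical threshold. Everything below other than the summation and the 19600 threshold is an assumption: the class name, the representation of a weak classifier as a scoring function, and the toy scores in the example do not reflect the paper's actual decision trees.

```java
import java.util.List;
import java.util.function.ToIntFunction;

public class StrongClassifier {
    // Eq. (15): F(BS) = sum over t = 1..T of f_t(BS), with T = 5 in the paper.
    // Each weak classifier is abstracted as a function scoring the binary
    // descriptor BS (illustrative assumption, not the paper's decision trees).
    public static boolean matchesTemplate(int[] bs, int threshold,
                                          List<ToIntFunction<int[]>> weak) {
        int sum = 0;
        for (ToIntFunction<int[]> f : weak) {
            sum += f.applyAsInt(bs);
        }
        // Per Section 4.2, a summed score above the empirical threshold (19600)
        // assigns the descriptor to the template class.
        return sum > threshold;
    }

    public static void main(String[] args) {
        // Five toy weak classifiers returning fixed scores, for illustration only.
        List<ToIntFunction<int[]>> weak = List.of(
                bs -> 4000, bs -> 4000, bs -> 4000, bs -> 4000, bs -> 4000);
        System.out.println(matchesTemplate(new int[] {1, 0, 1}, 19600, weak)); // true
    }
}
```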
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results</head><p>In this study, a Java Android application "TrafficSignsKyrgyzstanWeCanSee" employs the proposed method to detect Kyrgyz traffic signs, and hence to support the spatial cognition and mobility of VIBs. Figure <ref type="figure">7</ref> presents an example of the screenshot, an original image, and a greyscale image with keypoints. Two smartphones, Doogee S96 Pro and Blackview BV6600 Pro, were used in the experiment. The true positive rate was calculated at different distances from 1.5 m to 5 m in three attempts. The results are shown in Figure <ref type="figure">8</ref>. Figure <ref type="figure" target="#fig_5">5</ref> presents the photo taken during this experiment. The true positive rate was close to the required 100 % at distances from 1.5 m to 3.5 m for the traffic sign "Crosswalk left": it was 100 % for the smartphone Blackview BV6600 Pro and 75 % for the smartphone Doogee S96 Pro at a distance of 3.5 m. The presented two-keypoint SIFT detector and BRISK descriptor showed a true negative rate of 100 %.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Discussion</head><p>In this study, a two-keypoint SIFT detector and a BRISK descriptor with the CameraX Android API compose the approach to support the navigation and mobility of VIB pedestrians in Kyrgyzstan. In this method, keypoints are localized via the SIFT algorithm, and selected pairs of keypoints are then employed to design the sampling pattern, i.e., the binary BRISK descriptor. Image matching is based on the Hamming distance and an AdaBoost cascade classifier. The real-life experiment showed a true negative rate of 100 % (a crucial parameter for VIBs) and a true positive rate close to 100 %. In general, the traffic-sign recognition system satisfies the requirements and hence can be implemented in practice. However, some elements of the presented approach (e.g., the square matrix orders and population standard deviations in the Gaussian blur operator) are empirical and therefore open to discussion.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusions</head><p>In this study, a crucial VIB-assistive Java Android mobile application was developed to recognize Kyrgyz traffic signs using a two-keypoint SIFT detector, a BRISK descriptor, the CameraX Android API, and mp3 audio files to support the spatial cognition of VIBs near roads.</p><p>With a knowledge base of 86 sampling patterns, the mobile application shows real-time performance: the execution time is 0.1 s for an example with the traffic sign "Crosswalk left" (the location is a campus in Naryn, University of Central Asia, Kyrgyzstan). In experiments at distances from 1.5 m to 3.5 m, the presented SIFT/BRISK detector with a two-keypoint descriptor showed a 100 % true negative rate and a true positive rate close to 100 %: 100 % for the smartphone Blackview BV6600 Pro and 75 % for the Doogee S96 Pro at a distance of 3.5 m.</p><p>The real-time performance is achieved using five AdaBoost classifiers for image matching and parallel computational threads for the greyscale representation of the color image (eight threads), the calculation of DoG images (four threads), and the localization of keypoints (two threads). Analysis of the minimum hardware requirements shows that the mobile application is runnable on any up-to-date Android camera smartphone because it requires only 44.1 MB of internal storage (106 MB in RAM along with other data and code) and a 2-megapixel sensor.</p><p>The most likely prospect for further development of this study is the design of an image descriptor that is geometrically close to the traffic signs.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1:</head><label>1</label><figDesc>Figure 1: Two-keypoint SIFT detector and BRISK descriptor with CameraX Android API: A diagram</figDesc><graphic coords="3,93.72,67.56,424.32,313.08" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The architecture of the proposed image matching</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Flowcharts of the keypoints localization with the SIFT method and the image descriptor design with the BRISK method (left flowchart) and the image matching algorithm (right flowchart)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>Detection of keypoint candidates (i.e., local maxima and minima of the DoG function) is similar to the methodology presented in <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b19">20]</ref>. Keypoints are rejected using the Taylor expansion of D(x, y, σ): D(x) = D + (∂D/∂x)ᵀ x + (1/2) xᵀ (∂²D/∂x²) x, (5) where x = (x, y, σ)ᵀ is the offset from the sample point, and D and its derivatives are computed at that point. To find the extremum x̂, the equation D′(x) = 0 should be solved (D′(x) is the derivative of D with respect to x): x̂ = −(∂²D/∂x²)⁻¹ (∂D/∂x). (6)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: The binary descriptor for the traffic sign "Above ground pedestrian crossing" (greyscale representation): 1st (A), 2nd (B), and 3rd (C) groups, all points (D)</figDesc><graphic coords="7,105.84,67.56,400.08,360.72" type="vector_box" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5:</head><label>5</label><figDesc>Figure 5: An example of the traffic sign "Crosswalk left" successfully identified during the experiment (the photo was taken by the smartphone Doogee S96 Pro in sunny weather; author: Dmytro Zubov)</figDesc><graphic coords="8,175.92,67.80,259.80,194.76" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Scheme of the points analysis on the image</figDesc><graphic coords="8,198.00,608.16,215.52,91.92" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 7: Figure 8:</head><label>78</label><figDesc>Figure 7: An example of the screenshot (A), an original image taken by the smartphone Doogee S96 Pro in cloudy weather (B), and a greyscale image with keypoints (C)</figDesc><graphic coords="10,139.80,498.72,242.88,181.92" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Kyrgyz traffic signs related to pedestrians</figDesc><table><row><cell>No.</cell><cell>Designation</cell><cell>Traffic sign icon</cell><cell>No. of sampling patterns</cell></row><row><cell>1</cell><cell>Above ground pedestrian crossing</cell><cell></cell><cell>6</cell></row><row><cell>2</cell><cell>Bike crossing</cell><cell></cell><cell>7</cell></row><row><cell>3</cell><cell>Bike path</cell><cell></cell><cell>13</cell></row></table><note>Android class Color: I(x, y) = Color.red(pixel) * 0.21 + Color.green(pixel) * 0.72 + Color.blue(pixel) * 0.07, (3) where Color.red, Color.green, and Color.blue are Java methods, and pixel is the smallest element that can be addressed in a raster RGB image. The DoG (Difference of Gaussians) function D(x, y, σ) is employed to find keypoints that are stable across different scales. DoG is the result of subtracting two neighbour scales that are smoothed by Gaussian filters with a different weight k: D(x, y, σ) = L(x, y, kσ) − L(x, y, σ). (4) The DoG function is an approximation of the scale-normalized Laplacian of Gaussian σ²∇²G. Building the DoG D(x, y, σ) follows the method proposed in <ref type="bibr" target="#b15">[16]</ref>. To generate five SIFT scales, four images (maximum dimension sizes are 180, 340, 680, and 1360 pixels, i.e., four SIFT octaves) are smoothed five times in the Gaussian blur operator with five square matrix orders r and population standard deviations σ: 1. 180: r=5, σ=0.707107; r=7, σ=1; r=11, σ=1.414214; r=13, σ=2; r=19, σ=2.828427. 2. 340: r=11, σ=1.414214; r=13, σ=2; r=19, σ=2.828427; r=25, σ=4; r=35, σ=5.656854. 3. 680: r=19, σ=2.828427; r=25, σ=4; r=35, σ=5.656854; r=49, σ=8; r=69, σ=11.313708. 4. 1360: r=35, σ=5.656854; r=49, σ=8; r=69, σ=11.313708; r=97, σ=16; r=137, σ=22.627417.</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc></figDesc><table><row><cell>Speedup techniques in Java Android application</cell><cell></cell></row><row><cell>Operation</cell><cell>Speedup technique</cell></row><row><cell>Greyscale representation of the color image</cell><cell>Eight parallel computational threads</cell></row><row><cell>Calculation of DoG images</cell><cell>Four parallel computational threads</cell></row><row><cell>Localization of keypoints in third/second/first and</cell><cell>Two parallel computational threads</cell></row><row><cell>fourth/third/second scales</cell><cell></cell></row><row><cell>Image matching</cell><cell>Five AdaBoost classifiers</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head></head><label></label><figDesc>presents the screenshot, an original image taken by the smartphone Doogee S96 Pro in cloudy weather (the location is a campus in Naryn, University of Central Asia, Kyrgyzstan), and a greyscale image with keypoints (fuchsia and turquoise colors are used for the third/second/first and fourth/third/second DoG functions, respectively). Two other Java Android 10 applications (approximately 72 % of Android smartphones could run these applications as of August 2023) were designed in Android Studio 4.0: 1. Localization of all keypoints using the SIFT method. 2. Identification of keypoint pairs and design of the sample pattern via the BRISK descriptor.</figDesc><table /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Exploring the Smart Future of Participation: Community, Inclusivity, and People with Disabilities</title>
		<author>
			<persName><forename type="first">John</forename><surname>Bricout</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paul</forename><forename type="middle">M A</forename><surname>Baker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nathan</forename><forename type="middle">W</forename><surname>Moon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bonita</forename><surname>Sharma</surname></persName>
		</author>
		<idno type="DOI">10.4018/IJEPR.20210401.oa8</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of E-Planning Research</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="94" to="108" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Navigation Assistance for Blind Pedestrians: Guidelines for the Design of Devices and Implications for Spatial Cognition</title>
		<author>
			<persName><forename type="first">M</forename><surname>Gallay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Denis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Auvray</surname></persName>
		</author>
		<idno type="DOI">10.1093/acprof:oso/9780199679911.003.0011</idno>
	</analytic>
	<monogr>
		<title level="m">Representing Space in Cognition: Interrelations of Behaviour, Language, and Formal Models</title>
				<editor>
			<persName><forename type="first">T</forename><surname>Tenbrink</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Wiener</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Claramunt</surname></persName>
		</editor>
		<meeting><address><addrLine>Oxford</addrLine></address></meeting>
		<imprint>
			<publisher>Oxford Academic</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="244" to="267" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Analysis of Market-Ready Traffic Sign Recognition Systems in Cars: A Test Field Study</title>
		<author>
			<persName><forename type="first">Darko</forename><surname>Babić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dario</forename><surname>Babić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mario</forename><surname>Fiolić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Željko</forename><surname>Šarić</surname></persName>
		</author>
		<idno type="DOI">10.3390/en14123697</idno>
	</analytic>
	<monogr>
		<title level="j">Energies</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">12</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">An Efficient Implementation of Traffic Signs Recognition System Using CNN</title>
		<author>
			<persName><forename type="first">Hana</forename><surname>Ben Fredj</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Amani</forename><surname>Chabbah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jamel</forename><surname>Baili</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hassen</forename><surname>Faiedh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chokri</forename><surname>Souani</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.micpro.2023.104791</idno>
	</analytic>
	<monogr>
		<title level="j">Microprocessors and Microsystems</title>
		<imprint>
			<biblScope unit="volume">98</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A Survey on Assistive Technology for Visually Impaired</title>
		<author>
			<persName><forename type="first">Kanak</forename><surname>Manjari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Madhushi</forename><surname>Verma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gaurav</forename><surname>Singal</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.iot.2020.100188</idno>
	</analytic>
	<monogr>
		<title level="j">Internet of Things</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Efficacy and Patients&apos; Satisfaction with the ORCAM MyEye Device Among Visually Impaired People: A Multicenter Study</title>
		<author>
			<persName><forename type="first">Filippo</forename><surname>Amore</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Valeria</forename><surname>Silvestri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Margherita</forename><surname>Guidobaldi</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10916-023-01908-5</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Medical Systems</title>
		<imprint>
			<biblScope unit="volume">47</biblScope>
			<biblScope unit="page">11</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Design, Development and Performance Analysis of Cognitive Assisting Aid with Multi Sensor Fused Navigation for Visually Impaired People</title>
		<author>
			<persName><forename type="first">Myneni</forename><surname>Madhu Bala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">N</forename><surname>Vasundhara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Akkineni</forename><surname>Haritha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ch</forename><forename type="middle">V K N S N</forename><surname>Moorthy</surname></persName>
		</author>
		<idno type="DOI">10.1186/s40537-023-00689-5</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Big Data</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">An Eye for a Blind: Assistive Technology</title>
		<author>
			<persName><forename type="first">Sonal</forename><surname>Mali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Srushti</forename><surname>Padade</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Swapnali</forename><surname>Mote</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Revati</forename><surname>Omkar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Research Journal of Engineering and Technology</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="532" to="534" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Review of Substitutive Assistive Tools and Technologies for People with Visual Impairments: Recent Advancements and Prospects</title>
		<author>
			<persName><forename type="first">Zahra</forename><forename type="middle">J</forename><surname>Muhsin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rami</forename><surname>Qahwaji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Faruque</forename><surname>Ghanchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Majid</forename><surname>Al-Taee</surname></persName>
		</author>
		<idno type="DOI">10.1007/s12193-023-00427-4</idno>
	</analytic>
	<monogr>
		<title level="j">Journal on Multimodal User Interfaces</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="page" from="135" to="156" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A Navigation Tool for Visually Impaired and Blind People</title>
		<author>
			<persName><forename type="first">Adnan</forename><surname>Al-Smadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Talal</forename><surname>Al-Qaryouti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Abdurahman</forename><surname>Rehan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Homam</forename><surname>Assi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alhareth</forename><surname>Alsharea</surname></persName>
		</author>
		<idno type="DOI">10.55549/epstem.1338545</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Eurasia Proceedings of Science, Technology, Engineering &amp; Mathematics</title>
				<meeting>the Eurasia Science, Technology, Engineering &amp; Mathematics<address><addrLine>Marmaris, Turkey</addrLine></address></meeting>
		<imprint>
			<publisher>ISRES Publishing</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="119" to="126" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A Wearable Mobility Aid for the Visually Impaired based on Embedded 3D Vision and Deep Learning</title>
		<author>
			<persName><forename type="first">Matteo</forename><surname>Poggi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stefano</forename><surname>Mattoccia</surname></persName>
		</author>
		<idno type="DOI">10.1109/ISCC.2016.7543741</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Symposium on Computers and Communication</title>
				<meeting>the IEEE Symposium on Computers and Communication<address><addrLine>Messina, Italy</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Publishing</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="208" to="213" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">An SIFT-Based Fast Image Alignment Algorithm for High-Resolution Image</title>
		<author>
			<persName><forename type="first">Zetian</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zemin</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wei</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wentao</forename><surname>Yang</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2023.3270911</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="42012" to="42041" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">ALGD-ORB: An Improved Image Feature Extraction Algorithm with Adaptive Threshold and Local Gray Difference</title>
		<author>
			<persName><forename type="first">Guoming</forename><surname>Chu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yan</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xuhong</forename><surname>Luo</surname></persName>
		</author>
		<idno type="DOI">10.1371/journal.pone.0293111</idno>
	</analytic>
	<monogr>
		<title level="j">PLoS ONE</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">10</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Classification Assessment Methods</title>
		<author>
			<persName><forename type="first">Alaa</forename><surname>Tharwat</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.aci.2018.08.003</idno>
	</analytic>
	<monogr>
		<title level="j">Applied Computing and Informatics</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="168" to="192" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Scaling Up Wearable Cognitive Assistance for Assembly Tasks</title>
		<author>
			<persName><forename type="first">R</forename><surname>Iyengar</surname></persName>
		</author>
		<idno type="DOI">10.1184/R1/23302121.v1</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
			<pubPlace>Pittsburgh, PA, USA</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Carnegie Mellon University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">PhD thesis</note>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Spatial Cognition by the Visually Impaired: Image Processing with SIFT/BRISK-like Detector and Two-keypoint Descriptor on Android CameraX</title>
		<author>
			<persName><forename type="first">D</forename><surname>Zubov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Aljarbouh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kupin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shaidullaev</surname></persName>
		</author>
		<idno type="DOI">10.1049/PBHE049E_ch12</idno>
	</analytic>
	<monogr>
		<title level="m">Machine Learning in Medical Imaging and Computer Vision</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Nandal</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Zhou</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Dhaka</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Ganchev</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Nait-Abdesselam</surname></persName>
		</editor>
		<meeting><address><addrLine>Stevenage, UK</addrLine></address></meeting>
		<imprint>
			<publisher>IET</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="249" to="276" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">A Novel Luminance-Based Algorithm for Classification of Semi-Dark Images</title>
		<author>
			<persName><forename type="first">Mehak</forename><surname>Maqbool Memon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Manzoor</forename><surname>Ahmed Hashmani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aisha</forename><forename type="middle">Zahid</forename><surname>Junejo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Syed</forename><forename type="middle">Sajjad</forename><surname>Rizvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Adnan</forename><forename type="middle">Ashraf</forename><surname>Arain</surname></persName>
		</author>
		<idno type="DOI">10.3390/app11188694</idno>
	</analytic>
	<monogr>
		<title level="j">Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">18</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Target Matching Recognition for Satellite Image based on the Improved FREAK Algorithm</title>
		<author>
			<persName><forename type="first">Yantong</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wei</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yongjie</forename><surname>Piao</surname></persName>
		</author>
		<idno type="DOI">10.1155/2016/1848471</idno>
	</analytic>
	<monogr>
		<title level="j">Mathematical Problems in Engineering</title>
		<imprint>
			<biblScope unit="volume">2016</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<ptr target="https://en.wikipedia.org/wiki/Road_signs_in_Kyrgyzstan" />
		<title level="m">Road signs in Kyrgyzstan</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
		<respStmt>
			<orgName>Wikipedia</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Distinctive Image Features from Scale-invariant Keypoints</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">G</forename><surname>Lowe</surname></persName>
		</author>
		<idno type="DOI">10.1023/B:VISI.0000029664.99615.94</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Vision</title>
		<imprint>
			<biblScope unit="volume">60</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="91" to="110" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Improved AdaBoost Algorithm Using Misclassified Samples Oriented Feature Selection and Weighted Nonnegative Matrix Factorization</title>
		<author>
			<persName><forename type="first">Youwei</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lizhou</forename><surname>Feng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jianming</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yang</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fu</forename><surname>Chen</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neucom.2022.08.015</idno>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">508</biblScope>
			<biblScope unit="page" from="153" to="169" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Model-Independent Lens Distortion Correction Based on Sub-Pixel Phase Encoding</title>
		<author>
			<persName><forename type="first">Pengbo</forename><surname>Xiong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shaokai</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Weibo</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Qixin</forename><surname>Ye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shujiao</forename><surname>Ye</surname></persName>
		</author>
		<idno type="DOI">10.3390/s21227465</idno>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">22</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
