<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Proposal Based on Computer Vision and IoT for the Development of an Ergonomic and Low-Cost Assistance Device for People with Visual Disabilities</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Nicolás</forename><forename type="middle">E</forename><surname>Caytuiro-Silva</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">Universidad Católica de Santa María</orgName>
								<orgName type="institution" key="instit2">Urb. San José s/n Umacollo</orgName>
								<address>
									<settlement>Arequipa</settlement>
									<country>Perú</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Eveling</forename><forename type="middle">G</forename><surname>Castro-Gutierrez</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">Universidad Católica de Santa María</orgName>
								<orgName type="institution" key="instit2">Urb. San José s/n Umacollo</orgName>
								<address>
									<settlement>Arequipa</settlement>
									<country>Perú</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Jackeline</forename><forename type="middle">M</forename><surname>Peña-Alejandro</surname></persName>
							<email>jackeline.pena@ucsm.edu.pe</email>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">Universidad Católica de Santa María</orgName>
								<orgName type="institution" key="instit2">Urb. San José s/n Umacollo</orgName>
								<address>
									<settlement>Arequipa</settlement>
									<country>Perú</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Proposal Based on Computer Vision and IoT for the Development of an Ergonomic and Low-Cost Assistance Device for People with Visual Disabilities</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">C009CC4575A3417DE06F8A15CD9C27C4</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T20:15+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Computer Vision</term>
					<term>IoT</term>
					<term>Assistance Device</term>
					<term>Economical</term>
					<term>Low-cost</term>
					<term>Visual Impairment</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The research focuses on addressing the challenges faced by visually impaired individuals in identifying banknotes in the city of Arequipa. The development of an assistance device based on computer vision and IoT is proposed to help these individuals recognize different denominations of banknotes, as well as nearby objects. The state of the art in banknote and object recognition systems is reviewed globally and nationally, highlighting advances in technologies such as machine learning and computer vision. The study follows a Design Thinking approach, including empathy, definition, ideation, prototyping, and evaluation phases. The actions for creating a dataset of banknote images and implementing the real-time vision module in the device are detailed. Although tests with end-users are pending, data has been collected to identify areas for improvement in banknote and nearby object recognition. The research aims to improve the quality of life for visually impaired people in Arequipa by facilitating the identification of banknotes and objects through an economical assistance device based on computer vision and IoT.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Visual impairment is a condition that presents significant challenges in the daily lives of those who experience it. In the city of Arequipa <ref type="bibr" target="#b0">[1]</ref>, as in many other regions, people with visual disabilities face additional difficulties when trying to identify banknotes and objects in their everyday environment. This project aims to address this issue by developing an economical assistance device based on computer vision and IoT. This device will enable visually impaired individuals to identify banknotes and objects, providing them with greater autonomy and independence in their daily lives. In this context, the design, implementation, and evaluation of this innovative device are presented, which has the potential to improve the quality of life for visually impaired people in Arequipa and serve as a model for similar solutions worldwide <ref type="bibr" target="#b1">[2]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">State of the art</head><p>In the current context, technology plays a fundamental role in improving the quality of life for people with visual disabilities. One of the key challenges faced by these people in their daily lives is the identification and management of different denominations of banknotes, a crucial task in financial transactions and daily activities. In response to this need, various global and national research efforts have been carried out to develop banknote recognition systems using advanced technologies such as machine learning, computer vision, and image processing.</p><p>Globally, several research studies have focused on the development of banknote recognition systems to assist visually impaired individuals in identifying different denominations of currency.</p><p>In <ref type="bibr" target="#b2">[3]</ref>, they proposed developing a Malaysian banknote recognition system to assist visually impaired people. The study aimed to analyze the impact of region and orientation on the performance of Machine Learning and Deep Learning approaches. The results revealed that SVM and BC algorithms achieved 100% accuracy, while kNN and DTC achieved 99.7%. It was also concluded that orientation influences the performance of the AlexNet model, showing better execution with similar orientation data.</p><p>On the other hand, in <ref type="bibr" target="#b3">[4]</ref>, the accurate classification of Honduran banknotes was the focus, including a new L200 banknote, with an emphasis on adapting to the incorporation of new banknotes in circulation. Two high-performance methods were presented. The first relied on advanced local descriptors such as ORB or SIFT, generating feature vectors for algorithms like SVM and Random Forests. The second introduced the LempiraNet CNN, which used transfer learning to address data limitations. 
The results demonstrated outstanding accuracy of 98% or higher, with LempiraNet being significantly faster than the other method.</p><p>In a different line of work, <ref type="bibr" target="#b4">[5]</ref> proposed an innovative approach based on the quaternion wavelet transform (QWT) and a deep convolutional neural network for banknote classification. This methodology leveraged the multiscale structure and directional sensitivity of QWT. The results highlighted superior performance compared to other state-of-the-art banknote classification algorithms, as well as meeting real-time requirements for banknote classification systems.</p><p>In <ref type="bibr" target="#b5">[6]</ref>, the focus was on creating an Iraqi banknote classification system based on Deep Learning and computer vision technology. The central objective was to develop a multiclass classification model capable of distinguishing between different denominations of Iraqi banknotes and providing equivalent voice commands to inform visually impaired people about the value of the banknotes. The system achieved an impressive accuracy of 98.6%, demonstrating its viability and potential to enhance the financial independence of this user group.</p><p>The research in <ref type="bibr" target="#b6">[7]</ref> focused on the recognition of Ethiopian banknotes using a convolutional neural network (CNN). It included comprehensive evaluations of different CNN architectures and optimization techniques. The highlighted architecture, MobileNetV2, implemented with RMSProp optimization, achieved outstanding accuracy of 96.4%. Additionally, the model was deployed on an embedded platform using a Raspberry Pi, with potential applications in automatic monetary transactions.</p><p>In <ref type="bibr" target="#b7">[8]</ref>, they addressed the recognition of Colombian banknotes by visually impaired people. 
They developed a classification system for eleven denominations of banknotes using image processing techniques and MLP neural networks. The system achieved 95% accuracy after manipulating samples and expanding the dataset, underscoring its effectiveness in the autonomy of visually impaired individuals.</p><p>Finally, in <ref type="bibr" target="#b8">[9]</ref>, they introduced an AI-backed mobile application for the recognition of banknotes from the United Arab Emirates, aimed at visually impaired people. The application used a pre-trained convolutional neural network to detect and classify banknotes, in addition to providing auditory signals. Although the average accuracy reached 70% in tests and 88% in fivefold cross-validation, the application represents a step toward independence in daily financial transactions.</p><p>On a national level, in Peru, research has focused on the design and development of low-cost ergonomic tools to improve the mobility of people with visual disabilities.</p><p>In Lima, <ref type="bibr" target="#b9">[10]</ref> proposed the creation of an ergonomic GPS-enabled cane for blind people with the aim of increasing their autonomy. The methodology was based on systems engineering for monitoring and software development. The results indicated that the ergonomic cane improved the mobility of people with visual disabilities and allowed tracking by their family members. The importance of considering aspects such as the shape, size, and weight of the cane to achieve the desired ergonomics was highlighted.</p><p>On the other hand, in Chiclayo, <ref type="bibr" target="#b10">[11]</ref> focused on the development of an Intelligent Geolocating Sensor Cane to support blind people in their mobility. The methodology used was based on the Rational Unified Process (RUP) and embedded systems. The results demonstrated that this sensor cane could make life more dynamic and secure for people with visual disabilities. 
The fusion of the RUP and embedded systems methodologies contributed to the efficient design of the prototype.</p><p>Finally, in Arequipa, <ref type="bibr" target="#b11">[12]</ref>, the focus was on improving the quality of daily mobility for people with visual disabilities through an Electronic Cane with ultrasonic sensors. The methodology included a descriptive-explanatory study and the experimental design of sensors. The results highlighted that this Electronic Cane could reduce accidents in the daily mobility of people with visual disabilities.</p><p>In summary, notable advances have been made worldwide and nationally in creating tools and systems to improve the lives of people with visual disabilities, using technologies such as machine learning, computer vision, and ultrasonic sensors. These advances demonstrate the potential of technology to increase the independence and safety of people with visual disabilities. The research conducted in Peru emphasizes the importance of considering the specific needs of this group and the utility of combining various methodologies to create effective solutions. Together, these developments offer a promising outlook in which technology will continue to play a fundamental role in improving the quality of life for people with visual disabilities, with possibilities for application elsewhere and a focus on inclusion and autonomy.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Materials and Methods</head><p>The study employed an iterative, action-oriented methodology and followed the five-step process of the Design Thinking approach (i.e., empathize, define, ideate, prototype, and evaluate) <ref type="bibr" target="#b15">[16]</ref>. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Phase 01: Empathize</head><p>This phase was rigorously executed, encompassing a comprehensive literature review of available assistive tools and technologies. Requirements gathering was done through interviews and analysis of local statistics, the evaluation of Artificial Vision algorithms, a detailed comparison of IoT devices, and the meticulous selection of Computer Vision techniques. This ensured that the device would be designed with a closer understanding of the needs and conditions of visually impaired individuals from the Association of the Blind in Arequipa. Best practices and accessible technologies were leveraged to provide an effective solution tailored to their primary needs, which revolve around the recognition of banknotes and nearby everyday objects.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Phase 02: Define</head><p>After gaining an empathetic understanding of the main needs of visually impaired individuals, the define phase involved categorizing the results obtained from the evaluation of techniques, Artificial Vision algorithms, and IoT devices. This classification identified and selected the most appropriate and effective approaches to address the needs of visually impaired individuals in the study region. Additionally, a comprehensive compilation of results related to the current situation of these individuals and available assistive tools in the context of Arequipa was conducted. These findings provided a solid foundation for decision-making in the design of the device. It is worth mentioning that all these results were thoroughly documented in the thesis report, thus establishing a solid and well-founded basis for the subsequent ideation and prototyping phases.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Phase 03: Ideate</head><p>With a defined understanding of the problem, a creative flow of ideas was generated. In this phase, a plan was designed and developed to achieve the goal of creating a product that improves or solves the identified problem. To accomplish this, a set of actions was followed for creating the dataset of banknote images. This included setting up the necessary image capture equipment, involving the selection of appropriate cameras and sensors. A meticulous procedure for image capture was developed, considering a variety of relevant scenarios and situations reflecting the diversity of environments in which visually impaired individuals might use the device. Image capture was carried out extensively, covering banknotes representative of the local reality. Subsequently, labeling of these images was performed, preparing them for use in training the Artificial Vision module. The results obtained in this stage provide a solid dataset of banknote images, crucial for the development of the product. These achievements have been thoroughly documented, establishing a clear and well-founded direction for the subsequent prototyping and evaluation stages.</p></div>
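The labeling step described in this phase can be sketched in code. Everything below is hypothetical: the paper does not specify a filename convention, so a `<family>_<denomination>_<sequence>` scheme and the helper names are assumed purely for illustration.

```python
from collections import Counter
from pathlib import PurePath

# Hypothetical filename convention for the captured banknote images:
# <family>_<denomination>_<sequence>.jpg, e.g. "new_50_0001.jpg".
# Neither the convention nor these helpers appear in the paper; they
# only sketch how labels could be derived before annotation.

def label_from_filename(path: str) -> str:
    """Derive a class label such as 'new_50' from a capture filename."""
    stem = PurePath(path).stem            # e.g. "new_50_0001"
    family, denomination, _seq = stem.split("_")
    return f"{family}_{denomination}"

def class_counts(paths: list[str]) -> Counter:
    """Count images per class, useful for spotting class imbalance."""
    return Counter(label_from_filename(p) for p in paths)

captures = ["new_50_0001.jpg", "new_50_0002.jpg", "old_100_0001.jpg"]
print(class_counts(captures))  # Counter({'new_50': 2, 'old_100': 1})
```

A per-class count like this is one simple way to check that every denomination and banknote family is adequately represented before training.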
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Phase 04: Prototyping</head><p>During this stage, the implementation of the vision module designed for real-time image capture and processing, a fundamental element of the proposal, has taken place. This achievement was not only crucial for the device's operation but also highlights the robustness of the strategy in combining Internet of Things (IoT) technologies with advanced Computer Vision techniques to accomplish the task of recognizing banknotes and everyday objects near the user. Additionally, exhaustive tests have been conducted, and the components of the vision module have been debugged. This has allowed for the effective identification and addressing of any obstacles or potential deficiencies, ensuring optimal system performance. Furthermore, acceptable communication has been achieved between the vision module and the IoT device, thereby reinforcing the reliability and effectiveness of the proposed solution. The design of the first prototype is presented below:  </p></div>
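One practical concern for a real-time vision module of this kind is that per-frame predictions flicker between classes. A common stabilization pattern, not described in the paper and sketched here only as an assumption, is to announce a label to the user only after it wins a majority vote over the most recent frames:

```python
from collections import Counter, deque

# Hedged sketch: the paper does not describe its frame-handling logic.
# Announcing a label only when it wins a majority vote over the last N
# frames keeps the audio feedback from flickering between classes.

class PredictionSmoother:
    def __init__(self, window: int = 5, min_votes: int = 3):
        self.recent = deque(maxlen=window)  # rolling window of labels
        self.min_votes = min_votes

    def update(self, label: str):
        """Feed one per-frame prediction; return a label once stable."""
        self.recent.append(label)
        winner, votes = Counter(self.recent).most_common(1)[0]
        return winner if votes >= self.min_votes else None

smoother = PredictionSmoother()
frames = ["50", "20", "50", "50", "10"]        # noisy per-frame outputs
stable = [smoother.update(f) for f in frames]
print(stable)  # [None, None, None, '50', '50']
```

The window size and vote threshold would be tuned against the device's frame rate and the latency the user can tolerate.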
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Phase 05: Evaluation</head><p>In the evaluation phase, technical and functional tests have been conducted under controlled conditions to validate the integration of the system, measure effectiveness in banknote recognition, and collect technical feedback. Although end-users have not yet been included in these evaluations, crucial data has been gathered to identify areas for improvement in the design and operation of the device. These data and observations will serve as a foundation for future tests with end-users and contribute to the overall success of the project.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Results</head><p>The main outcomes in the development of the project proposal are:</p><p>1. The elaboration of a systematic review of assistive tools for visually impaired individuals, conducted during the empathize stage, which identified the main limitations and contributions of the last five years of research in the field, related to advances in assistive technologies, the Internet of Things, and Computer Vision. In Figures 4 and 5, the main findings of the systematic review are presented, emphasizing the key limitations and contributions.</p><p>2. The creation of a dataset of Peruvian banknote images, summarized in Table 1. Access link: https://ieee-dataport.org/documents/dataset-peruvian-banknotes</p><p>3. The construction of the first prototype of the proposed assistive device, through which the overall objective of assisting the user in recognizing Peruvian banknotes and nearby objects is achieved. Key contributions within the construction of the first prototype include the effective use of pre-trained deep learning models for banknotes and of models pre-trained on the ImageNet dataset for objects, the characterization of processing complexity, the design of an economical assistive system, and the empirical evaluation of accuracy in real-world conditions. This resulted in the design of the first prototype, the processing of Peruvian banknote and object images, and an empirical evaluation of pre-trained deep learning models, which reached 68% accuracy for banknote recognition and 73% for object recognition after fine-tuning.</p></div>
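The accuracy figures reported above can be read as top-1 accuracy over labelled evaluation runs. The sketch below shows that computation on made-up (predicted, true) pairs; the helper name and the demo data are illustrative, not taken from the paper's evaluation.

```python
# Hedged sketch of how accuracies like the reported 68% (banknotes)
# and 73% (objects) could be computed from labelled evaluation runs.
# The pairs below are illustrative, not the paper's actual data.

def top1_accuracy(pairs):
    """Fraction of (predicted, true) label pairs that match exactly."""
    if not pairs:
        return 0.0
    hits = sum(1 for predicted, true in pairs if predicted == true)
    return hits / len(pairs)

demo = [("50", "50"), ("20", "50"), ("100", "100"), ("10", "10")]
print(f"{top1_accuracy(demo):.0%}")  # 75%
```

Per-class breakdowns of the same pairs would show which denominations or object categories drive the remaining errors.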
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions</head><p>This study has focused on addressing the challenges faced by visually impaired individuals in identifying banknotes in the city of Arequipa and the development of an assistive device based on artificial vision and IoT to mitigate these challenges. From the research and work conducted, the following conclusions can be drawn:</p><p>• The need for technological solutions for visually impaired individuals is critical. The identification of banknotes and objects is essential in daily life, making the creation of economical and effective assistive devices imperative to improve their quality of life.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>The exhaustive review of the state of the art in global and national banknote recognition systems highlights that technology, particularly machine learning, computer vision, and sensors, can be a valuable ally for visually impaired individuals, providing them with independence and security.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>The Design Thinking approach, encompassing the stages of empathy, definition, ideation, prototyping, and evaluation, has proven effective for the development of an assistive device that caters to the specific needs and conditions of visually impaired individuals in Arequipa.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>The creation of a dataset of banknote images and the development of the real-time vision module are fundamental achievements supporting the viability and effectiveness of the proposed assistive device.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>Although tests with end-users are still pending, the technical data collected so far will serve as a basis for future evaluations and refinements of the device. This project is anticipated not only to enhance the quality of life for visually impaired individuals in Arequipa but also to potentially serve as a model for similar solutions worldwide.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Design Thinking Methodology<ref type="bibr" target="#b15">[16]</ref> </figDesc><graphic coords="3,143.00,294.30,308.85,203.13" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Front Design of the Prototype</figDesc><graphic coords="4,198.30,625.61,198.38,130.65" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Back Design of the Prototype</figDesc><graphic coords="5,198.30,72.00,198.36,132.05" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4:</head><label>4</label><figDesc>Figure 4: Limitations from the Systematic Review</figDesc><graphic coords="5,84.90,477.97,425.20,200.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="6,84.90,72.00,425.20,212.10" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="6,84.93,413.06,425.11,198.95" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 Total Number of Captured Banknote Images</head><label>1</label><figDesc></figDesc><table><row><cell>Denomination</cell><cell>10</cell><cell>20</cell><cell>50</cell><cell>100</cell><cell>Total</cell></row><row><cell>New Banknote Family</cell><cell>728</cell><cell>894</cell><cell>1190</cell><cell>939</cell><cell>3751</cell></row><row><cell>Old Banknote Family</cell><cell>1025</cell><cell>1162</cell><cell>1831</cell><cell>1285</cell><cell>5303</cell></row><row><cell>Total</cell><cell>1753</cell><cell>2056</cell><cell>3021</cell><cell>2224</cell><cell>9054</cell></row></table></figure>
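The per-denomination counts in Table 1 can be cross-checked programmatically; this sketch recomputes the row, column, and grand totals from the captured-image counts.

```python
# Cross-check of Table 1: per-denomination image counts for the two
# banknote families, summed into row, column, and grand totals.

counts = {
    "new": {"10": 728, "20": 894, "50": 1190, "100": 939},
    "old": {"10": 1025, "20": 1162, "50": 1831, "100": 1285},
}

row_totals = {family: sum(d.values()) for family, d in counts.items()}
column_totals = {
    denom: sum(counts[family][denom] for family in counts)
    for denom in ("10", "20", "50", "100")
}
grand_total = sum(row_totals.values())

print(row_totals)     # {'new': 3751, 'old': 5303}
print(column_totals)  # {'10': 1753, '20': 2056, '50': 3021, '100': 2224}
print(grand_total)    # 9054
```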
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Propuesta para el diseño de un bastón electrónico para personas invidentes que mejorara la calidad de su desplazamiento diario</title>
		<author>
			<persName><forename type="first">C</forename><surname>Lizarraga</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<pubPlace>Arequipa</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Universidad Continental</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Pulsera para guiar a personas con discapacidad visual recibe medalla de oro en Corea</title>
		<author>
			<persName><forename type="first">C</forename><surname>Killa</surname></persName>
		</author>
		<ptr target="https://andina.pe/agencia/noticia-pulsera-para-guiar-a-personas-discapacidad-visual-recibe-medalla-oro-corea-865204.aspx" />
	</analytic>
	<monogr>
		<title level="j">Agencia Peruana de Noticias</title>
		<imprint>
			<date type="published" when="2021">17 October 2021. 15 September 2023</date>
		</imprint>
	</monogr>
	<note>Online</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Vision Based System for Banknote Recognition Using Different Machine Learning and Deep Learning Approach</title>
		<author>
			<persName><forename type="first">N</forename><surname>Sufri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Rahmad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ghazali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shahar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Asári</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 10th Control and System Graduate Research Colloquium (ICSGRC)</title>
				<imprint>
			<date type="published" when="2019">2019. 2019</date>
			<biblScope unit="page" from="5" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Automated Honduran Banknote Image Classification using Machine Learning</title>
		<author>
			<persName><forename type="first">S</forename><surname>Castelar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Banegas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Mendoza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Soto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Davila</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 40th Central America and Panama Convention (CONCAPAN)</title>
				<imprint>
			<date type="published" when="2022">2022. 2022</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Banknote Classification Based on Convolutional Neural Network in Quaternion Wavelet Domain</title>
		<author>
			<persName><forename type="first">X</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="162141" to="162148" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Deep Learning-Based Iraqi Banknotes Classification System for Blind People</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">R</forename><surname>Awad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">T</forename><surname>Sharef</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Salih</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">L</forename><surname>Malallah</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Eastern-European Journal of Enterprise Technologies</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">115</biblScope>
			<biblScope unit="page" from="31" to="38" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Ethiopian Banknote Recognition Using Convolutional Neural Network and Its Prototype Development Using Embedded Platform</title>
		<author>
			<persName><forename type="first">C.-B</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">T</forename><surname>Aseffa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kalla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mishra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Sensors</title>
		<imprint>
			<biblScope unit="volume">2022</biblScope>
			<biblScope unit="page">4505089</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>Tamayo</surname></persName>
		</author>
		<title level="m">Sistema de Reconocimiento de Billetes para Personas con Discapacidad Visual Mediante Visión Artificial</title>
				<meeting><address><addrLine>Colombia</addrLine></address></meeting>
		<imprint>
			<publisher>Universidad EIA</publisher>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Mobile Deep Classification of UAE Banknotes for the Visually Challenged</title>
		<author>
			<persName><forename type="first">A</forename><surname>Khalil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yaghi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Basmaji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Faizal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Farhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ghazal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">9th International Conference on Future Internet of Things and Cloud (FiCloud)</title>
				<imprint>
			<date type="published" when="2022">2022. 2022</date>
			<biblScope unit="page" from="321" to="325" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Diseño e implementación de un bastón ergonómico con sistema de posicionamiento global para mejorar el desplazamiento de personas invidentes en el centro &quot;la unión nacional de ciegos del Perú</title>
		<author>
			<persName><forename type="first">E</forename><surname>Vela</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
			<publisher>Ingeniería Electrónica</publisher>
			<pubPlace>Lima</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Bastón sensorial geolocalizador inteligente para apoyar en el desplazamiento de personas invidentes en la organización regional de ciegos del Perú -Chiclayo</title>
		<author>
			<persName><forename type="first">M</forename><surname>Vilchez</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
			<pubPlace>Chiclayo</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Ingeniería de Sistemas y Computación</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Propuesta para el diseño de un bastón electrónico para personas invidentes que mejorará la calidad de su desplazamiento diario</title>
		<author>
			<persName><forename type="first">C</forename><surname>Lizárraga</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>Ingeniería Industrial</publisher>
			<pubPlace>Arequipa</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Real-Time Object Detection System with Voice Feedback for the Blind People</title>
		<author>
			<persName><forename type="first">H</forename><surname>Shah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Amin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Dadwani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Desai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chatiwala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Senjyu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>So-In</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joshi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Smart Trends in Computing and Communications</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="683" to="690" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Network-Aware 5G Edge Computing for Object Detection: Augmenting Wearables to &quot;See&quot; More, Farther and Faster</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Azzino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lyu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Pei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Boldini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mezzavilla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Beheshti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Porfiri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">E</forename><surname>Hudson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Seiple</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rangan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-R</forename><surname>Rizzo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="29612" to="29632" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Recent trends in computer vision-driven scene understanding for VI/blind users: a systematic mapping</title>
		<author>
			<persName><forename type="first">M</forename><surname>Valipoor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>De Antonio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Universal Access in the Information Society</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Rationale and development of an e-health application to deliver patient-centered care during treatment for recently diagnosed multiple myeloma patients: pilot study of the MM E-coach</title>
		<author>
			<persName><forename type="first">P</forename><surname>Geerts</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Eijsink</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Moser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Horst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Boersma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Postma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pilot and Feasibility Studies</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Dataset of Peruvian Banknotes</title>
		<author>
			<persName><forename type="first">N</forename><surname>Caytuiro-Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Peña-Alejandro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Castro-Gutierrez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE DataPort</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
