<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Assistive Mobile Application for the Blind</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Ismail</forename><surname>Sahak</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Computing</orgName>
								<orgName type="institution">University Malaysia of Computer Science &amp; Engineering</orgName>
								<address>
									<country key="MY">Malaysia</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Ong</forename><forename type="middle">Huey</forename><surname>Fang</surname></persName>
							<email>ong.hueyfang@monash.edu</email>
							<affiliation key="aff1">
								<orgName type="department">School of Information Technology</orgName>
								<orgName type="institution">Monash University Malaysia</orgName>
								<address>
									<country key="MY">Malaysia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Abdul</forename><surname>Syuhada</surname></persName>
						</author>
						<author>
							<persName><surname>Rahman</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Computing</orgName>
								<orgName type="institution">University Malaysia of Computer Science &amp; Engineering</orgName>
								<address>
									<country key="MY">Malaysia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Assistive Mobile Application for the Blind</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">7E8EC5CB21582B4502DBAAC2AEA27EB2</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T20:41+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>One of the challenges faced by blind people is the difficulty in identifying objects with concise information. They could only rely on the senses of hearing, smell, taste or touch to engage and get some perspectives of objects. Hence, this paper presents a mobile application called Iris to aid blind people in "visualising" their surroundings with descriptive objects. Iris combines the multiple object detection and optical character recognition capabilities of Microsoft Computer Vision API to turns smartphones into assistive devices for the blind to use in their daily activities.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>According to a study in 2010, visual impairment is a major global health issue affecting 285 million people in six World Health Organization regions. Approximately 39 million of them are blind, and 246 million with decreased visual acuity <ref type="bibr" target="#b8">(Pascolini &amp; Mariotti, 2012)</ref>. Another study reported that adults with visual impairment having difficulty performing their daily tasks and need assistance in activities such as reading, writing, shopping, driving and using the computer <ref type="bibr" target="#b11">(Riddering, 2016)</ref>. Along with that, the use of smartphones is prominent among the visually impaired population. The advent of computer vision technologies, such as object detection and optical character recognition (OCR), is also promising for creating more effective mobile applications to aid blind people dealing with problems of identifying objects, texts, and spatial locations without having to engage them <ref type="bibr" target="#b10">(Ramkishor &amp; Rajesh, 2017)</ref>.</p><p>Among the issues of current mobile applications for blind people is it could only detect a single object at a single frame. In a real-life environment, there is a tendency for objects to be close to each other. Therefore, it is crucial to develop a mobile application that could detect not only one but multiple objects in a single camera frame. Another issue is that most applications (e.g. BlindTool and Aipoly) respond without additional contexts or descriptions of objects with their surrounding environment. For instance, if an ap-ple is detected, the application speaks out the word "apple" to the user. Little does the user know, the apple may be on top of a table. Moreover, printed text is everywhere in our daily life, such as on reports, receipts, bank statements, product packages and medicine bottles. 
It is troublesome for blind people if they are unable to read these texts.</p><p>Therefore, this paper developed a mobile application called Iris for the utility of blind people. The user can tap on the screen to detect descriptive objects. A captured image is sent to Microsoft Computer Vision API as a parameter to retrieve values of detected objects and transcribed text in the image. The mobile application then sorts the retrieved object descriptions and text transcriptions based on confidence to form a sentence. Finally, the application speaks out the sentence to the user in a natural language.</p><p>The rest of the paper is organised as follows. First, section 2 gives an overview and discusses some of the related works. Section 3 provides an overview of Iris's design and its main components. Then, the proposed Iris's design is presented in section 4. Subsequently, section 5 concludes this paper.</p></div>
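The flow described above (rank the retrieved descriptions by confidence, form a sentence, append any transcribed text, then speak it) can be sketched as follows. This is an illustrative Python sketch, not the app's actual Android code; `form_sentence` is a hypothetical helper and the caption dictionaries mimic the kind of confidence-scored output the paper describes.

```python
def form_sentence(captions, ocr_lines):
    """Pick the highest-confidence description and append any transcribed text."""
    ranked = sorted(captions, key=lambda c: c["confidence"], reverse=True)
    sentence = ranked[0]["text"].capitalize() if ranked else "No description available"
    if ocr_lines:
        # Join OCR lines into one utterance, as Iris reads texts found on objects.
        sentence += ". It reads: " + " ".join(ocr_lines)
    return sentence

# Hypothetical values mimicking an image-analysis response for a mug on a table.
captions = [
    {"text": "a mug on a table", "confidence": 0.92},
    {"text": "a cup", "confidence": 0.55},
]
print(form_sentence(captions, ["Best", "Coffee"]))  # -> A mug on a table. It reads: Best Coffee
```

The final string would then be handed to the platform's text-to-speech engine.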
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Works</head><p>In 1996, Malaysia's National Eye survey found that among 18,027 residents examined, the age-adjusted prevalence of blindness and low vision were 0.29% and 2.44% respectively. Females had a higher age-adjusted prevalence of low vision compared to males <ref type="bibr" target="#b12">(Zainal et al., 2002)</ref>. The authors also highlighted that there is a need to evaluate the accessibility and availability of eye care services and the barriers to eye care utilisation in the country.</p><p>With higher computational and storage capacity of mobile devices, as well as growing speeds and coverage of mobile internet, provide unique possibilities for the use of smartphones as universal assistive devices <ref type="bibr" target="#b9">(Punchoojit &amp; Hongwarittorrn, 2017)</ref>. The promising development in computer vision, such as in optical character recognition (OCR), makes it possible to create assistive devices with camera-based products systems <ref type="bibr" target="#b3">(Dharmale, &amp; Ingole, 2015)</ref>. Text is widely used in our daily life and an important form of communication. For example, different signboards with directions and shop names contain important textual or symbolic information to facilitate human's knowledge and perception of the environment and in performing activities, such as for navigation. The need to read textual or symbolic information is essential in the case of blind or visually challenged persons. Having the ability to determine what objects precisely are in front of them, along with any additional information is indeed helpful for the blind <ref type="bibr" target="#b1">(Brady, Morris, Zhong, White, &amp; Bigham, 2013)</ref>.</p><p>This study had benefitted from the use of existing accessibilities technologies. 
People with visual impairment can easily browse and navigate their smartphones using accessibility features and can thus use the proposed mobile application. Table <ref type="table" target="#tab_0">1</ref> shows some of the accessibility features for blind and low-vision people that are available in mobile devices running the iOS <ref type="bibr" target="#b0">(Apple, 2018)</ref> and Android (Google, 2017) operating systems.</p><p>Object recognition brings forth a multitude of possibilities in the modern world. This study also implemented multiple object detection technology. An object detection algorithm typically creates a bounding box around each object of interest to locate it within the image. However, the algorithm does not necessarily draw just one bounding box; there could be many bounding boxes representing different objects of interest within the image, and the number is not known beforehand <ref type="bibr" target="#b6">(Khurana &amp; Awasthi, 2013)</ref>.</p><p>Faster R-CNN (Region-based Convolutional Neural Network), developed by researchers at Microsoft, is based on R-CNN, which takes a multi-phased approach to object detection. R-CNN used selective search to determine region proposals, pushed these through a classification network and then used a Support Vector Machine (SVM) to classify the different regions <ref type="bibr" target="#b4">(Hulstaert, 2018)</ref>. However, selective search is a slow and time-consuming process that affects the performance of the network. Therefore, the Faster R-CNN algorithm eliminates the selective search algorithm and lets the network learn the region proposals <ref type="bibr" target="#b5">(Kawazoe et al., 2018)</ref>. This object recognition technology is provided as an API service by Microsoft, known as the Computer Vision API. 
One can use Computer Vision in an application either through a native SDK or by invoking the REST API directly <ref type="bibr" target="#b7">(Microsoft Docs, 2020)</ref>.</p><p>Some of the existing mobile applications based on object detection for the blind are BlindTool <ref type="bibr" target="#b2">(Cohen, 2015)</ref> and Aipoly Vision (Aipoly, 2018). Table <ref type="table">2</ref> presents a comparison of these mobile applications with the proposed application. Iris is seen to be better than the others in terms of its output to the users. Iris serves to give better insight into the detected objects by describing them. For instance, if there is a mug on a table in the captured picture, Iris would describe it as "Mug on a table" instead of just saying "Mug". In addition to that functionality, Iris can read any text that is present with an object. If, say, there are two drink cans that are similar in dimension but different in brand, Iris comes in useful because it can tell the user what they are holding and what the texts on the products say, using the OCR functionality. Ultimately, the proposed mobile application helps give blind people a much better perspective of the objects around them and is useful for their day-to-day activities.</p></div>
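As a rough illustration of invoking the REST API directly, the pieces of an image-analysis request can be assembled as below. This is a Python sketch under stated assumptions: the endpoint and key are placeholders for a real Azure resource, and the `v3.2/analyze` path, `Ocp-Apim-Subscription-Key` header, and `visualFeatures` parameter follow Microsoft's documented REST conventions rather than the paper's own code.

```python
def build_analyze_request(endpoint, key):
    """Assemble the pieces of an 'analyze' call against a Computer Vision resource."""
    url = f"{endpoint}/vision/v3.2/analyze"
    headers = {
        "Ocp-Apim-Subscription-Key": key,            # per-resource API key
        "Content-Type": "application/octet-stream",  # raw image bytes in the body
    }
    params = {"visualFeatures": "Description,Tags"}  # ask for captions and tags
    return url, headers, params

url, headers, params = build_analyze_request(
    "https://example.cognitiveservices.azure.com", "<subscription-key>"
)
# A client would then POST the captured image, e.g. with the requests library:
# requests.post(url, headers=headers, params=params, data=image_bytes).json()
```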
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Requirement Analysis</head><p>Requirement analysis is done to understand what will be built, why it should be built, and in what order it should be built. This section explains in detail about the needs of the target users of this application, which are blind people. Interviews were conducted with three voluntary respondents, courtesy of Malaysian Association for the Blind in Kuala Lumpur. Two of them are males, and one is female. They are aged between 24 and 31 years old. The purpose of the interviews was to understand the challenges that blind people face in identifying objects, to get their perception regarding mobile applications, and get their inputs on the development of Iris. The following subsections discussed the results of the interviews.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Challenges in Identifying Objects</head><p>Most of them have problems identifying new objects even by touching them. It means that when a product that they have been buying had changed the packaging, then they would have difficulty to identify the product without someone telling them what it is. Problems also arise when they are unable to touch something, and they could not determine what objects are present in their proximity, or even the ronment or location they are situated in. They would need to depend on other senses, such as sound and smell, which could be tricky if it is a new environment. The respondents agreed that they prefer to know what environment they are in without having to touch around.</p><p>Besides, the respondents have trouble distinguishing two physically similar objects. They are not able to tell one object from the other when the objects have the same textures or properties when touched. Hence, they need the ability to differentiate objects. For example, they might want to differentiate between a can of tomato soup from a can of evaporated milk. This problem also relates to their inability to read texts if they are not in braille.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Perceptions of Using Mobile Applications</head><p>All of the respondents have their personal smartphones. However, their primary uses of a smartphone merely to text and call. They use assistance features that come with their phones to navigate and interact with the phone's interfaces. The respondents did not use any assistance applications with their smartphones, but they surely welcome any applications that could support their visual needs. One of the respondents shared that they wanted a mobile application that could tell if their hair or cloth is messy. Another respondent shared that he wanted an application that would tell him if there are things in front of him and to tell him the description of cer-tain items. The responded also added that he wanted an application to manage his medications because it could be confusing sometime.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Features in Iris</head><p>Due to usability reasons, all of the respondents agreed that tapping the screen would be the best way to take a picture. It presumably because it is easier to just tap anywhere on the screen rather than having to locate a button. One key feature suggested by a respondent was an auto-flash feature. This feature proves to be useful to blind people because they would not know if the scene is dark. Hence, having the autoflash feature helps with the overall usability of the application. Another key feature suggested was repeating the application's instructions. This feature was suggested to allow repetition of instructions for new users and without having to go through the process of retaking the picture. This part of the interview had helped in designing the application to be more user-friendly to blind people.  After uploading an image, Computer Vision API's output tags based on the objects, living beings, and actions identified in the image. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets and others. Computer Vision API's algorithms analyse the content in an image. This analysis forms the foundation for a "descriptive" displayed in complete sentences. Computer Vision API's algorithms generate various descriptions based on the objects identified in the image. The descriptions are each evaluated and a confidence score generated. An ordered list is then generated from the highest confidence score to the lowest.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">The Proposed Mobile Application</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Architecture</head><p>OCR technology detects text content in an image and extracts the identified text into a machine-readable character stream <ref type="bibr" target="#b10">(Ramkishor &amp; Rajesh, 2017)</ref>. This technology can be used for search and numerous other purposes like medical records, security and banking. It automatically detects the language. OCR supports 25 languages, and the accuracy of text recognition depends on the quality of the image. Inaccurate detections may be caused by blurry images, handwritten or cursive text, artistic font styles, small text size, complex backgrounds, shadows, or glare over text or perspective distortion, oversized or missing capital letters at the beginnings of words, subscript, superscript, or strikethrough text.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">User Interfaces</head><p>Figure <ref type="figure" target="#fig_2">3</ref> (a) shows the main interface for Iris. The main interface serves as the only interaction screen of the application to make it easy to use for users. It is applied to reduce short-term memory load, especially for blind people. The interface consists mostly of the camera view, followed by the result panel below it; where results are shown in text, along with the spoken output that the application produces.</p><p>Users have to tap anywhere on the screen to quickly take a picture for it to produce descriptions of objects and texts in the picture. To cancel a request, the user would only have to swipe the screen while the application is loading a request or speaking an output. Another feature is the users can tap and hold anywhere on the screen for the application to repeat a previous description. The control conventions were utilised based on the ease-of-use for blind people, and at the same time reducing the short-term memory load that was mentioned. is also spoken to the user. As previously discussed, in this specific case, the API had not been able to return a result in descriptions. Hence, the tags from the result's JSON are accessed and mapped; objects to their characteristics. It makes it possible for the application to output a result even when the Computer Vision API could not construct a description from the picture.</p><p>Figure <ref type="figure" target="#fig_3">4</ref> (a) shows an output when there are texts associated with the object detected. It can be seen that the outputs from two of the APIs used are combined within the application to create an output that is intuitive to the user. While Figure <ref type="figure" target="#fig_3">4</ref> (b) shows an exception case when Iris is opened in a poorlylit environment. The figure depicts that the light source is coming from the smartphone's flashlight. 
In this case, the application detects that the ambient light level in the dark room is too low. To counter this, Iris automatically toggles the smartphone's flashlight on. This makes the application more accessible to blind people, who cannot tell whether they are in a dark environment while using Iris; the feature helps illuminate objects in the dark. </p></div>
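The auto-flash decision above amounts to a simple threshold rule, sketched below in Python. The lux cutoff is an assumed value for illustration, not the one used in Iris, and `should_toggle_flash` is a hypothetical helper; on Android the actual light reading would come from the ambient light sensor and the toggle from the camera's flashlight API.

```python
LUX_THRESHOLD = 10.0  # assumed cutoff for "too dark"; a real app would tune this

def should_toggle_flash(ambient_lux, flash_on):
    """Return True when the flashlight state disagrees with the current light level."""
    too_dark = ambient_lux < LUX_THRESHOLD
    return too_dark != flash_on

print(should_toggle_flash(2.5, False))   # dark room, flash off -> True (turn it on)
print(should_toggle_flash(120.0, False)) # bright room, flash off -> False
```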
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>In a nutshell, this paper adopted multiple object recognition and OCR technologies to develop an assistive mobile application for blind people. Iris helps to identify objects and produce descriptive texts from a picture taken by a camera. It serves to describe and distinguish objects so that the blind can have a better insight into the objects in their surroundings. The overall outcome was satisfactory, considering that blind people can make use of the proposed mobile application to tackle problems in their daily activities, hence, aiding them towards independence. However, there are plenty of features that could be implemented to improve the application, such as lower latency and face detection. With 5G technology a major availability in the future, the application could access advanced algorithms and image processing services in the cloud and retrieve the results almost instantaneously. Moreover, the application could be updated to de-tect and describe human faces, which will be a great support in the communication of blind people with others.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1</head><label>1</label><figDesc>Figure 1 shows the overall system architecture of Iris. It interfaces with Microsoft Cognitive Service and Android OS APIs in order to take advantage of the service that they offer in complement with the application's features. The architecture of the application follows the design of Mobile-View-Controller.</figDesc><graphic coords="3,308.70,393.55,252.55,200.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The system architecture of Iris</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: (a) Iris's main interface (b) Describing multiple objects</figDesc><graphic coords="4,54.00,503.50,242.65,137.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: (a) Iris's main interface (b) Describing multiple objects</figDesc><graphic coords="4,315.10,318.50,242.80,149.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="2,315.10,146.05,242.80,376.05" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>and Android (Google, 2017) operating systems. Accessibility features on mobile operating systems</figDesc><table><row><cell>iOS</cell><cell>Android</cell></row><row><cell>VoiceOver</cell><cell>TalkBack</cell></row><row><cell>Speak Screen</cell><cell>Select to Speak</cell></row><row><cell>Captioning &amp;</cell><cell>Audio &amp; on-screen</cell></row><row><cell>audio descriptions</cell><cell>text</cell></row><row><cell>Dark mode and smart</cell><cell>Contrast and colour</cell></row><row><cell>invert colours</cell><cell>options</cell></row><row><cell cols="2">Zoom and font adjustment Change display and</cell></row><row><cell></cell><cell>font size</cell></row><row><cell>Magnifier</cell><cell>Magnification</cell></row><row><cell>Accessibility Shortcuts</cell><cell>Interaction controls</cell></row><row><cell>Dictation</cell><cell>Voice dictation</cell></row><row><cell>Braille entry &amp; display</cell><cell>BrailleBack</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><surname>Apple</surname></persName>
		</author>
		<ptr target="https://itunes.apple.com/us/app/aipoly-vision-sightfor-blind-visually-impaired/id1069166437?mt=8" />
		<title level="m">Aipoly Vision: Sight for Blind &amp; Visually Impaired on the App Store</title>
				<imprint>
			<date type="published" when="2018">2018. 2018</date>
		</imprint>
	</monogr>
	<note>Accessibility on iOS</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Visual challenges in the everyday lives of blind people</title>
		<author>
			<persName><forename type="first">E</forename><surname>Brady</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R</forename><surname>Morris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>White</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Bigham</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</title>
				<meeting>the SIGCHI Conference on Human Factors in Computing Systems<address><addrLine>Paris, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">BlindTool -A mobile app that gives a &quot;sense of vision&quot; to the blind with deep learning</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Cohen</surname></persName>
		</author>
		<ptr target="https://github.com/ieee8023/blindtool" />
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Text Detection and Recognition with Speech Output in Mobile Application for Assistance to Visually Challenged Person</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">D</forename><surname>Dharmale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P V</forename><surname>Ingole</surname></persName>
		</author>
		<ptr target="https://support.google.com/accessibility/android/answer/6006564" />
	</analytic>
	<monogr>
		<title level="m">Android accessibility overview -Android Accessibility Help</title>
				<imprint>
			<publisher>Google</publisher>
			<date type="published" when="2015">2015. 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">A Beginner&apos;s Guide to Object Detection</title>
		<author>
			<persName><forename type="first">L</forename><surname>Hulstaert</surname></persName>
		</author>
		<ptr target="https://www.datacamp.com/community/tutorials/object-detection-guide" />
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Faster R-CNN-based glomerular detection in multistained human whole slide images</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Kawazoe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Shimamoto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Yamaguchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Shintani-Domoto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Uozaki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fukayama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ohe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Imaging</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">7</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Techniques for Object Recognition in Images and Multi-Object Detection</title>
		<author>
			<persName><forename type="first">K</forename><surname>Khurana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Awasthi</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<ptr target="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/home" />
		<title level="m">What is Computer Vision? -Computer Vision -Azure Cognitive Services</title>
				<imprint>
			<date type="published" when="2020">2020. 2020</date>
		</imprint>
		<respStmt>
			<orgName>Microsoft Docs</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Global estimates of visual impairment: 2010</title>
		<author>
			<persName><forename type="first">D</forename><surname>Pascolini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">P</forename><surname>Mariotti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">British Journal of Ophthalmology</title>
		<imprint>
			<biblScope unit="volume">96</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page">614</biblScope>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Usability Studies on Mobile User Interface Design Patterns: A Systematic Literature Review</title>
		<author>
			<persName><forename type="first">L</forename><surname>Punchoojit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Hongwarittorrn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Human-Computer Interaction</title>
				<imprint>
			<date type="published" when="2017">2017. 2017</date>
			<biblScope unit="page">6787504</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Artificial Vision for Blind Peoples using OCR Technology</title>
		<author>
			<persName><forename type="first">V</forename><surname>Ramkishor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Rajesh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Emerging Trends &amp; Technology in Computer Science</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="30" to="33" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Visual Impairment and Factors Associated with Difficulties with Daily Tasks</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">T</forename><surname>Riddering</surname></persName>
		</author>
		<ptr target="https://scholarworks.wmich.edu/dissertations/2465" />
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
		<respStmt>
			<orgName>Western Michigan University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Doctoral dissertation</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Prevalence of blindness and low vision in Malaysian population: results from the National Eye Survey 1996</title>
		<author>
			<persName><forename type="first">M</forename><surname>Zainal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Ismail</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">R</forename><surname>Ropilah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Elias</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Arumugam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Alias</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">P</forename><surname>Goh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The British journal of ophthalmology</title>
		<imprint>
			<biblScope unit="volume">86</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="951" to="956" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
