<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Software Module for Unmanned Autonomous Vehicle&apos;s On-board Camera Faults Detection and Correction</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Egor</forename><surname>Domnitsky</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">ITMO University</orgName>
								<address>
									<addrLine>Kronverksky Pr. 49, bldg. A</addrLine>
									<postCode>197101</postCode>
									<settlement>St. Petersburg</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vladimir</forename><surname>Mikhailov</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">ITMO University</orgName>
								<address>
									<addrLine>Kronverksky Pr. 49, bldg. A</addrLine>
									<postCode>197101</postCode>
									<settlement>St. Petersburg</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Evgeniy</forename><surname>Zoloedov</surname></persName>
							<email>evgenijzoloedov@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">ITMO University</orgName>
								<address>
									<addrLine>Kronverksky Pr. 49, bldg. A</addrLine>
									<postCode>197101</postCode>
									<settlement>St. Petersburg</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Danila</forename><surname>Alyukov</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">ITMO University</orgName>
								<address>
									<addrLine>Kronverksky Pr. 49, bldg. A</addrLine>
									<postCode>197101</postCode>
									<settlement>St. Petersburg</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sergey</forename><surname>Chuprov</surname></persName>
							<email>chuprov@itmo.ru</email>
							<affiliation key="aff0">
								<orgName type="institution">ITMO University</orgName>
								<address>
									<addrLine>Kronverksky Pr. 49, bldg. A</addrLine>
									<postCode>197101</postCode>
									<settlement>St. Petersburg</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Egor</forename><surname>Marinenkov</surname></persName>
							<email>egormarinenkov@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">ITMO University</orgName>
								<address>
									<addrLine>Kronverksky Pr. 49, bldg. A</addrLine>
									<postCode>197101</postCode>
									<settlement>St. Petersburg</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ilia</forename><surname>Viksnin</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">ITMO University</orgName>
								<address>
									<addrLine>Kronverksky Pr. 49, bldg. A</addrLine>
									<postCode>197101</postCode>
									<settlement>St. Petersburg</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Software Module for Unmanned Autonomous Vehicle&apos;s On-board Camera Faults Detection and Correction</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">B7B0E6A6EFB10DA39C0FD0C132E7F02F</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T00:34+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>UAV</term>
					<term>Fault detection</term>
					<term>Fault correction</term>
					<term>On-board camera</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The proper operation of sensor devices is crucial for the localization and movement of unmanned autonomous vehicles. On-board cameras and computer vision technologies are used in many models of unmanned vehicles and robotic devices for recognizing surrounding objects. However, malfunctions in the procedures of receiving or processing a video stream can significantly affect the vehicle's safety and endanger other road users. In this paper, we review existing methods for detecting and correcting faults occurring in the video stream from an on-board camera. Real-time fault detection and correction software based on existing solutions is proposed. Moreover, we perform a demo-setup with a test video fragment to assess the software performance under different light conditions. A video of the software operating in the demo-setup is provided. The proposed approach and the software developed on its basis showed appropriate performance in daylight conditions.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Nowadays, with intensive technology development, the urban population and the amount of private transport are expected to grow continuously in the coming years. In the long term, congested urban traffic will require precise management in terms of automation and optimization. As stated in <ref type="bibr" target="#b0">[1]</ref>, the implementation of the popular "Smart City" concept poses a variety of challenges for the transportation area, such as ensuring the safety of road users, traffic optimization, accident prevention, and other significant issues.</p><p>One possible way to meet these challenges is the integration of unmanned autonomous vehicles (UAVs). However, such UAVs should be reliable and conform to functional and information security and safety requirements. UAV on-board devices for collecting and transmitting data and for performing localization and movement (sensors, cameras, transmitters, etc.) need to be supervised by a special subsystem capable of performing real-time fault detection. By detecting faulty, defective, or maliciously attacked elements, such a subsystem prevents negative effects on joint on-board systems and on other vehicles. For example, it is critical for an on-board camera to have a full view of the road, especially when it is responsible for providing other joint systems with the environmental information used for orientation. In vehicular ad hoc networks (VANETs), disinformation can lead to critical consequences, such as traffic accidents, human injuries or deaths, and financial losses.</p><p>In the present paper, we analyze algorithmic methods for detecting and correcting possible malfunctions in the on-board camera video stream, which is used by the UAV for perception and localization purposes. Moreover, we develop and assess custom software that processes, detects, and corrects malfunctions in the on-board camera's video stream in real time.</p><p>The paper is organized as follows. Section 2 contains an overview of existing video stream analysis and processing algorithms for identifying and correcting on-board camera faults. Section 3 describes the goals and objectives of the present study. Section 4 describes the chosen methods for detecting and correcting the selected malfunctions. Section 5 contains a demonstration of the developed software, a description of the testing approach, and an overview of the results for each implemented malfunction. Section 6 states the conclusions and plans for further research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>Since autonomous technology started to develop actively, various solutions for camera malfunction detection have been introduced. In <ref type="bibr" target="#b1">[2]</ref> the authors proposed a frame-by-frame video stream processing algorithm for a video surveillance system. Each frame is processed as a whole and divided into blocks. From each frame (and each frame block), images of brightness, brightness gradient, borders, intensity and border direction (e.g., using the Sobel operator), the RGB and HSV representations of the frame, as well as the mean values of all named parameters, are obtained. The algorithm detects object movement in the frame and creates a motion picture for each frame. Then, the analyzed input frame is compared with the previous "saved" frame, which is also fully analyzed. A "saved" frame is an array of blocks on which no movement or deviations (malfunctions) were recorded. When the input and "saved" frames are compared, a malfunction candidate image is formed: each block of the saved frame and the corresponding block of the input frame have their parameters and mean values compared, and if the difference exceeds a certain threshold, the block is considered a malfunction candidate. Blocks that were not identified as faulty and did not participate in the motion picture renew the corresponding blocks of the "saved" frame. Further, the motion picture is applied to the formed picture of candidates, and the blocks that participated in the motion are excluded from it. Thus, the malfunction picture is formed. In turn, based on the comparison of frame parameters over a set of compared frames, several fault patterns are formed, one for each fault type. The sets of compared parameters responsible for certain malfunctions are also presented in the paper. The proposed algorithm is computationally expensive and can be used effectively only with stationary cameras.</p><p>In <ref type="bibr" target="#b2">[3]</ref> morphological analysis for simple malfunctions and a machine learning approach for detecting complex issues are proposed. The idea is to detect: lack/excess of brightness by counting the number of gray-level pixels; saturation error by counting pixels with high saturation; freezing by counting identical frames; frame loss by counting blue/black frames; broken frames by gradient map evaluation; and excess of palette colors (color cast) by color space deviation evaluation. A convolutional neural network is used to detect frame banding malfunctions, overlaps, and image blur. For mobile cameras, morphological analysis may operate rather effectively due to its simplicity. The use of a convolutional neural network is a promising approach; however, given hardware restrictions and deep learning requirements, it might not be an effective solution.</p><p>In <ref type="bibr" target="#b3">[4]</ref> the authors investigated the problem of stereoscopic 3D (S3D) color correction in terms of visual inconsistency, which leads to faulty frame perception. In the paper, a color correction algorithm for S3D images and videos is proposed that simultaneously deals with global, local, and temporal color inconsistencies. The algorithm is split into three steps:</p><p>• coarse-grained color grading for global color matching; • fine-grained color correction; • local color correction.</p><p>These steps ensure structural consistency before and after the color correction procedure. Moreover, the display functions for each color channel are changed gradually along the video stream to avoid abrupt temporal color deviations. Experimental results showed that the proposed algorithm is superior to many modern image and video color correction techniques.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Problem Statement</head><p>The aim of this work is to develop multiple-fault detection software that allows real-time processing and correction of the UAV on-board camera's video stream. The software should detect malfunctions, apply correction measures (if possible), and notify an operator.</p><p>To reach the research aim, the following tasks are introduced:</p><p>1. to examine which video stream properties can be obtained for further processing; 2. to define the approach for video stream processing; 3. to determine the most common camera malfunctions; 4. to analyze the proposed malfunction detection methods and define the most appropriate; 5. to analyze existing correction methods and define the most appropriate; 6. to implement the selected methods in software performing real-time video stream fault detection and correction, and to test it; 7. to provide conclusions on the study performed.</p><p>Maintaining a UAV's functionality is a complex challenge. An integral part of this issue is initial problem detection. Before the safety-responsible subsystem applies measures to control functionality, it is necessary to determine the possible damage, as this characteristic determines the possible measures for its mitigation or for minimizing its harmful effect. Accordingly, the control of the on-board systems' functionality is divided into two stages:</p><p>1. problem detection and determining its nature: it is necessary to determine the on-board system's parameters for its further self-diagnosis and malfunction detection; 2. applying corrective measures: if possible, return the system to normal operation without physical interaction by applying software correction algorithms.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Correction Measures Overview</head><p>In the present study, we consider three common on-board camera faults: color cast, image blur, and lens overlap by other objects or substances, e.g., dirt. The approach for video stream processing is frame-by-frame analysis.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Color cast 4.1.1. Detection</head><p>To detect this malfunction, an approach proposed in <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b5">6]</ref> is used. In the RGB color space, it is difficult to determine the color deviation of a frame, because all three pixel "coordinates" in the color space are responsible for color. The researchers proposed translating the image into the Lab color space, where 𝐿 is responsible for pixel brightness but not for the color component, while the two other channels carry the color components: 𝑎 on the positive semi-axis (up to +127) for magenta and on the negative semi-axis (down to −128) for green, and 𝑏 on the positive semi-axis for yellow and on the negative semi-axis for blue. Due to the color space change, we can place a point (pixel) on the color plane (𝑎, 𝑏) and therefore determine the deviation of the point density relative to the intersection of the axes 𝑎 and 𝑏 (the (0, 0) point). Density here means the concentration of pixels in some area of the color space (around the (0, 0) point in the normal case). The density can be characterized by two calculated parameters: 𝐷, the average chromaticity (the distance from the (0, 0) point to the averaged density "center"), defined by (<ref type="formula" target="#formula_0">1</ref>) and (<ref type="formula" target="#formula_1">2</ref>); and 𝑀, the average chromaticity momentum, defined by (<ref type="formula" target="#formula_3">3</ref>) and (4), i.e., the average distance from the averaged density center to the points surrounding it, namely those forming the density itself (the average radius of the density). 
The factor (cast factor) 𝐾 = 𝐷/𝑀 indicates the presence of a color cast: the larger it is (i.e., the larger 𝐷 and the smaller 𝑀), the more distinguishable the color deviation is.</p><formula xml:id="formula_0">mean_a = \frac{\sum_{i=1}^{H}\sum_{j=1}^{W} a(i,j)}{H \times W}, \quad mean_b = \frac{\sum_{i=1}^{H}\sum_{j=1}^{W} b(i,j)}{H \times W}<label>(1)</label></formula><formula xml:id="formula_1">D = \sqrt{mean_a^2 + mean_b^2}<label>(2)</label></formula><formula xml:id="formula_3">M_a = \frac{\sum_{i=1}^{H}\sum_{j=1}^{W} |a(i,j) - mean_a|}{H \times W}, \quad M_b = \frac{\sum_{i=1}^{H}\sum_{j=1}^{W} |b(i,j) - mean_b|}{H \times W}<label>(3)</label></formula><formula xml:id="formula_4">M = \sqrt{M_a^2 + M_b^2}<label>(4)</label></formula><p>In <ref type="bibr" target="#b6">[7]</ref> the authors proposed using the interval from 1 to ≈2 as the normal range of the 𝐾 factor. If 𝐾 &gt; 2, a fault is detected and a warning message is displayed in the operator interface. Cases where 𝐾 &lt; 1 are considered normal depending on the on-board camera characteristics and the overall luminance. The precise thresholds of the 𝐾 factor need to be set according to the specific camera model.</p></div>
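The cast-factor computation of (1)-(4) can be sketched in a few lines. The following is an illustrative NumPy version (not the paper's C++/OpenCV module), assuming the frame has already been converted to Lab with signed 𝑎 and 𝑏 channels centered at zero:

```python
import numpy as np

def cast_factor(a, b):
    """Cast factor K = D/M from signed Lab chroma channels a and b.

    D - average chromaticity: distance from (0, 0) to the mean chroma point.
    M - average chromaticity momentum: mean absolute spread around that point.
    A large K (large D, small M) indicates a pronounced color cast.
    """
    mean_a, mean_b = a.mean(), b.mean()
    D = np.hypot(mean_a, mean_b)            # eqs. (1)-(2)
    M_a = np.abs(a - mean_a).mean()         # eq. (3)
    M_b = np.abs(b - mean_b).mean()
    M = np.hypot(M_a, M_b)                  # eq. (4)
    return D / M if M > 0 else float("inf")
```

A uniformly shifted chroma plane yields a large K, while chroma values spread symmetrically around (0, 0) yield a small one, matching the K &gt; 2 detection rule above.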
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.2.">Correction</head><p>In <ref type="bibr" target="#b7">[8]</ref> Gasparini and Schettini proposed several methods for color cast correction. The measured RGB values of a frame differ under various viewing conditions; however, human eyes are capable of compensating for the light source chromaticity and approximately retaining the scene colors. This phenomenon is known as chromatic adaptation. Digital imaging systems cannot account for these shifts in their color balance. In order to restore the original frame chromaticity under different lighting and viewing conditions, the measured values of the RGB channels need to be converted. These conversions are called chromatic adaptation models. A chromatic adaptation model converts the RGB channel values of one viewing condition set to those matching the required one.</p><p>The gray world algorithm assumes that, for an image with enough color variation, the average values of its RGB channels are equal to a common gray value. Thus, in an image taken with a digital camera in a particular lighting environment, the color cast caused by this lighting can be removed via this algorithm. After the gray value is selected, each color channel is scaled by applying a Von Kries transformation adapted to RGB space, represented by (5). The Von Kries transformation coefficients are defined by (6). The averages of the RGB channels are calculated according to (7). 
The gray value is defined according to (8).</p><formula xml:id="formula_5">R_{new} = k_R \times R, \quad G_{new} = k_G \times G, \quad B_{new} = k_B \times B \quad (5) \\ k_R = Gray_R / R_{avg}, \quad k_G = Gray_G / G_{avg}, \quad k_B = Gray_B / B_{avg} \quad (6) \\ R_{avg} = \sum R_i / n, \quad G_{avg} = \sum G_i / n, \quad B_{avg} = \sum B_i / n<label>(7)</label></formula><formula xml:id="formula_6">Gray_R = Gray_G = Gray_B = \frac{R_{avg} + G_{avg} + B_{avg}}{3}<label>(8)</label></formula><p>In fact, most color balancing/restoration algorithms work well only under certain accepted assumptions. For the gray world algorithm to operate correctly, the frame/image needs to be sufficiently colorful; otherwise, the results can be distorted or gray-prevailing.</p></div>
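Under the stated assumption of sufficient color variation, the gray world scaling of (5)-(8) can be sketched as follows (an illustrative NumPy version, not the authors' C++/OpenCV module):

```python
import numpy as np

def gray_world(frame):
    """Gray-world color balancing via a Von Kries-style diagonal scaling.

    frame: float RGB array of shape (H, W, 3). Each channel is scaled so
    that its mean moves to the common gray value (R_avg + G_avg + B_avg) / 3.
    """
    avg = frame.reshape(-1, 3).mean(axis=0)  # eq. (7): per-channel means
    gray = avg.mean()                        # eq. (8): common gray value
    k = gray / avg                           # eq. (6): channel gains
    return np.clip(frame * k, 0, 255)        # eq. (5): diagonal transform
```

For a uniformly reddish frame, e.g. constant RGB (150, 100, 50), the gains become (2/3, 1, 2) and every pixel is mapped to the gray value 100, illustrating the failure mode noted above: an insufficiently colorful frame collapses toward gray.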
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Blur</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.1.">Detection</head><p>To detect frame blur, we adopt and apply the algorithm described in <ref type="bibr" target="#b8">[9]</ref>. The main idea of this approach is to calculate the dispersion of the frame edges and compare it with a threshold value. This can be done by utilizing the second derivative: if the derivative changes its sign at some point, that point is an inflection point of the function graph. The algorithm counts the number of black-to-white transitions (the dispersion).</p><p>The algorithm steps for image blur detection are described below.</p><p>1. Get the input frame.</p><p>2. Convert the input frame from the RGB to the GRAY color space to avoid possible interference in the estimation. 3. Apply the Laplace operator. At this stage, all object edges are outlined in the frame. 4. Count the number of transitions (the dispersion). 5. Compare the obtained value with the predefined threshold. The threshold is determined experimentally, as it depends on many factors, such as illumination and the number of objects in the frame. If the value is greater than the threshold, the image is not blurry; otherwise, blur is detected.</p></div>
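The steps above can be approximated in a short NumPy sketch. Note one substitution: instead of the transition count, this sketch uses the variance of the Laplacian response as the dispersion measure, and the default threshold is a hypothetical value that, as stated above, must be tuned experimentally:

```python
import numpy as np

def laplacian_variance(gray):
    """Blur measure: variance of the Laplacian response of a grayscale frame.

    A sharp frame has strong second-derivative activity at object edges, so
    a response variance below the chosen threshold signals blur.
    """
    g = gray.astype(float)
    # 3x3 Laplacian applied to the frame interior (no padding)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return lap.var()

def is_blurry(gray, threshold=100.0):
    # threshold must be tuned per camera, illumination, and scene complexity
    return laplacian_variance(gray) < threshold
```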
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.2.">Correction</head><p>The blur correction algorithm steps are provided below.</p><p>1. Calculate the absolute difference between the current and the next GRAY frames.</p><p>2. Count the number of pixels above and below the normal value.</p><p>3. If the threshold is exceeded, apply the Sobel operator, where 𝐴 is the input frame matrix, 𝐺 𝑥 is the derivative along 𝑥 (9), 𝐺 𝑦 is the derivative along 𝑦 (10), and 𝐺 is the combined gradient magnitude (11).</p><formula xml:id="formula_7">G_x = \begin{bmatrix} -1 &amp; 0 &amp; +1 \\ -2 &amp; 0 &amp; +2 \\ -1 &amp; 0 &amp; +1 \end{bmatrix} * A,<label>(9)</label></formula><formula xml:id="formula_8">G_y = \begin{bmatrix} -1 &amp; -2 &amp; -1 \\ 0 &amp; 0 &amp; 0 \\ +1 &amp; +2 &amp; +1 \end{bmatrix} * A,<label>(10)</label></formula><formula xml:id="formula_9">G = \sqrt{G_x^2 + G_y^2},<label>(11)</label></formula><p>4. Display the borders in the frame.</p></div>
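The Sobel operator of (9)-(11) can be sketched without OpenCV as a plain sliding-window correlation over the frame interior (an illustrative NumPy version, not the paper's implementation; since only the magnitude is kept, the distinction between correlation and true convolution does not matter here):

```python
import numpy as np

def sobel_edges(gray):
    """Sobel gradient magnitude G = sqrt(Gx^2 + Gy^2), per eqs. (9)-(11)."""
    g = gray.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # eq. (9)
    ky = kx.T                                                   # eq. (10)
    H, W = g.shape
    Gx = np.zeros((H - 2, W - 2))
    Gy = np.zeros((H - 2, W - 2))
    # correlate the 3x3 kernels over the interior of the frame
    for i in range(3):
        for j in range(3):
            patch = g[i:i + H - 2, j:j + W - 2]
            Gx += kx[i, j] * patch
            Gy += ky[i, j] * patch
    return np.hypot(Gx, Gy)                                     # eq. (11)
```

A flat frame produces zero response everywhere, while a vertical step edge produces a strong response along the step, which is what allows step 4 above to display the borders.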
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Dirt detection</head><p>To detect interfering objects and substances overlapping the lens, the following algorithm, based on calculating the difference between adjacent frames, is applied. It was proposed by one of the authors of the present paper. The algorithm is organized as follows:</p><p>1. Get the input frame. 2. Convert the input frame from the RGB to the GRAY color space to avoid possible interference in the estimation. 3. Calculate the absolute difference between the adjacent frames. 4. If the difference exceeds a certain threshold, the percentage of differing pixels between the adjacent frames is calculated. 5. If this percentage exceeds a certain threshold, the Sobel operator is applied to outline the overlap edges.</p></div>
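The frame-differencing check (steps 3-5 above, before the Sobel outlining) can be sketched as follows; both threshold defaults here are hypothetical and would need per-camera tuning:

```python
import numpy as np

def detect_overlap(prev_gray, curr_gray, diff_thresh=25.0, pct_thresh=0.3):
    """Frame-difference check for lens overlap (dirt, stickers, etc.).

    If the share of pixels whose absolute inter-frame difference exceeds
    diff_thresh is above pct_thresh, the frame is flagged as a candidate
    overlap; the algorithm then outlines its edges with the Sobel operator.
    Both thresholds are illustrative defaults, not the paper's values.
    """
    diff = np.abs(curr_gray.astype(float) - prev_gray.astype(float))
    changed = (diff > diff_thresh).mean()  # fraction of changed pixels
    return changed > pct_thresh
```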
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results</head><p>In this section we demonstrate the detection and correction of on-board camera faults by the developed software. The program contains several modules, each of which includes a class with the variables and methods necessary for the corresponding detection/correction algorithm.</p><p>The software was developed in the C++ programming language, using the OpenCV library for frame-by-frame video stream processing and the Qt library for the interface (handy track bars to manually set and apply artificial malfunctions to the video stream). Figure <ref type="figure" target="#fig_0">1</ref> represents the graphical user interface (UI) of the developed software. On the top left side of the interface one can observe the input frame with artificial malfunctions applied; on the top right side, an output frame with the color cast corrected or the lens overlap marked. On the left side of the UI, buttons for basic file opening and image rotation are placed. In the middle of the UI are the control track bars for the artificial malfunctions: color channel balance (color cast), dirt (overlap), and blur, respectively. On the right side one can observe a fault indication panel. For testing purposes, artificial faults were manually applied to the original frame: image blur, artificial spots (overlap), and a change in the frame color balance. The conducted demo-setup of the developed software was recorded and can be accessed publicly<ref type="foot" target="#foot_0">1</ref>; further on, we reference the video time-codes. The testing was performed on a video stream fragment, which is also publicly accessible<ref type="foot" target="#foot_1">2</ref>. The module was tested on a computer equipped with an eight-core processor capable of 150 GFLOPS of computing performance, and ran effectively at 80-100 FPS. For comparison, processors with 200 TFLOPS are already available on the market, some designed specifically as platforms for UAV system development<ref type="foot" target="#foot_2">3</ref>. This allows us to consider our software module undemanding in terms of computing resources.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Color cast</head><p>The conducted testing showed that the fault detection algorithm performed well in daylight conditions and was able to detect even slight color deviations. The correction algorithm also performs well in daylight conditions; even if a slight deviation remains, it is almost indistinguishable to the human eye and incapable of disrupting the machine's correct perception of color. However, the algorithm loses its effectiveness in low light conditions, as can be seen in the testing video 1 at 2:23. To increase the algorithm's efficiency in low light conditions, it was decided to increase the input frame's brightness and contrast so that the algorithm would work correctly without the image being overly lightened. According to the OpenCV library documentation <ref type="foot" target="#foot_3">4</ref>, the cv::Mat::convertTo method processes the value of each pixel according to (12).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><formula xml:id="formula_10">g(i, j) = \alpha \cdot f(i, j) + \beta,<label>(12)</label></formula><p>where 𝑔(𝑖, 𝑗) is the output pixel value, 𝑓(𝑖, 𝑗) is the input value, 𝑖 and 𝑗 are the pixel row and column numbers, 𝛼 is the contrast coefficient (from 1 to 3), and 𝛽 is the brightness coefficient (from 0 to 100). The 𝛼 and 𝛽 coefficients need to be calculated depending on the average value of the 𝐿 channel (responsible for luminance) of the frame converted to the Lab color space. In daylight, the average 𝐿 channel value is approximately 130 (𝐿 lies in the interval from 0 to 255). Thus, let us define 𝐿𝑖 = 130 as the average daylight luminance. Frame highlighting needs to occur when the average 𝐿 &lt; 100. To slightly increase the color cast detection efficiency, we introduced a trial experimental dependence, calculated according to (13).</p><formula xml:id="formula_11">a_0 = 33, \quad b_0 = 0, \quad b = b_0 + \frac{Li - L}{Li} \cdot 20, \quad a = a_0 \cdot \left(1.0 + 1.3 \cdot \frac{Li - L}{Li}\right), \quad \alpha = 3.0 \cdot a \cdot 0.01, \quad \beta = b, \quad K = 2.0 - \frac{Li - L}{2 Li}<label>(13)</label></formula><p>Under conditions of 30-35% illumination, the color cast is detected by over-illuminating the frame, with increased graininess as a side effect. However, at a very low illumination level (about 20% of 𝐿𝑖), even a boost in brightness and contrast does not significantly increase the sensitivity of the detection algorithm. The 𝐾 factor threshold value then has to be decreased abruptly, which is a doubtful measure, as we have no information on this algorithm's applicability limits. Such a measure might result in more false positive errors under various conditions.</p></div>
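The empirical dependence (13) can be transcribed directly. The function below is an illustrative sketch: the names `adaptive_gain` and `L_day` are our own (with `L_day` standing for the paper's 𝐿𝑖 = 130), and the returned pair (𝛼, 𝛽) would then feed the per-pixel map 𝑔 = 𝛼·𝑓 + 𝛽 of (12):

```python
def adaptive_gain(L_mean, L_day=130.0):
    """Brightness/contrast coefficients per the empirical rule (13).

    L_mean: average Lab L channel of the frame (0..255).
    L_day:  assumed average daylight luminance (Li = 130 in the paper).
    Returns (alpha, beta, K_thresh): alpha and beta for the per-pixel map
    g = alpha * f + beta (12), plus the adjusted cast-factor threshold K.
    """
    a0, b0 = 33.0, 0.0
    d = (L_day - L_mean) / L_day          # relative luminance deficit
    beta = b0 + d * 20.0                  # brightness boost grows in the dark
    a = a0 * (1.0 + 1.3 * d)
    alpha = 3.0 * a * 0.01                # contrast gain
    K_thresh = 2.0 - (L_day - L_mean) / (2.0 * L_day)
    return alpha, beta, K_thresh
```

In daylight (𝐿 = 130) this reduces to 𝛼 ≈ 1, 𝛽 = 0, and 𝐾 = 2, i.e. an identity transform with the default threshold; darker frames receive a larger gain, a positive brightness offset, and a lowered threshold.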
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Blur</head><p>Blur detection showed satisfactory performance in daylight. However, in low light conditions most object edges fade, which leads to false detections, as can be seen in the testing video 1 at 2:08. In addition, false detections occur when applying the algorithm to frames with lens overlap or an uncorrected color cast (from 2:25 in the video 1 ).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3.">Dirt detection</head><p>It should be noted that this fault cannot be corrected without cleaning the lens or disassembling the camera, so the algorithm is focused only on detecting unwanted objects and spots. For a UAV's correct operation, it is vital to know whether the camera has an incomplete view in order to prevent accidents. Our demo-setup showed that the dirt detection algorithm performs well even in dim luminance. In low light conditions a false detection can occur: the algorithm marks all dark parts of the frame as unwanted objects (dirt), as one can see from 3:00 in the video 1 .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>In this paper, we proposed and developed custom software to process and correct the quality of the video stream from a UAV's on-board camera in real time. Initially, we analyzed and briefly described the existing approaches to fault detection and correction. Then, we implemented these approaches in the developed custom software as a frame-by-frame video stream processing algorithm and conducted several experimental demo-setups to assess its effectiveness. As the results showed, the algorithm performs well in daylight conditions: the manually introduced video stream faults were detected, processed, and corrected. However, in low light conditions some faults were detected improperly due to a lack of accuracy. At this stage, the software and algorithms require improvement and revision for low light conditions, accounting for the relations between faults; this, along with the implementation and testing of the proposed approach on a real UAV physical model, constitutes a future prospect for this research.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Demonstration of the developed software's graphical user interface</figDesc><graphic coords="8,89.29,84.19,420.91,279.82" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://youtu.be/PdSda2QE1yg</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://bdd-data.berkeley.edu/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">https://www.nvidia.com/ru-ru/self-driving-cars/drive-platform/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">https://docs.opencv.org/3.4/d3/d63/classcv_1_1Mat.html</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">&quot;Smart City&quot;: development perspectives and tendencies</title>
		<author>
			<persName><forename type="first">O</forename><surname>Ganin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Ganin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Ars Administrandi</title>
		<imprint>
			<biblScope unit="page" from="124" to="135" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Surveillance camera system having camera malfunction detection function to detect types of failure via block and entire image processing</title>
		<author>
			<persName><forename type="first">M</forename><surname>Itoh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Saeki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Suda</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">US Patent</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page">30</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Camera anomaly detection based on morphological analysis and deep learning</title>
		<author>
			<persName><forename type="first">L</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Digital Signal Processing (DSP)</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="266" to="270" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Visually consistent color correction for stereoscopic images and videos</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Niu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Circuits and Systems for Video Technology</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="page" from="697" to="710" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">An approach of detecting image color cast based on image semantic</title>
		<author>
			<persName><forename type="first">F</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Jin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 2004 International Conference on Machine Learning and Cybernetics (IEEE Cat. No. 04EX826)</title>
				<meeting>2004 International Conference on Machine Learning and Cybernetics (IEEE Cat. No. 04EX826)</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2004">2004</date>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="3932" to="3936" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A color cast detection algorithm of robust performance</title>
		<author>
			<persName><forename type="first">F</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Fifth International Conference on Advanced Computational Intelligence (ICACI)</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="662" to="664" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Lab-space-based detection method based on image color cast</title>
		<author>
			<persName><forename type="first">G</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>You</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Cheng</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Color correction for digital photographs</title>
		<author>
			<persName><forename type="first">F</forename><surname>Gasparini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Schettini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">12th International Conference on Image Analysis and Processing, 2003. Proceedings</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="646" to="651" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Blur image detection using laplacian operator and open-cv</title>
		<author>
			<persName><forename type="first">R</forename><surname>Bansal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Raj</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Choudhury</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2016 International Conference System Modeling &amp; Advancement in Research Trends (SMART)</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="63" to="67" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
