<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Building Change Detection in Aerial Images</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Fatima</forename><surname>Mroueh</surname></persName>
							<email>fatima.mroueh249@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Computer Science Department</orgName>
								<orgName type="institution">Lebanese University</orgName>
								<address>
									<settlement>Beirut</settlement>
									<country key="LB">Lebanon</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ihab</forename><surname>Sbeity</surname></persName>
							<email>ihab.sbeity@gmail.com</email>
							<affiliation key="aff1">
								<orgName type="department">Computer Science Department</orgName>
								<orgName type="institution">Lebanese University</orgName>
								<address>
									<settlement>Beirut</settlement>
									<country key="LB">Lebanon</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mohamad</forename><surname>Chaitou</surname></persName>
							<email>mohamad.chaitou@ul.edu.lb</email>
							<affiliation key="aff2">
								<orgName type="department">Computer Science Department</orgName>
								<orgName type="institution">Lebanese University</orgName>
								<address>
									<settlement>Beirut</settlement>
									<country key="LB">Lebanon</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Building Change Detection in Aerial Images</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">25A9A6125CFDB6EE15BEDD06F89FA7AC</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T21:37+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>change detection</term>
					<term>aerial images</term>
					<term>image segmentation</term>
					<term>image matching algorithms</term>
					<term>SIFT</term>
					<term>feature detection</term>
					<term>feature description</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this paper, we present an approach that detects changes in buildings between two multi-temporal aerial images from different sources. Since the images are in most cases not perfectly aligned, our approach takes into consideration the differences in their geometric properties: the pair of images may differ in scale or viewpoint, or share only an overlapping region. Our approach relies on segmentation to extract building masks from the original aerial images. Changes are then found by comparing the features of the pair of masks using image matching algorithms. This procedure is applied to a set of 80 pairs of aerial images of different sizes and with different applied transformations, and evaluated against the corresponding ground truth references. The evaluation yields a building change detection rate of 92.7%. These results suggest that automatic building change detection is feasible, but further research should improve the segmentation phase to better distinguish buildings and enhance the change detection method. Real-time application of the process is also a challenging perspective.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>I. INTRODUCTION</head><p>Aerial imagery is, as it sounds, the process of taking images from the air. It is a subset of a larger domain called remote sensing, which consists of acquiring data without making physical contact with the objects under study <ref type="bibr" target="#b0">[1]</ref>.</p><p>Aerial images such as satellite or drone imagery are among the richest sources of data for a wide range of applications. Change detection in aerial images is the detection of new or disappeared objects in images registered at different moments in time, possibly under different lighting, heights and camera calibrations <ref type="bibr" target="#b1">[2]</ref>. Detecting changes in aerial images of the same region taken at different times is useful and important in many domains: automatic map updating, assessing field changes after catastrophic events, detecting illegal buildings and undeclared refugee camps, analysis of urban and suburban areas, serving as a basis for automatic monitoring systems, and certain military applications. For these reasons, detecting changes in aerial images has become an important research topic <ref type="bibr" target="#b2">[3]</ref>.</p><p>In fact, several techniques and approaches have been designed and implemented to detect changes in aerial images. However, all of these techniques are motivated by the availability and fusion of different types of remote sensing data, such as data generated from Digital Elevation Models (DEM), Light Detection and Ranging (LiDAR) technology and other remote sensing technologies <ref type="bibr" target="#b0">[1]</ref>  <ref type="bibr" target="#b3">[4]</ref>, or are limited to specific types of images such as GeoTIFF images, which contain accurate geographic information such as coordinates in the global coordinate system. Furthermore, they are limited to aligned images of the same scale and viewpoint (same height, same camera calibration, same coordinates, and so on).</p><p>The main problem with these techniques is that they rely heavily on the information provided with the images, and therefore cannot be applied to images that are not enriched with such information, for example geo-spatial metadata.</p><p>Nowadays, automatic image analysis techniques are essential. Machine learning and computer vision techniques, and more specifically image matching algorithms, have proven very efficient for image processing and comparison. Nevertheless, methodologies for detecting changes in aerial images remain limited, especially for images that differ in geometric aspects such as scale and orientation and that carry no additional metadata. Going deeper into the topic is essential to introduce new, efficient insights into change detection in aerial images.</p><p>Accordingly, this research provides a complete procedure for building change detection in aerial images using machine learning and computer vision techniques and algorithms.</p><p>The main advantage of our approach is that it does not depend on any information attached to the aerial images: it treats them as plain PNG or JPG files without any enrichment. More importantly, it can detect changes between aerial images that differ in scale or viewpoint, or that only partially overlap. This way, our approach can be applied to any pair of aerial images regardless of their associated metadata or geometric properties.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. PREVIOUS STUDIES</head><p>Detecting changes in aerial images has been a long journey, and changes in buildings in particular are an essential part of it.</p><p>Looking at the previous studies related to our topic, one can see that most rely on data fusion: they integrate multiple data sources to produce more consistent and accurate information than any individual source provides.</p><p>For example, Nebiker et al. used image-based dense digital surface models (DSMs), combined with the aerial images, to compute a depth value for every pixel of an image for the detection of individual buildings. They used these models with object-based image analysis to detect changes <ref type="bibr" target="#b4">[5]</ref>. Likewise, the study of Chen Lin was based on multi-source data. They pre-processed the data using triangulation of an irregular network of data points collected by Light Detection and Ranging (LiDAR) technology; changes were then detected by finding differences in height between the LiDAR point measurements and the estimates of the building models <ref type="bibr" target="#b5">[6]</ref>. Furthermore, Alonso et al. applied the support vector machine (SVM) classification algorithm to a joint satellite and laser data set for the extraction of buildings. For change detection, they suggested comparing an old map with more recent spatial information instead of comparing a pair of images <ref type="bibr" target="#b6">[7]</ref>. Many other studies drew on data sources other than the aerial image itself, such as Digital Elevation Models (DEMs), laser scanner data, the NDVI vegetation index, the relationship between buildings and their shadows, and high-resolution aerial images, in order to detect changes in buildings <ref type="bibr" target="#b7">[8]</ref> [9] [10] <ref type="bibr">[11] [12]</ref>. Most of these studies suffered from significant problems with small buildings and with buildings surrounded by high trees.</p><p>Extracting the buildings before detecting changes was a step included in numerous studies. Some opted for region-based classification, where each small region was classified as "building" or "no building" based on a decision tree induced from training data (edge recordings of the buildings), and then classified as "change" or "no change" based on certain conditions <ref type="bibr" target="#b12">[13]</ref>. Others used the NDVI vegetation index to distinguish buildings from trees, since both have similar height information <ref type="bibr" target="#b13">[14]</ref>. A neural network classifier was also employed to classify the regions of an aerial image into multiple classes (grove, building, tree, shadow, etc.) by feeding the network with inputs such as area, average gray level, shape factor and compactness <ref type="bibr" target="#b2">[3]</ref>. Region-based segmentation was also applied using a decision tree that relies on the geometric properties of land cover objects such as elevation, spectral information, texture and shape <ref type="bibr" target="#b14">[15]</ref>. The most important and precise segmentation was achieved using convolutional neural networks, where the large imagery is divided into small patches and a CNN is trained on those patches and their corresponding three-channel map patches (building, road and background) <ref type="bibr" target="#b15">[16]</ref>. However, this work did not include change detection.</p><p>As for detecting changes in aerial images with different views, Bourdis et al. observed that camera motion and viewpoint differences introduce parallax effects. Therefore, to be robust to viewpoint differences, they introduced an algorithm that distinguishes real changes from parallax effects based on optical flow constrained by epipolar geometry <ref type="bibr" target="#b16">[17]</ref>. In other works on this point, knowing the camera calibration or the spatial information about the geographic area was essential to achieve the goal <ref type="bibr" target="#b12">[13]</ref>  <ref type="bibr" target="#b16">[17]</ref>.</p><p>Furthermore, ArcGIS Pro offers a tool that detects feature changes by finding where update line features spatially match base line features, and reports spatial changes, attribute changes, or both, as well as no change. However, all inputs to this tool must be in the same coordinate system <ref type="bibr" target="#b17">[18]</ref>, whereas we aim to detect changes even when we do not know the spatial location of the geographic region we are working on.</p><p>To the best of our knowledge, no previous study processes aerial images independently of any other source of information in order to extract buildings from them. Moreover, computer vision techniques such as image matching algorithms have not been employed to detect changes, although they have proven very efficient for image comparison.</p><p>To overcome the two problems cited above, our approach works in three steps. As we are interested in small-scale change detection (buildings), the first step is the segmentation phase, in which we eliminate a large part of the scene without losing any actual building; this is done by extracting buildings' footprints from the aerial images. Second, we use the SIFT image matching algorithm to check the correspondence of the pair of images, i.e. to make sure that they cover the same geographic region. Third, we detect the type of transformation applied to one of the images with respect to the other (scale, rotation, overlap). The detected transformation is then reversed to obtain two images of the same scale and view. In the last step, the difference image is computed and post-processed; this is where the changes in the buildings are detected.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. BACKGROUND</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Image Segmentation</head><p>Computer vision is a field intended to make computers accurately understand and efficiently process visual data such as images. Extracting and understanding image information is critical in many applications of this domain, and computer vision helps extract the features of an image in order to simplify image analysis <ref type="bibr" target="#b18">[19]</ref>.</p><p>In several cases, we may not be interested in all the components of an image, but only in certain areas or objects whose characteristics are relevant to our task. Image segmentation is one of the best techniques to handle this issue; it works by isolating objects from the rest of the image <ref type="bibr" target="#b19">[20]</ref> [21] <ref type="bibr" target="#b21">[22]</ref> [23] <ref type="bibr" target="#b23">[24]</ref>. Image segmentation mainly classifies each pixel of an image into meaningful classes that refer to specific objects. It involves grouping the elements of an image by certain criteria of homogeneity <ref type="bibr" target="#b3">[4]</ref>. It does not only predict classes for an input, but also provides additional information about the location of those classes.</p><p>Deep learning techniques have proven very efficient at solving such problems: they can learn patterns in order to predict classes. The main deep learning architecture used for image segmentation, and for image processing in general, is the Convolutional Neural Network (CNN).</p><p>Frameworks like Mask R-CNN and RetinaNet allow applying image segmentation using deep learning. However, the application domain of some of them is restricted to scene images, and they cannot be used on aerial images <ref type="bibr" target="#b24">[25]</ref>  <ref type="bibr" target="#b25">[26]</ref>. Other frameworks that work with aerial images, such as ENVI, ERDAS Imagine, eCognition and others, are also available <ref type="bibr">[27] [28]</ref>. Nevertheless, they have many limitations: some lack a vectorization tool to convert the segmented results for further analysis, while others are confused by images in which building roofs are dark and have much lower intensities than other building objects <ref type="bibr" target="#b28">[29]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Image Matching</head><p>In order to compare images, we look for specific patterns or features that are unique in the images and that can be easily compared. A feature is a relevant piece of information: a specific structure in the image such as a point, an edge or a corner. The operation of finding the features of an image is called feature detection.</p><p>Feature detection is the process of transforming the visual information of the image into a vector space. It essentially finds keypoints (or interest points) in the image. A keypoint is a point that is unique in the local area around it, and it can be matched to a corresponding point in another image. The main purpose of detecting features is to make mathematical operations on them possible, so that similar vectors lead us to similar objects or scenes in the images. Ideally, this information should be invariant under image transformations, so that the same features can be found again even if the image is transformed in some way.</p><p>Using a specific feature detection algorithm, we search for such features in the first image and then look for the same features in the other image. As a result, we get a set of points (x_i, y_i) for each image, where x_i and y_i are the coordinates of the point i detected as a feature in the image. After detecting interest points, we compute a descriptor for each one of them: the region around each feature is described so that the algorithm can find similar features in the other image. This is called feature description.</p><p>The local appearance around each feature point is described in a way that is invariant under translation, scale and rotation, so we end up with a descriptor vector for each feature point. Feature descriptors encode this information into a series of numbers and act as a sort of numerical 'fingerprint' that can be used to differentiate one image from another. Once the features and descriptors are extracted and computed, preliminary feature matches between the images can be established.</p><p>Feature matching, or more generally image matching, is the task of establishing correspondences between two images. Keypoints between two images are matched by identifying their nearest neighbors, which is achieved by comparing the descriptors across the images to identify similar features. For any two images, we get a set of pairs (x_i, y_i), (x'_i, y'_i), where (x_i, y_i) is a feature in one image and (x'_i, y'_i) is its corresponding feature in the other image.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IV. METHODOLOGY</head><p>Fig. <ref type="figure" target="#fig_0">1</ref> represents the overall process of our approach. First, the buildings' footprints are extracted from the acquired aerial image, to be used for change detection instead of the original image; to achieve this, a segmentation model for extracting building masks from aerial images is built. Second, we suppose that a database containing preprocessed masks of old aerial images of the region of interest is already prepared. In this step, we search the database for the mask that corresponds to our input mask, by computing a similarity measure between each pair of images using the SIFT image matching algorithm. Finally, after aligning the pair of masks, we detect changes by filtering their difference image.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Buildings' Footprints Extraction</head><p>Extracting buildings' footprints from aerial images is a kind of preprocessing before matching. It helps us obtain better results in change detection, since by segmenting the images we remove every element considered noise for us (i.e. not an object of interest). A segmentation model is needed for this purpose, and many tools implementing this technique are available. The tool used to achieve our goal is RoboSat <ref type="bibr" target="#b33">[34]</ref>, an end-to-end pipeline written in Python 3 for feature extraction from aerial and satellite imagery. Features can be anything visually distinguishable in the imagery, such as buildings, roads or cars <ref type="bibr" target="#b33">[34]</ref>. We chose to work with RoboSat because it is specifically designed for aerial images and has shown important results in this domain.</p><p>The data preparation tools in RoboSat help us create and prepare the dataset for training feature extraction models, and its modelling tools help with training fully convolutional neural networks for segmentation <ref type="bibr" target="#b33">[34]</ref>.</p><p>Fig. <ref type="figure" target="#fig_1">2</ref> represents an aerial image with its corresponding buildings mask. a) Data Preparation: We first walk through creating a dataset for training the feature extraction model. Such a dataset consists of satellite imagery combined with the corresponding masks for the feature we want to extract, buildings in our case. We can think of these masks as binary images that take the value zero where there is no building and one in building areas. This dataset serves as the training set for the segmentation model. The goal is a model that accepts an aerial image and outputs its corresponding buildings' footprints. As mentioned before, the footprints, rather than the original aerial images, are used to detect changes, in order to reduce all kinds of noise that may affect the accuracy of our application; our objects of interest are only buildings.</p><p>We start by extracting geometries from the OpenStreetMap (OSM) project, and then figure out where we need satellite imagery to complete the training set <ref type="bibr" target="#b34">[35]</ref>. The OpenStreetMap project creates and provides free geographic data, and the OpenStreetMap Foundation is an international not-for-profit organization supporting it. The project maintains data about roads, buildings, trails, railway stations and much more, all over the world. OSM maps are hosted on the internet and are totally free; most importantly, OSM is accurate and up to date (normally updated every day) <ref type="bibr" target="#b34">[35]</ref>.</p><p>There are two reasons for building our own segmentation model instead of using OSM data directly. The first is that OSM data do not cover all the regions we are interested in; in Lebanon, for example, building masks are not provided for the whole country. So we take advantage of the geometries available in OSM to build the segmentation model, which will later provide building footprints for regions not covered by OSM. The second, and most important, reason is that we may not know the exact location of the image in the global coordinate system, in which case we cannot use OSM extracts.</p><p>The GeoFabrik server provides convenient and updated OSM extracts which we can work with <ref type="bibr" target="#b35">[36]</ref>. The GeoFabrik team extracts, selects and processes free geodata for everyone, creating shape files, maps and map tiles with a free-of-charge download service. The geometries extracted from the GeoFabrik server are shape files with the .shp extension. A shape file is a simple format for storing the geometric location and attribute information of geographic features that can be represented by points, lines or polygons; we are only interested in the polygon representation of the buildings. These shape files can be visualized as vector layers in GIS tools, which helps us decide at what locations we need to download satellite imagery to complete the dataset.</p><p>Although the masks are not always perfect, a slightly noisy dataset still works fine when training the model on thousands of images and masks.</p><p>The next step is to download the corresponding aerial imagery, which we obtain from Mapbox <ref type="bibr" target="#b36">[37]</ref>. Mapbox Satellite is a full global base map that uses satellite and aerial imagery from providers such as NASA and USGS, and Mapbox provides an API that allows us to download the needed satellite imagery <ref type="bibr" target="#b36">[37]</ref>.</p><p>RoboSat works with the Slippy Map tile format to abstract georeferenced imagery into tiles of the same size. A Slippy Map is, in general, a term referring to modern web maps that let you zoom and pan around. By default, the Slippy Map renders tiles of 256 x 256 pixel PNG files, each stored as a file whose path encodes the zoom level, column and row. RoboSat offers the tool responsible for tiling the collected aerial images as well as the extracted geometries.</p><p>With the satellite imagery downloaded and the corresponding masks rasterized, our dataset is complete and ready. Fig. <ref type="figure" target="#fig_2">3</ref> shows the downloaded aerial imagery tiles with their corresponding buildings' footprints.</p></div>
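The Slippy Map addressing just described can be reproduced with a few lines of standard spherical-Mercator arithmetic. The helper below is a sketch (not part of RoboSat) that maps a WGS84 longitude/latitude to the tile indices used to name the tile files:

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Return the Slippy Map tile indices (x, y) containing the given
    WGS84 coordinate at the given zoom level (tiles are 256x256 px)."""
    n = 2 ** zoom                       # number of tiles per axis
    x = int((lon + 180.0) / 360.0 * n)  # column index
    lat_rad = math.radians(lat)
    # Row index under the spherical-Mercator projection.
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

The tile for a point is then stored at a path of the form zoom/x/y.png, which is the directory layout RoboSat's tiling tools read and write.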
<div xmlns="http://www.tei-c.org/ns/1.0"><head>b) Training and Modelling:</head><p>The RoboSat segmentation model is a fully convolutional neural network that we train on pairs of aerial images and corresponding masks. The training process takes place on a GeForce GTX 1080 GPU. After picking the best checkpoint, the model can predict segmentation probabilities for every pixel of an image, indicating how likely each pixel is to be background or building. These probabilities are then turned into discrete segmentation masks. The same segmentation model is used for extracting buildings' footprints from old imagery as well as from the input aerial image.</p></div>
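The final step of turning per-pixel probabilities into a discrete mask can be as simple as thresholding. The snippet below is a minimal sketch of that step; RoboSat's own post-processing is more involved, so this only illustrates the idea:

```python
import numpy as np

def probabilities_to_mask(probs, threshold=0.5):
    """Binarize an H x W array of per-pixel building probabilities:
    255 where the pixel is likely a building, 0 for background."""
    return np.where(np.asarray(probs) >= threshold, 255, 0).astype(np.uint8)
```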
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Image Correspondence</head><p>At this point, after extracting the buildings' footprints from the original input aerial image, we need to find its corresponding mask in the already prepared dataset. The pair of masks will not be perfectly aligned: several types of transformations may relate one image to the other, such as different scales, different views, or overlapping regions.</p><p>Because we do not know the exact location of the images in the global coordinate system, a similarity measure is needed to find the mask in the dataset that best matches our input image. This similarity measure helps us decide whether the two images depict the same scene or not; for this purpose, the SIFT image matching algorithm is used. The objective is a similarity measure that tells us the masks are extracted from the same geographic region regardless of the applied transformation.</p><p>First, we use the SIFT algorithm to detect the interest points in both masks (which may differ by some transformation). Then, we compute the descriptors of each image and use them in the matching process. The SIFT algorithm provides the coordinates of the detected keypoints, the set of matched keypoints between the pair of images, and other useful information.</p><p>Fig. <ref type="figure" target="#fig_3">4</ref> represents the matching points between pairs of images under different transformations. For visualization, the original image is placed on the left, the other image on the right, and the matches are drawn as lines between them.</p><p>Let n and m be the number of keypoints in the first and second mask respectively. Let S = {P_i | i ∈ {1, 2, ..., n}} be the set of detected keypoints in the first mask and S' = {P'_i | i ∈ {1, 2, ..., m}} be the set of detected keypoints in the second mask. Let M be the set of pairs of keypoint indices that match each other, i.e. M = {(i, j) | P_i ∈ S and P'_j ∈ S' are found as matched keypoints}. This notation will be used in all following sections.</p><p>If both images depict the same scene, then the relative distances between keypoints must be proportional. Thus, in all cases, the following condition must be satisfied:</p><formula xml:id="formula_0">d(P_a, P_b) / d(P_c, P_d) ≈ d(P'_e, P'_f) / d(P'_g, P'_h)<label>(1)</label></formula><p>such that a, b, c, d ∈ {1, 2, ..., n}, e, f, g, h ∈ {1, 2, ..., m} and (a, e), (b, f), (c, g), (d, h) ∈ M. We compute this factor for the matched keypoints found for the pair of images. In some cases, false matches lead to disparity in the values of the factor between matching pairs. To remove this inconsistency, we discard all matching pairs whose factor is far from the most frequent factor. We then compute the ratio of the number of remaining matching pairs to the total number of good matches, and rely on this ratio as a similarity measure between the two images.</p><p>This similarity factor can vary from case to case. In order to set a threshold usable in any situation, we computed the similarity factor for 408 pairs of masks with different sizes and different applied transformations, and obtained an average factor of 0.88685. But since we assume that the pair of masks to compare may differ in buildings, we accept 0.7 as a threshold.</p></div>
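To make the measure concrete, the sketch below computes the fraction of matches consistent with the dominant distance ratio of Eq. (1). It is an illustrative simplification of the procedure above: it anchors all distances at the first matched keypoint and uses the median as a stand-in for the "most frequent factor":

```python
import numpy as np

def similarity_measure(pts1, pts2, tol=0.1):
    """pts1[k], pts2[k] are the coordinates of the k-th matched keypoint
    pair. Returns the fraction of matches whose distance ratio agrees
    (within tol) with the dominant ratio, used as the similarity score."""
    pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
    # Distances from every matched point to the first matched point.
    d1 = np.linalg.norm(pts1[1:] - pts1[0], axis=1)
    d2 = np.linalg.norm(pts2[1:] - pts2[0], axis=1)
    ratios = d2 / d1
    dominant = np.median(ratios)
    consistent = np.abs(ratios - dominant) <= tol * dominant
    return float(consistent.mean())
```

With four true matches related by a uniform scale of 2 and one false match, the score is 0.75, which would fall above the 0.7 threshold chosen above.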
<div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Change Detection</head><p>After finding the corresponding masks, the SIFT matching algorithm is very efficient at detecting the type of transformation applied to one of the images with respect to the other. We differentiate between three main types of transformations: masks that have overlapping regions, masks that differ in scale, and masks that differ in rotation angle. We explain in detail how to detect each of these transformations by applying simple mathematics to the information provided by the SIFT algorithm. a) Overlapping Regions: For this type of transformation, we use a template matching algorithm, available in the OpenCV computer vision library <ref type="bibr" target="#b19">[20]</ref>. This algorithm proved to be very efficient at detecting the overlapping regions between two images.</p><p>After computing the similarity measure between the pair of masks and checking that the views correspond to the same scene, we find the overlapping region between the pair of masks by applying the template matching algorithm to search for the smaller mask in the bigger one. The bigger mask is then cropped so that it is aligned with its overlapping region. Although there are some differences in the buildings, the template matching algorithm gives an accurate result. We now have two aligned masks that are ready for change detection.</p><p>b) Scale Transformation: In this type of transformation, we aim to find the scale factor between the pair of masks. Once we have the scale ratio s, we can transform both masks to the same scale. The process is very similar to the one performed in computing the similarity measure, since the ratio of distances computed there was in fact the scale factor. So</p><formula xml:id="formula_1">s = d(P_a, P_b) / d(P_c, P_d) ≈ d(P′_e, P′_f) / d(P′_g, P′_h)<label>(2)</label></formula><p>for all a, b, c, d ∈ {1, 2, . . . , n}, e, f, g, h ∈ {1, 2, . . . , m}, and (a, e), (b, f), (c, g), (d, h) ∈ M. We again remove inconsistencies caused by the presence of false matches. We now have two aligned masks that are ready for change detection. c) View Point Transformation (Orientation): In this type of transformation, we aim to find the rotation angle between the pair of masks. Once we have the rotation angle, we can transform both masks to the same orientation. To calculate the angle of rotation between the two masks, we </p><p>We also remove inconsistencies caused by the presence of false matches. We now have two aligned masks that are ready for change detection.</p></div>
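The overlapping-region alignment step can be sketched with OpenCV's template matching, as a minimal illustration rather than the authors' exact code. The function name and the use of the normalized correlation-coefficient score are assumptions; the idea follows the text: search for the smaller mask inside the bigger one, then crop the bigger mask to the best-match region.

```python
import numpy as np
import cv2

def align_by_template(big_mask, small_mask):
    """Locate the smaller binary mask inside the bigger one with
    template matching, then crop the bigger mask to the overlapping
    region so the two masks are aligned for differencing."""
    res = cv2.matchTemplate(big_mask, small_mask, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)     # (x, y) of best-match corner
    h, w = small_mask.shape[:2]
    return big_mask[y:y + h, x:x + w], (x, y)
```

Even with some buildings changed between the two masks, the peak of the correlation map still lands on the true overlap, which matches the accuracy reported in the text.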
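The paper's exact derivation of the rotation angle is not reproduced above, so the following is one plausible sketch under stated assumptions: for pairs of matched keypoints, compare the direction of the vector P_a → P_b in the first mask with that of the corresponding vector P′_e → P′_f in the second; true matches all give (almost) the same angle, so the median is robust to a few false matches.

```python
import numpy as np

def rotation_angle(pts1, pts2):
    """Estimate the rotation (in degrees) between two masks from
    matched keypoint coordinates pts1[k] <-> pts2[k], shape (n, 2)."""
    n = len(pts1)
    angles = []
    for a in range(n):
        for b in range(a + 1, n):
            v1 = pts1[b] - pts1[a]
            v2 = pts2[b] - pts2[a]
            d = np.degrees(np.arctan2(v2[1], v2[0]) -
                           np.arctan2(v1[1], v1[0]))
            angles.append((d + 180.0) % 360.0 - 180.0)  # wrap to [-180, 180)
    return float(np.median(angles))
```

The wrap step keeps angle differences comparable before taking the median, so a 73° rotation (as in mask B of Table I) is recovered directly.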
<div xmlns="http://www.tei-c.org/ns/1.0"><head>D. Difference Image</head><p>Whatever transformation was applied to one of the images, at this point we have two aligned images. All we have to do is find the difference image. Of course, the difference image will contain some noise because of differences in the resolution of the pair of masks, which drives us to filter the difference image.</p><p>Filtering the noise in the difference image consists of finding the contours in it. Contours are curves joining all the continuous points (along a boundary) having the same color or intensity. Contours are very helpful for shape detection and recognition. Since we are using binary images, we have a better chance of good accuracy. Finding such contours relies on detecting Canny edges <ref type="bibr" target="#b37">[38]</ref>.</p><p>Fig. <ref type="figure" target="#fig_4">5</ref> shows the noisy difference image and the filtered one.</p><p>The contours are finally projected onto one of the original images to show the differences clearly.</p><p>For change detection, we used the accurate buildings extracted from OSM to evaluate the change detection procedure, in order to guarantee that the results of the image segmentation do not affect our evaluation.</p><p>To show the results of the whole workflow, refer to Fig. <ref type="figure" target="#fig_1">2</ref>, which shows an aerial image and its corresponding mask. We suppose that this image has just been acquired from an aircraft, and that we have a database containing old masks (extracted from old aerial images). The goal is to find the mask in the database that corresponds to the mask of this aerial image by computing the similarity measure between each pair of masks.</p><p>After extracting the buildings' footprints from the image, we manually apply different transformations to the mask in order to evaluate our procedure. 
Table <ref type="table">1</ref> shows the description of the applied transformations. We also manually apply some changes between the masks.</p><p>First, we apply the SIFT algorithm to the original mask together with each transformed mask. Table <ref type="table">2</ref> presents the results of the SIFT algorithm, and Fig. <ref type="figure" target="#fig_7">8</ref> shows the matching points found by SIFT for each pair of masks. We then compute the similarity measure and the geometric parameters of each pair of masks to compare them with the ground truth shown in Table <ref type="table">1</ref>. The results are shown in Table <ref type="table">3</ref>.</p><p>As shown in the table, all the similarity measures for the transformed masks with respect to the original mask are greater than or equal to the threshold. As for the scale factor, the difference between the computed scale factor and the real one does not exceed 0.1 for the four pairs of masks. Likewise, the difference between the computed rotation angle and the real one does not exceed 0.1°. The difference image is then computed for each pair of masks after aligning them; Fig. <ref type="figure" target="#fig_8">9</ref> shows the difference image for each of the four pairs of masks. The procedure was applied to a test set of 80 pairs of aerial images with different characteristics and different applied transformations in order to evaluate it. The following histograms show the accuracy rate of the change detection results as well as of the geometric parameters for each type of transformation.</p><p>It is clear from the obtained results that our procedure works best with the scale transformation and with overlapping regions. However, some errors were encountered with rotation and mixed transformations. 
The results are expected, since the SIFT algorithm is designed to be robust to scale transformations.</p><p>Overall, our procedure achieves a 92.7% true change detection rate across the different types of transformations.</p><p>The strengths of our procedure can be summarized by the following points: (1) it works with simple PNG aerial images without any additional metadata; (2) if the shape of buildings in another region differs from the buildings in the training set, anyone can simply train their own dataset and then use the same procedure to detect changes; (3) it can be extended to points of interest other than buildings; and (4) it is robust against different types of transformations. However, this procedure has two main limitations: (1) it is computationally expensive, so it cannot act as a real-time application, and (2) the final results always depend on the accuracy of the segmentation phase.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VI. CONCLUSION</head><p>Building change detection in aerial images that differ in many geometric aspects, such as scale and view point, is a challenging research topic nowadays. A complete solution to this problem has not yet been developed. This work has presented a complete procedure to detect new and demolished buildings in two aerial images taken at different times. Our procedure works in three steps. The first step, extracting building footprints from the original aerial images, is accomplished using a segmentation model. Using machine learning, specifically a convolutional neural network, this model was built by training on a large number of aerial images coupled with their building masks. The second step, image correspondence, is done by calculating a similarity factor between each pair of images; at this point, the pair of images representing the same geographic area is found. The last step, change detection, benefits from image matching algorithms, in particular the SIFT algorithm, which is applied to align the pair of images and then compute their difference in order to detect the changed buildings. This procedure achieved a change detection rate of 92.7% across different types of transformations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VII. CHALLENGES AND FUTURE WORK</head><p>A major challenge faced by our approach is its inability to be implemented as a real-time system. The image segmentation phase, as well as searching a database for the mask that corresponds to the input image, is computationally expensive, although building the model and preparing the dataset are carried out only once. Further studies must be conducted in order to find suitable solutions for this critical issue.</p><p>Moreover, future work can adopt a more specific experimental design. The overall findings that emerged from our experiments gave us promising directions to follow for building an optimal, operative, complete and automatic system in the future. Furthermore, points of interest other than buildings can be taken into consideration in the change detection process, including roads, vegetation and any other class of objects present in aerial images.</p><p>Additionally, enhancing the segmentation model with a larger and more suitable dataset is essential in further research, given its significant effect on the overall results of the approach, since detecting changes relies directly on the extracted buildings' footprints.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Workflow of our approach.</figDesc><graphic coords="3,301.42,85.14,247.86,146.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. Aerial image and its corresponding buildings mask.</figDesc><graphic coords="4,105.69,179.64,127.91,94.34" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. Aerial images tiles with their corresponding footprints.</figDesc><graphic coords="5,103.23,215.52,132.83,129.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. SIFT matches between pairs of masks having different transformations.</figDesc><graphic coords="5,312.33,198.30,226.04,110.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Fig. 5 .</head><label>5</label><figDesc>Fig. 5. Difference image before and after filtering.</figDesc><graphic coords="6,313.35,215.03,224.00,168.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Fig. 6 .</head><label>6</label><figDesc>Fig. 6. Evaluation metrics for each epoch of the training process.</figDesc><graphic coords="7,29.65,85.14,280.00,210.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Fig. 7 .</head><label>7</label><figDesc>Fig. 7. Comparison between ground truth buildings masks and the predicted ones.</figDesc><graphic coords="7,39.62,327.79,260.05,228.90" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Fig. 8 .</head><label>8</label><figDesc>Fig. 8. SIFT matches between each pair of masks.</figDesc><graphic coords="8,59.57,346.24,220.15,122.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Fig. 9 .</head><label>9</label><figDesc>Fig. 9. Difference image between each pair of masks.</figDesc><graphic coords="8,318.02,487.48,214.67,185.27" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>TABLE I</head><label>I</label><figDesc>DESCRIPTION OF THE TRANSFORMED MASKS</figDesc><table><row><cell></cell><cell>Description</cell></row><row><cell>A</cell><cell>Scale factor = 1.56, 4 changes</cell></row><row><cell>B</cell><cell>Rotation angle = 73°, 2 changes</cell></row><row><cell>C</cell><cell>Have overlapping region, 3 changes</cell></row><row><cell cols="2">TABLE II RESULTS OF SIFT ALGORITHM APPLIED ON DIFFERENT PAIRS OF MASKS</cell></row><row><cell></cell><cell>No. of keypoints in the original mask</cell><cell>No. of keypoints in the transformed mask</cell><cell>No. of matches</cell></row><row><cell>A</cell><cell>261</cell><cell>216</cell><cell>169</cell></row><row><cell>B</cell><cell>261</cell><cell>779</cell><cell>171</cell></row><row><cell>C</cell><cell>261</cell><cell>121</cell><cell>91</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>TABLE IV</head><label>IV</label><figDesc>ACCURACY RATE OF THE RESULTS OF THE CHANGE DETECTION WITH DIFFERENT TYPES OF TRANSFORMATIONS</figDesc><table><row><cell></cell><cell cols="4">Transformations</cell></row><row><cell></cell><cell>Scale</cell><cell>Orientation</cell><cell>Overlapping</cell><cell>Mixed</cell></row><row><cell>Accuracy (%)</cell><cell>100</cell><cell>85.5</cell><cell>99</cell><cell>86.3</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Aerial Photography And Image Interpretation</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Kiser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Paine</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2012">2012</date>
			<publisher>John Wiley &amp; Sons, Inc</publisher>
			<pubPlace>Canada</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Computer Vision in Control Systems</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">N</forename><surname>Favorskaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">C</forename><surname>Jain</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Aerial and Satellite Image Processing</title>
				<editor>
			<persName><forename type="first">M</forename><forename type="middle">N</forename><surname>Favorskaya</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Lakhmi</surname></persName>
		</editor>
		<meeting><address><addrLine>Canberra</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">135</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Building Detection and Reconstruction from Mid-and High-Resolution Aerial Imagery</title>
		<author>
			<persName><forename type="first">N</forename><surname>Paparoditis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jordan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Cocquerez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer Vision And Image Understanding</title>
		<imprint>
			<biblScope unit="volume">72</biblScope>
			<biblScope unit="page" from="122" to="142" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Comparison of Object Oriented Classification Techniques and Standard Image Analysis For the Use of Change Detection Between SPOT multispectral Satellite Images and Aerial Photos</title>
		<author>
			<persName><forename type="first">G</forename><surname>Wilhauck</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Archives of Photogrammetry and Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">XXXIII</biblScope>
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Building change detection from historical aerial photographs using dense image matching and objectbased image analysis</title>
		<author>
			<persName><forename type="first">S</forename><surname>Nebiker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Lack</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Deuber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="8310" to="8336" />
			<date type="published" when="2014-09">September 2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Detection of building changes from aerial images and light detection and ranging (LIDAR) data</title>
		<author>
			<persName><forename type="first">L.-C</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L.-J</forename><surname>Lin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Applied Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Change detection of buildings from satellite imagery and lidar data</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Alonso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Malpica</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Papi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Arozarena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Martinez-Agirre</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page">1652</biblScope>
			<date type="published" when="2013-03">March 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A building extraction approach for airborne laser scanner data utilizing the object based image analysis paradigm</title>
		<author>
			<persName><forename type="first">I</forename><surname>Tomljenovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Tiede</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Blaschke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Applied Earth Observation and Geoinformation</title>
		<imprint>
			<biblScope unit="volume">52</biblScope>
			<biblScope unit="page" from="137" to="148" />
			<date type="published" when="2016-10">October 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Methods for exploiting the relationship between buildings and their shadows in aerial imagery</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">B</forename><surname>Irvin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Mckeown</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Systems, Man and Cybernetics</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="issue">6</biblScope>
			<date type="published" when="1989">1989</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Automatic extraction of building outline from high resolution aerial imagery</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<idno>XLI-B3</idno>
	</analytic>
	<monogr>
		<title level="j">The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</title>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Automatic detection of changes from laser scanner and aerial image data for updating buildings map</title>
		<author>
			<persName><forename type="first">M</forename><surname>Leena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hyyppa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kaartinen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="434" to="439" />
			<date type="published" when="2004-07">July 2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Automatic detection of earthquakedamaged buildings using DEMs created from pre-and post-earthquake stereo aerial photographs</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C A</forename><surname>Turker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Cetinkaya</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="823" to="832" />
			<date type="published" when="2006-08-16">16 August 2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Detecting building changes from multitemporal aerial stereopairs</title>
		<author>
			<persName><forename type="first">F</forename><surname>Jung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ISPRS Journal of Photogrammetry and Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="issue">3-4</biblScope>
			<biblScope unit="page" from="187" to="201" />
			<date type="published" when="2004-01">January 2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Fusing airborne laser scanner data and aerial imagery for the automatic extraction of buildings in densely built-up areas</title>
		<author>
			<persName><forename type="first">F</forename><surname>Rottensteiner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Clode</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Trinder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kubik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Fusion of LIDAR data and optical imagery for building modeling</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">Y</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T.-A</forename><surname>Teo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y.-C</forename><surname>Shao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y.-C</forename><surname>Lai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Building and road detection from large aerial imagery</title>
		<author>
			<persName><forename type="first">S</forename><surname>Saito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Aoki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Image Processing: Machine Vision Applications VIII</title>
				<meeting><address><addrLine>San Francisco</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Constrained optical flow for aerial image change detection</title>
		<author>
			<persName><forename type="first">N</forename><surname>Bourdis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Denis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Sahbi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Geoscience and Remote Sensing Symposium (IGARSS)</title>
				<meeting><address><addrLine>Vancouver, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2011">2011. 2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<ptr target="https://pro.arcgis.com/en/pro-app/tool-reference/data-management/detect-feature-changes.htm" />
		<title level="m">Arcgis.com</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Image Segmentation Algorithms Overview</title>
		<author>
			<persName><forename type="first">S</forename><surname>Yuheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hao</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Learning OpenCV</title>
		<author>
			<persName><forename type="first">G</forename><surname>Bradski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kaehler</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016-12">December 2016</date>
			<publisher>O&apos;Reilly Media, Inc</publisher>
			<pubPlace>United States</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">A Review on Image Segmentation Techniques</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">R</forename><surname>Pal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Pal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="1277" to="1294" />
			<date type="published" when="1993-09">September 1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">A Survey on Image Segmentation</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">K</forename><surname>Mui</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="3" to="16" />
			<date type="published" when="1981">1981</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">Image Segmentation</title>
		<author>
			<persName><forename type="first">P.-G</forename><surname>Ho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ed</forename></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Image Segmentation</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Dhawan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Medical Image Analysis</title>
				<imprint>
			<publisher>Wiley-IEEE Press</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="229" to="264" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<title level="m" type="main">fizyr/keras-retinanet</title>
		<ptr target="https://github.com/fizyr/keras-retinanet" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<title level="m" type="main">Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow</title>
		<author>
			<persName><forename type="first">W</forename><surname>Abdullah</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">GitHub Repository</note>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<ptr target="https://www.harrisgeospatial.com/Software-Technology/ENVI" />
		<title level="m">ENVI -The Leading Geospatial Analytics Software</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Canty</surname></persName>
		</author>
		<title level="m">Image Analysis, Classification and Change Detection in Remote Sensing</title>
				<meeting><address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>Taylor Francis Group</publisher>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Evaluation of various segmentation tools for extraction of urban features using high resolution remote sensing data</title>
		<author>
			<persName><forename type="first">V</forename><surname>Srivastava</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="issue">XXX</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Comparing Several Implementations of Two Recently Published Feature Detectors</title>
		<author>
			<persName><forename type="first">J</forename><surname>Bauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Sunderhauf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Protzel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Intelligent and Autonomous Systems (ICAS)</title>
				<meeting><address><addrLine>Toulouse, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">A Comparison of SIFT and SURF</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">M</forename><surname>Panchal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">R</forename><surname>Panchal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Shah</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Innovative Research in Computer and Communication Engineering</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">2</biblScope>
			<date type="published" when="2013-04">April 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Feature Based Correspondence: A Comparative Study on Image Matching Algorithms</title>
		<author>
			<persName><forename type="first">U</forename><forename type="middle">M</forename><surname>Babri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tanvir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Khurshid</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Advanced Computer Science and Applications (IJACSA)</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">3</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<title level="m" type="main">Image Matching Using SIFT, SURF, BRIEF and ORB: Performance Comparison for Distorted Images</title>
		<author>
			<persName><forename type="first">E</forename><surname>Karami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Prasad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Shehata</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
			<publisher>Newfoundland Electrical and Computer Engineering Conference</publisher>
			<pubPlace>Canada</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">GitHub - RoboSat</title>
		<ptr target="https://github.com/mapbox/robosat" />
	</analytic>
	<monogr>
		<title level="m">Mapbox</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">OpenStreetMap OSM</title>
		<ptr target="www.openstreetmap.org" />
	</analytic>
	<monogr>
		<title level="m">OpenStreetMap Foundation OSMF</title>
				<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
	<note>Online</note>
</biblStruct>

<biblStruct xml:id="b35">
	<monogr>
		<title level="m" type="main">GeoFabrik</title>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>OpenStreetMap</publisher>
		</imprint>
	</monogr>
	<note>geofabrik.de</note>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Mapbox</title>
		<ptr target="www.mapbox.com" />
	</analytic>
	<monogr>
		<title level="m">Mapbox</title>
				<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">A Computational Approach to Edge Detection</title>
		<author>
			<persName><forename type="first">J</forename><surname>Canny</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Readings in Computer Vision</title>
				<editor>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Fischler</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">O</forename><surname>Firschein</surname></persName>
		</editor>
		<imprint>
			<publisher>Elsevier</publisher>
			<date type="published" when="1987">1987</date>
			<biblScope unit="page" from="184" to="203" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
