=Paper=
{{Paper
|id=Vol-1107/paper13
|storemode=property
|title=Automated Landslide Monitoring through a Low-Cost Stereo Vision System
|pdfUrl=https://ceur-ws.org/Vol-1107/paper13.pdf
|volume=Vol-1107
|dblpUrl=https://dblp.org/rec/conf/aiia/AntonelloGCM13
}}
==Automated Landslide Monitoring through a Low-Cost Stereo Vision System==
Mauro Antonello (1), Fabio Gabrieli (2), Simonetta Cola (2), and Emanuele Menegatti (1)
(1) Department of Information Engineering (DEI), via Gradenigo 6/B, Padova
(2) Dip. di Ingegneria Civile Edile ed Ambientale (ICEA), via Ognissanti 39, Padova
Abstract. In this paper we introduce an inexpensive yet efficient photogrammetry system that takes advantage of state-of-the-art computer vision techniques to monitor large natural environments. Specifically, our system provides a precise evaluation of the terrain flow in wide landslides by applying optical flow to 2D image sequences and back-projecting the resulting motion gradients onto a 3D model of the landslide. Providing such a wide 3D model is one of the key issues and is addressed by relying on a wide-baseline stereo vision system. To initialize the stereo vision system, we propose an effective multiview calibration process.
Keywords: Photogrammetry, Stereo Vision, Multiview Calibration
1 Introduction
In the last few years, the significance of environmental monitoring for natural hazard prevention and mitigation has been growing constantly. Responsiveness requirements, along with the increased amount of data coming from ambient sensors, have led to the need for automated systems able to detect critical situations and alert the authorities.
In this work we address the problem of landslide monitoring, i.e. detecting the flow of landslide material, a motion that is very limited: usually only a few meters over several weeks. The slipping of the landslide material is often monitored by analyzing image sequences with optical flow techniques. One of the main limitations of this process is that the direction and intensity of the detected flow are the projection onto the camera plane of the real-world flow. Therefore, in order to obtain a correct estimation of the material motion, the flow gradients must be back-projected onto the 3D model of the landslide. Unfortunately, this 3D model is hard to obtain due to the wideness of the monitored area.
Several works presented in the literature rely on expensive laser scanners or aerial photogrammetry systems. In contrast, our work proposes an innovative stereo vision system that requires only two cameras, which ultimately makes it a low-cost alternative for large-scale environmental 3D reconstruction. Stereo vision is one of the most widely used techniques for outdoor 3D reconstruction [1], especially in low-cost solutions. Stereo vision only requires two cameras and can retrieve the distance of an area observed by both cameras from its slightly different appearance in the two images (see Fig. 1). In our case, the large distance of the monitored areas and the presence of adverse natural conditions, like heavy wind or bad weather, make the calibration of a stereo vision system a hard task. In order to maintain a good reconstruction quality in the farthest monitored areas, the distance between the monitoring cameras (the baseline, see Sec. 2) needs to be much larger than is common when calibrating a stereo camera pair. Indeed, the standard way to calibrate a stereo vision system and find its extrinsic parameters (i.e. the cameras' mutual position) was proposed in [2] and makes use of a small pattern with known geometry; however, this pattern needs to be observed by both cameras during the calibration process, making the technique infeasible when the cameras are too far from each other. In our work we address this issue through a robust multiview calibration system, whose weaker constraints on the maximum baseline allow us to extend the distance between the cameras without loss of precision in the calibration process.
Fig. 1. The distance Z of a point P is retrieved from the disparity d = XR − XT. The distance between the left and right optical centers (OT and OR) is called the baseline B.
2 System Overview
The main issue connected to the reconstruction of wide areas is the sensitivity
of the stereo matching with respect to the distance of the target area. In stereo
vision, the error in the detected distance z is related to the quantization error
in digital images [3] and is computed by means of the following equation [4]:
Δz = (z² / (b f)) Δd ,  (1)
where b [m] is the baseline between the left and right cameras, f [px] is the focal length, z [m] is the target distance and Δd [px] is the quantization step of the disparity map (usually one pixel).
Equation 1 shows a quadratic dependency of the depth error on the distance between the camera and the observed region. Since the farthest areas of the landslide are located at more than 700 m from the cameras, a wide baseline is needed in order to keep the depth error low. In Table 1, estimated depth errors with respect to target distance and baseline are reported for our camera installation (18 Mpx and 30 mm focal length).
              baseline [m]
distance [m]    10     12     14     16     18     20
     200       0.59   0.49   0.42   0.37   0.33   0.30
     400       2.37   1.97   1.69   1.48   1.31   1.18
     600       5.32   4.43   3.80   3.33   2.96   2.66
     800       9.46   7.88   6.76   5.91   5.26   4.73
Table 1. Estimated depth errors [m] with respect to the target distance and camera baseline.
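The values in Table 1 can be reproduced from Eq. 1. A minimal sketch follows; the focal length in pixels is an assumed value (about 6765 px), inferred so that the code roughly matches the table, since the paper states the lens (30 mm) and sensor (18 Mpx) but not the pixel-space focal length:

```python
def depth_error(z, b, f_px=6765.0, dd=1.0):
    """Quantization-induced depth error (Eq. 1): dz = z^2 * dd / (b * f).

    z    : target distance [m]
    b    : stereo baseline [m]
    f_px : focal length [px] (assumed value, not stated explicitly in the paper)
    dd   : disparity quantization step [px], usually one pixel
    """
    return z ** 2 * dd / (b * f_px)

# Depth error grows quadratically with distance and shrinks with baseline:
for z in (200, 400, 600, 800):
    row = [round(depth_error(z, b), 2) for b in (10, 12, 14, 16, 18, 20)]
    print(z, row)
```

Doubling the target distance at a fixed baseline quadruples the error, which is why the farthest areas (beyond 700 m) dominate the baseline requirement.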
2.1 Extrinsic Multiview Calibration
To correctly process a pair of stereo images and obtain the distances of the observed area, the roto-translation between the two stereo cameras (the extrinsic parameters) needs to be known. The calibration process is usually performed by observing a pattern with known geometry [2], but this simple method is not applicable in our case: the wide baseline and the terrain conformation do not allow the calibration pattern to be observed from both cameras.
We obtained a good estimation of the extrinsic parameters by exploiting a multiview calibration [5]. This technique makes use of a large set of images of the same area taken from several viewpoints. Visual correspondences between all possible pairs of images are matched in order to impose constraints on the roto-translation between each pair of viewpoints; this way it is possible to perform the calibration exploiting the features available in the framed scene, without the need for dedicated patterns. In our work we collected a large set of images of the landslide, taken from a number of different viewpoints, and added them to those acquired by the stereo camera pair. Exploiting the multiview calibration, it is possible to obtain the mutual position between all pairs of views, including the extrinsic calibration of the stereo pair.
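The key property exploited here is that pairwise roto-translations chain: even when the left and right cameras share no common view of a pattern, their mutual pose follows by composing pairwise estimates through auxiliary viewpoints. A minimal numpy sketch with invented poses (the actual joint estimation in [5] is far more involved, refining all pairs simultaneously):

```python
import numpy as np

def rt_to_T(R, t):
    """Pack a 3x3 rotation R and translation t into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    """Rotation of angle a (radians) about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical pairwise estimates: left camera -> auxiliary view,
# auxiliary view -> right camera (values invented for illustration).
T_left_to_aux = rt_to_T(rot_z(0.10), [5.0, 0.5, 0.0])
T_aux_to_right = rt_to_T(rot_z(-0.04), [7.0, -0.3, 0.1])

# Composing the two pairwise transforms yields the stereo extrinsics
# without the two cameras ever observing a common calibration pattern:
T_left_to_right = T_aux_to_right @ T_left_to_aux

baseline = np.linalg.norm(T_left_to_right[:3, 3])
print("stereo extrinsics:\n", T_left_to_right)
print("baseline [m]:", baseline)
```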
2.2 Landslide 3D Reconstruction
Once the extrinsic calibration of the stereo system is obtained, the images taken from the two cameras are rectified (see Fig. 2) and processed by a stereo matching algorithm called Semi-Global Block Matching [6] in order to produce a disparity map. From the disparity map we retrieve the distance of each observed point and create a dense 3D point cloud representing the landslide reconstruction (see Fig. 3).
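The disparity-to-depth step can be sketched in a few lines. This is a simplified numpy back-projection with assumed intrinsics (focal length, principal point); the paper's actual pipeline computes the disparity map with Semi-Global Block Matching [6] rather than the constant toy map used here:

```python
import numpy as np

def disparity_to_cloud(disp, f, b, cx, cy):
    """Back-project a disparity map into a 3D point cloud.

    disp : disparity map [px], shape (H, W); non-positive values are invalid
    f    : focal length [px];  b : baseline [m];  (cx, cy) : principal point [px]
    Returns an (N, 3) array of points in the left-camera frame.
    """
    h, w = disp.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disp > 0
    z = f * b / disp[valid]         # depth from disparity: Z = f * B / d
    x = (u[valid] - cx) * z / f     # back-project pixel coordinates
    y = (v[valid] - cy) * z / f
    return np.column_stack([x, y, z])

# Toy example: with f = 6765 px and b = 10 m, a constant disparity of
# 169.125 px corresponds to a flat surface 400 m away from the cameras.
disp = np.full((4, 4), 169.125)
cloud = disparity_to_cloud(disp, f=6765.0, b=10.0, cx=2.0, cy=2.0)
print(cloud[:2])
```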
Fig. 2. Images processed by the stereo matching algorithm are first rectified. After rectification, all corresponding points are located on the same row, so the matching algorithm can search for left-right correspondences within the same row.
Fig. 3. Left-right disparity map and 3D point cloud reconstruction of the landslide. For image clarity, the point cloud has been down-sampled so that only one point every 10 cm is kept.
3 Results
The 3D reconstruction of the landslide allows us to precisely evaluate the sliding of the ground. We detect particle flows in the image sequence using Normalized Cross-Correlation and then back-project the 2D flow onto the 3D landslide model, obtaining the motion flow of the rocks. The monitoring system proposed in this paper is completely autonomous and scalable, and it is designed to issue an alert when the sliding effect exceeds a given threshold.
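The 2D flow detection step can be approximated with a small normalized cross-correlation search. The following is a minimal sketch on a synthetic pair of frames; the function names, patch size and search radius are illustrative choices, not the paper's actual parameters:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_patch(prev, curr, top, left, size, search):
    """Find how a size x size patch of `prev` moved in `curr`.

    Scans a (2*search+1)^2 window around the original location and
    returns the (dy, dx) shift with the highest NCC score.
    """
    tpl = prev[top:top + size, left:left + size]
    best, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue
            score = ncc(tpl, curr[y:y + size, x:x + size])
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# Synthetic test: textured terrain shifted by (2, 3) pixels between frames.
rng = np.random.default_rng(0)
prev = rng.random((40, 40))
curr = np.zeros_like(prev)
curr[2:, 3:] = prev[:-2, :-3]   # simulate material moving down-right
print(track_patch(prev, curr, top=10, left=10, size=8, search=5))
```

The (dy, dx) shift found this way is what gets back-projected onto the 3D model to recover the metric motion of the terrain.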
As future work, we will employ a continuous camera calibration algorithm [7] to prevent the system from losing its calibration. This way it will be possible to obtain good performance over time, since the capability of the multiview stereo calibration to provide a good estimation of the extrinsic parameters depends strongly on the mutual position of the cameras.
References
1. Strecha, C., Von Hansen, W., Van Gool, L., Fua, P., Thoennessen, U.: On benchmarking camera calibration and multi-view stereo for high resolution imagery. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2008) 1–8
2. Zhang, Z.: A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(11) (2000) 1330–1334
3. Chang, C., Chatterjee, S.: Quantization error analysis in stereo vision. In: Conference Record of the Twenty-Sixth Asilomar Conference on Signals, Systems and Computers. (1992) 1037–1041 vol. 2
4. Gallup, D., Frahm, J.M., Mordohai, P., Pollefeys, M.: Variable baseline/resolution stereo. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2008) 1–8
5. Hiep, V.H., Keriven, R., Labatut, P., Pons, J.P.: Towards high-resolution large-scale multi-view stereo. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2009) 1430–1437
6. Hirschmuller, H.: Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence 30(2) (2008) 328–341
7. Dang, T., Hoffmann, C., Stiller, C.: Continuous stereo self-calibration by camera parameter tracking. IEEE Transactions on Image Processing 18(7) (2009) 1536–1550