=Paper=
{{Paper
|id=Vol-3248/paper13
|storemode=property
|title=Accurate Extrinsic Calibration of LiDAR and Camera with Refined Vertex Features
|pdfUrl=https://ceur-ws.org/Vol-3248/paper13.pdf
|volume=Vol-3248
|authors=Shuo Wang,Zheng Rong,Pengju Zhang,Yihong Wu
|dblpUrl=https://dblp.org/rec/conf/ipin/WangRZ022
}}
==Accurate Extrinsic Calibration of LiDAR and Camera with Refined Vertex Features==
Accurate Extrinsic Calibration of LiDAR and Camera with Refined Vertex Features

Shuo Wang¹, Zheng Rong², Pengju Zhang² and Yihong Wu¹,²,*

¹ School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
² National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China

IPIN 2022 WiP Proceedings, September 5-7, 2022, Beijing, China
* Corresponding author.
wangshuo2020@ia.ac.cn (S. Wang); zheng.rong@nlpr.ia.ac.cn (Z. Rong); pengju.zhang@nlpr.ia.ac.cn (P. Zhang); yhwu@nlpr.ia.ac.cn (Y. Wu)
ORCID: 0000-0003-1269-7506 (S. Wang); 0000-0002-9096-6049 (Z. Rong); 0000-0001-8245-0205 (P. Zhang); 0000-0003-2198-9113 (Y. Wu)

Abstract

LiDARs and cameras are widely used in many research fields due to the complementarity of their data. Calibrating the extrinsic parameters between the LiDAR frame and the camera frame is essential for fusing the two kinds of data. Calibration methods based on a calibration board extract geometry features from point clouds and images, then build geometry constraints to estimate the extrinsic parameters. In this paper, we exhaustively analyze the measurement characteristics of LiDARs that introduce notable negative effects on the calibration, including the noise of depth measurements and the divergence of laser beams. We then propose a refining method for vertex features extracted from LiDAR point clouds that uses the prior information of the board, which effectively mitigates the effect of systematic measurement errors and improves the accuracy of the calibration results. Meanwhile, our calibration method reduces the minimal number of calibration datasets to one, which improves the efficiency of the calibration process. In addition, we propose an objective and independent evaluation method for target-based calibration methods. Extensive experiments and comparisons with state-of-the-art methods show that using refined vertex features notably improves the accuracy and efficiency of extrinsic parameter calibration.

Keywords

Extrinsic Calibration, Vertex Feature, Calibration Evaluation

1. Introduction

LiDARs and cameras are indispensable and ubiquitous in positioning and navigation, and are used in many application fields such as autonomous driving and industrial robotics. LiDARs measure the structure of the surroundings in the form of point clouds, and cameras measure the texture of the surroundings in the form of images. Fusing the information of LiDAR and camera strengthens images with accurate depth information and provides color information for point clouds. The prerequisite of this fusion is the calibration of the extrinsic parameters between a LiDAR frame and a camera frame.

Calibration board-based methods are the most popular calibration methods nowadays [1, 2, 3, 4, 5, 6, 7, 8]. In these methods, one or more boards are deployed in a static environment. By extracting and matching geometry features of the calibration board from the point cloud and the image, the constraints on the extrinsic parameters between LiDAR and camera can be constructed, and the extrinsic parameters can be estimated using these constraints.
In most existing calibration methods, geometry features are extracted directly from point clouds and refined with fitting algorithms, such as plane fitting and line fitting, to reduce sensor noise. But these fitting methods only consider the geometry characteristics of calibration boards and ignore the measurement characteristics of LiDARs. In this paper, we analyze the measurement characteristics of LiDARs and the systematic errors arising from them, including the noise of range measurements and the divergence of LiDAR beams. We then propose a refining method for vertex features to reduce the negative impact of these systematic errors, and we calibrate the extrinsic parameters of LiDAR and camera using the refined vertex features to improve the accuracy of the calibration results. Owing to the high accuracy of the refined vertex features, our proposed method can calibrate the extrinsic parameters using only one calibration dataset.

To evaluate calibration results, some papers compare their estimated extrinsic parameters with the ground truth, for example, simulated values. Other papers calculate the re-projection errors of geometry features, such as point or line features of calibration boards. However, in the real world the ground truth is hard to obtain, and the various kinds of re-projection error are not holistic measurements of the alignment between two 3D frames. To evaluate our calibration method effectively and compare it with other calibration methods fairly, we propose an evaluation method using raw LiDAR measurements and 3D point-to-line space distances that is logically independent of any method used in the calibration process.

The contributions of this paper are as follows:
1) A refining method for vertex features extracted from LiDAR point clouds, yielding a notable accuracy improvement of the calibration results;
2) A fair and independent evaluation method using 3D space distances;
3) Extensive experiments and comparisons between our proposed method and other state-of-the-art calibration methods.

The rest of this paper is organized as follows. Section 2 reviews calibration methods. Section 3 analyzes the measurement characteristics of LiDARs, proposes a refining method for vertex features, and introduces a calibration method based on refined vertex features; an evaluation method using 3D space distances is also presented in this part. Section 4 shows the experimental results. Section 5 concludes this paper.

2. Related Work

Extrinsic calibration methods between LiDAR and camera can be classified into three kinds: target-based, target-less and motion-based calibration methods.

In target-based calibration methods, there are two kinds of targets: calibration boards [1, 2, 3, 4, 5, 6, 7, 8] and other artificial markers that differ from typical calibration boards, such as a board with different patterns like a circle [9, 10], or a target with a different shape like a ball [11, 12]. Calibration boards are the main calibration targets. With explicit data association between LiDAR and camera, calibration board-based methods usually use the correspondence between features extracted from point clouds and images to estimate the extrinsic parameters.

Calibration board-based methods can be classified into plane-based, edge-based and point-based methods, according to the features they use. The plane-based constraint uses the plane features of calibration boards.
This kind of constraint can be constructed from the points on the board extracted from point clouds and the plane parameters calculated from images in the camera frame [1, 2, 3]. Another way to construct the plane-based constraint is to compute the plane parameters from point clouds [1, 4, 8]. The edge-based constraint uses the edge features of calibration boards. For point clouds, a board edge can be modeled as a series of points on the edge [1] or as line parameters fitted to these points [2, 4, 5]. For cameras, a board edge can be modeled as a back-projected plane [1] or as a 3D line in the camera frame [2, 4, 5]. The point-based constraint uses the point features of board vertices. The vertices in the LiDAR frame can be computed from the points on the calibration board. From images, the projections of board vertices can be extracted [6, 7], and the coordinates of board vertices in the camera frame can be derived from the relative pose between the calibration board and the camera [4, 8]. Calibration board-based methods usually perform better than other calibration methods because the extraction and matching of features are explicit and clear.

Target-less calibration methods depend on information extracted from the environment rather than from artificial targets. Structure information [13, 14, 15], semantic information [16] and mutual information [17, 18] are widely used in target-less calibration methods. Their weakness is that, in contrast to target-based methods, there is no explicit correspondence between point clouds and images. Motion-based calibration methods [19, 20, 21, 22, 23, 24] leverage odometry information to estimate the extrinsic parameters. They simplify the data collection procedure, but the accuracy of the extrinsic parameters depends greatly on the performance of the odometry algorithms.

In this paper, we focus on calibration board-based methods. Vertex features extracted from point clouds are refined according to the prior information of the calibration board, to reduce the systematic errors of LiDAR measurements and improve the accuracy and efficiency of the calibration results. We compare our calibration method with other state-of-the-art calibration methods using our proposed fair evaluation method based on space distances.

3. Calibration Method

3.1. Overview of Our Calibration Method

We denote the coordinates of a point in the LiDAR coordinate frame {L} as $P^L = (x^L, y^L, z^L)^T$, and the coordinates of the identical point in the camera coordinate frame {C} as $P^C = (x^C, y^C, z^C)^T$. The transformation between $P^C$ and $P^L$ can be written as

$$P^C = R^C_L P^L + t^C_L, \quad (1)$$

where $R^C_L$ is the rotation matrix and $t^C_L$ is the translation vector. The transformation matrix $T^C_L$ can be written as

$$T^C_L = \begin{pmatrix} R^C_{L\,3\times3} & t^C_{L\,3\times1} \\ \mathbf{0}^T_{3\times1} & 1 \end{pmatrix}_{4\times4}. \quad (2)$$

The goal of the extrinsic calibration is to estimate $R^C_L$ and $t^C_L$.
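As a concrete reading of (1) and (2), the following minimal Python sketch (ours, not from the paper; all names are illustrative) assembles $T^C_L$ from $R^C_L$ and $t^C_L$ and maps LiDAR points into the camera frame:

```python
import numpy as np

def make_T(R, t):
    """Assemble the 4x4 homogeneous transform T^C_L of Eq. (2)."""
    T = np.eye(4)
    T[:3, :3] = R   # rotation block R^C_L
    T[:3, 3] = t    # translation block t^C_L
    return T

def lidar_to_camera(T, P_L):
    """Apply Eq. (1) to an Nx3 array of LiDAR points."""
    P_h = np.hstack([P_L, np.ones((len(P_L), 1))])  # homogeneous coordinates
    return (T @ P_h.T).T[:, :3]
```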
In this paper, two ArUCO markers pasted on two separate boards are used as calibration targets to implement the extrinsic calibration between a LiDAR and a camera. A calibration board coordinate frame {B} is built to represent the calibration features, as shown in Figure 1. The XOY plane of the calibration board frame is the board plane, and the Z axis is perpendicular to the XOY plane and points out of the marker plane.

Figure 1: The configuration of the two board frames, the LiDAR frame and the camera frame. There is no specific spatial relationship between the two board frames. The blue box indicates the image; the golden points indicate the LiDAR points on the calibration boards.

The whole calibration process is described below. First, we analyze the measurement characteristics and systematic errors of LiDARs. Second, in the LiDAR frame, the vertex points are refined according to the board points extracted from point clouds and the known size of the calibration board. Third, the relative transformation between the calibration board frame and the camera frame is calculated with the known size of the marker, and the vertices of the calibration board are estimated in the camera frame. Finally, with the vertex information in the two sensor frames, the correspondence between the frames is constructed and the extrinsic parameters are calibrated. The details of each stage are introduced in the following subsections.

3.2. Measurement Characteristics and Systematic Errors of LiDARs

There are two kinds of systematic errors in LiDAR measurements.

3.2.1. Range Error

The LiDAR measures the surroundings in spherical coordinates with range $r$, elevation $\omega$ and azimuth $\alpha$. The Cartesian coordinates are calculated as

$$x^L = r \cos\omega \sin\alpha, \quad y^L = r \cos\omega \cos\alpha, \quad z^L = r \sin\omega. \quad (3)$$

The elevation $\omega$ and azimuth $\alpha$ are accurate, but the range $r$ contains error. This paper uses a Velodyne VLP-16 LiDAR sensor, whose range error can typically reach up to ±3 cm. As can be seen from Figure 2, when we project LiDAR points on a board plane about 2 m away from the LiDAR, the thickness of the point cloud can be up to 4 cm, while the true thickness is zero.

Figure 2: The point cloud of a calibration board about 2 m away from the LiDAR, (a) front view and (b) top view. The thickness of the point cloud is about 4 cm, which is caused by the range error in the measurement of every LiDAR point.
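To make the range-error discussion concrete, here is a small sketch (ours, not the authors'; function and variable names are assumptions) of the conversion in Eq. (3), with a simulated ±3 cm range noise at a nominal range of 2 m; the resulting spread reproduces the few-centimeter "thickness" of Figure 2:

```python
import numpy as np

def spherical_to_cartesian(r, omega, alpha):
    """Eq. (3): range r, elevation omega, azimuth alpha -> Cartesian."""
    x = r * np.cos(omega) * np.sin(alpha)
    y = r * np.cos(omega) * np.cos(alpha)
    z = r * np.sin(omega)
    return np.stack([np.broadcast_to(x, np.shape(r)),
                     np.broadcast_to(y, np.shape(r)),
                     np.broadcast_to(z, np.shape(r))], axis=-1)

# Illustrative only: +-3 cm uniform range noise at a nominal range of 2 m.
rng = np.random.default_rng(0)
r = 2.0 + rng.uniform(-0.03, 0.03, size=1000)
pts = spherical_to_cartesian(r, 0.0, 0.0)   # beam along the y axis
print(np.ptp(pts[:, 1]))                    # depth spread of up to ~6 cm
```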
3.2.2. Divergence Error

As shown in Figure 3a, the LiDAR beam becomes divergent and the LiDAR spot becomes larger as the distance grows. For example, the horizontal size of the spot is about 18.2 mm and the vertical size about 12.5 mm at a range of 2 m. Therefore, when a LiDAR point falls on the edge of the calibration board, part of the LiDAR spot is on the board and the other part is on the background. For these edge points, when the reflectivity of the calibration board is stronger than that of the background, the returned range measurement is the distance between the LiDAR and the board edge, even if the centerline of the LiDAR beam is off the board (the LiDAR spot is mostly on the background). The resulting point measurement is called a ghost point and makes the point cloud of the calibration board larger than its true size. When the reflectivity of the calibration board is similar to that of the background, the returned ranges of the edge points are a weighted average of the range measurements of the calibration board and the background. Such LiDAR points are also ghost points and form a streamline-shaped series of points behind the calibration board. In Figure 3b, the points in the highlighted ellipse are the points near the board edge, whose true position is the bright line. Due to the divergence of LiDAR beams, the LiDAR points on the edge are offset from the true edge of the calibration board, and a series of points trail behind the board edge.

Figure 3: The illustration of the divergence of the LiDAR beam, which makes the board's point cloud larger than its true size: (a) divergence of the LiDAR beam; (b) ghost points caused by divergence. The ghost points can be seen in the highlighted ellipse; the bright line represents the true edge of the board.

Because of the range error and divergence error of LiDAR measurements, the points on the board plane are not co-planar and the points on the edge are off the true edge, which makes the computed positions of the calibration board vertices inaccurate. These problems further affect the calibration results between LiDAR and camera. To solve them, we use the prior information of the calibration board to find the true position of the board in LiDAR point clouds. As a board can be defined by its four vertices, we propose a refining method to extract accurate vertices from LiDAR point clouds and use the vertex information to calibrate the extrinsic parameters between LiDAR and camera.

3.3. Calibration Information in LiDAR Frame

3.3.1. Extraction of Plane Points

With the known size of a calibration board and the relative pose between the calibration board and the LiDAR, the points located on the board can be segmented out of the LiDAR point cloud. The parameters of the board plane in the LiDAR frame, $\pi^{L,plane} = (A, B, C, D)^T$, are fitted by the RANSAC method with a fitting threshold of 0.01 m in this paper. The inlier points of the fitted plane are considered the LiDAR points on the board plane. We denote these points as $P^{L,plane}_i$ $(1 \le i \le N)$, where $N$ is the number of inlier points.

3.3.2. Extraction of Edge Points

The points lying on the edges are extracted from the board plane points and divided into four groups corresponding to the four edges of the board. We denote the edge points in the LiDAR frame as $P^{L,edge}_{m,j}$ $(m = 1, 2, 3, 4,\ 1 \le j \le N_m)$, where $N_m$ is the number of points on each edge. With the points on the edges, the line parameters of the calibration board edges in the LiDAR frame are fitted by the RANSAC method with a threshold of 0.01 m. The line parameters are denoted as $L^{L,edge}_m = ((d^{L,edge}_m)^T, (P^{L,edge}_m)^T)^T$, where $d^{L,edge}_m$ is the direction vector of the edge line and $P^{L,edge}_m$ is a point on the edge. The inlier points of the edge line fitting are taken as the edge points.
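The plane segmentation of Section 3.3.1 can be reproduced with a few lines of RANSAC. The paper does not publish code, so the following is a self-contained sketch under our own naming, using the 0.01 m inlier threshold quoted above:

```python
import numpy as np

def ransac_plane(points, threshold=0.01, iters=500, seed=0):
    """Fit plane (A, B, C, D) with Ax + By + Cz + D = 0 to an Nx3 cloud by
    RANSAC; returns the plane parameters and the inlier mask (the
    board-plane points of Section 3.3.1)."""
    rng = np.random.default_rng(seed)
    best_mask, best_plane = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:          # skip degenerate (collinear) samples
            continue
        n = n / np.linalg.norm(n)
        d = -n @ p0
        mask = np.abs(points @ n + d) < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, np.append(n, d)
    return best_plane, best_mask
```

The same consensus scheme, with a point-to-line distance in place of the point-to-plane distance, serves for the edge-line fitting of Section 3.3.2.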
3.3.3. Extraction and Refinement of Vertex Points

With the edge line parameters obtained above, the coordinates of the board vertices in the LiDAR frame, $P^{L,vertex}_k$ $(k = 1, 2, 3, 4)$, can be calculated. Due to the LiDAR measurement errors discussed in Section 3.2, two neighboring edge lines are in fact neither co-planar nor intersecting. We therefore compute the shortest segment between two neighboring edge lines and regard the midpoint of this segment as the vertex of the calibration board. These coordinates are used as initial values in the further refinement based on the prior information of the calibration board.

The proposed method uses two kinds of prior information: geometry-based information and pose-based information. The geometry-based information yields two kinds of constraints. The first is the constraint on the length of the board edges: the distance between each pair of neighboring board vertices should be equal to the edge length $L$. Denoting the neighboring vertices as the $i$th and $j$th $(1 \le i, j \le 4,\ i \ne j)$ vertices, this constraint is

$$L = \|P^{L,vertex}_i - P^{L,vertex}_j\|. \quad (4)$$

The second is the constraint of perpendicularity between neighboring edges. Denoting the two perpendicular edge lines by the $i$th, $j$th and $k$th $(1 \le i, j, k \le 4,\ i \ne j \ne k)$ vertex points, this constraint can be written as

$$0 = (P^{L,vertex}_i - P^{L,vertex}_j)^T (P^{L,vertex}_k - P^{L,vertex}_j). \quad (5)$$

Using these constraints, the four vertices are guaranteed to be co-planar and of the correct size; the shape and size of the board determined by the four vertices are the same as those of the actual calibration board. Consequently, the measurement errors caused by the LiDAR, including the range error and divergence error, are eliminated.

The geometry-based information constrains the geometry of the vertices accurately, but during the optimization the pose of the vertices in the LiDAR frame may drift. The pose-based constraint is therefore used to further constrain the pose of the board determined by the four vertices in the LiDAR frame. We use the laser points on the edges: each edge point $P^{L,edge}_{m,j}$ $(m = 1, 2, 3, 4,\ 1 \le j \le N_m)$ should lie on the edge line determined by the vertex points, which can be formulated as

$$0 = \frac{\|(P^{L,vertex}_i - P^{L,edge}_{m,j}) \times d^{L,edge}_m\|}{\|d^{L,edge}_m\|}, \quad (6)$$

where $d^{L,edge}_m$ $(m = 1, 2, 3, 4)$ is the direction vector of the $m$th edge line in the LiDAR frame, calculated as

$$d^{L,edge}_m = P^{L,vertex}_i - P^{L,vertex}_j, \quad (7)$$

and $i$ and $j$ are the indices of the vertex points at the two ends of the $m$th board edge. The vertex points are refined with these constraints in the LiDAR frame. We solve this optimization problem using the Levenberg-Marquardt algorithm implemented in Ceres.

3.4. Calibration Information in Camera Frame

According to the ArUCO library [25, 26], with the known camera intrinsic matrix $K$ and distortion coefficients $D$, the relative pose $T^C_B$ between the calibration board frame and the camera frame can be determined as

$$T^C_B = \begin{pmatrix} R^C_B & t^C_B \\ \mathbf{0}^T & 1 \end{pmatrix}. \quad (8)$$

Denoting the coordinates of the four board vertices in the calibration board frame as $P^{B,vertex}_k$ $(k = 1, 2, 3, 4)$, their corresponding coordinates in the camera frame can be written as

$$P^{C,vertex}_k = T^C_B P^{B,vertex}_k, \quad k = 1, 2, 3, 4. \quad (9)$$

3.5. Extrinsic Parameter Estimation

After the vertex information of the calibration board is accurately extracted from point clouds (Section 3.3) and images (Section 3.4), the extrinsic parameters between the LiDAR frame and the camera frame can be estimated using the point-to-point correspondences. With the calculated coordinates of the vertex points in the LiDAR frame and the camera frame, the point-to-point cost is

$$C = \sum_{n=1}^{N} \sum_{k=1}^{4} \|T^C_L P^{L,vertex}_{k,n} - P^{C,vertex}_{k,n}\|, \quad (10)$$

where $N$ is the number of calibration datasets and $k$ $(1 \le k \le 4)$ is the index of the board vertices. The optimization problem is solved using the Levenberg-Marquardt algorithm implemented in Ceres. We summarize our proposed calibration method in Algorithm 1.

Algorithm 1: Calibration of extrinsic parameters between LiDAR and camera
Input: Camera intrinsic matrix $K$ and distortion coefficients $D$; calibration data collected from $N$ board poses;
Output: Extrinsic parameters $T^C_L$;
1: Extract the LiDAR points on the board and the edges from point clouds;
2: Calculate and refine the board vertices $P^{L,vertex}_{k,n}$, $k = 1, 2, 3, 4$;
3: Calculate the vertices in the camera frame $P^{C,vertex}_{k,n}$, $k = 1, 2, 3, 4$;
4: Estimate the extrinsic parameters $T^C_L$ using (10);
5: return $T^C_L$.
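The paper solves the vertex refinement of Section 3.3.3 with Levenberg-Marquardt in Ceres (C++). As a rough stand-in, the sketch below stacks the residuals of Eqs. (4)-(6) for SciPy's least-squares solver; the vertex ordering, the convention that edge $m$ joins vertices $m$ and $m{+}1$, and all names are our assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def refine_vertices(V0, edge_pts, edge_ids, L):
    """Refine the four board vertices. V0: 4x3 initial vertices in order
    around the board; edge_pts: Kx3 raw LiDAR edge points; edge_ids: K edge
    indices in 0..3 (edge m joins vertex m and vertex (m+1) % 4); L: the
    known edge length of the board."""
    def residuals(v):
        V = v.reshape(4, 3)
        res = []
        for m in range(4):
            i, j, k = m, (m + 1) % 4, (m + 2) % 4
            d = V[j] - V[i]                          # edge direction, Eq. (7)
            res.append(np.linalg.norm(d) - L)        # edge length, Eq. (4)
            res.append((V[i] - V[j]) @ (V[k] - V[j]))  # perpendicularity, Eq. (5)
            pts = edge_pts[edge_ids == m]
            if len(pts):                             # point-to-line, Eq. (6)
                res.extend(np.linalg.norm(np.cross(V[i] - pts, d), axis=1)
                           / np.linalg.norm(d))
        return np.asarray(res)
    # LM needs at least as many residuals as the 12 vertex coordinates,
    # which the edge points guarantee in practice.
    return least_squares(residuals, V0.ravel(), method="lm").x.reshape(4, 3)
```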
3.6. Evaluation Method of Calibration Results

Because the ground truth is not available in most cases, some papers use re-projection errors to evaluate the accuracy of calibration results. By transforming the points lying on an edge or vertex from the LiDAR frame to the camera frame with the estimated extrinsic parameters and projecting them onto the image, the re-projection error can be defined as the distance between the projected point coordinates and the corresponding line or vertex. However, the re-projection error is not an independent evaluation because it is tied to the specific calibration method. For example, an evaluation using the re-projection error of vertex points is prone to favor point-based calibration methods. Intuitively, the re-projection error reflects the alignment between objects in 3D space and in the 2D image space, but it is not a sufficient and necessary condition for the alignment of two 3D frames. To evaluate various calibration methods fairly, we propose a 3D space distance based evaluation of the calibration results, (11), which is independent of the specific calibration method:

$$D = \frac{1}{M} \sum_{j=1}^{M} \frac{\|(T^C_L P^{L,edge}_j - P^{C,edge}) \times d^{C,edge}\|}{\|d^{C,edge}\|}, \quad (11)$$

where $T^C_L$ is the calibration result, $P^{L,edge}_j$ is the coordinates of a LiDAR point on the edge in the LiDAR frame, $P^{C,edge}$ is the coordinates of a point on the edge in the camera frame, $d^{C,edge}$ is the direction vector of the edge in the camera frame, and $M$ is the number of points. $P^{L,edge}_j$ is directly extracted from the point cloud as introduced in Section 3.3. Although these laser points on the edge are not ideally accurate points on the edge, as discussed in Section 3.2, using these raw sensor measurements to evaluate the calibration results is fair and directly reflects the calibration quality. This 3D space distance cannot be zero even for ideal calibration results, because of the systematic errors of LiDAR measurements, but a smaller space distance directly indicates better calibration results.
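Under the same notational assumptions as the earlier sketches (none of this code is from the paper), the metric (11) reduces to a mean point-to-line distance over the transformed raw edge points:

```python
import numpy as np

def space_distance(T_CL, P_L_edge, P_C_edge, d_C_edge):
    """Eq. (11): mean 3D point-to-line distance between LiDAR edge points
    mapped into the camera frame and the board edge line in that frame.
    T_CL: 4x4 extrinsics; P_L_edge: Mx3 raw edge points; P_C_edge: a point
    on the edge and d_C_edge its direction, both in the camera frame."""
    P = (T_CL[:3, :3] @ P_L_edge.T).T + T_CL[:3, 3]   # transform edge points
    dists = np.linalg.norm(np.cross(P - P_C_edge, d_C_edge), axis=1)
    return np.mean(dists) / np.linalg.norm(d_C_edge)
```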
4. Experiment

4.1. Calibration Data Collection

In this paper, we use two multi-sensor device sets to collect calibration data, denoted as Device-1 and Device-2. Each device set is comprised of a Velodyne VLP-16 LiDAR and an RGB camera with a resolution of 752 × 480; the LiDAR and camera are rigidly mounted on a base. As introduced in Section 3.1, two boards with different ArUCO markers are used as calibration targets, to make full use of the camera field of view and shorten the tedious and laborious calibration process. For each device, we collect calibration datasets from 27 different poses of the calibration boards with respect to the sensors, extract the feature information of the two calibration boards as introduced in Section 3.3 and Section 3.4, and finally obtain 54 sets of board information for the experiments.

4.2. Experiment of Vertex Refinement

The refinement results of the vertex points are evaluated by computing the length of each board edge and the angle between neighboring edges, before and after the refinement introduced in Section 3.3. We denote the true length of the board edges as $L_{gt}$, the length calculated from the vertices as $L_{est}$, the true angle between neighboring edges as $\alpha_{gt}$, and the calculated angle as $\alpha_{est}$. The errors are formulated as

$$e_L = \|L_{gt} - L_{est}\|, \quad (12)$$
$$e_\alpha = \|\alpha_{gt} - \alpha_{est}\|. \quad (13)$$

We compute the board edge length and the board angle of all 54 board poses for each device, with and without the vertex refinement, and show the mean and standard deviation of the errors in Table 1 and Table 2.

Table 1: The length error between the calculated value and the ground truth. "w" means the error calculated from the vertices with the refinement and "w/o" means calculated without the refinement.

Device   | Length error mean, STD (m), w/o | Length error mean, STD (m), w
Device-1 | 4.23 × 10⁻², 1.90 × 10⁻¹        | 6.55 × 10⁻⁴, 1.72 × 10⁻⁴
Device-2 | 4.36 × 10⁻², 1.89 × 10⁻¹        | 6.87 × 10⁻⁴, 1.79 × 10⁻⁴

Table 2: The angle error between the calculated value and the ground truth. "w" means the error calculated from the vertices with the refinement and "w/o" means calculated without the refinement.

Device   | Angle error mean, STD (deg), w/o | Angle error mean, STD (deg), w
Device-1 | 2.12, 5.66                       | 2.78 × 10⁻³, 1.91 × 10⁻³
Device-2 | 2.19, 5.82                       | 2.92 × 10⁻³, 2.09 × 10⁻³

After refining the vertices of the calibration boards using the prior information, the error caused by the measurement characteristics of LiDARs falls to a remarkably low level. Without the vertex refinement, the length error can be up to 4 cm and the angle error up to 2 deg, which introduces error into the extraction of points from board edges and the calculation of vertex points, and deteriorates the extrinsic calibration between LiDAR and camera. Eliminating the error of the points extracted from point clouds effectively improves the accuracy of the extrinsic parameter calibration, and we use the refined vertex points to implement the following extrinsic calibration.

4.3. Experiment Results of Calibration

We evaluate the performance of our proposed method and compare it with other state-of-the-art methods [1, 6]. [1] introduced two kinds of geometry constraints to calibrate the extrinsic parameters. The first is the point-to-plane correspondence, which uses the LiDAR points on the board plane and the plane parameters estimated in the camera frame to construct the constraint; we denote this calibration method as Mishra's plane method. The second is the point-to-line correspondence, which uses the LiDAR points on the edges and the back-projected plane parameters estimated in the camera frame to construct the constraint; we denote it as Mishra's edge method. [6] used point-to-point correspondences to calibrate the extrinsic parameters. The points used in [6] are the vertex points of the board, and this method is denoted as Ankit's vertex method; that paper, however, only calculates the vertices from the edge line parameters, without the refinement used in our method.

[Figure: mean (m) and standard deviation (m) of the calibration results for Mishra's plane method, Mishra's edge method, Ankit's vertex method and our proposed method, plotted against the number of c… (axis label and the remainder of the section are truncated in the source).]