Abstract

Calibration of a vehicle camera is a key technology for advanced driver assistance systems (ADAS). This paper presents a novel estimation method to measure the orientation of a camera mounted on a driving vehicle. By considering the characteristics of vehicle cameras and the driving environment, we detect three orthogonal vanishing points as a basis of the imaging geometry. The proposed method consists of three steps: i) detection of lines projected onto the Gaussian sphere and extraction of the plane normals, ii) estimation of the vanishing point along the optical axis using the linear Hough transform, and iii) voting for the remaining two vanishing points using a circular histogram. The proposed method increases both accuracy and stability by considering the practical driving situation using three sequentially estimated vanishing points. In addition, the orientation can be estimated rapidly by converting the voting space into a 2D plane at each stage. As a result, the proposed method can quickly and accurately estimate the orientation of a vehicle camera in a normal driving situation.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Autonomous driving systems need various types of sensors such as color, radar, and light detection and ranging (LiDAR) sensors for an accurate, integrated analysis of the driving situation to guarantee human safety and convenience. In particular, a complementary metal-oxide semiconductor (CMOS) imaging sensor is widely used in video recording systems and advanced driver assistance systems (ADAS) for around-view monitoring (AVM) because of its low cost and characteristics similar to human vision [1,2]. Recently, advanced technologies using a CMOS sensor have been studied to realize autonomous vehicles [3]. A number of deep learning-based object detection algorithms were developed for next-generation vehicles to provide visual intelligence. Three-dimensional (3D) imaging techniques ranging from stereo matching to dense 3D reconstruction are another important technical basis for autonomous driving systems [4]. Image sensor-based approaches can exploit various advances developed in the image processing and computer vision fields, including pre-processing algorithms to enhance the quality of the input image [5], image-based depth-map estimation [6], and automatic calibration using a single camera [7], to name a few.

Camera calibration is the most important task in 3D imaging technology since it provides both intrinsic and extrinsic camera parameters associated with the geometric relationship between 3D world space and 2D imaging sensor. Conventional calibration methods used a special pattern such as a checkerboard [8] or orthogonal array of dots [9]. However, in-vehicle camera calibration is a challenging problem since a very large calibration pattern is needed and the camera is frequently dislocated due to the nature of dynamic driving.

To estimate and correct camera orientation during operation, a number of online camera calibration methods using vanishing points (VPs) were proposed [10–12]. In general, a VP can be extracted by finding an intersection of the lines projected from parallel structures in the 3D world. More specifically, VP extraction is performed in either the image space or on the Gaussian unit sphere. Image-based approaches use line segments in the image [13–18]. Wu et al. proposed a voting method in the image space using a weight for robust estimation [14]. Elloumi et al. used a random sample consensus (RANSAC) algorithm to estimate three VPs for camera orientation estimation by separately considering both infinite and finite VPs [15]. The J-linkage algorithm uses a modified random sampling method [16]. To reduce the computational complexity of the J-linkage algorithm, a fast J-linkage algorithm was proposed that sets the initial hypotheses by considering the lengths of the lines [18]. Although the image-based approach can be used even when intrinsic parameters are unknown, its accuracy is low because of the inaccurate approximation of infinite parallel lines. On the other hand, the Gaussian sphere-based approach transforms 2D data onto a spherical surface [19–23]. Since the Gaussian sphere is a finite space, both finite and infinite VPs are treated identically on the sphere. The 3-line RANSAC algorithm estimates an orthogonal VP triplet using the Gaussian sphere [21]. Although the RANSAC algorithm is fast and robust to noise, the classification result depends on the random selection process, and therefore it does not guarantee the optimal solution. Another approach uses a branch-and-bound (BnB) algorithm to estimate the optimal camera orientation; it treats the rotation estimation problem as a convex problem that can be solved using interval analysis [22] or a parametric space [23]. Lu et al. combined 2-line RANSAC with an exhaustive search scheme to find global solutions without significantly increasing the computational burden [24]. There is also an approach that uses the dual space and does not need camera calibration: Lezama et al. used PCLines to transform lines to points in the dual space [25]. Furthermore, tracking-based methods that trace lines, motions, or planes and estimate the relationship between two adjacent frames were proposed to stabilize the estimated angles [15,26–29].

In this paper, we present a novel in-vehicle camera orientation estimation method that finds three orthogonal VPs under the assumption that the vehicle drives straight ahead in the Manhattan world [30,31]. To ensure the orthogonality of the detected VPs, lines in the image are converted to the corresponding plane normal vectors on a spherical surface, and a voting algorithm is used. In order to efficiently estimate the vanishing points in the vehicle environment, the proposed method first estimates the VP along the driving direction, which is the Z-axis of the vehicle coordinate system. The VP along the driving direction is extracted using the linear Hough transform. In this step, unit plane normal vectors are scaled to reduce the problem to 2D line fitting. Next, the remaining VPs, which are orthogonal to the VP along the driving direction, are selected from a circular histogram. Finally, we estimate the camera orientation using the three orthogonal VPs.

The proposed method is designed to speed up the camera orientation estimation process in the context of a real vehicle driving environment. Specifically, the proposed step-by-step VP estimation process using the Hough transform and circular histogram decreases the computational time required to estimate the orientation angles. The circular histogram also provides a clear criterion for voting the other VPs since each angle bin counts the normal vectors of lines orthogonal to the driving direction. For that reason, the proposed method ensures the orthogonality of the VPs. The proposed method can be applied to automatic camera calibration for ADAS because it can accurately estimate the camera orientation while the vehicle drives straight ahead.

This paper is organized as follows. After introducing theoretical background in section 2, the proposed camera orientation estimation method is presented in section 3. The performance of the proposed method is verified by experimental results in section 4, and section 5 concludes the paper.

2. Theoretical background

2.1. Properties of vehicle camera geometry

A digital image acquired by an imaging sensor is defined as a set of 2D points projected from a 3D world space. Given a point in the 3D homogeneous coordinate XW = [Xw  Yw  Zw  1]T ∈ ℙ3 and a projected point in the 2D planar space xi = [u  v  1]T ∈ ℙ2, the camera projection model is defined as

\[ \mathbf{x}_i = P\,\mathbf{X}_W, \tag{1} \]
where P represents the camera projection matrix, X_w, Y_w, and Z_w the x, y, and z values of the 3D world coordinate, and u and v the x and y values of the 2D planar coordinate, respectively. The camera projection matrix is defined as
\[ P = K[R \mid T] = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix}, \tag{2} \]
where K represents the camera matrix containing the specifications of the lens and sensor, R the rotation matrix, T = [t_x  t_y  t_z]^T the translation vector, f_x and f_y respectively the focal lengths in the x and y directions, (c_x, c_y) the principal point or center of projection, and s the skew value of the camera. K consists of intrinsic parameters while [R | T] contains extrinsic parameters including the orientation and position of the camera.
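As a minimal illustration of Eqs. (1)-(2), the following sketch builds K and [R | T] from assumed values (the focal lengths, principal point, identity rotation, zero translation, and 3D point are placeholders, not calibrated parameters) and projects one 3D point onto the image plane.

```python
# Minimal sketch of the pinhole projection in Eqs. (1)-(2); the focal lengths,
# principal point, rotation, translation, and 3D point below are placeholder
# assumptions, not calibrated parameters.
import numpy as np

fx, fy = 800.0, 800.0          # assumed focal lengths (pixels)
cx, cy = 640.0, 360.0          # assumed principal point
s = 0.0                        # assumed zero skew

K = np.array([[fx, s, cx],
              [0., fy, cy],
              [0., 0., 1.]])
R = np.eye(3)                            # assumed camera aligned with world axes
T = np.zeros((3, 1))                     # assumed zero translation
P = K @ np.hstack([R, T])                # 3x4 projection matrix, Eq. (2)

X_w = np.array([2.0, -1.5, 10.0, 1.0])   # homogeneous 3D point (assumed)
x_i = P @ X_w                            # Eq. (1)
x_i /= x_i[2]                            # normalize the homogeneous coordinate
print(x_i[:2])                           # pixel coordinates (u, v)
```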

Figure 1 shows the camera geometry model in the vehicle coordinate system, where camera calibration is performed with respect to the vehicle's 3D coordinate system. For that reason, the camera parameters with respect to the vehicle coordinate system do not change when the vehicle moves straight ahead on a flat, well-paved road.

 

Fig. 1 Camera geometry in vehicle: (a) six extrinsic parameters including roll, pitch, yaw, and three-dimensional coordinates and (b) the origin and axis of the vehicle’s 3D coordinate system. The Y-axis indicates the upward direction from the origin.


2.2. Orientation estimation using Gaussian sphere

A Gaussian sphere is defined as a unit sphere, and the principal point of a camera is mapped to the center of the Gaussian sphere. In transforming a 2D image plane to the 3D Gaussian sphere, the coordinate is shifted to the principal point and normalized by the focal length. Given a point xi in the image space, the point xg projected onto the Gaussian sphere can be obtained using the intrinsic matrix [27] as

\[ \mathbf{x}_g = K^{-1}\mathbf{x}_i. \tag{3} \]
Also, xg can be represented using the corresponding point XW in the world space as
\[ \mathbf{x}_g = K^{-1} P\,\mathbf{X}_W = [R \mid T]\,\mathbf{X}_W. \tag{4} \]
Since the intrinsic parameters are removed from the transformation process, it is convenient to consider the orthogonality property of VPs. In addition, rotation can be directly estimated using the orthogonal VP triplet.
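As a small sketch of this projection, the code below maps one homogeneous image point onto the Gaussian sphere via Eq. (3) followed by normalization to unit length; the intrinsic matrix and the point are assumed values for illustration.

```python
# Sketch of Eq. (3): one homogeneous image point mapped onto the Gaussian
# sphere; the intrinsic matrix K and the point are assumed values.
import numpy as np

K = np.array([[800., 0., 640.],
              [0., 800., 360.],
              [0., 0., 1.]])

x_i = np.array([700.0, 300.0, 1.0])   # homogeneous image point (assumed)
x_g = np.linalg.inv(K) @ x_i          # shift by principal point, scale by focal length
x_g /= np.linalg.norm(x_g)            # place the point on the unit sphere
print(x_g)
```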

Figure 2 shows the relationship between the image plane of a camera and the corresponding Gaussian sphere. The image plane contains a 2D edge or line and the principal point of the camera lies on the edge plane. The intersection of the edge plane and Gaussian sphere generates a great circle that represents the line in the Gaussian sphere. The normal vector of the edge plane defines the plane normal of the line [32].

 

Fig. 2 Relationship between the image plane of a camera and the corresponding Gaussian sphere. XG, YG, and ZG represent axes of the Gaussian space, and XI and YI represent axes of the image plane.


When parallel lines in the world coordinate system are projected onto the Gaussian sphere, they generate VPs at antipodal points of the sphere, as shown in Fig. 3(a). The line passing through the principal point and the VPs is called the vanishing direction (VD). Since the plane normals of the parallel lines are distributed along a great circle, the normal of that great circle coincides with the VD of the parallel lines, as shown in Fig. 3(b).

 

Fig. 3 Vanishing direction (VD) representations using the Gaussian sphere model: (a) VD representation using plane normals and (b) VD representation using the intersection of great circles.


Given a VD V = [v_x  v_y  v_z  0]^T ∈ ℙ3, the coordinate transformed by the extrinsic parameters is obtained as

\[ V' = [R \mid T]\,V = R\,[v_x \;\; v_y \;\; v_z]^T, \tag{5} \]
which states that the transformation of a VD is influenced only by rotation and is invariant to translation. As a result, the orientation can be estimated from the VDs. In particular, when the orthogonal VD triplet V_c = [V_1  V_2  V_3] is known, the rotation matrix R with respect to the world coordinate system can be simply obtained from the following equation [10]
\[ V_c = R\,I, \tag{6} \]
where I is the 3×3 identity matrix of the world coordinate system.
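Since Eq. (6) implies that the matrix whose columns are the three orthogonal VDs is the rotation matrix itself, a minimal sketch of this recovery step is shown below; the VD values and the ZYX (yaw-pitch-roll) Euler-angle convention used to report angles are assumptions for illustration.

```python
# Minimal sketch of recovering R and Euler angles from an orthogonal VD
# triplet (Eq. (6)); the VD values and the ZYX (yaw-pitch-roll) convention
# are assumptions for illustration.
import numpy as np

# assumed orthonormal vanishing directions (world X, Y, Z axes in the camera frame)
V1 = np.array([0.999, 0.010, -0.035])
V2 = np.array([-0.012, 0.998, -0.055])
V3 = np.array([0.034, 0.056, 0.998])
Vc = np.column_stack([V1, V2, V3])

# snap to the nearest rotation matrix (guards against numerical noise)
U, _, Vt = np.linalg.svd(Vc)
R = U @ Vt

# Euler angles under the assumed R = Rz(yaw) Ry(pitch) Rx(roll) convention
pitch = np.degrees(np.arcsin(-R[2, 0]))
yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
print(roll, pitch, yaw)
```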

3. Proposed method

In a vehicle camera system, the camera orientation can be estimated using a rotation vector. Therefore, it is necessary for online calibration to quickly analyze input images while driving and calculate the optimum angle of rotation. In this section, we present a novel camera orientation estimation method for online calibration in vehicle camera systems.

3.1. Overview

The proposed method estimates the orientation of the camera in a vehicle that moves straight ahead in the Manhattan world [30]. The world coordinate system is aligned with the directions of the Manhattan world. Specifically, the moving direction of the vehicle becomes the Z-axis, the vertical direction becomes the Y-axis, and the horizontal direction becomes the X-axis. As shown in Fig. 4(a), we can detect a sufficient number of horizontal and vertical line groups. In particular, the straight movement of the vehicle produces many blue lines in the Z direction, and the rectangular structure of the Manhattan world produces green and yellow lines in the X and Y directions, respectively. Figure 4(b) shows that the normal vectors corresponding to the three groups of lines are distributed on three great circles in the Gaussian sphere. The three great circles are mutually orthogonal in the spherical space under the Manhattan world assumption. We assume that: i) the vehicle moves straight ahead to avoid the angle variation caused by non-straight motions, and ii) the intrinsic matrix used in the projection onto the Gaussian sphere is a known constant, because an in-vehicle camera commonly uses a fixed-focus lens and its intrinsic parameters are determined a priori.

 

Fig. 4 Relationship between Manhattan world and Gaussian sphere: (a) a street image that satisfies Manhattan world assumption and detected line segments and (b) the corresponding plane normal vectors distributed in the Gaussian sphere.


Figure 5 shows the block diagram of the proposed camera orientation estimation algorithm. We first detect line segments L from the input image f(x, y) using the line segment detection (LSD) algorithm, and the corresponding plane normal vectors N are computed in the Gaussian sphere. After projecting the normal vectors onto a face of the unit cube, the proposed method performs the linear Hough transform to obtain the Z-axis representing the vehicle's moving direction. To estimate the two other axes, we compute a circular histogram using the orthogonality with the Z-axis. Since each estimated axis is represented as a vanishing point, the proposed method finally obtains the three camera orientation angles: pitch, yaw, and roll.

 

Fig. 5 Block diagram of the proposed camera orientation estimation algorithm.


3.2. Line segment detection and normal vector generation

Given an input image f(x, y), the proposed method starts with line detection for the computation of the three main axes of the camera coordinate system. The input image contains structures satisfying geometric orthogonality because it is projected from the Manhattan world. For camera orientation estimation, it is necessary to detect solid lines from structures rather than to rely on raw gradient information. For that reason, the proposed method detects lines using the line segment detection (LSD) algorithm [33]. After preprocessing for noise reduction using a simple Gaussian filter, line candidates in local regions are detected by calculating the angle θ as

\[ \theta = \arctan\!\left(\frac{\nabla f_x(x,y)}{\nabla f_y(x,y)}\right), \tag{7} \]
where
\[ \nabla f_x(x,y) = \begin{bmatrix} -1 & 1 \\ -1 & 1 \end{bmatrix} * f(x,y), \qquad \nabla f_y(x,y) = \begin{bmatrix} -1 & -1 \\ 1 & 1 \end{bmatrix} * f(x,y), \tag{8} \]
and ∇f_x(x, y) and ∇f_y(x, y) respectively represent the horizontal and vertical gradients. Finally, a line is extracted by grouping line candidates with similar θ. As a result, we obtain the major structures as lines, as shown in Fig. 6.
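The following sketch illustrates this gradient-angle computation on a synthetic edge image; the 2×2 difference masks follow the signs restored in Eq. (8), which is an assumption about the original masks, and the image is a placeholder rather than a real driving frame.

```python
# Sketch of the gradient-angle computation in Eqs. (7)-(8) on a synthetic
# edge image; the 2x2 difference masks use the signs restored in Eq. (8),
# which is an assumption about the original masks.
import numpy as np

f = np.zeros((64, 64))
f[:, 32:] = 1.0                       # synthetic vertical step edge

# 2x2 finite differences over each pixel neighborhood
grad_x = (f[:-1, 1:] + f[1:, 1:]) - (f[:-1, :-1] + f[1:, :-1])
grad_y = (f[1:, :-1] + f[1:, 1:]) - (f[:-1, :-1] + f[:-1, 1:])

theta = np.arctan2(grad_x, grad_y)    # Eq. (7): arctan(grad_x / grad_y)
print(theta[32, 31])                  # ~pi/2 next to the vertical edge
```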

 

Fig. 6 Line segment detection result: (a) input image and (b) the result of LSD.


Next, the detected lines are projected onto the Gaussian sphere to estimate the vanishing direction. From the lines in the 2D image, L = {l_1, l_2, ..., l_n}, the corresponding projected 3D lines are given as

\[ l_{G_i} = K^{-1} l_i, \tag{9} \]
where l_{G_i} denotes the projection of l_i onto the surface of the Gaussian sphere. A line on the 3D sphere corresponds to a great circle whose center is the origin of the sphere. Likewise, two or more parallel lines pass through two antipodal intersections. However, incorrect intersections are created in practice by various problems such as camera jitter and image noise. In addition, many false candidates for the vanishing direction (VD) are generated since every pair of lines has an intersection even if they are not parallel.

For efficient computation of the VDs, the proposed method uses the unit plane normal of the great circle. Given the projected lines L_G = {l_{G_1}, l_{G_2}, ..., l_{G_n}}, the unit plane normal vectors N = {n_1, n_2, ..., n_n} are computed as

\[ n_i = \frac{n_{y_i}}{|n_{y_i}|}\, n_i, \tag{10} \]
where n_i represents the normal vector of l_{G_i} with coordinates (n_{x_i}, n_{y_i}, n_{z_i}), computed as
\[ n_i = \frac{p_{s_i} \times p_{e_i}}{|p_{s_i} \times p_{e_i}|}, \tag{11} \]
and p_{s_i} and p_{e_i} respectively represent the start and end points of l_{G_i}. In Eq. (10), all plane normal vectors are flipped into the hemisphere satisfying y > 0 to prevent directional ambiguity, because a plane normal vector would otherwise be projected onto antipodal points of the Gaussian sphere.
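A minimal sketch of Eqs. (9)-(11) is given below: the two endpoints of a detected segment are mapped through K^{-1}, their cross product gives the plane normal of the great circle, and the normal is flipped into the hemisphere y > 0; the intrinsic matrix and segment endpoints are assumed values.

```python
# Sketch of Eqs. (9)-(11): plane normal of the great circle spanned by one
# detected line segment, flipped into the hemisphere y > 0. K and the
# endpoints are assumed values.
import numpy as np

K = np.array([[800., 0., 640.],
              [0., 800., 360.],
              [0., 0., 1.]])
K_inv = np.linalg.inv(K)

def plane_normal(p_start, p_end):
    """Unit plane normal of the segment (p_start, p_end) on the Gaussian sphere."""
    ps = K_inv @ np.array([p_start[0], p_start[1], 1.0])
    pe = K_inv @ np.array([p_end[0], p_end[1], 1.0])
    n = np.cross(ps, pe)
    n /= np.linalg.norm(n)
    if n[1] < 0:                      # Eq. (10): resolve the antipodal ambiguity
        n = -n
    return n

n_i = plane_normal((100, 500), (1200, 650))   # assumed segment endpoints (pixels)
print(n_i)
```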

3.3. Vanishing direction estimation using linear Hough transform

Detected line segments in the 2D image are mapped to unit plane normals in the Gaussian sphere, as shown in Fig. 7. The set of plane normals forms a great circle whose normal vector represents the vanishing direction. Unfortunately, it is difficult to determine the great circle since the set of unit plane normals, N = {n_1, n_2, ..., n_n}, contains outliers that do not satisfy the orthogonality property of the Manhattan world. To solve this problem, the proposed method uses the unit cube whose centroid coincides with the center of the Gaussian sphere [34]. The plane normal distribution generated by a certain VD is projected onto the adjacent face of the cube to form a line, as shown in Fig. 7. In addition, lines in the Z direction are mostly detected by the LSD algorithm while the vehicle is driving straight ahead. The step-by-step vanishing direction estimation process is illustrated in Fig. 8.

 

Fig. 7 Relationship between the Gaussian sphere and unit cube.


 

Fig. 8 Vanishing direction estimation using the linear Hough transform: (a) distribution of unit plane normals in the Gaussian sphere, (b) projection of plane normals into a 2D plane of the unit cube, (c) the strongest line estimation by linear Hough Transform, and (d) result of VD estimation.


Given the unit plane normals transformed from the detected line segments, as shown in Fig. 8(a), the proposed method estimates the VD by extracting the strongest line using the linear Hough transform. We first project the plane normals on the 3D spherical surface onto the 2D plane satisfying y = 1 to define the Z-axis as the vehicle's driving direction. The projected point of n_i is defined as

\[ u_i = \frac{n_i}{n_{y_i}}, \quad \left|\frac{n_{x_i}}{n_{y_i}}\right| \le 1, \;\; \left|\frac{n_{z_i}}{n_{y_i}}\right| \le 1. \tag{12} \]

Next, the line fitting the projected points is estimated using the linear Hough transform. To generate the accumulation space, we use the angle θ in the range (−π/2, π/2), the offset μ in the range (−1, 1), and an interval of 0.01 between adjacent bins. The proposed method then obtains the Z-axis VD V_Z from the two parameters with the maximum votes, θ_max and μ_max, as

\[ V_Z = v_1 \times v_2, \tag{13} \]
where v_1 and v_2 represent the end points of the line extracted by the linear Hough transform as
\[ v_1 = \begin{bmatrix} -1 \\ 1 \\ -\theta_{max} + \mu_{max} \end{bmatrix}, \qquad v_2 = \begin{bmatrix} 1 \\ 1 \\ \theta_{max} + \mu_{max} \end{bmatrix}. \tag{14} \]

Figures 8(c) and 8(d) show the Z-axis VD estimation result obtained by extracting the strongest line using the linear Hough transform. The estimated line shown in Fig. 8(c) is re-projected onto the 3D sphere as the great circle associated with the Z-axis VD, as shown in Fig. 8(d).
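The sketch below illustrates this step under stated assumptions: synthetic plane normals whose great circle is close to the plane z = 0 are projected onto the cube face y = 1, a line parameterized as z = θx + μ is found by a coarse Hough vote, and the Z-axis VD is recovered as the cross product of the two end points. The slope parameterization, ranges, and bin sizes are illustrative choices, not the exact implementation.

```python
# Sketch of the Z-axis VD estimation in section 3.3 under stated assumptions:
# synthetic plane normals, projection onto the cube face y = 1, a coarse
# Hough vote over the line z = theta * x + mu, and Eqs. (13)-(14) to lift the
# fitted line back to a vanishing direction.
import numpy as np

rng = np.random.default_rng(0)

# synthetic unit plane normals whose great circle is close to the plane z = 0,
# so the expected VD is close to (0, 0, 1)
phi = rng.uniform(0, np.pi, 200)
normals = np.column_stack([np.cos(phi), np.sin(phi), 0.05 * rng.standard_normal(200)])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
normals = normals[np.abs(normals[:, 1]) > 1e-6]
normals[normals[:, 1] < 0] *= -1                 # hemisphere y > 0 (Eq. (10))

# project onto the plane y = 1 and keep points inside the cube face
u = normals / normals[:, 1:2]
mask = (np.abs(u[:, 0]) <= 1) & (np.abs(u[:, 2]) <= 1)
x, z = u[mask, 0], u[mask, 2]

thetas = np.arange(-1.0, 1.0, 0.01)              # assumed slope range and step
mus = np.arange(-1.0, 1.0, 0.01)                 # offset range (-1, 1), step 0.01
acc = np.zeros((len(thetas), len(mus)))
for xi, zi in zip(x, z):
    mu = zi - thetas * xi                        # offset implied by each slope
    idx = np.round((mu + 1.0) / 0.01).astype(int)
    ok = (idx >= 0) & (idx < len(mus))
    acc[np.arange(len(thetas))[ok], idx[ok]] += 1

ti, mi = np.unravel_index(np.argmax(acc), acc.shape)
t_max, m_max = thetas[ti], mus[mi]

# end points of the strongest line on the face y = 1, lifted back to 3D
v1 = np.array([-1.0, 1.0, -t_max + m_max])
v2 = np.array([1.0, 1.0, t_max + m_max])
V_z = np.cross(v1, v2)
V_z /= np.linalg.norm(V_z)
print(V_z)                                        # close to (0, 0, 1) up to sign
```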

3.4. Voting for the vanishing direction using circular histogram

Although the linear Hough transform can determine one main axis by searching for the strongest line distribution, it is not easy to determine the two other axes if plane normals are not sufficiently detected, as shown in Fig. 9(a). For that reason, the proposed method votes for the two remaining VDs using their geometric orthogonality with the Z-axis. Given a reliable Z-axis VD, the two corresponding great circles are orthogonal to V_Z and meet the great circle of V_Z at a pair of antipodal points, as shown in Fig. 9(b). Let C denote the great circle of candidates for the two VDs; an element c(ω), with ω in the continuous range [0, 2π), lies on C. Therefore, c(ω) is located on the intersection between C and a great circle orthogonal to C at ω, denoted N_ω, as shown in Fig. 9(b).

 

Fig. 9 Estimation of X and Y-axes: (a) distribution of plane normal on the spherical surface and the estimated Z-axis and (b) VD candidate c(ω).


To vote for the two main axes, we generate a circular histogram with respect to V_Z. Given a plane normal n_i, the proposed method computes a vector m_i as the cross product with V_Z:

\[ m_i = \frac{V_Z \times n_i}{|V_Z \times n_i|}. \tag{15} \]
As a result of Eq. (15), all unit plane normals lie on C.

Since C lies on the plane whose normal vector is V_Z, it can be considered a rotated circle, as shown in Fig. 10(a). To simplify the accumulation of c(ω), all m_i on C are rotated onto the plane z = 0 as

\[ m_i' = R_C\, m_i, \tag{16} \]
where m′_i represents the normal vector in the plane z = 0, and R_C the rotation matrix that transforms m_i on C as
\[ R_C = R_{C_x} R_{C_z} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} \cos\beta & -\sin\beta & 0 \\ \sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{17} \]
where R_{C_z} is the rotation about the Z-axis that brings the vector into the plane x = 0, R_{C_x} is the rotation about the X-axis that brings the vector into the plane y = 0, and α and β respectively denote the angles of R_{C_x} and R_{C_z}, which can be computed using V_Z = (v_{z_x}, v_{z_y}, v_{z_z})^T as
\[ \alpha = \arccos(v_{z_z}), \qquad \beta = \arctan(v_{z_x} / v_{z_y}). \tag{18} \]

Next, c(ω) is accumulated over the range ω ∈ [0, 2π) using 3600 bins with an interval of π/1800. Figure 10(b) shows the circular histogram represented as a rose diagram. From the histogram, we select the maximum index ω_max as

\[ \omega_{max} = \arg\max_{\omega'}\, h(\omega'), \tag{19} \]
where h(ω′) represents the histogram over ω′ in the range [0, π/2), which is accumulated from four bins of c(ω) spaced at intervals of 90° as
\[ h(\omega') = \sum_{i=0}^{3} c(\omega' + 90^{\circ}\, i). \tag{20} \]

Since ω_max aggregates two orthogonal points voted from the circular histogram, it carries the information of the two VDs satisfying geometric orthogonality with V_Z. For that reason, the proposed method finally estimates the two VDs using the inverse transformation of R_C as

\[ V_x = R_C^{-1}\, V_{rot}, \tag{21} \]
where V_rot represents the vector at angle ω_max in the plane z = 0. V_y is then computed as the cross product of V_x and V_z:
\[ V_y = V_x \times V_z. \tag{22} \]

Consequently, the camera orientation ρ_3D is finally obtained by computing the rotation matrix and the Euler angles from the three VDs representing the X, Y, and Z axes of the camera [10].
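The following sketch puts Eqs. (15)-(22) together on synthetic data: each plane normal is crossed with the Z-axis VD, rotated onto the plane z = 0 by R_C, and its angle is folded into [0°, 90°) and accumulated in the folded histogram h(ω′) with 0.1° bins (matching the π/1800 interval above). The Z-axis VD and the synthetic normals are assumptions for illustration.

```python
# Sketch of the circular-histogram voting in section 3.4 (Eqs. (15)-(22)) on
# synthetic data; the Z-axis VD and the synthetic normals are assumptions.
import numpy as np

def rot_x(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a), np.cos(a)]])

def rot_z(b):
    return np.array([[np.cos(b), -np.sin(b), 0],
                     [np.sin(b), np.cos(b), 0],
                     [0, 0, 1]])

V_z = np.array([0.0, 0.0, 1.0])                    # assumed Z-axis VD
alpha = np.arccos(V_z[2])                          # Eq. (18)
beta = np.arctan2(V_z[0], V_z[1])
R_C = rot_x(alpha) @ rot_z(beta)                   # Eq. (17)

rng = np.random.default_rng(1)
# synthetic plane normals of lines orthogonal to the driving direction,
# clustered around two perpendicular in-plane directions
angles = np.concatenate([rng.normal(np.radians(20), 0.02, 100),
                         rng.normal(np.radians(110), 0.02, 100)])
normals = np.column_stack([np.cos(angles), np.sin(angles), np.zeros_like(angles)])

hist = np.zeros(900)                               # h(w') over [0, 90) deg, 0.1-deg bins
for n in normals:
    m = np.cross(V_z, n)                           # Eq. (15)
    m /= np.linalg.norm(m)
    m = R_C @ m                                    # Eq. (16): rotate onto z = 0
    w = np.degrees(np.arctan2(m[1], m[0])) % 90.0  # fold the four-fold symmetry
    hist[int(w / 0.1) % 900] += 1

w_max = np.radians(np.argmax(hist) * 0.1)          # Eq. (19)
V_rot = np.array([np.cos(w_max), np.sin(w_max), 0.0])
V_x = np.linalg.inv(R_C) @ V_rot                   # Eq. (21)
V_y = np.cross(V_x, V_z)                           # Eq. (22)
print(V_x, V_y)
```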

 

Fig. 10 Estimation of X- and Y-axes: (a) rotated vectors onto the XY-plane, (b) the corresponding circular histogram and (c) the finally estimated VD.


4. Experiment results

In this section, we demonstrate the feasibility of the proposed method by comparing its performance with existing methods. The experiments were performed using a personal computer with an i7-7700 4.20 GHz processor and 16 GB RAM. For performance comparison, we used the 3-line RANSAC [21], J-linkage [16], and dual space-based [25] methods. The 3-line RANSAC algorithm obtains three orthogonal VDs in the Gaussian sphere and determines the direction of each axis by forming a minimal set of three randomly selected lines with repetitive random sampling. The J-linkage algorithm estimates multiple VPs in the image space by randomly generating VP hypotheses and relating them to the detected edges. The dual space-based method computes VPs using the PCLines transformation to classify lines. For quantitative evaluation, we measured the mean and standard deviation of the estimated angles in each frame.

To test the performance under an actual driving environment, we acquired three videos from a CMOS camera mounted on the front of a vehicle. More specifically, a fish-eye lens camera with 1280 × 720 resolution at 60 frames per second is installed in the vehicle system to generate a top-view image for AVM. We assume that all intrinsic parameters, including the distortion factors, are known. We used a fixed focal length within a moderate range that is commonly used in a vehicle camera. In addition, the controller area network (CAN) data containing steering information was used to perform the orientation estimation while the vehicle was driving straight ahead. Figure 11 shows the three sets of frames used in the first experiment and the corresponding line detection results. We acquired the first set of 600 frames, Video 1, in a two-lane road environment (see Visualization 1). In the first set, not all the extracted lines correspond to the Manhattan world, especially in regions containing trees and bushes. We acquired another 600 frames, Video 2, on a six-lane road (see Visualization 2). Many lines satisfying the Manhattan world assumption appear not only on the road but also in the background with many buildings. The third set of 1200 frames, Video 3, has a large number of markers on the street, and many lines from street lamps, banners, and buildings are found together (see Visualization 3).

 

Fig. 11 Test videos acquired from the real world: (a) input frames of the three videos under the straight-ahead driving environment (see Visualization 1, Visualization 2, and Visualization 3) and (b) line extraction results from (a).


Table 1 shows the results of the first experiment in the actual driving environment. Even in an urban environment, the accuracy of the estimated VDs and camera orientation becomes lower as the number of non-orthogonal lines increases. Video 1 has many lines that do not match the Manhattan world assumption because of bushes and trees. In this case, the performance of every algorithm tends to be lower than in the other test cases. Although the 3-line RANSAC algorithm provides the most accurate results among the four algorithms, the proposed method provides a similar accuracy with the lowest standard deviation. In the case of Videos 2 and 3, the proposed method shows stable and accurate results with the lowest error and standard deviation among the compared methods. Overall, the proposed method gives more stable results than the other methods in the real vehicle environment because it ensures the orthogonality of the VDs and at the same time considers all the angles.


Table 1. Evaluated standard deviation of camera orientation estimation using four different methods.

The second experiment tests the orthogonality of the estimated VDs. The orthogonality error is defined as the sum of the inner products of each pair of VDs as

\[ e = V_x \cdot V_y + V_y \cdot V_z + V_x \cdot V_z. \tag{23} \]

Table 2 shows the orthogonality errors of the VDs estimated using the images of Fig. 11. The J-linkage algorithm has nonzero orthogonality errors since the VPs are estimated in the image space without considering orthogonality. The PCLines algorithm extracts the VD triplet from the estimated multiple VDs considering orthogonality; however, it does not fully guarantee orthogonality. On the other hand, the 3-line RANSAC algorithm produces orthogonal VDs since it estimates the VDs in the Gaussian sphere using minimal solution sets that ensure orthogonality. The proposed method also satisfies orthogonality since it sequentially estimates the orthogonal VDs using the linear Hough transform and circular histogram.
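As a quick sanity check of Eq. (23), a minimal sketch on an assumed VD triplet is shown below; a triplet completed by a cross product leaves only the V_x·V_y term in the error.

```python
# Minimal sketch of the orthogonality error in Eq. (23) on an assumed triplet;
# completing the triplet with a cross product leaves only the V_x . V_y term.
import numpy as np

V_x = np.array([0.999, 0.010, -0.035])
V_y = np.array([-0.012, 0.998, -0.055])
V_z = np.cross(V_x, V_y)
V_z /= np.linalg.norm(V_z)

e = V_x @ V_y + V_y @ V_z + V_x @ V_z
print(e)
```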


Table 2. Orthogonality between vanishing directions estimated by three different methods.

Table 3 shows the processing time of each part of the proposed algorithm, and Table 4 shows the processing times of the compared estimation methods. Although the proposed method is based on a voting algorithm that uses all of the detected edges to consider all directions, it is faster than the other methods because it converts the accumulation space into a 2D plane using only cross products and projections.


Table 3. Running time of each part of the proposed algorithm (sec/frame).


Table 4. Comparison of running time (sec/frame).

For qualitative evaluation of the camera orientation estimation, we classified the detected lines by thresholding the geodesic distance d(V, l) = |arcsin(V · n)| at 0.07. Figure 12 shows the result of classifying lines from a real video acquired by a driving vehicle. The odd rows are the input images and the even rows are the results of line classification. The blue lines represent the lines classified to the Z-axis, i.e., the straight-ahead driving direction, the yellow lines the Y-axis for the vertical direction, and the green lines the X-axis for the horizontal direction. The experimental results show that most lines consistent with the Manhattan world are assigned to the correct direction, which demonstrates that the proposed algorithm can accurately estimate the actual camera orientation.
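A minimal sketch of this classification rule is given below, assuming known VDs aligned with the world axes; a detected line, represented by its unit plane normal n, is assigned to the VD with the smallest geodesic distance |arcsin(V · n)|, provided that distance is below the 0.07 threshold.

```python
# Sketch of the line classification rule used for Fig. 12, assuming VDs
# aligned with the world axes; a line's plane normal n is assigned to the VD
# with the smallest geodesic distance |arcsin(V . n)| if it is below 0.07 rad.
import numpy as np

def classify(n, vds, thresh=0.07):
    """Return the index of the matching VD, or -1 if no VD is close enough."""
    d = np.abs(np.arcsin(np.clip(vds @ n, -1.0, 1.0)))
    best = int(np.argmin(d))
    return best if d[best] < thresh else -1

vds = np.array([[1.0, 0.0, 0.0],     # X-axis VD (assumed)
                [0.0, 1.0, 0.0],     # Y-axis VD (assumed)
                [0.0, 0.0, 1.0]])    # Z-axis VD (assumed)

n = np.array([0.70, 0.71, 0.02])     # plane normal of a Z-direction line (assumed)
n /= np.linalg.norm(n)
print(classify(n, vds))              # 2 -> classified to the driving (Z) direction
```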

 

Fig. 12 Classification results using the camera orientation angles estimated from a real video: (a) the input 160th and 230th frames of the first video and the 400th frame of the second video, (b) the corresponding classification results, (c) the input 530th frame of the second video and the 120th and 500th frames of the third video, and (d) the corresponding classification results.


5. Conclusions

In this paper, a voting-based camera orientation estimation method is proposed for online camera calibration when the vehicle drives straight ahead. From the lines detected by the LSD algorithm, the proposed method achieves fast performance by estimating the Z-axis VD using the linear Hough transform and unit cube projection. In addition, the voting method based on the circular histogram provides accurate camera angles since it incorporates all detected lines into the accumulation space. In particular, the proposed method ensures the geometric orthogonality of the estimated camera angles by performing a step-by-step process. Experimental results verify that the proposed method provides stable performance within a short period of time in the actual driving situation as well as in the ideal Manhattan world. Therefore, the proposed method can play a role in the online calibration of vehicle cameras in ADAS. It can also be applied to 3D object detection by measuring distance using the angles estimated for each frame. Furthermore, the proposed method can be used for view transformation of the camera to monitor surrounding circumstances in a smart parking assistance system if the camera system stores the orientation angles and computes the optimal parameters.

Funding

Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2014-0-00077); Chung-Ang University Research Scholarship (2016).

References

1. P. H. Yuan, K. F. Yang, and W. H. Tsai, “Real-time security monitoring around a video surveillance vehicle with a pair of two-camera omni-imaging devices,” IEEE Trans. Veh. Technol. 60(8), 3603–3614 (2011).

2. Y. L. Chang, L. Y. Hsu, and O. T. C. Chen, “Auto-calibration around-view monitoring system,” in Proceedings of 2013 IEEE 78th Vehicular Technology Conference (VTC Fall) (IEEE, 2013), pp. 1–5.

3. W. Kaddah, Y. Ouerhani, A. Alfalou, M. Desthieux, C. Brosseau, and C. Gutierrez, “Road marking features extraction using the VIAPIX® system,” Opt. Commun. 371, 117–127 (2016).

4. H. Zhou, D. Zou, L. Pei, R. Ying, P. Liu, and W. Yu, “StructSLAM: Visual SLAM with building structure lines,” IEEE Trans. Veh. Technol. 64(4), 1364–1375 (2015).

5. S. Park, K. Kim, S. Yu, and J. Paik, “Contrast enhancement for low-light image enhancement: A survey,” IEIE Trans. Smart Process. Comput. 7(1), 36–48 (2018).

6. O. Stankiewicz and M. Domański, “Depth map estimation based on maximum a posteriori probability,” IEIE Trans. Smart Process. Comput. 7(1), 49–61 (2018).

7. M. Shin, J. Jang, and J. Paik, “Calibration of a surveillance camera using a pedestrian homology-based rectangular model,” IEIE Trans. Smart Process. Comput. 7(4), 305–312 (2018).

8. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Analysis Mach. Intell. 22(11), 1330–1334 (2000).

9. J. Heikkila and O. Silven, “A four-step camera calibration procedure with implicit image correction,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 1106–1112.

10. B. Caprile and V. Torre, “Using vanishing points for camera calibration,” Int. J. Comput. Vis. 4(2), 127–139 (1990).

11. S. T. Barnard, “Interpreting perspective images,” Artif. Intell. 21(4), 435–462 (1983).

12. H. Wildenauer and A. Hanbury, “Robust camera self-calibration from monocular images of Manhattan worlds,” in Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 2831–2838.

13. M. Hornáček and S. Maierhofer, “Extracting vanishing points across multiple views,” in Proceedings of 2011 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 953–960.

14. Z. Wu, W. Fu, R. Xue, and W. Wang, “A novel line space voting method for vanishing-point detection of general road images,” Sensors 16(7), 948–960 (2016).

15. W. Elloumi, S. Treuillet, and R. Leconge, “Real-time camera orientation estimation based on vanishing point tracking under Manhattan world assumption,” J. Real-Time Image Process. 13(4), 1–16 (2014).

16. J. P. Tardif, “Non-iterative approach for fast and accurate vanishing point detection,” in Proceedings of 2009 IEEE 12th International Conference on Computer Vision (IEEE, 2009), pp. 1250–1257.

17. Y. Xu, S. Oh, and A. Hoogs, “A minimum error vanishing point detection approach for uncalibrated monocular images of man-made environments,” in Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1376–1383.

18. C. H. Chang and N. Kehtarnavaz, “Fast J-linkage algorithm for camera orientation applications,” J. Real-Time Image Process. 14(4), 823–832 (2018).

19. M. J. Magee and J. K. Aggarwal, “Determining vanishing points from perspective images,” Comput. Vision, Graph. Image Process. 26(2), 256–267 (1984).

20. M. Antunes and J. P. Barreto, “A global approach for the detection of vanishing points and mutually orthogonal vanishing directions,” in Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1336–1343.

21. J. C. Bazin and M. Pollefeys, “3-line RANSAC for orthogonal vanishing point detection,” in Proceedings of 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2012), pp. 4282–4287.

22. J. C. Bazin, Y. Seo, C. Demonceaux, P. Vasseur, K. Ikeuchi, I. Kweon, and M. Pollefeys, “Globally optimal line clustering and vanishing point estimation in Manhattan world,” in Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 638–645.

23. J. C. Bazin, Y. Seo, and M. Pollefeys, “Globally optimal consensus set maximization through rotation search,” in Proceedings of Asian Conference on Computer Vision (Springer, 2012), pp. 539–551.

24. X. Lu, J. Yaoy, H. Li, and Y. Liu, “2-line exhaustive searching for real-time vanishing point estimation in Manhattan world,” in Proceedings of 2017 IEEE Winter Conference on Applications of Computer Vision (IEEE, 2017), pp. 345–353.

25. J. Lezama, G. Randall, and R. G. von Gioi, “Vanishing point detection in urban scenes using point alignments,” Image Process. On Line 7, 131–164 (2017).

26. T. Kroeger, D. Dai, and L. Van Gool, “Joint vanishing point extraction and tracking,” in Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 2449–2457.

27. J. Lee and K. Yoon, “Real-time joint estimation of camera orientation and vanishing points,” in Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 1866–1874.

28. W. Elloumi, S. Treuillet, and R. Leconge, “Tracking orthogonal vanishing points in video sequences for a reliable camera orientation in Manhattan world,” in Proceedings of 2012 5th International Congress on Image and Signal Processing (IEEE, 2012), pp. 128–132.

29. R. Guo, K. Peng, D. Zhou, and Y. Liu, “Robust visual compass using hybrid features for indoor environments,” Electronics 8(2), 220–236 (2019).

30. J. M. Coughlan and A. L. Yuille, “Manhattan world: Compass direction from a single image by bayesian inference,” in Proceedings of the 7th IEEE International Conference on Computer Vision (IEEE, 1999), pp. 941–947.

31. W. J. Kim and S. W. Lee, “Depth estimation with Manhattan world cues on a monocular image,” IEIE Trans. Smart Process. Comput. 7(3), 201–209 (2018).

32. M. E. Antone and S. Teller, “Automatic recovery of relative camera rotations for urban scenes,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2000), pp. 282–289.

33. R. G. von Gioi, J. Jakubowicz, J. M. Morel, and G. Randall, “LSD: a line segment detector,” Image Process. On Line 2, 35–55 (2012).

34. T. Tuytelaars, M. Proesmans, and L. Van Gool, “The cascaded hough transform as support for grouping and finding vanishing points and lines,” in Proceedings of International Workshop on Algebraic Frames for the Perception-Action Cycle (Springer, 1997), pp. 278–289.


J. C. Bazin, Y. Seo, and M. Pollefeys, “Globally optimal consensus set maximization through rotation search,” in Proceedings of Asian Conference on Computer Vision (Springer, 2012), pp. 539–551.

Proesmans, M.

T. Tuytelaars, M. Proesmans, and L. Van Gool, “The cascaded hough transform as support for grouping and finding vanishing points and lines,” in Proceedings of International Workshop on Algebraic Frames for the Perception-Action Cycle (Springer, 1997), pp. 278–289.
[Crossref]

Randall, G.

J. Lezama, G. Randall, and R. G. von Gioi, “Vanishing point detection in urban scenes using point alignments,” Image Process. On Line 7, 131–164 (2017).
[Crossref]

R. G. von Gioi, J. Jakubowicz, J. M. Morel, and G. Randall, “LSD: a line segment detector,” Image Process. On Line 2, 35–55 (2012).
[Crossref]

Seo, Y.

J. C. Bazin, Y. Seo, and M. Pollefeys, “Globally optimal consensus set maximization through rotation search,” in Proceedings of Asian Conference on Computer Vision (Springer, 2012), pp. 539–551.

J. C. Bazin, Y. Seo, C. Demonceaux, P. Vasseur, K. Ikeuchi, I. Kweon, and M. Pollefeys, “Globally optimal line clustering and vanishing point estimation in Manhattan world,” in Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 638–645.
[Crossref]

Shin, M.

M. Shin, J. Jang, and J. Paik, “Calibration of a surveillance camera using a pedestrian homology-based rectangular model,” IEIE Trans. Smart Process. Comput. 7(4), 305–312 (2018).
[Crossref]

Silven, O.

J. Heikkila and O. Silven, “A four-step camera calibration procedure with implicit image correction,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 1106–1112.
[Crossref]

Stankiewicz, O.

O. Stankiewicz and M. Domański, “Depth map estimation based on maximum a posteriori probability,” IEIE Trans. Smart Process. Comput. 7(1), 49–61 (2018).
[Crossref]

Tardif, J. P.

J. P. Tardif, “Non-iterative approach for fast and accurate vanishing point detection,” in Proceedings of 2009 IEEE 12th International Conference on Computer Vision (IEEE, 2009), pp. 1250–1257.
[Crossref]

Teller, S.

M. E. Antone and S. Teller, “Automatic recovery of relative camera rotations for urban scenes,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2000), pp. 282–289.

Torre, V.

B. Caprile and V. Torre, “Using vanishing points for camera calibration,” Int. J. Comput. Vis. 4(2), 127–139 (1990).
[Crossref]

Treuillet, S.

W. Elloumi, S. Treuillet, and R. Leconge, “Real-time camera orientation estimation based on vanishing point tracking under Manhattan world assumption,” J. Real-Time Image Process. 13(4), 1–16 (2014).

W. Elloumi, S. Treuillet, and R. Leconge, “Tracking orthogonal vanishing points in video sequences for a reliable camera orientation in Manhattan world,” in Proceedings of 2012 5th International Congress on Image and Signal Processing (IEEE, 2012), pp. 128–132.
[Crossref]

Tsai, W. H.

P. H. Yuan, K. F. Yang, and W. H. Tsai, “Real-time security monitoring around a video surveillance vehicle with a pair of two-camera omni-imaging devices,” IEEE Trans. Veh. Technol. 60(8), 3603–3614 (2011).
[Crossref]

Tuytelaars, T.

T. Tuytelaars, M. Proesmans, and L. Van Gool, “The cascaded hough transform as support for grouping and finding vanishing points and lines,” in Proceedings of International Workshop on Algebraic Frames for the Perception-Action Cycle (Springer, 1997), pp. 278–289.
[Crossref]

Van Gool, L.

T. Tuytelaars, M. Proesmans, and L. Van Gool, “The cascaded hough transform as support for grouping and finding vanishing points and lines,” in Proceedings of International Workshop on Algebraic Frames for the Perception-Action Cycle (Springer, 1997), pp. 278–289.
[Crossref]

T. Kroeger, D. Dai, and L. Van Gool, “Joint vanishing point extraction and tracking,” in Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 2449–2457.
[Crossref]

Vasseur, P.

J. C. Bazin, Y. Seo, C. Demonceaux, P. Vasseur, K. Ikeuchi, I. Kweon, and M. Pollefeys, “Globally optimal line clustering and vanishing point estimation in Manhattan world,” in Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 638–645.
[Crossref]

von Gioi, R. G.

J. Lezama, G. Randall, and R. G. von Gioi, “Vanishing point detection in urban scenes using point alignments,” Image Process. On Line 7, 131–164 (2017).
[Crossref]

R. G. von Gioi, J. Jakubowicz, J. M. Morel, and G. Randall, “LSD: a line segment detector,” Image Process. On Line 2, 35–55 (2012).
[Crossref]

Wang, W.

Z. Wu, W. Fu, R. Xue, and W. Wang, “A novel line space voting method for vanishing-point detection of general road images,” Sensors 16(7), 948–960 (2016).
[Crossref]

Wildenauer, H.

H. Wildenauer and A. Hanbury, “Robust camera self-calibration from monocular images of Manhattan worlds,” in Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 2831–2838.
[Crossref]

Wu, Z.

Z. Wu, W. Fu, R. Xue, and W. Wang, “A novel line space voting method for vanishing-point detection of general road images,” Sensors 16(7), 948–960 (2016).
[Crossref]

Xu, Y.

Y. Xu, S. Oh, and A. Hoogs, “A minimum error vanishing point detection approach for uncalibrated monocular images of man-made environments,” in Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1376–1383.
[Crossref]

Xue, R.

Z. Wu, W. Fu, R. Xue, and W. Wang, “A novel line space voting method for vanishing-point detection of general road images,” Sensors 16(7), 948–960 (2016).
[Crossref]

Yang, K. F.

P. H. Yuan, K. F. Yang, and W. H. Tsai, “Real-time security monitoring around a video surveillance vehicle with a pair of two-camera omni-imaging devices,” IEEE Trans. Veh. Technol. 60(8), 3603–3614 (2011).
[Crossref]

Yaoy, J.

X. Lu, J. Yaoy, H. Li, and Y. Liu, “2-line exhaustive searching for real-time vanishing point estimation in Manhattan world,” in Proceedings of 2017 IEEE Winter Conference on Applications of Computer Vision (IEEE, 2017), pp. 345–353.
[Crossref]

Ying, R.

H. Zhou, D. Zou, L. Pei, R. Ying, P. Liu, and W. Yu, “StructSLAM: Visual SLAM with building structure lines,” IEEE Trans. Veh. Technol. 64(4), 1364–1375 (2015).
[Crossref]

Yoon, K.

J. Lee and K. Yoon, “Real-time joint estimation of camera orientation and vanishing points,” in Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 1866–1874.

Yu, S.

S. Park, K. Kim, S. Yu, and J. Paik, “Contrast enhancement for low-light image enhancement: A survey,” IEIE Trans. Smart Process. Comput. 7(1), 36–48 (2018).
[Crossref]

Yu, W.

H. Zhou, D. Zou, L. Pei, R. Ying, P. Liu, and W. Yu, “StructSLAM: Visual SLAM with building structure lines,” IEEE Trans. Veh. Technol. 64(4), 1364–1375 (2015).
[Crossref]

Yuan, P. H.

P. H. Yuan, K. F. Yang, and W. H. Tsai, “Real-time security monitoring around a video surveillance vehicle with a pair of two-camera omni-imaging devices,” IEEE Trans. Veh. Technol. 60(8), 3603–3614 (2011).
[Crossref]

Yuille, A. L.

J. M. Coughlan and A. L. Yuille, “Manhattan world: Compass direction from a single image by bayesian inference,” in Proceedings of the 7th IEEE International Conference on Computer Vision (IEEE, 1999), pp. 941–947.

Zhang, Z.

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Analysis Mach. Intell. 22(11), 1330–1334 (2000).
[Crossref]

Zhou, D.

R. Guo, K. Peng, D. Zhou, and Y. Liu, “Robust visual compass using hybrid features for indoor environments,” Electronics 8(2), 220–236 (2019).
[Crossref]

Zhou, H.

H. Zhou, D. Zou, L. Pei, R. Ying, P. Liu, and W. Yu, “StructSLAM: Visual SLAM with building structure lines,” IEEE Trans. Veh. Technol. 64(4), 1364–1375 (2015).
[Crossref]

Zou, D.

H. Zhou, D. Zou, L. Pei, R. Ying, P. Liu, and W. Yu, “StructSLAM: Visual SLAM with building structure lines,” IEEE Trans. Veh. Technol. 64(4), 1364–1375 (2015).
[Crossref]

Artif. Intell. (1)

S. T. Barnard, “Interpreting perspective images,” Artif. Intell. 21(4), 435–462 (1983).
[Crossref]

Comput. Vision, Graph. Image Process. (1)

M. J. Magee and J. K. Aggarwal, “Determining vanishing points from perspective images,” Comput. Vision, Graph. Image Process. 26(2), 256–267 (1984).
[Crossref]

Electronics (1)

R. Guo, K. Peng, D. Zhou, and Y. Liu, “Robust visual compass using hybrid features for indoor environments,” Electronics 8(2), 220–236 (2019).
[Crossref]

IEEE Trans. Pattern Analysis Mach. Intell. (1)

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Analysis Mach. Intell. 22(11), 1330–1334 (2000).
[Crossref]

IEEE Trans. Veh. Technol. (2)

P. H. Yuan, K. F. Yang, and W. H. Tsai, “Real-time security monitoring around a video surveillance vehicle with a pair of two-camera omni-imaging devices,” IEEE Trans. Veh. Technol. 60(8), 3603–3614 (2011).
[Crossref]

H. Zhou, D. Zou, L. Pei, R. Ying, P. Liu, and W. Yu, “StructSLAM: Visual SLAM with building structure lines,” IEEE Trans. Veh. Technol. 64(4), 1364–1375 (2015).
[Crossref]

IEIE Trans. Smart Process. Comput. (4)

S. Park, K. Kim, S. Yu, and J. Paik, “Contrast enhancement for low-light image enhancement: A survey,” IEIE Trans. Smart Process. Comput. 7(1), 36–48 (2018).
[Crossref]

O. Stankiewicz and M. Domański, “Depth map estimation based on maximum a posteriori probability,” IEIE Trans. Smart Process. Comput. 7(1), 49–61 (2018).
[Crossref]

M. Shin, J. Jang, and J. Paik, “Calibration of a surveillance camera using a pedestrian homology-based rectangular model,” IEIE Trans. Smart Process. Comput. 7(4), 305–312 (2018).
[Crossref]

W. J. Kim and S. W. Lee, “Depth estimation with Manhattan world cues on a monocular image,” IEIE Trans. Smart Process. Comput. 7(3), 201–209 (2018).
[Crossref]

Image Process. On Line (2)

R. G. von Gioi, J. Jakubowicz, J. M. Morel, and G. Randall, “LSD: a line segment detector,” Image Process. On Line 2, 35–55 (2012).
[Crossref]

J. Lezama, G. Randall, and R. G. von Gioi, “Vanishing point detection in urban scenes using point alignments,” Image Process. On Line 7, 131–164 (2017).
[Crossref]

Int. J. Comput. Vis. (1)

B. Caprile and V. Torre, “Using vanishing points for camera calibration,” Int. J. Comput. Vis. 4(2), 127–139 (1990).
[Crossref]

J. Real-Time Image Process. (2)

W. Elloumi, S. Treuillet, and R. Leconge, “Real-time camera orientation estimation based on vanishing point tracking under Manhattan world assumption,” J. Real-Time Image Process. 13(4), 1–16 (2014).

C. H. Chang and N. Kehtarnavaz, “Fast J-linkage algorithm for camera orientation applications,” J. Real-Time Image Process. 14(4), 823–832 (2018).
[Crossref]

Opt. Commun. (1)

W. Kaddah, Y. Ouerhani, A. Alfalou, M. Desthieux, C. Brosseau, and C. Gutierrez, “Road marking features extraction using the VIAPIX® system,” Opt. Commun. 371, 117–127 (2016).
[Crossref]

Sensors (1)

Z. Wu, W. Fu, R. Xue, and W. Wang, “A novel line space voting method for vanishing-point detection of general road images,” Sensors 16(7), 948–960 (2016).
[Crossref]

Other (17)

J. P. Tardif, “Non-iterative approach for fast and accurate vanishing point detection,” in Proceedings of 2009 IEEE 12th International Conference on Computer Vision (IEEE, 2009), pp. 1250–1257.
[Crossref]

Y. Xu, S. Oh, and A. Hoogs, “A minimum error vanishing point detection approach for uncalibrated monocular images of man-made environments,” in Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1376–1383.
[Crossref]

H. Wildenauer and A. Hanbury, “Robust camera self-calibration from monocular images of Manhattan worlds,” in Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 2831–2838.
[Crossref]

M. Hornáček and S. Maierhofer, “Extracting vanishing points across multiple views,” in Proceedings of 2011 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 953–960.

Y. L. Chang, L. Y. Hsu, and O. T. C. Chen, “Auto-calibration around-view monitoring system,” in Proceedings of 2013 IEEE 78th Vehicular Technology Conference (VTC Fall) (IEEE, 2013), pp. 1–5.

J. Heikkila and O. Silven, “A four-step camera calibration procedure with implicit image correction,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 1106–1112.
[Crossref]

T. Kroeger, D. Dai, and L. Van Gool, “Joint vanishing point extraction and tracking,” in Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 2449–2457.
[Crossref]

J. Lee and K. Yoon, “Real-time joint estimation of camera orientation and vanishing points,” in Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 1866–1874.

W. Elloumi, S. Treuillet, and R. Leconge, “Tracking orthogonal vanishing points in video sequences for a reliable camera orientation in Manhattan world,” in Proceedings of 2012 5th International Congress on Image and Signal Processing (IEEE, 2012), pp. 128–132.
[Crossref]

M. Antunes and J. P. Barreto, “A global approach for the detection of vanishing points and mutually orthogonal vanishing directions,” in Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 1336–1343.
[Crossref]

J. C. Bazin and M. Pollefeys, “3-line RANSAC for orthogonal vanishing point detection,” in Proceedings of 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2012), pp. 4282–4287.
[Crossref]

J. C. Bazin, Y. Seo, C. Demonceaux, P. Vasseur, K. Ikeuchi, I. Kweon, and M. Pollefeys, “Globally optimal line clustering and vanishing point estimation in Manhattan world,” in Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 638–645.
[Crossref]

J. C. Bazin, Y. Seo, and M. Pollefeys, “Globally optimal consensus set maximization through rotation search,” in Proceedings of Asian Conference on Computer Vision (Springer, 2012), pp. 539–551.

X. Lu, J. Yaoy, H. Li, and Y. Liu, “2-line exhaustive searching for real-time vanishing point estimation in Manhattan world,” in Proceedings of 2017 IEEE Winter Conference on Applications of Computer Vision (IEEE, 2017), pp. 345–353.
[Crossref]

T. Tuytelaars, M. Proesmans, and L. Van Gool, “The cascaded hough transform as support for grouping and finding vanishing points and lines,” in Proceedings of International Workshop on Algebraic Frames for the Perception-Action Cycle (Springer, 1997), pp. 278–289.
[Crossref]

M. E. Antone and S. Teller, “Automatic recovery of relative camera rotations for urban scenes,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2000), pp. 282–289.

J. M. Coughlan and A. L. Yuille, “Manhattan world: Compass direction from a single image by bayesian inference,” in Proceedings of the 7th IEEE International Conference on Computer Vision (IEEE, 1999), pp. 941–947.

Supplementary Material (3)

Visualization 1: Video 1 consists of 600 frames captured on a two-lane road. Not all of the extracted lines correspond to the Manhattan world, especially in regions containing trees and bushes.
Visualization 2: Video 2 consists of 600 frames captured on a six-lane road. Many lines satisfying the Manhattan world assumption appear not only on the road but also in the background, which contains many buildings.
Visualization 3: Video 3 consists of 1200 frames. It contains a large number of road markings, and many lines from street lamps, banners, and buildings appear together.



Figures (12)

Fig. 1 Camera geometry in a vehicle: (a) six extrinsic parameters including roll, pitch, yaw, and the three-dimensional coordinates and (b) the origin and axes of the vehicle’s 3D coordinate system. The Y-axis indicates the upward direction from the origin.
Fig. 2 Relationship between the image plane of a camera and the corresponding Gaussian sphere. XG, YG, and ZG represent the axes of the Gaussian space, and XI and YI represent the axes of the image plane.
Fig. 3 Vanishing direction (VD) representations using the Gaussian sphere model: (a) VD representation using plane normals and (b) VD representation using the intersection of great circles.
Fig. 4 Relationship between the Manhattan world and the Gaussian sphere: (a) a street image that satisfies the Manhattan world assumption, with detected line segments, and (b) the corresponding plane normal vectors distributed on the Gaussian sphere.
Fig. 5 Block diagram of the proposed camera orientation estimation algorithm.
Fig. 6 Line segment detection result: (a) input image and (b) the result of LSD.
Fig. 7 Relationship between the Gaussian sphere and the unit cube.
Fig. 8 Vanishing direction estimation using the linear Hough transform: (a) distribution of unit plane normals in the Gaussian sphere, (b) projection of the plane normals onto a 2D face of the unit cube, (c) the strongest line estimated by the linear Hough transform, and (d) the resulting VD estimate.
Fig. 9 Estimation of the X- and Y-axes: (a) distribution of plane normals on the spherical surface together with the estimated Z-axis and (b) the VD candidate c(ω).
Fig. 10 Estimation of the X- and Y-axes: (a) vectors rotated onto the XY-plane, (b) the corresponding circular histogram, and (c) the finally estimated VD.
Fig. 11 Test videos acquired from the real world: (a) input frames of the three videos under a straightforward driving environment (see Visualization 1, Visualization 2, and Visualization 3) and (b) line extraction results from (a).
Fig. 12 Classification results using the camera orientation angles estimated from real videos: (a) the 160th and 230th frames of the first video and the 400th frame of the second video, (b) the corresponding classification results, (c) the 530th frame of the second video and the 120th and 500th frames of the third video, and (d) the corresponding classification results.

Tables (4)

Table 1 Evaluated standard deviation of camera orientation estimation using four different methods.
Table 2 Orthogonality between vanishing directions estimated by three different methods.
Table 3 Running time of each part of the proposed algorithm (sec/frame).
Table 4 Comparison of running time (sec/frame).

Equations (23)


$$\mathbf{x}_i = \mathbf{P}\mathbf{X}_W,$$
$$\mathbf{P} = \mathbf{K}\,[\mathbf{R}\,|\,\mathbf{T}] = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}\left[\begin{array}{ccc|c} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{array}\right],$$
$$\mathbf{x}_g = \mathbf{K}^{-1}\mathbf{x}_i.$$
$$\mathbf{x}_g = \mathbf{K}^{-1}\mathbf{P}\mathbf{X}_W = [\mathbf{R}\,|\,\mathbf{T}]\,\mathbf{X}_W.$$
$$\mathbf{V} = [\mathbf{R}\,|\,\mathbf{T}] = \mathbf{R}\,[\mathbf{v}_x \;\; \mathbf{v}_y \;\; \mathbf{v}_z]^T,$$
$$\mathbf{V}_c = \mathbf{R}\,\mathbf{I},$$
$$\theta = \arctan\!\left(\frac{f_x(x,y)}{f_y(x,y)}\right),$$
$$f_x(x,y) = \begin{bmatrix} -1 & 1 \\ -1 & 1 \end{bmatrix} * f(x,y), \qquad f_y(x,y) = \begin{bmatrix} -1 & -1 \\ 1 & 1 \end{bmatrix} * f(x,y),$$
$$\mathbf{l}_G^{\,i} = \mathbf{K}^{-1}\mathbf{l}^{\,i},$$
$$\mathbf{n}^i = \frac{n_y^i}{\left|n_y^i\right|}\,\mathbf{n}^i,$$
$$\mathbf{n}^i = \frac{\mathbf{p}_s^i \times \mathbf{p}_e^i}{\left|\mathbf{p}_s^i \times \mathbf{p}_e^i\right|},$$
$$\mathbf{u}^i = \left\{\, \mathbf{n}^i / n_y^i \;\middle|\; \left|n_x^i\right| \le 1,\ \left|n_z^i\right| \le 1 \,\right\}.$$
$$\mathbf{V}_Z = \mathbf{v}_1 \times \mathbf{v}_2.$$
$$\mathbf{v}_1 = \begin{bmatrix} -1 \\ 1 \\ -\theta_{\max} + \mu_{\max} \end{bmatrix}, \qquad \mathbf{v}_2 = \begin{bmatrix} 1 \\ 1 \\ \theta_{\max} + \mu_{\max} \end{bmatrix}.$$
$$\mathbf{m}^i = \frac{\mathbf{V}_Z \times \mathbf{n}^i}{\left|\mathbf{V}_Z \times \mathbf{n}^i\right|}.$$
$$\mathbf{m}'^{\,i} = \mathbf{R}_C\,\mathbf{m}^i,$$
$$\mathbf{R}_C = \mathbf{R}_{Cx}\mathbf{R}_{Cz} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix}\begin{bmatrix} \cos\beta & -\sin\beta & 0 \\ \sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{bmatrix},$$
$$\alpha = \arccos\!\left(v_{Z_z}\right), \qquad \beta = \arctan\!\left(v_{Z_x} / v_{Z_y}\right).$$
$$\omega_{\max} = \max_{\omega}\, h(\omega),$$
$$h(\omega) = \sum_{i=0}^{3} c(\omega + 90i).$$
$$\mathbf{V}_x = \mathbf{R}_C\,\mathbf{V}_{\mathrm{rot}},$$
$$\mathbf{V}_y = \mathbf{V}_x \times \mathbf{V}_z.$$
$$e = \mathbf{V}_x \cdot \mathbf{V}_y + \mathbf{V}_y \cdot \mathbf{V}_z + \mathbf{V}_x \cdot \mathbf{V}_z.$$
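For readers who wish to trace the listed relations numerically, the following is a minimal sketch, not the authors' implementation, assuming a known intrinsic matrix K and line segments given as pixel endpoints; the intrinsic values, segment coordinates, and function names (plane_normal, project_to_cube_face, circular_histogram) are illustrative only. It shows the back-projection of segment endpoints onto the Gaussian sphere, the projection of plane normals onto a unit-cube face, and the 90°-periodic circular voting used for the X/Y directions.

```python
import numpy as np

def plane_normal(K_inv, p_start, p_end):
    """Unit plane normal of the great circle spanned by a line segment.

    p_start, p_end: homogeneous pixel coordinates (x, y, 1).
    Endpoints are back-projected with K^{-1} onto the Gaussian sphere;
    their cross product gives the plane normal n^i.
    """
    a = K_inv @ p_start
    b = K_inv @ p_end
    n = np.cross(a, b)
    n /= np.linalg.norm(n)
    # sign normalization: keep the y-component non-negative
    if n[1] < 0:
        n = -n
    return n

def project_to_cube_face(normals):
    """Project plane normals onto the y = 1 face of the unit cube,
    keeping only points inside the face (|x| <= 1, |z| <= 1)."""
    pts = []
    for n in normals:
        if abs(n[1]) < 1e-9:
            continue
        u = n / n[1]
        if abs(u[0]) <= 1.0 and abs(u[2]) <= 1.0:
            pts.append((u[0], u[2]))  # (x, z) coordinates on the face
    return np.asarray(pts)

def circular_histogram(angles_deg, bin_width=1.0):
    """90-degree periodic voting: h(w) = sum over i of c(w + 90 i)."""
    bins = int(90 / bin_width)
    c, _ = np.histogram(angles_deg % 360.0,
                        bins=int(360 / bin_width), range=(0.0, 360.0))
    h = c[:bins] + c[bins:2 * bins] + c[2 * bins:3 * bins] + c[3 * bins:]
    w_max = np.argmax(h) * bin_width
    return h, w_max

# --- illustrative usage with synthetic values -------------------------
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])          # hypothetical intrinsics
K_inv = np.linalg.inv(K)
segments = [((100.0, 400.0), (500.0, 380.0)),
            ((120.0, 420.0), (520.0, 400.0))]  # synthetic segments
normals = [plane_normal(K_inv, np.array([*s, 1.0]), np.array([*e, 1.0]))
           for s, e in segments]
face_points = project_to_cube_face(normals)
h, w_max = circular_histogram(np.array([10.0, 100.5, 190.2, 280.1, 11.0]))
print(face_points, w_max)
```

Under these assumptions, a line would be fitted to face_points (e.g., by a linear Hough transform) to obtain the Z vanishing direction, and w_max would select the dominant in-plane angle for the remaining two directions.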
