Automatic Affine Structure Recovery Using RANSAC

Jairo R. Sánchez, Diego Borro
CEIT and TECNUN, University of Navarra
Manuel de Lardizábal 15, 20018 San Sebastián, Spain
[email protected]  [email protected]

Abstract

We present a new method for affine structure recovery from a projective one. It is based on the natural presence of parallel lines in man-made scenes. The proposed method is completely automatic and does not need any prior information or human interaction. The affine information recovery is carried out using a projective reconstruction that can be obtained from two views of the scene using multiview geometry methods. Affine information contains more structure than the projective one and may be enough for many tasks, like measuring centroids or identifying parallel lines. It is also the midpoint, and the most difficult step, of stratified scene reconstruction. Our method has the advantage of needing only two images and being completely autonomous. Parallel lines are detected directly on the projective structure and used to compute the plane at infinity. This detection is carried out in a probabilistic way using the RANSAC algorithm. Its convergence is based on the fact that the intersection points of parallel lines belong to a unique plane (the plane at infinity). These points are easily identified, since the intersection points of non-parallel lines will be randomly scattered across the space.

1 Introduction

The recovery of 3D structure from 2D images is an important task in applications that include augmented reality, autonomous navigation and others. This reconstruction can be done using multiple view geometry methods [11]. The usual way to proceed is to obtain the camera's internal and external parameters and use them to triangulate the 3D structure. The internal parameters depend on the geometry of the camera, i.e. focal length and principal point, while the external parameters are those that depend on the position of the camera, i.e. rotation and translation.

Camera calibration is the process that computes the internal parameters of a camera. These parameters are needed for applications like augmented reality or euclidean 3D reconstruction. Basically, there are two ways to calibrate a camera from image sequences: pattern-based calibration and self- (or auto-) calibration. A complete survey of camera calibration can be found in [12]. Pattern-based calibration methods use objects with known geometry to carry out the process. These methods are fast and precise; however, they are only applicable when the calibration object is present in the captured scene. This is sometimes not possible, for example when calibrating a recorded video sequence. Examples of this type of calibration are [19] and [21]. Self-calibration methods obtain the parameters from the captured images, without having any calibration object in the scene. These methods are useful when the camera is not accessible or when the environment cannot be altered with external markers. They obtain the parameters by exploiting constraints that exist in multiview geometry [5] [9] or by imposing restrictions on the motion of the camera [7] [22]. Other approaches are able to perform the reconstruction from a single view [20], if the length ratios of line segments are known.

Self-calibration methods usually involve two steps: affine reconstruction and metric upgrade. These concepts will be explained in Section 2. The affine calibration is considered the most difficult step by some authors [10]. In this work, we propose a method for affine camera calibration that upgrades a projective structure to an affine one without any prior knowledge of the scene. The algorithm is based on the presence of parallel lines in the scene, present in almost any man-made structure or object. Initially, the projective structure is recovered from images using the fundamental matrix [8]; hence, at least two images are required. Using this information, the intersections of all detected lines are computed and used to find the plane that fits the maximum number of these intersection points. Given the fact that parallel lines intersect on the special entity called the plane at infinity, if there are enough parallel lines the fitted plane will be the plane at infinity. This special plane lets us upgrade the existing projective representation of the scene to affine space. We have modified the RANSAC algorithm in order to compute the plane at infinity. RANSAC, "RANdom SAmple Consensus", was first proposed by [6].

2 Background

In this section, some preliminaries on projective geometry are introduced. For more detailed information, refer to Hartley's book [11] or to Pollefeys' thesis [16].

2.1 The projective plane

In projective geometry, a point on a plane is represented by a homogeneous 3-vector x⃗ = (x₁, x₂, x₃)⊤ such that at least one entry is not zero. This point in euclidean space is represented by the 2-vector (x, y) = (x₁/x₃, x₂/x₃). If x₃ = 0, the point is at infinity. Two points x⃗₁ and x⃗₂ are said to be projectively equivalent if there exists a nonzero scalar λ such that x⃗₁ = λx⃗₂. A point transformation in the projective plane is represented by a non-singular 3 × 3 matrix: x⃗′ = Hx⃗. Lines in the projective plane are also defined by 3-vectors and have the form l⃗ = (a, b, c)⊤. A point x⃗ lies on the line l⃗ if and only if x⃗⊤l⃗ = 0. A line can be defined by two points as l⃗ = x⃗ × x⃗′. In the same manner, the intersection point of two lines is given by x⃗ = l⃗ × l⃗′. This symmetry applies to any statement concerning points and lines and is called the duality principle of the projective plane.

2.2 The projective space

Similarly, points in projective space are represented by homogeneous 4-vectors X⃗ = (X₁, X₂, X₃, X₄)⊤. A point transformation in projective space is represented by a non-singular 4 × 4 matrix: X⃗′ = HX⃗. A plane in projective space is also represented by a 4-vector π⃗ = (π₁, π₂, π₃, π₄)⊤. A point X⃗ lies on the plane π⃗ if:

    π⃗⊤ X⃗ = 0    (1)

Equation 1 is unaffected by scalar multiplication, so only the ratios {π₁ : π₂ : π₃ : π₄} are significant. This means that a plane in projective space has 3 degrees of freedom. From this fact it follows that a plane can be defined by three non-collinear points X⃗₁, X⃗₂ and X⃗₃:

    ⎡ X⃗₁⊤ ⎤
    ⎢ X⃗₂⊤ ⎥ π⃗ = 0⃗    (2)
    ⎣ X⃗₃⊤ ⎦
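Numerically, Equation 2 can be solved as the right null space of the stacked point matrix, obtained via SVD. A minimal sketch in Python/numpy (the helper name is ours, not from the paper):

```python
import numpy as np

def plane_from_points(X1, X2, X3):
    """Fit the plane through three homogeneous 4-vectors (Equation 2).

    The plane is the right null vector of the 3x4 matrix stacking the
    points; the SVD makes the computation robust to small numerical noise.
    """
    M = np.vstack([X1, X2, X3])   # 3x4 coefficient matrix
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]                 # singular vector of the smallest singular value

# Example: three points with X4 = 0 lie on the canonical plane at infinity.
pts = [np.array([1.0, 0, 0, 0]),
       np.array([0, 1.0, 0, 0]),
       np.array([0, 0, 1.0, 0])]
pi = plane_from_points(*pts)
# pi is proportional to (0, 0, 0, 1): every input satisfies pi . X = 0
```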

Figure 1: Metric, affine and projective transforms applied on an object in euclidean space.

The duality principle is also applicable in projective space. In this case, the dual of a point is a plane. In the same way that a plane is defined by three points, a point can be defined by three planes:

    ⎡ π⃗₁⊤ ⎤
    ⎢ π⃗₂⊤ ⎥ X⃗ = 0⃗
    ⎣ π⃗₃⊤ ⎦

Lines in projective space can be defined by two points or by the intersection of two planes. They have various possible representations, but in this work we have used the null-space and span representation. A line is represented by a 2 × 4 matrix of the form

    W = ⎡ A⃗⊤ ⎤
        ⎣ B⃗⊤ ⎦

where the span λA⃗ + μB⃗ is the pencil of points lying on the line. In the same way, a line can be represented as the intersection of two planes using a matrix composed of two planes:

    W* = ⎡ P⃗⊤ ⎤
         ⎣ Q⃗⊤ ⎦

The span λ′P⃗ + μ′Q⃗ is the pencil of planes with the line as axis.

2.3 The stratification of 3D geometry

We are used to perceiving the world in a euclidean way. If we have a euclidean representation of an object, we can measure almost everything (lengths, angles, volumes, etc.). However, it is sometimes impossible to obtain a euclidean representation of an object, so we must handle a simplified version of it. For example, many SFM (Structure From Motion) algorithms cannot obtain a euclidean structure if the cameras are not calibrated and a sufficient number of images is not supplied. That is why we need the concept of the stratification of 3D geometry [4]. In 3D geometry there are four different layers: projective, affine, metric and euclidean. Each layer is a subset of the previous one and has an associated group of transformations that leave some properties of the geometrical entities invariant.

The euclidean layer is the most restrictive one. A euclidean transformation is represented by a 6-degrees-of-freedom 4 × 4 matrix, 3 for rotation and 3 for translation:

    T_e = ⎡ R    t⃗ ⎤
          ⎣ 0⃗⊤  1 ⎦

where t⃗ = (t_x, t_y, t_z)⊤ is a 3D translation vector and R is a 3D orthogonal rotation matrix. This group of transformations leaves the volume of objects invariant.

The next layer is the metric or similarity one. The transformations related to this layer are 7-degrees-of-freedom 4 × 4 matrices represented by

    T_m = ⎡ sR   t⃗ ⎤
          ⎣ 0⃗⊤  1 ⎦

where s is a scalar representing an arbitrary scale factor. This transformation leaves invariant relative distances, angles and a special entity called the absolute conic.

The next layer is the affine one. Affine transformations are 12-degrees-of-freedom 4 × 4 matrices represented by

    T_a = ⎡ A    t⃗ ⎤
          ⎣ 0⃗⊤  1 ⎦

where A is an invertible 3 × 3 matrix. Transformations belonging to this layer leave invariant the parallelism of planes, volume ratios, centroids and a special entity called the plane at infinity.

The least restrictive layer is the projective one. Its transformations are 15-degrees-of-freedom 4 × 4 matrices represented by

    T_p = ⎡ A    t⃗ ⎤
          ⎣ v⃗⊤  v ⎦

where v⃗ is a 3-vector and v is a scalar. These transformations leave invariant intersections and tangencies of surfaces in contact. Figure 1 shows the effect of applying these transformations to a cube represented in euclidean space.
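The invariants listed above can be checked numerically. The following sketch (the concrete matrices and lines are illustrative values of ours, not from the paper) applies an affine transformation and a general projective transformation to two parallel 3D lines and tests whether the transformed directions remain parallel:

```python
import numpy as np

# Two parallel 3D lines, each given by two homogeneous points (x, y, z, 1).
d = np.array([1.0, 2.0, 3.0])                      # shared direction
line1 = [np.r_[0.0 * d, 1.0], np.r_[1.0 * d, 1.0]]
line2 = [np.r_[np.array([5.0, 0.0, 0.0]) + t * d, 1.0] for t in (0.0, 1.0)]

def direction(p0, p1):
    """Unit direction of the segment after de-homogenizing both points."""
    v = p1[:3] / p1[3] - p0[:3] / p0[3]
    return v / np.linalg.norm(v)

A = np.array([[2.0, 0.3, 0.1], [0.0, 1.5, 0.2], [0.4, 0.0, 1.8]])  # invertible
t = np.array([[0.5], [-1.0], [2.0]])
Ta = np.block([[A, t], [np.zeros((1, 3)), np.ones((1, 1))]])  # affine T_a
Tp = Ta.copy()
Tp[3, :3] = [0.1, -0.2, 0.3]                                  # projective T_p

def parallel_after(T):
    d1 = direction(*(T @ p for p in line1))
    d2 = direction(*(T @ p for p in line2))
    return bool(np.allclose(np.cross(d1, d2), 0.0, atol=1e-9))

par_affine = parallel_after(Ta)   # affine map keeps the lines parallel
par_proj = parallel_after(Tp)     # a generic projective map does not
```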

3 Stratified 3D reconstruction

Having only two views of a scene captured by an uncalibrated camera, it is only possible to retrieve a 3D reconstruction up to an arbitrary projective transformation. This means that if we want the real euclidean structure, we have to apply a projective transformation to the retrieved reconstruction. In the example shown in Figure 1, the transformation required to upgrade the projective cube to euclidean geometry is T_p⁻¹. This may not be sufficient for some tasks, like augmented reality, but it can be the starting point of a stratified reconstruction, i.e. adding richer information to the reconstruction that allows it to be upgraded to the next geometric layer. First, the projective reconstruction is upgraded to the affine layer, then to the metric one and finally to the euclidean one.

3.1 Projective reconstruction

The projective reconstruction of the scene can be recovered using the fundamental matrix F [14]. It is an 8-degrees-of-freedom 3 × 3 matrix that relates the projections of a 3D point in two different views. If x⃗ and x⃗′ are images of the same 3D point, the relation is given by x⃗′⊤ F x⃗ = 0. Given eight image-to-image correspondences, the fundamental matrix can be obtained linearly [15] [8]. From the fundamental matrix, a projective 3D transformation relating the two views can be obtained. With this information, the 3D structure can be recovered via linear triangulation methods [2].
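As a hedged illustration of the last step, the sketch below performs linear (DLT) triangulation of a single point, assuming two camera matrices P₁ and P₂ consistent with F are already available; the camera setup and function name are ours, not the paper's implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera matrices; x1, x2: homogeneous image points.
    Builds the standard DLT system and takes its SVD null vector.
    """
    x1, x2 = x1 / x1[2], x2 / x2[2]       # de-homogenize image points
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]                        # normalized homogeneous 3D point

# Two simple projective cameras observing the point (1, 2, 5):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])          # [I | 0]
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])    # translated in x
X_true = np.array([1.0, 2.0, 5.0, 1.0])
X = triangulate(P1, P2, P1 @ X_true, P2 @ X_true)      # recovers (1, 2, 5, 1)
```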

3.2 From projective to affine

Upgrading from projective to affine amounts to finding the plane at infinity. This is the plane where parallel lines meet. In affine space, this plane is truly located at infinity and has coordinates π⃗∞ = (0, 0, 0, 1)⊤, so points lying on it have X₄ = 0 (they are at infinity). However, this does not happen in projective space, because parallelism is not an invariant property of it. Lines that are parallel in affine space meet on an ordinary plane in projective space (not at infinity); it is said that the plane at infinity is not in its canonical position. Finding this plane allows us to obtain a transformation that moves it back to its canonical position, making parallel lines meet again at infinity. The projective transformation that upgrades from projective to affine space is defined by:

    T_pa = ⎡ I₃    0⃗ ⎤    ⇔    T_pa⁻⊤ π⃗∞ = (0, 0, 0, 1)⊤
           ⎣   π⃗∞⊤   ⎦

where the upper block [I₃ | 0⃗] is 3 × 4 and the lower row is the 1 × 4 vector π⃗∞⊤.
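Once π⃗∞ has been identified, applying the upgrade is direct. A small sketch (the function name and the example plane are illustrative, not from the paper):

```python
import numpy as np

def affine_upgrade(points, pi_inf):
    """Apply T_pa = [I3 0; pi_inf^T] to homogeneous points of shape (n, 4).

    pi_inf is rescaled so its last entry is 1, keeping T_pa invertible;
    points lying on pi_inf are mapped to the canonical plane X4 = 0.
    """
    pi = pi_inf / pi_inf[3]
    T_pa = np.eye(4)       # rows 0-2 are already [I3 | 0]
    T_pa[3, :] = pi        # bottom row is the (rescaled) plane at infinity
    return (T_pa @ points.T).T

pi_inf = np.array([1.0, 1.0, 1.0, -1.0])       # an illustrative plane
P = np.array([[1.0, 0.0, 0.0, 1.0],            # lies on pi_inf (1 - 1 = 0)
              [2.0, 0.0, 0.0, 1.0]])           # does not
Pa = affine_upgrade(P, pi_inf)
# Pa[0, 3] == 0: the on-plane point is sent to infinity; Pa[1, 3] != 0
```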

As seen in Figure 2, three sets of parallel lines are enough to locate the plane at infinity using Equation 2.

Figure 2: The plane at infinity is the plane defined by V⃗_x, V⃗_y and V⃗_z. These points are the vanishing points of the cube.

3.3 From affine to metric

The metric level is the richest one that can be obtained from images. Upgrading from affine to metric amounts to finding the absolute conic Ω∞. This entity is a planar conic that lies on the plane at infinity. The key in this step is to find an affine transformation that maps the identified conic to its canonical position in euclidean space (X₁² + X₂² + X₃² = 0 and X₄ = 0). However, this is beyond the scope of this work. Interested readers can refer to [18].

3.4 From metric to euclidean

This step can be performed only if real lengths of the reconstructed object are known. This allows us to obtain a scale factor s that upgrades the metric reconstruction to euclidean space.

4 Upgrading to affine space using RANSAC

We propose a new method to upgrade a projective reconstruction to affine space. Our method is based on locating the plane at infinity, assuming that there exist parallel lines in the scene. This is an acceptable assumption in almost any man-made scene. The first step is to obtain a projective reconstruction of the scene [17] and then locate the maximum number of lines on it. The line search can be done either manually or automatically. If the projective reconstruction has been retrieved from an image pair (using the fundamental matrix and feature point matching), the lines can be detected in the images by locating the feature points that define them and then matching these points with the reconstructed vertices. Figure 3 shows this process. In this way, the result is a 3D structure containing edges and not only points. These 3D edges are the only information needed to obtain the plane at infinity. There are several edge detectors that can be used to carry out this detection, like the Canny operator [1] or the Hough transform [3].

Figure 3: Edge matching process. (a) and (b) feature matching between images. (c) Projective reconstruction. (d) Projective reconstruction after edge detection and matching.

Once lines are detected, the next step is to compute the intersection of all line pairs. For numerical stability reasons, it is a good idea to normalize the 3D points so that their centroid is at the origin and the root-mean-squared distance of the 3D points to the origin is √3. Some of these intersection points will be vanishing points and others will not. The idea is to find the vanishing points and use them to upgrade the projective structure to affine space. Since vanishing points are located on the plane at infinity, there will be a plane, defined by three of the computed intersection points, that contains all the vanishing points. This plane can be identified as the plane to which the most intersection points belong, assuming that intersection points not corresponding to parallel lines (non-vanishing points) are randomly scattered across the space.

4.1 Data normalization

The transformation matrix related to this normalization is:

    H_N = ⎡ √3/rms   0        0        −√3·c_x/rms ⎤
          ⎢ 0        √3/rms   0        −√3·c_y/rms ⎥
          ⎢ 0        0        √3/rms   −√3·c_z/rms ⎥
          ⎣ 0        0        0         1          ⎦

where rms is the root-mean-squared distance of the 3D points to the origin and c⃗ = (c_x, c_y, c_z)⊤ is their centroid. This normalization can be inverted once the plane at infinity is found, before doing the affine rectification. If π⃗∞ is the plane at infinity of the unnormalized structure, then from Equation 1 it follows that

    π⃗∞⊤ X⃗ = π⃗∞⊤ H_N⁻¹ H_N X⃗ = (H_N⁻⊤ π⃗∞)⊤ (H_N X⃗)

so the plane at infinity recovered from the normalized structure is π⃗∞N = H_N⁻⊤ π⃗∞. The original plane can be retrieved from the normalized one using π⃗∞ = H_N⊤ π⃗∞N.

4.2 Computing intersection points

The intersection of two lines defined respectively by the pairs of points (A⃗, B⃗) and (C⃗, D⃗) can be computed by equating the spans of these points:

    λ₁A⃗ + μ₁B⃗ = λ₂C⃗ + μ₂D⃗

This leads to the following linear system of equations:

    ⎡ A₁  B₁  −C₁  −D₁ ⎤ ⎡ λ₁ ⎤
    ⎢ A₂  B₂  −C₂  −D₂ ⎥ ⎢ μ₁ ⎥ = 0⃗
    ⎢ A₃  B₃  −C₃  −D₃ ⎥ ⎢ λ₂ ⎥
    ⎣ A₄  B₄  −C₄  −D₄ ⎦ ⎣ μ₂ ⎦

The solution of this system can be computed as the singular vector corresponding to the smallest singular value of the coefficient matrix. This yields two intersection points, λ₁A⃗ + μ₁B⃗ and λ₂C⃗ + μ₂D⃗. Theoretically they must be equal, but they can be different because the lines might not intersect exactly, as shown in Figure 4, due to noise or numerical stability reasons.
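This solution can be implemented with a standard SVD routine. A sketch (the helper name is ours):

```python
import numpy as np

def intersect_lines(A, B, C, D):
    """Intersect the 3D lines span(A, B) and span(C, D) (homogeneous 4-vectors).

    Solves [A B -C -D] (l1, m1, l2, m2)^T = 0 via SVD and returns the two
    (ideally equal) intersection candidates and their midpoint.
    """
    M = np.column_stack([A, B, -C, -D])   # 4x4 coefficient matrix
    _, _, Vt = np.linalg.svd(M)
    l1, m1, l2, m2 = Vt[-1]
    X1 = l1 * A + m1 * B                  # point on the first line
    X2 = l2 * C + m2 * D                  # point on the second line
    return X1, X2, 0.5 * (X1 + X2)        # midpoint handles near-misses

# Two lines in the z = 0 plane crossing at (1, 1, 0):
A, B = np.array([0.0, 0, 0, 1]), np.array([2.0, 2, 0, 1])   # line y = x
C, D = np.array([0.0, 2, 0, 1]), np.array([2.0, 0, 0, 1])   # line y = 2 - x
X1, X2, Xm = intersect_lines(A, B, C, D)
# Xm, de-homogenized, is (1, 1, 0)
```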

Figure 4: The lines might not intersect.

A good choice can be the middle point of the segment joining these two points.

4.3 Plane localization

The plane at infinity is located using RANSAC [6] over all the computed intersection points. RANSAC is an iterative algorithm that randomly takes groups of three points and computes the plane defined by them using Equation 2. Then, the restriction described in Equation 1 is tested on all intersection points and, if the residual error is near zero, the point is considered to lie on the plane. Points lying on the plane are called inliers and points not lying on it are called outliers. RANSAC takes as the solution the group that generates the fewest outliers. The parameterization needed is the threshold on the residual of Equation 1 used to decide that a point lies on the plane, and the maximum number of iterations allowed to RANSAC.

Theoretically, all lines along the same direction intersect in the same vanishing point. However, due to noise in the points defining the lines and to numerical stability reasons, this may not be true. As seen in Figure 5, small perturbations in the position of a vertex modify the intersection point considerably. Furthermore, the error in the intersection is proportional to the distance from the computed intersection point to the corrupted vertex. If V⃗ is a point belonging to the line λA⃗ + μB⃗, then, if B⃗ is corrupted, the error in V⃗ will grow linearly with μ:

    λA⃗ + μ(B⃗ + E⃗_B) = V⃗ + μE⃗_B    (3)

To solve this problem, intersection points close to each other are considered equal and their centroid is taken as the true intersection point, since the error in the centroid will be less than or equal to the maximum error. Due to Equation 3, the distance used to compare points depends linearly on the norm of one of them.

Figure 5: Computing intersections with noise.

Since the intersection points that appear many times are suspected of being vanishing points, we have modified the original RANSAC to take this into account. Each intersection point has an associated weight w_i that conditions its probability of being chosen by RANSAC. This coefficient is selected according to the number of times the point appears when computing intersections. Thus, if there are n intersection points and P⃗ is the centroid of k nearby points, the weight associated to P⃗ would be w_P = k/n, so that Σ w_i = 1. These modifications make RANSAC faster and more robust against noise in the vertex positions.

4.4 Convergence

In the absence of noise, the minimum number of pairs of parallel lines required is four, where at least three of them have different directions. If only three pairs are supplied, the maximum number of inliers detected when RANSAC tries this group would be three, just as when trying any other group of non-parallel lines.

5 Experimental results

The reconstruction algorithm has been tested using synthetic data. The data sets are scenes containing 3D objects with and without parallel lines. The models used are described in Table 1 and shown in Figure 6.

Table 1: Synthetic models.

    Model   Lines   Pairs intersecting on π⃗∞
    A       36      18%
    B       48      13%
    C       36      11%

The first test runs the algorithm over a projective reconstruction of each model, adding outliers. These outliers are randomly generated points added to the intersection point list; they act as intersection points of non-parallel lines. Figure 7 shows that the algorithm is very robust against outliers, since the proportion of inlying points (intersections of parallel lines) can stay below 10% in most cases. The success rate decreases approximately linearly with the amount of outliers present in the data set.

The robustness of the algorithm when running with noisy data is shown in Figure 8. In this test, the position of the normalized 3D points is corrupted with uniformly distributed additive random noise. This noise affects edges as shown in Figures 4 and 5, making them not intersect. The error interval is [−Max. Error, +Max. Error]. As seen in the graphs, the algorithm is very sensitive to noise; however, its robustness increases with the inlier proportion.

Figure 6: Generated synthetic models. (a) Model A (b) Model B (c) Model C

Figure 9 shows the number of iterations needed by the algorithm to converge in the tests shown in Figure 8.

6 Conclusions

The method introduced here can extract important information about the scene from a pair of images. Moreover, the affine information can be used to determine the lens distortion, since the parallelism information can be used to rectify the images. This probabilistic approach has the advantage of needing less information than other affine calibration methods and does not require any special knowledge of the scene. It does need the presence of parallel lines, but experimental results show that the amount of parallel lines needed by the algorithm is small (10-15%).

Although two images are the minimum information required to detect parallel lines, this algorithm can be adapted to a more general case. Since it depends heavily on the quality of the projective reconstruction, all the existing fundamental matrix optimization techniques are applicable here, like Levenberg-Marquardt [13]. Moreover, having more than two images allows us to identify degenerate solutions, for example by imposing cheirality constraints [10].

Experimental results show that the proposed method is robust against outliers but sensitive to noise. A possible solution to this problem may be to combine the outlier detection provided by RANSAC with the robustness against noise provided by other probabilistic methods like genetic algorithms. Since the time needed to run one RANSAC iteration is small in this case (solving a linear system to compute the plane and a vector multiplication for every point), the method presented here is fast and can be executed in real time if needed. Moreover, since the RANSAC iterations are independent of each other, the algorithm is highly parallelizable and can run fast on multi-core architectures.

References

[1] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6):679–698, 1986.
[2] K. Cornelis, M. Pollefeys, M. Vergauwen, and L. Van Gool. Augmented reality using uncalibrated video sequences. Lecture Notes in Computer Science, 2018:144–160, 2001.
[3] R. O. Duda and P. E. Hart. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM, 15(1):11–15, 1972.
[4] O. Faugeras. Stratification of 3-dimensional vision: projective, affine, and metric representations. Journal of the Optical Society of America, 12(3):465–484, 1995.
[5] O. Faugeras, Q.-T. Luong, and S. J. Maybank. Camera self-calibration: theory and experiments. In European Conference on Computer Vision, volume 588, pages 321–334, 1992.
[6] M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 24(6):381–395, 1981.
[7] R. Hartley. Self-calibration from multiple views with a rotating camera. In ECCV '94: Proceedings of the Third European Conference on Computer Vision, volume 1, pages 471–478. Springer-Verlag, 1994.
[8] R. Hartley. In defence of the eight-point algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(6):580–593, 1997.
[9] R. Hartley. Kruppa's equations derived from the fundamental matrix. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(2):133–135, 1997.
[10] R. Hartley, E. Hayman, L. Agapito, and I. D. Reid. Camera calibration and the search for infinity. In International Conference on Computer Vision, volume 1, pages 510–517. IEEE Computer Society, 1999.
[11] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.
[12] E. E. Hemayed. A survey of camera self-calibration. In IEEE Conference on Advanced Video and Signal Based Surveillance, pages 351–357, 2003.
[13] K. Levenberg. A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics, 2(2):164–168, 1944.
[14] H. C. Longuet-Higgins. A computer algorithm for reconstructing a scene from two projections. Nature, 293:133–135, September 1981.
[15] Q.-T. Luong and O. Faugeras. A stability analysis of the fundamental matrix. In European Conference on Computer Vision, pages 577–588, 1994.
[16] M. Pollefeys. Self-calibration and Metric 3D Reconstruction from Uncalibrated Image Sequences. PhD thesis, Katholieke Universiteit Leuven, 1999.
[17] P. Torr. A structure and motion toolkit in Matlab. Technical Report MSR-TR-2002-56, Microsoft Research, 2002.
[18] B. Triggs. Autocalibration and the absolute quadric. In IEEE Computer Vision and Pattern Recognition, pages 609–614, 1997.
[19] R. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, 4:323–344, 1987.
[20] G. H. Wang, H. T. Tsui, Z. Y. Hu, and F. C. Wu. Camera calibration and 3D reconstruction from a single view based on scene constraints. Image and Vision Computing, 23:311–323, 2005.
[21] Z. Zhang. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330–1334, 2000.
[22] H. Zhong and Y. S. Hung. Self-calibration from one circular motion sequence and two images. Pattern Recognition, 39(9):1672–1678, 2006.

Figure 7: Robustness of the algorithm against outliers.

Figure 8: Response of the algorithm when running with noisy data.

Figure 9: Iterations done by RANSAC.
