AUTOMATIC REGISTRATION OF SAR AND OPTICAL IMAGES BASED ON MUTUAL INFORMATION ASSISTED MONTE CARLO

Muhammad Adnan Siddique 1, M. Saquib Sarfraz 1,2, David Bornemann 3, Olaf Hellwich 3

1 Computer Vision (COMVis) Research Group, COMSATS Institute of IT, Lahore, Pakistan
2 Institute for Anthropomatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
3 Computer Vision and Remote Sensing Lab, Technical University of Berlin, Berlin, Germany

ABSTRACT

The development of Geographical Information Systems (GIS) applications involving fusion of data from different space-borne imaging sensors inevitably requires a preliminary registration of the images. In the case of Synthetic Aperture Radar (SAR) and optical sensors, the registration is particularly challenging due to the vast radiometric differences in the data. In this paper, we present a novel method to register SAR and optical images automatically. It provides an accurate registration despite the radiometric differences in the images. Moreover, this paper introduces a Monte Carlo formulation of the image registration problem.

Index Terms— Image registration, Monte Carlo, Random features, Mutual Information, SAR, Optical sensors

1. INTRODUCTION

Space-borne Earth imaging satellites continue to grow in number, leading to a perpetual increase in the volume of data available to potential users in academia and industry. The characteristics of the imaging sensors have also diversified over time; we now have both active and passive sensors in space, operating in rather different spectral bands and employing varied acquisition techniques. The spatial resolutions of the sensors also generally differ. Consequently, there are vast differences among the characteristics of the data from the different sensors. Images obtained with Synthetic Aperture Radar (SAR) and optical sensors serve as a typical example: despite covering the same terrain, the images exhibit a vast difference in their radiometric characteristics. SAR images are additionally affected by speckle noise, which tends to impede information extraction and interpretation. Users of remotely sensed data are generally concerned with higher-end computer vision tasks required in developing Geographical Information System (GIS) applications for scientific analysis. GIS application development, nonetheless, inevitably depends on a reliable implementation of lower-end tasks, such as ‘registration’ of images from the different sensors, which is mandatory for developing applications involving data fusion, change detection, image overlay, etc. Therefore, taking full advantage of the abundance of data is directly tied to performing better at these lower-end tasks. In this paper, we present a novel method to register SAR and optical images automatically. The method provides an accurate registration despite the radiometric differences in the images.

2. RELATED WORK

Image registration refers to aligning two (or more) images to obtain a coordinate-level correspondence, i.e. corresponding coordinates in the two images must refer to the same physical terrain/object. The problem of registration of multi-sensor satellite imagery has been frequently addressed, and numerous solutions have been offered. However, they are generally applicable only in particular scenarios, or between particular sensors. Manual approaches are still used by many GIS analysts: some ground control points are manually marked in the images, and a transformation model is enforced to bring them into correspondence. Among the automated solutions, traditional approaches may broadly be characterized as area-based or feature-based [1]. Area-based (also referred to as template-matching or intensity-based) approaches consider registration as an optimization problem [2]. Under a chosen similarity metric, an ‘optimal’ transformation is deduced such that its application offers maximum ‘similarity’ between the images in terms of coordinate-level correspondence. In this context, different similarity metrics have been proposed over time, as in [3, 4]. Mutual Information (MI) has emerged as a preferred choice [2, 4] among other metrics (such as the correlation coefficient, correlation ratio, etc.). However, area-based approaches are generally useful only when the images differ by a translation. The presence of any non-linear deformation motivates the use of feature-based approaches, which strive to extract ‘homologous’ entities (subsequently referred to as features) from the images. These entities could be points (like building corners, landmarks), lines or contours (e.g. roads, coastlines), or even regions. After feature extraction, the features are matched to establish correspondence among them; subsequently, on the basis of the pixel locations of the matched features, a spatial transformation is applied to register the images. In recent years, some hybrid approaches have also emerged, as in [5], which tend to combine the advantages of both approaches.

3. MI ASSISTED MONTE CARLO FORMULATION

Scale Invariant Feature Transform (SIFT) has been proposed in [6, 7], with some modifications, for feature-based registration of multi-sensor imagery. SIFT seems to provide good results, though the investigations have been restricted to SAR images only (albeit from different sensors). We investigated the use of SIFT for point feature extraction and matching between SAR and optical images. The SAR image used is a high-resolution TerraSAR-X image (SL-S-EEC-RE, pixel size 1.5 m, courtesy Infoterra GmbH) of the city of Rosenheim, Germany. The optical image of the corresponding area is taken from Google Earth (with the same pixel size as the SAR image). Our results

Fig. 1. SIFT feature extraction and matching between SAR (left) and Optical (right) images

Fig. 2. 100 keypoints randomly initialized in SAR (left) and Optical (right) images

are shown in Figure 1. Clearly, the matching is poor. Moreover, it is important to note that most of the point features returned by SIFT do not represent the same physical point in the two images. These two observations lead to the natural inference that, in the case of wide differences in the radiometric characteristics of the images, as for SAR and optical images, it is very difficult, if not impossible, to extract features in the two images that exhibit perfect correspondence, i.e. features representing exactly the same physical point in the imaged scene. We propose a novel solution to this problem, as explained next.

3.1. Monte Carlo Approach

We argue that, since the radiometric differences between the SAR and optical images prevent good correspondence among extracted features, feature extraction is an extraneous step. In our Monte Carlo approach, we propose initializing ‘random features’ and following them up with a powerful feature-matching step. Our approach thus obviates the need to employ a feature extractor. The details of the approach are as follows.

3.1.1. Random Initialization

We randomly initialize points (hereafter referred to as keypoints) in both images under a uniform distribution, as shown in Figure 2. The uniform distribution ensures that we do not bias any particular region in the images; i.e. the probability of keypoints falling in one region of an image is the same as that for any other region of the same size. Moreover, there is no correlation among the keypoints across the two images. This scenario is analogous to a feature extractor performing at its worst.
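For illustration, a minimal sketch of this initialization step is given below. It assumes the images are available as 2-D NumPy arrays; the function and variable names (and the border margin) are ours, not prescribed by the paper.

```python
import numpy as np

def init_random_keypoints(image_shape, n_keypoints=100, margin=0, rng=None):
    """Draw keypoints uniformly over an image of the given (rows, cols) shape.

    A margin can be kept free at the borders so that a square matching
    window centred on a keypoint stays inside the image.
    """
    rng = np.random.default_rng(rng)
    rows, cols = image_shape
    ys = rng.integers(margin, rows - margin, size=n_keypoints)
    xs = rng.integers(margin, cols - margin, size=n_keypoints)
    return np.stack([xs, ys], axis=1)  # (N, 2) array of (x, y) positions

# Keypoints are drawn independently for each image, so there is no
# correlation between the two sets, as described in Section 3.1.1.
# kp_sar = init_random_keypoints(sar_image.shape, n_keypoints=100, margin=32)
# kp_opt = init_random_keypoints(opt_image.shape, n_keypoints=100, margin=32)
```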

3.1.2. Matching with normalized Mutual Information (MI)

Traditionally, mutual information (MI) has been used in area-based approaches. Another novelty introduced by our work is to use MI for feature matching as well: MI is used to find the best match for a keypoint in one image against all the candidates in the other image. Firstly, candidate regions C are cropped out using a square window W of a predefined size, centered on each keypoint. Considering I_s and I_o to be the SAR and optical images respectively, and the number of random initializations to be N, the i-th candidate region in the SAR image and the j-th candidate region in the optical image are

C_s^i = I_s \circ W^i, \qquad C_o^j = I_o \circ W^j

where \circ represents the cropping operation, W^i is the window centered at the i-th keypoint in the SAR image and W^j is the window centered at the j-th keypoint in the optical image, with i, j \in \{1, 2, \ldots, N\}. The normalized MI (as proposed in [2]) of the two candidate regions is

MI(C_s^i, C_o^j) = \frac{H(C_s^i) + H(C_o^j)}{H(C_s^i, C_o^j)}

where H(C_s^i) and H(C_o^j) are the marginal Shannon entropies, and H(C_s^i, C_o^j) is the joint Shannon entropy of the candidate regions. These entropies are computed from the marginal and joint probability mass functions (histograms) of the candidate regions, as in [2]. The matching of keypoints across the two images is then established as

m_i = \arg\max_j \; MI(C_s^i, C_o^j)

where m_i denotes the keypoint in the optical image that is matched to the i-th keypoint in the SAR image, henceforth represented as i → m_i.
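A minimal sketch of the normalized MI above and of the exhaustive matching over all keypoint pairs is given below. It assumes intensity images quantized into a fixed number of histogram bins; the helper names, window half-size, and bin count are illustrative choices rather than values taken from the paper.

```python
import numpy as np

def normalized_mi(patch_a, patch_b, bins=32):
    """Normalized mutual information (H(A) + H(B)) / H(A, B) of two patches."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pab = joint / joint.sum()                      # joint pmf
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)      # marginal pmfs
    eps = 1e-12                                    # avoid log(0)
    h_a = -np.sum(pa * np.log2(pa + eps))
    h_b = -np.sum(pb * np.log2(pb + eps))
    h_ab = -np.sum(pab * np.log2(pab + eps))
    return (h_a + h_b) / h_ab

def crop(image, x, y, half):
    """Square window of size (2*half+1) centred on (x, y); the keypoint is
    assumed to lie far enough from the border (see the margin used above)."""
    return image[y - half:y + half + 1, x - half:x + half + 1]

def match_keypoints(sar, opt, kp_sar, kp_opt, half=32, bins=32):
    """For every SAR keypoint i, return m_i = argmax_j MI(C_s^i, C_o^j)."""
    mi = np.zeros((len(kp_sar), len(kp_opt)))
    for i, (xs, ys) in enumerate(kp_sar):
        cs = crop(sar, xs, ys, half)
        for j, (xo, yo) in enumerate(kp_opt):
            mi[i, j] = normalized_mi(cs, crop(opt, xo, yo, half))
    return mi.argmax(axis=1), mi                   # matches m_i and full MI table
```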

However, multiple keypoints in the SAR image may get matched to the same keypoint in the optical image; such ambiguities can be seen in Figure 3a. That is, for a given m_i there may be multiple instances of i, i.e. \{i_1, i_2, \ldots\} → m_i. This observation is not unexpected: since the keypoints were originally randomly initialized, the candidate regions are random too, and therefore a measure of statistical similarity (such as MI) applied to the candidate regions cannot in itself guarantee that the returned matches form a one-to-one correspondence across the two images. We resolve this multiplicity by retaining only the keypoint pair that exhibits the highest MI; i.e., from among the elements of the set \{i_1 → m_i, i_2 → m_i, \ldots\}, the one (i_r) that offers the highest MI is retained while the others are rejected:

i_r = \arg\max_{i_k} \; MI(C_s^{i_k}, C_o^{m_i}), \qquad i_r \to m_i

Having removed the ambiguities, if the number of matched pairs retained is n_r (out of N), then the set containing these pairs is

R^I = \{\, i_{r,l} \to m_{i,l} : l = 1, 2, \ldots, n_r \,\}

as shown in Figure 3b.

Fig. 3. Mutual Information (MI) Assisted Monte Carlo Formulation for registration of SAR (left) and Optical (right) images: (a) keypoints matched with MI (ambiguous matches exist); (b) keypoint pairs with the highest MI; (c) matched keypoints after recursive model fitting
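Continuing the sketch above, this one-to-one selection can be expressed as follows; the MI table `mi` and the match vector are assumed to come from the hypothetical `match_keypoints` helper introduced earlier.

```python
import numpy as np

def resolve_ambiguities(matches, mi):
    """Keep, for every optical keypoint m_i, only the SAR keypoint i_r
    with the highest MI; returns R^I as a list of (i_r, m_i) index pairs."""
    pairs = {}
    for i, m in enumerate(matches):
        score = mi[i, m]
        if m not in pairs or score > pairs[m][1]:
            pairs[m] = (i, score)                   # retain the strongest claimant
    return [(i, m) for m, (i, _) in pairs.items()]  # R^I, with n_r <= N elements
```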

3.1.3. Recursive Model Fitting

Since the keypoints were randomly initialized, we do not expect the retained matched pairs to show perfect correspondence. In principle, however, the matched keypoints should tend to fall more or less in the same neighborhood of the physical scene; it can be seen in Figure 3b that most of the matches are correct in this sense. In order to reject the outliers, we propose a recursive model-fitting approach. We take a matched pair and compute an initial transformation model parameter. Since a matched pair corresponds to two points, one in each image, the transformation model parameter here is essentially the gradient g of the straight line segment joining the two points (with the images placed side by side, as in Figure 3):

g_1 = \frac{y_o^{m_{i_1}} - y_s^{i_{r_1}}}{\left(x_o^{m_{i_1}} + X_s\right) - x_s^{i_{r_1}}}

where g_1 represents the gradient of the first pair, (x_s, y_s) and (x_o, y_o) are the coordinates in the SAR and the optical images respectively, and X_s is the horizontal offset of the optical image in the side-by-side view. Next, we find those among the remaining pairs that satisfy this model parameter within a tolerance τ. These selected pairs are given in the following set:

G_1 = \{\, k : |g_k - g_1| \le \tau \,\}

We repeat this step until we have computed transformations using all the matched pairs and have found the sets of pairs that tend to satisfy them, i.e. \{G_1, G_2, \ldots, G_{n_r}\}. The pairs in the set exhibiting the highest cardinality are retained, while the others are discarded. The set containing these pairs is

R^{II} = G_L, \qquad L = \arg\max_l |G_l|

with R^{II} \subseteq R^I. Figure 3c shows these matched pairs.
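A sketch of this consensus step is given below. It treats the line gradient as the single model parameter, as in the text; the side-by-side offset `x_offset` and the tolerance `tau` are illustrative parameters, not values prescribed by the paper.

```python
import numpy as np

def recursive_model_fitting(kp_sar, kp_opt, pairs, x_offset, tau=0.05):
    """Retain the largest set of pairs whose line gradients agree within tau.

    pairs    : list of (i, m) index pairs from R^I
    x_offset : horizontal shift of the optical image in the side-by-side view
               (e.g. the SAR image width)
    Returns R^II, the subset of pairs belonging to the largest consensus set G_L.
    """
    grads = np.array([
        (kp_opt[m][1] - kp_sar[i][1]) /
        ((kp_opt[m][0] + x_offset) - kp_sar[i][0])
        for i, m in pairs
    ])
    # For every pair l, collect the pairs whose gradient lies within tau of g_l.
    consensus = [np.flatnonzero(np.abs(grads - g) <= tau) for g in grads]
    best = max(consensus, key=len)                 # G_L with the highest cardinality
    return [pairs[k] for k in best]                # R^II, a subset of R^I
```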

3.2. Improved localization of keypoints

Although the outliers have been discarded and the set R^{II} contains only the keypoint pairs that are well matched, perfect correspondence (in terms of the precise physical location of the keypoints in the two images) is not yet guaranteed. Therefore, the localization of the keypoints in one image needs to be improved relative to their matches in the other image. We use an area-based implementation of normalized MI to refine the localization of the keypoints, as depicted in Figure 4:

(\Delta x_m^p, \Delta y_m^p) = \arg\max_{(\Delta x, \Delta y)} \; MI\!\left(C_s^p,\; I_o \circ W_{(x_o + \Delta x,\, y_o + \Delta y)}\right), \qquad p = 1, 2, \ldots, |R^{II}|

For the p-th keypoint in the SAR image (from among the keypoint pairs retained in R^{II}), the location of the corresponding match in the optical image undergoes a translation of (\Delta x_m^p, \Delta y_m^p) to compute the new match point, which provides an improved correspondence.

Fig. 4. Improvement of keypoint localization: (a) area-based MI to improve localization of keypoints; (b) improved keypoints in SAR (left) and Optical (right) images
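A sketch of this refinement, reusing the hypothetical `normalized_mi` and `crop` helpers from the earlier snippets, is shown below; the search radius `search` is an illustrative parameter.

```python
import numpy as np

def refine_match(sar, opt, kp_s, kp_o, half=32, search=10, bins=32):
    """Shift the optical keypoint within +/- search pixels so that the MI
    against the fixed SAR patch C_s^p is maximized; returns the new (x, y)."""
    xs, ys = kp_s
    xo, yo = kp_o
    cs = crop(sar, xs, ys, half)                   # fixed SAR candidate region
    best, best_xy = -np.inf, (xo, yo)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            score = normalized_mi(cs, crop(opt, xo + dx, yo + dy, half), bins)
            if score > best:
                best, best_xy = score, (xo + dx, yo + dy)
    return best_xy
```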

4. RESULTS

For the registration of the aforementioned SAR and optical images, 100 keypoints were randomly initialized and matched using the proposed MI assisted Monte Carlo formulation. Prior to the initialization of the keypoints, the pixel sizes of the two images were brought to the same scale. The removal of the ambiguities and the outliers led to the retention of 6 matched pairs. These pairs, after improving their localization to ensure accurate correspondences, were used to compute a final transformation model, which was then applied to the images to register them. In this case, the model used was affine. The results of the intermediate steps of the proposed method are given in Figures 2-4, and the final registered images are shown in Figure 5. It can be seen that the registration is very accurate; the registered images are appropriate for any subsequent data fusion processing.

Fig. 5. Final registered images [overlaid]
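For completeness, a minimal sketch of this final step is given below: estimating an affine model from the refined pairs by least squares and warping the optical image onto the SAR grid. The use of `scipy.ndimage.affine_transform` is our illustrative choice; the paper does not specify an implementation.

```python
import numpy as np
from scipy import ndimage

def estimate_affine(src_rc, dst_rc):
    """Least-squares affine model mapping src (row, col) points to dst points."""
    src = np.asarray(src_rc, dtype=float)
    dst = np.asarray(dst_rc, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])      # [row col 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3x2 solution
    M = np.eye(3)
    M[:2, :] = params.T                               # homogeneous 3x3 model
    return M

def warp_to_sar(opt, model_opt_to_sar, sar_shape):
    """Resample the optical image onto the SAR grid under the affine model."""
    inv = np.linalg.inv(model_opt_to_sar)             # SAR -> optical coordinates
    return ndimage.affine_transform(opt, inv[:2, :2], offset=inv[:2, 2],
                                    output_shape=sar_shape, order=1)

# Usage sketch (hypothetical variables; (x, y) keypoints converted to (row, col)):
# model = estimate_affine(opt_pts[:, ::-1], sar_pts[:, ::-1])
# registered_opt = warp_to_sar(opt_image, model, sar_image.shape)
```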

5. CONCLUSION

In this paper, we have presented a novel approach to register multi-sensor imagery. It provides an automated solution to the registration problem, agnostic to the radiometric differences among the sensors and without incorporating the traditionally used feature extraction strategies. We initialize random keypoints (as random features) and follow up with a feature-matching strategy based on mutual information. A recursive model-fitting stage eliminates the outliers, and an area-based implementation of MI subsequently improves the localization of the keypoints. The final set of matched keypoint pairs returns a transformation model that is used to register the images.

6. FUTURE WORK

In further development, we aim to improve upon the proposed formulation. A possible improvement is the initialization of keypoints that are somewhat correlated across the two images on the basis of existing map-projection information. This might lead to fewer ambiguities and outliers, and thus simplify the subsequent correspondence analysis. The use of additional likelihood information within the matching process might also further reduce the number of multiple correspondences and support the registration process. Moreover, instead of a heuristic selection of the parameters, we aim to derive their optimal values in terms of the size and resolution of the imagery.

7. REFERENCES

[1] L. M. G. Fonseca and B. S. Manjunath, “Registration techniques for multisensor remotely sensed imagery,” Photogrammetric Engineering and Remote Sensing, vol. 62, no. 9, pp. 1049–1056, 1996.

[2] S. Suri and P. Reinartz, “Mutual-information-based registration of TerraSAR-X and Ikonos imagery in urban areas,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 2, pp. 939–949, 2010.

[3] J. Inglada and A. Giros, “On the possibility of automatic multisensor image registration,” IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 10, pp. 2104–2120, 2004.

[4] J. Inglada, “Similarity measures for multisensor remote sensing images,” IEEE Geoscience and Remote Sensing Symposium, vol. 1, pp. 104–106, 2002.

[5] G. Hong and Y. Zhang, “Combination of feature-based and area-based image registration technique for high resolution remote sensing image,” IEEE Geoscience and Remote Sensing Symposium, pp. 337–380, 2007.

[6] S. Suri, P. Schwind, P. Reinartz, and J. Uhl, “Combining mutual information and Scale Invariant Feature Transform for fast and robust multisensor SAR image registration,” 75th Annual ASPRS Conference, 2009.

[7] S. Suri, P. Schwind, J. Uhl, and P. Reinartz, “Modifications in the SIFT operator for effective SAR image matching,” International Journal of Image and Data Fusion, vol. 1, no. 3, pp. 243–256, 2010.
