
A Framework for Image Alignment of TerraSAR-X Images Using Fractional Derivatives and View Synthesis Approach

  • B. Sirisha, B. Sandhya, Chandra Sekhar Paidimarry and A.S. Chandrasekhara Sastry
Published/Copyright: February 21, 2018

Abstract

Conventional integer order differential operators suffer from poor feature detection accuracy and noise immunity, which leads to image misalignment. A new affine-based fractional order feature detection algorithm is proposed to detect syntactic and semantic structures from the backscattered signal of a TerraSAR-X band stripmap image. To further improve alignment accuracy, we propose to adapt a view synthesis approach into the standard pipeline of feature-based image alignment. Experiments were performed to test the effectiveness and robustness of the view synthesis approach using a fractional order feature detector. The evaluation results show that the proposed method achieves high-precision and robust alignment of TerraSAR-X images that vary in look angle. The affine features detected using the fractional order operator are more stable and are robust to strong speckle noise.

1 Introduction

The standard pipeline for feature-based image alignment (FIA) involves feature detection, feature description, feature matching, and transformation estimation using a homography matrix [2, 15]. The standard FIA approach fails mainly for three reasons: (i) the complex and challenging characteristics of TerraSAR-X images, (ii) the challenging geometric deformation that can arise between the source and target images, and (iii) the inherent speckle noise of TerraSAR-X images. Lepetit and Fua [7] demonstrated that generating supplementary synthetic views from a single-view image improves the performance of image matching in optical images. Morel and Yu [13] combined a view synthesis approach with SIFT extraction and matching. This framework, called Affine SIFT (ASIFT), improves matching performance significantly. Mishkin et al. [12] combined view synthesis with affine covariant detectors (Hessian Affine and MSER) and SIFT matching. The resulting matching approach, called MODS, outperforms ASIFT in both the number of true matches and speed. Incorporating a view synthesis algorithm in the standard FIA pipeline has proved effective in finding more precise and correct feature correspondences and in aligning difficult matching cases, and it also improves the performance of feature detectors such as Hessian Affine [10], MSER [9], and SIFT [8, 11] in addressing the range of deformations between images. All these observations, however, were made for optical images and may not hold for TerraSAR-X images. In this paper, we examine and evaluate the view synthesis concept, so far used mainly for optical images, for aligning complex TerraSAR-X images. We observe that incorporating a view synthesis approach in feature-based image registration addresses the challenging geometric deformation that can arise between the source and target images.
However, in the case of TerraSAR-X images, in addition to geometric deformation, we need to address the problems arising from strong speckle noise. To counter the inherent speckle noise, we have developed a fractional derivative-based feature detector that detects robust, affine invariant feature points.

The contributions of our paper are as follows:

  • A new fractional derivative-based affine covariant feature detection algorithm is proposed to detect syntactic and semantic structures from the backscattered signal of TerraSAR-X band stripmap images.

  • The adaptive TerraSAR-X image alignment using view synthesis (AIAVS) with an affine covariant fractional order feature detector is proposed.

Employing view synthesis for SAR image alignment is a recent development, and the literature is limited. The rest of the paper is organised in a top-down manner. In Section 2, we introduce the AIAVS. Experimental results and analysis for fractional derivative-based affine covariant detector and the AIAVS framework are presented in Section 3. Finally, Section 4 concludes the paper.

2 AIAVS

AIAVS addresses the challenging geometric deformation that can arise between source and target images. The adaptive image alignment algorithm generates synthetic views of the TerraSAR-X images and then applies the standard image alignment pipeline on the entire set of images until the alignment error is less than a threshold. However, in the case of TerraSAR-X images, in addition to geometric deformation, we need to address the problems arising from strong speckle noise. As integer order feature detectors fail to detect a sufficient number of feature points, we have developed a fractional derivative-based feature detector to counter this problem. In each iteration, features are detected using the proposed fractional derivative-based affine covariant detector, and the synthetic views generated differ from one iteration to the next. The rest of the section describes the stages of the AIAVS algorithm.

  • Stage 1: Synthetic view TerraSAR-X image generation.

  • Stage 2: Local feature extraction using the proposed fractional derivative-based affine covariant detector.

  • Stage 3: Two-way nearest neighbour matching strategy.

  • Stage 4: Transformation estimation, error estimation, and loop iteration.

2.1 Stage 1: Synthetic View Image Generation

The following decomposition of the affine transformation matrix A was proposed by Morel and Yu [13]:

(1) $A = H_\lambda R_1(\psi)\, T_t\, R_2(\phi) = \lambda \begin{bmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{bmatrix} \begin{bmatrix} t & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix},$

where the homography matrix H is approximated by an affine transformation; λ>0 is the scale parameter; R1 and R2 are rotation matrices; Tt is the tilt matrix with absolute tilt t>1; ϕ∈(0, π) is the longitude w.r.t. the optical axis; and ψ∈(0, 2π) is the camera rotation around the optical axis. Therefore, tilt, longitude, and scale are the three main parameters involved in generating the views.
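As an illustration, Eq. (1) can be composed directly; the sketch below (the function name and NumPy usage are ours, not from the paper) builds A from the scale, camera rotation, tilt, and longitude parameters:

```python
import numpy as np

def affine_from_params(lam, psi, t, phi):
    """Compose A = lambda * R1(psi) * T_t * R2(phi), as in Eq. (1)."""
    R1 = np.array([[np.cos(psi), -np.sin(psi)],
                   [np.sin(psi),  np.cos(psi)]])
    Tt = np.array([[t, 0.0],
                   [0.0, 1.0]])          # tilt along one axis, t >= 1
    R2 = np.array([[np.cos(phi), -np.sin(phi)],
                   [np.sin(phi),  np.cos(phi)]])
    return lam * R1 @ Tt @ R2
```

For λ=1, ψ=ϕ=0, and t=1 the matrix reduces to the identity, which is a quick sanity check on the decomposition.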

2.1.1 Algorithm for Generating Synthetic Views

  • The Gaussian scale space pyramid is built with Gaussian scale σ_base < 1.

  • The scale space image obtained is rotated with step Δϕ = Δϕ_base/t.

  • The rotated scale space image is convolved with a Gaussian filter in the horizontal and vertical directions with standard deviations σ_base and t·σ_base, respectively.

Scale λ, tilt t, and longitude ϕ are the three main parameters that determine the number of views generated in view synthesis. The images first undergo a scale change and then rotations followed by tilts. The tilt-t operation is accomplished by anti-aliasing convolution with a Gaussian filter of σ_h in the horizontal direction and σ_v in the vertical direction, followed by shrinking in the tilt direction by t. Figure 1 shows the synthetic views generated for scale λ=1, first tilt t=√2, angle Δϕ=72°/t≈51.4°, and θ<180°. The number of views generated is four, i.e. a=0°, b=51.4°, c=102.84°, and d=154.2°.
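A minimal sketch of the view generation step, assuming the rotate, blur, and subsample order described above (the function name, the σ_base default, and the use of SciPy are our assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, rotate

def synthesize_view(img, t, phi_deg, sigma_base=0.8):
    """Rotate by longitude phi, anti-alias along the tilt axis with a
    Gaussian, then shrink that axis by the tilt factor t."""
    view = rotate(img, phi_deg, reshape=True, order=1)
    view = gaussian_filter1d(view, sigma=sigma_base * t, axis=1)  # anti-aliasing
    cols = np.floor(np.arange(0, view.shape[1], t)).astype(int)   # shrink by t
    return view[:, cols]
```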

Figure 1: Synthetic Views Generated for Tilt t=√2, Angle Δϕ=72°/t≈51.4°, θ<180°.
Number of views generated is four, i.e. a=0°, b=51.4°, c=102.84°, and d=154.2°.

2.2 Local Feature Extraction Using Fractional Order Derivatives

In TerraSAR-X images, the grey level intensities of adjacent pixels are highly correlated due to high self-similarity. These fractal structures of SAR images are very difficult to extract in the presence of speckle noise. In this paper, we propose a fractional differential-based feature detector to detect complex fractal features, which are then described using RootSIFT [1].

2.2.1 Construction of Fractional Differential Masks

Fractional calculus is the branch of calculus that generalises the derivative of a function to non-integer order [3]. The fractional order differential is more efficient in detecting edges and linear features than integer order differential techniques: it highlights the high-frequency components while preserving the low-frequency components of the image. The most commonly used fractional differential operators are the Grünwald-Letnikov, Riemann-Liouville, Weyl-Riesz, Erdélyi-Kober, and Caputo operators [6, 14]. The Grünwald-Letnikov construction starts from the first derivative of a function f:

(2) $D^1 f(x) = \lim_{h\to 0} \frac{f(x+h) - f(x)}{h}.$

Iterating this operation yields an expression for the nth derivative of a function.

(3) $D^n f(x) = \lim_{h\to 0} h^{-n} \sum_{m=0}^{n} (-1)^m \binom{n}{m} f(x - mh).$

The above expression can be generalised to any non-integer number because

  • for any natural number, the calculation of the nth derivative is given by an explicit formula;

  • the generalisation of the factorial by the gamma function extends the binomial coefficient to non-integer values:

    (4) $\binom{-n}{m} = \frac{(-n)(-n+1)\cdots(-n+m-1)}{\Gamma(m+1)},$

which is valid for non-integer values. Let f(t) be a one-dimensional signal with support domain Ω=[r, s]. The domain Ω is divided with step interval h into n parts, n=[(s−r)/h]. The kth order Grünwald-Letnikov derivative is expressed as

(5) ${}_{r}^{G}D_{s}^{k} f(t) = \lim_{h\to 0} f_h^{(k)}(t) \approx h^{-k} \sum_{q=0}^{n} \binom{-k}{q} f(s - qh),$

where

(6) $\binom{-k}{q} = \frac{(-k)(-k+1)\cdots(-k+q-1)}{\Gamma(q+1)},$

and Γ is the gamma function. The signal f(t) is now expressed as

(7) $f^{(k)}(t) = f(t) + (-k)\,f(t-1) + \frac{(-k)(-k+1)}{2}\,f(t-2) + \cdots + \frac{\Gamma(-k+1)}{\Gamma(n+1)\,\Gamma(-k+n+1)}\,f(t-n).$
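The coefficients in Eq. (7) follow the recursion c_q = c_{q−1}·(−k+q−1)/q with c_0 = 1, which is convenient numerically. A small sketch (the helper name is ours):

```python
def gl_coefficients(k, n):
    """Grunwald-Letnikov coefficients of f(t), f(t-1), ..., f(t-n) in
    Eq. (7), built with the recursion c_q = c_{q-1} * (-k + q - 1) / q."""
    coeffs = [1.0]
    for q in range(1, n + 1):
        coeffs.append(coeffs[-1] * (-k + q - 1) / q)
    return coeffs
```

For k=1 the coefficients reduce to 1, −1, 0, 0, …, i.e. the ordinary first difference, which confirms that the fractional expression generalises the integer order derivative.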

The two-dimensional fractional derivative mask is obtained by linear filtering [5]. Let the x and y coordinates be x∈[x1, x2], y∈[y1, y2].

The partial order fractional derivative of f(x, y) is

(8) $\frac{\partial^k f(x,y)}{\partial x^k} \approx f(x,y) + (-k)\,f(x-1,y) + \frac{(-k)(-k+1)}{2}\,f(x-2,y) + \cdots,$
(9) $\frac{\partial^k f(x,y)}{\partial y^k} \approx f(x,y) + (-k)\,f(x,y-1) + \frac{(-k)(-k+1)}{2}\,f(x,y-2) + \cdots.$

Using Eqs. (8) and (9), a 3×3 fractional derivative mask is constructed. Figure 2 shows the images obtained after convolving a test image with fractional masks of order k=0.1–0.8. It can be observed from Figure 2 that the characteristics of the image change as the order k varies. As k increases, the high-frequency responses are highlighted without smoothing the low-frequency components; hence, the textural and structural components are preserved.
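One way to realise the 3×3 mask from Eqs. (8) and (9) is to place the three leading coefficients 1, −k, and (k²−k)/2 along one axis of a 3×3 kernel; the orientation convention and the use of SciPy below are our assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

def fractional_mask(k, axis=0):
    """3x3 mask from Eq. (8)/(9): coefficients (k^2-k)/2, -k, 1 weight
    f(x-2,y), f(x-1,y), f(x,y) along the chosen axis."""
    taps = np.array([(k * k - k) / 2.0, -k, 1.0])
    mask = np.zeros((3, 3))
    if axis == 0:
        mask[:, 1] = taps      # x-direction, Eq. (8)
    else:
        mask[1, :] = taps      # y-direction, Eq. (9)
    return mask

img = np.random.default_rng(0).random((64, 64))
resp = convolve2d(img, fractional_mask(0.8), mode="same")  # fractional response
```

For k=1 the taps reduce to 0, −1, 1, the ordinary backward difference.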

Figure 2: Images Obtained after Convolving the Test Image with the Fractional Mask of Order k (0.1–0.8).

2.3 Proposed Fractional Affine Detector

2.3.1 Algorithm

  1. Spatial localisation: First-order derivative filters, such as Roberts, Prewitt, and Sobel, are convolved with fractional order differential filters to obtain a second-order fractional mask. The gradient of the first-order Sobel filter is given as

    (10) $g_x = -g(x-1,y-1) + g(x+1,y-1) - 2g(x-1,y) + 2g(x+1,y) - g(x-1,y+1) + g(x+1,y+1),$
    (11) $g_y = -g(x-1,y-1) + g(x-1,y+1) - 2g(x,y-1) + 2g(x,y+1) - g(x+1,y-1) + g(x+1,y+1).$

    The fractional order differential gradient equations in x and y directions are given below:

    (12) $f_x = \frac{k^2-k}{2}\,f(x-1,y) - k\,f(x,y) + f(x+1,y),$
    (13) $f_y = \frac{k^2-k}{2}\,f(x,y-1) - k\,f(x,y) + f(x,y+1).$

    The gradient images of the first-order Sobel filter, gx in Eq. (10) and gy in Eq. (11), are convolved with the fractional order differential filters, fx in Eq. (12) and fy in Eq. (13), to obtain the second-order fractional masks: gx ∗ fx gives Lx, gy ∗ fy gives Ly, gx ∗ fy gives Lxy, and gy ∗ fx gives Lyx.

    Scale-adapted fractional derivative detector:

    (14) $\mu(x, \sigma_I, \sigma_D) = \begin{bmatrix} \mu_{11} & \mu_{12} \\ \mu_{21} & \mu_{22} \end{bmatrix}$
    (15) $\qquad = \sigma_D^2\, g(\sigma_I) * \begin{bmatrix} L_x L_x(x,\sigma_D) & L_x L_y(x,\sigma_D) \\ L_y L_x(x,\sigma_D) & L_y L_y(x,\sigma_D) \end{bmatrix}.$

    Lx, Ly are fractional second-order image derivatives computed in their corresponding directions using the Gaussian scale σD.

    The σI decides the present scale at which fractional points are detected in Gaussian scale space and also performs weighted averaging of derivatives in eight neighbourhoods. The σD-derivative scale decides the Gaussian kernel size.

    Cornerness is calculated using the determinant and trace of the scale-adapted second moment matrix.

    Key points are obtained by detecting the local maxima of a point in its eight neighbourhoods. The threshold value is used to filter the poor cornerness points. Figure 3 shows the cornerness image for order k=0.8.

  2. Each detected feature point is normalised to be affine invariant using affine shape adaptation.

  3. The affine region is assessed iteratively by carefully selecting the integration scale, differentiation scale, and spatially localised feature points.

  4. The affine region is updated with the selected spatial locations and scales.

  5. Step (iii) is rerun if the stopping criterion is not met. The stopping criterion is decided based on the eigenvalues of the affine transformation matrix.
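The spatial localisation step above can be sketched end to end; the cornerness form det(μ) − α·trace(μ)² and the constants below are our assumptions (the text specifies only that the determinant and trace of the second moment matrix are used):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter
from scipy.signal import convolve2d

def fractional_cornerness(img, k=0.8, sigma_i=2.0, alpha=0.04, rel_thresh=0.01):
    """Sketch of the spatial localisation step: second-order fractional
    masks, scale-adapted second moment matrix, and 8-neighbourhood maxima."""
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    frac = np.array([(k * k - k) / 2.0, -k, 1.0])       # Eq. (12)/(13) taps
    # second-order fractional masks Lx, Ly (Lxy/Lyx omitted for brevity)
    Lx_mask = convolve2d(sobel_x, frac.reshape(3, 1))
    Ly_mask = convolve2d(sobel_x.T, frac.reshape(1, 3))
    Lx = convolve2d(img, Lx_mask, mode="same")
    Ly = convolve2d(img, Ly_mask, mode="same")
    # scale-adapted second moment matrix, Eqs. (14)-(15): Gaussian-weighted
    # averaging of the derivative products with integration scale sigma_i
    m11 = gaussian_filter(Lx * Lx, sigma_i)
    m12 = gaussian_filter(Lx * Ly, sigma_i)
    m22 = gaussian_filter(Ly * Ly, sigma_i)
    corner = (m11 * m22 - m12 * m12) - alpha * (m11 + m22) ** 2
    # key points: local maxima over the 8-neighbourhood above a threshold
    local_max = maximum_filter(corner, size=3) == corner
    return corner, local_max & (corner > rel_thresh * corner.max())
```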

Figure 3: Cornerness Image for k=0.8.

It is observed that the fractional differential filter not only maintains the low-frequency contour information in smooth areas, but also highlights the high-frequency edge and texture parts of the image. This property is especially advantageous for images whose texture information carries important meaning. Hence, the fractional derivative-based affine detector responds much better to textural and structural scenes than the integer order Hessian Affine detector.

2.4 Stage 3: Two-Way Nearest Neighbour Matching Strategy

The standard first- to second-nearest neighbour ratio matching fails when multiple interpretations of the same feature are present. The drawback of multiple detections of the same feature is amplified under view synthesis, as Hessian Affine-detected local feature points often have a response in several synthetic views. To address this, we use two-way nearest-neighbour matching of the feature descriptors using the Bhattacharyya distance.

Let Sx and Ty be the input image pair with corresponding sets of feature points X={Xi} and Y={Yj}. A match between a pair of feature points is established only if Xi is the best match for Yj among all the feature points in X, and Yj is the best match for Xi among all the feature points in Y.

This matching approach improves the alignment accuracy, as the number of true matches obtained using the two-way matching approach is higher than that obtained using traditional matching. Figure 4 shows the corresponding feature points of two SAR images that vary by a scale factor of 4.5. Figure 4A shows the correspondences obtained using the standard first- to second-nearest neighbour ratio matching strategy, and Figure 4B shows the correspondences obtained using the two-way nearest-neighbour ratio (NNR) matching strategy. It is also observed that when the scale factor between the image pair increases, two-way matching is very robust in finding correspondences where traditional approaches fail.
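The two-way strategy can be sketched as mutual nearest neighbours under the Bhattacharyya distance; L1-normalising the descriptors so that rows behave like distributions is our assumption:

```python
import numpy as np

def two_way_matches(X, Y):
    """Mutual (two-way) nearest-neighbour matching: keep (i, j) only if
    Yj is the best match for Xi AND Xi is the best match for Yj."""
    P = X / X.sum(axis=1, keepdims=True)      # treat descriptors as
    Q = Y / Y.sum(axis=1, keepdims=True)      # discrete distributions
    # pairwise Bhattacharyya distance d = sqrt(1 - sum_d sqrt(p_d * q_d))
    bc = (np.sqrt(P)[:, None, :] * np.sqrt(Q)[None, :, :]).sum(-1)
    d = np.sqrt(np.clip(1.0 - bc, 0.0, None))
    fwd = d.argmin(axis=1)                    # best Y for each X
    bwd = d.argmin(axis=0)                    # best X for each Y
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```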

Figure 4: Comparison of (A) the Proposed Two-Way Nearest-Neighbour Ratio Matching Strategy Using Bhattacharyya Distance and (B) the Standard First- to Second-NNR Matching Strategy.

2.5 Stage 4: Geometric Verification, Error Estimation, and Loop Iteration

The main aim of the transformation model is to spatially align the reference and sensed images. To increase model robustness, outliers should be detected and removed, and only the matched corresponding inlier points should be used for transformation estimation. This is achieved using Random Sample Consensus (RANSAC) [4], which detects the inliers among the corresponding feature points and estimates the transformation matrix H. The source image is transformed using H into the coordinate system of the target image, and the alignment error is computed between the transformed and target images. If the error is above a predefined threshold, the process is iterated with an increased number of views.
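A compact sketch of the RANSAC stage on point correspondences (the affine model, iteration count, and pixel tolerance are illustrative choices, not values from the paper):

```python
import numpy as np

def ransac_affine(src, dst, iters=500, tol=2.0, seed=0):
    """Estimate a 2D affine map (3x2 matrix M, [x y 1] @ M -> [x' y'])
    from noisy correspondences, keeping the model with most inliers."""
    rng = np.random.default_rng(seed)
    n = len(src)
    A_h = np.hstack([src, np.ones((n, 1))])      # homogeneous source points
    best, best_inl = None, np.zeros(n, bool)
    for _ in range(iters):
        idx = rng.choice(n, 3, replace=False)    # minimal sample
        M, *_ = np.linalg.lstsq(A_h[idx], dst[idx], rcond=None)
        err = np.linalg.norm(A_h @ M - dst, axis=1)
        inl = err < tol
        if inl.sum() > best_inl.sum():
            best_inl = inl                       # re-fit on all inliers
            best, *_ = np.linalg.lstsq(A_h[inl], dst[inl], rcond=None)
    return best, best_inl
```

In the full pipeline the estimated transformation would be a homography rather than an affinity, but the sample, score, and re-fit loop is the same.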

3 Experimental Results and Analysis

The proposed image alignment approach using view synthesis was compared to the standard FIA algorithm and ASIFT algorithm. The experimental results were evaluated and tested on 540 TerraSAR-X images.

3.1 Evaluation Dataset

Though SAR image analysis has been studied extensively, there exists no benchmark dataset of TerraSAR-X band images for comparing the performance of various algorithms. Both standard and simulated images are used to evaluate the performance of the proposed framework. In this paper, four TerraSAR-X band images of dimension 10,556 × 9216 pixels, of the same scene but captured at different look angles, are used for the evaluation. The specification details are as follows: acquisition mode, spotlight 1 m resolution; wavelength, approximately 3 cm; polarisation mode, single; polarisation channel, VV; angle of incidence (look angle), 40.9°, 41.9°, 42.9°, 43.9°; date of acquisition, 12 October 2008; look direction, right. Images of size 850×1000 have been cropped from these four images to generate 10 target images. Figure 5 shows the 10 target images used in our experiments. From the standard images acquired by the synthetic aperture radar, the dataset is generated by synthetic alteration to incorporate the desired image properties, such as scale, rotation, and induced speckle noise.

Figure 5: Ten Target Images Used in the Experiments.

An area similar to the target image is cropped from the other look-angle SAR images. The source images are generated by applying transformations on the same area cropped from a different look-angle SAR image. One look-angle image is considered the target image, and another look-angle image transformed by applying a known geometric transformation is considered the source image. We have created 10 datasets in which the amount of common area (overlap) between the fixed and moving images is varied. The common area between the target and source images in dataset 1, 2 is 100%, in dataset 3 is 95%, in dataset 4 is 90%, in dataset 5 is 85%, in dataset 6 is 80%, in dataset 7 is 75%, in dataset 8 is 70%, in dataset 9 is 60%, and in dataset 10 is 50%. In each dataset, there are 54 SAR images; hence, a total of 540 image pairs are generated, as the pairs differ in varying degrees of scale, rotation, noise, and overlap. The deformation specifications for each dataset are listed in Table 1.

Table 1:

Synthesized SAR Image Dataset with Induced Deformation Details.

Type of deformation | Deformation details | Number of image pairs
Look angle | 3° variation | 3
Look angle + scale | Scale factor induced between fixed image If and moving image Im: | 6
  Scale factor = 0.5: If scaled down to 0.9; Im scaled up to 1.8
  Scale factor = 2: If scaled up to 1.8; Im scaled down to 0.9
  Scale factor = 2.5: If scaled up to 1.8; Im scaled down to 0.7
  Scale factor = 3: If scaled up to 1.8; Im scaled down to 0.6
  Scale factor = 3.5: If scaled up to 1.8; Im scaled down to 0.51
  Scale factor = 4.5: If scaled up to 1.8; Im scaled down to 0.4
Look angle + rotation | Angle of rotation between If and Im varied from 10° to 350°, in 10° intervals | 35
Look angle + speckle noise | Induced speckle noise of variance v = 0.04, 0.05, 0.12, 0.16, 0.2, 0.24, 0.25, 0.32, 0.36, 0.4 | 10

3.1.1 Evaluation of Fractional Affine Feature Detector

The performance of the fractional affine feature detector is tested on dataset 1, which is composed of 54 TerraSAR-X images that vary in look angle, scale, rotation, and speckle noise. The results are analysed in the following ways:

  1. Selection of suitable first-order derivative filter to be convolved with fractional order derivative filter to obtain a second-order fractional mask.

  2. Selection of suitable fractional order-k.

Selection of suitable first-order derivative filter

Sobel, Roberts, and Prewitt are the three classical first-order derivative filters convolved with the fractional order derivative filter. The cornerness image generated using this fractional order detector highlights high-frequency responses and preserves low-frequency components; hence, the quality of the TerraSAR-X image improves by reducing the effect of speckle noise. The performance is measured with the peak signal-to-noise ratio (PSNR) and mean square error (MSE), two measures that estimate the quality of the deformed image relative to the original. Table 2 shows the PSNR and MSE obtained when the Sobel, Roberts, and Prewitt operators are convolved with the fractional derivative operator for orders k=0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9. It is observed from Table 2 that when the Sobel operator is convolved with the fractional order derivative of k=0.8, the value of the PSNR is high and the MSE is low compared to the Roberts and Prewitt combinations. Hence, convolving a Sobel mask with a fractional derivative mask of order k=0.8 reduces the effect of speckle noise and gives better feature extraction than the fused Roberts and Prewitt masks.

Table 2:

MSE and PSNR of Prewitt with Fractional Operator (P+F), Roberts with Fractional Operator (R+F), and Sobel with Fractional Operator (S+F) for Fractional Orders k=0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9.

k MSE (P+F) PSNR (P+F) MSE (R+F) PSNR (R+F) MSE (S+F) PSNR (S+F)
k=0.1 1.4096 8.3931 1.2211 9.1561 1.4779 8.1165
k=0.2 1.4326 8.3012 1.216 9.1778 1.4508 8.2213
k=0.3 1.3886 8.444 1.2135 9.1883 1.4367 8.2762
k=0.4 1.4366 8.2756 1.2226 9.1495 1.4859 8.0858
k=0.5 1.5066 7.9947 1.2401 9.0751 1.5691 7.7721
k=0.6 1.3924 8.4508 1.2132 9.2898 1.4347 8.2813
k=0.7 1.6666 7.3992 1.3176 8.8611 1.7509 7.1167
k=0.8 1.3966 8.4623 1.2155 9.3799 1.4481 8.2917
k=0.9 2.0766 5.7717 1.5024 8.0228 2.3069 6.3502
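The two quality measures used above can be computed directly; a minimal sketch (the peak value and function name are assumptions):

```python
import numpy as np

def mse_psnr(ref, test, peak=1.0):
    """MSE and PSNR (dB) between a reference image and a filtered image."""
    mse = float(np.mean((ref.astype(float) - test.astype(float)) ** 2))
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")
    return mse, psnr
```

A lower MSE and higher PSNR indicate that the fractional mask suppresses speckle while preserving the underlying structure.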

Selection of suitable fractional order-k

The fractional order differential operator not only preserves the low-frequency contour information in smooth areas, but also highlights the high-frequency edge and texture parts of the image to aid feature extraction. However, selecting the fractional differential order is a crucial problem. Repeatability, a measure of feature detector performance, is used to select the optimal fractional order. Repeatability is defined as the ratio of the number of matched points to the total number of key points extracted from both images. Figure 6 shows the repeatability values obtained for fractional orders k=0.2, 0.4, 0.6, and 0.8 on a pair of TerraSAR-X images deformed by induced speckle noise of variance 0.05–0.5. The proposed detector is compared against the affine invariant integer order Hessian Affine detector. It is observed that for fractional order k=0.8, the repeatability value is high and consistent across the varied variance values.
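The repeatability measure as defined above is a one-liner (the helper name is ours):

```python
def repeatability(n_matched, n_keypoints_a, n_keypoints_b):
    """Ratio of matched points to the total key points from both images."""
    return n_matched / float(n_keypoints_a + n_keypoints_b)
```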

Figure 6: Repeatability Measure for Order k=0.2, 0.4, 0.6, 0.8 Compared with Hessian Affine.

We have also tested the performance of the fractional order feature detector on the standard optical image registration dataset Oxford Affine. Five pairs of graffiti images of size 800×640 that vary in view angle with induced speckle noise of variance 0.4 are used for evaluation. A fractional order feature detector of order k=0.8, 0.88, 0.92, and 0.94 is compared against four integer order feature detectors like SIFT, SURF, MSER, and Hessian Affine.

It is observed from Figure 7 that for fractional order k=0.8–0.94, the repeatability values are almost the same and consistent; hence, fractional order k=0.8 is optimal and selected for the experiments to detect features. The affine invariant feature points detected for k=0.8 are described using SIFT variant RootSIFT. All the features are stored in a vector array and used for matching.

Figure 7: Repeatability Measure for Fractional Order k=0.8, 0.88, 0.92, 0.94 Compared Against Four Integer Order Feature Detectors: SIFT, SURF, MSER, and Hessian Affine.

3.2 Analysis of the AIAVS Approach

The performance of AIAVS is tested on 540 SAR image pairs that vary in look angle, scale, rotation, and speckle noise. The approach is compared against ASIFT and standard FIA. An Intel i5 CPU @ 2.6 GHz with 8 GB RAM (single core) is used for the computations. The results are analysed in the following ways:

  • Quantitative assessment is done for 540 synthetically generated SAR images.

  • Iteration (tilt) analysis is done deformation wise.

  • Qualitative assessment of the AIAVS framework is done.

Table 3 shows the quantitative assessment of the AIAVS framework tested deformation wise. It is observed that out of 540 SAR image pairs, the AIAVS framework could align 502, ASIFT could align 320, and standard FIA (Hessian Affine detector and SIFT descriptor) could align only 197. Table 3 also shows that the AIAVS framework is effective in addressing images with extreme rotation and scale deformation. Figure 8 shows the dataset 1 source SAR image (A); the dataset 1 target image rotated by 60° (B); the number of correspondences obtained using FIA, ASIFT, and AIAVS (C, E, G); and the aligned or transformed output images using FIA, ASIFT, and AIAVS (D, F, H). Figure 9 shows the dataset 2 source SAR image (A); the dataset 2 scale-deformed target image (B); the number of correspondences obtained using FIA, ASIFT, and AIAVS (C, E, G); and the output aligned images using FIA, ASIFT, and AIAVS (D, F, H). It is observed from Figures 8 and 9 that the AIAVS approach can align TerraSAR-X image pairs that vary by any affine deformation.

Table 3:

Quantitative Assessment of the AIAVS Approach: Total Number of Aligned SAR Images Deformation Wise.

Total images (540) Look angle (30) Rotation (350) Scale (60) Speckle (100) Total aligned images
FIA 30 34 31 100 197
ASIFT 30 150 40 100 320
AIAVS 30 316 56 100 502
Figure 8: Dataset 1.
(A) Dataset 1 – source SAR image; (B) dataset 1 – 60° rotated target image; (C, E, G) number of correspondences obtained using FIA, ASIFT, and AIAVS; and (D, F, H) output aligned images using FIA, ASIFT, and AIAVS.

Figure 9: Dataset 2.
(A) Dataset 2 – source SAR image; (B) dataset 2 – scale deformed target image; (C, E, G) number of correspondences obtained using FIA, ASIFT, and AIAVS; and (D, F, H) output aligned images using FIA, ASIFT, and AIAVS.

Table 4 shows the iteration/tilt-wise analysis. It is observed that the look angle and induced speckle noise deformed SAR images could be aligned in the first tilt. In the case of scale deformation, most images below scale factor 2.5 could be aligned in the first iteration; when the scale factor increases above 2.5, images are aligned in subsequent iterations. In the case of rotation deformation, images rotated by angles between 10° and 120° and between 250° and 355° could be aligned in the first tilt, whereas images rotated by angles between 120° and 245° are aligned in higher iterations. Qualitative assessment of the AIAVS approach is done in terms of the number of inliers, key point error, and time (seconds). It is observed from Table 5 that the number of inliers obtained for the AIAVS approach is high and the key point error is low compared to the ASIFT and FIA approaches. The ASIFT algorithm generates fewer correct inliers and is slower than AIAVS because it employs the standard NNR matching criterion, which eliminates one-to-many correspondences, including true correspondences.

Table 4:

Iteration (Tilt) Analysis: Total Number of Aligned SAR Images Deformation and Iteration/Tilt Wise.

Deformation Iter-1 Iter-2 Iter-3 Iter-4 Iter-5 Iter-6 Not aligned
Look angle (30) 30 0 0 0 0 0 0
Rotation (350) 179 70 41 12 10 4 32
Scale (60) 37 13 3 3 0 0 4
Speckle (100) 100 0 0 0 0 0 0
Total aligned 346 83 44 15 10 4 36
Table 5:

Qualitative Assessment of Number of Inliers (I), Key Point Error (KPE), and Time (Seconds) among the FIA, ASIFT, and Proposed AIAVS Approaches.

I-FIA I-ASIFT I-AIAVS KPE-FIA KPE-ASIFT KPE-AIAVS Time-FIA Time-ASIFT Time-AIAVS
Look angle deformation
1.2° 280 708 1213 5.98 4.44 0.88 25.8 192 45
1.4° 282 649 1010 6.10 4.65 0.918 26 180 32.5
1.6° 795 2821 3295 2.66 2.24 0.35 38.2 200 52
Rotation deformation
60° 6 22 382 13.32 6.43 4.089 5.5 120 60
130° 8 20 352 10.23 6.95 3.963 8 132 72
240° 7 15 385 14.11 7.258 4.625 6.9 125 69
Scale deformation
2 299 27 1176 5.81 9.203 3.27 30 138 54
3 46 68 899 3.88 1.44 2.30 12.3 131 60
4.5 35 206 200 2.90 1.246 1.65 11.8 250 72
Speckle deformation
v=0.2 266 389 1052 6.86 4.69 0.56 24.2 160 43
v=0.32 263 547 1100 7.27 4.78 0.59 24 195 38
v=0.4 234 573 1142 6.922 4.62 0.63 21.9 189 54

4 Conclusion

AIAVS is used to address the challenging geometric deformation that can arise between source and target images. However, in the case of TerraSAR-X images, in addition to geometric deformation, we need to address the problems arising from strong speckle noise. As integer order feature detectors fail to detect a sufficient number of feature points, we have developed a fractional derivative-based feature detector to counter this problem. Incorporating the fractional derivative-based affine detector in the view synthesis approach improves the accuracy of TerraSAR-X image alignment even in the presence of speckle noise.

Bibliography

[1] R. Arandjelovic, Three things everyone should know to improve object retrieval, in: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2911–2918, IEEE Computer Society, Washington, DC, USA, 2012. doi:10.1109/CVPR.2012.6248018.

[2] L. G. Brown, A survey of image registration techniques, ACM Comput. Surv. 24 (1992), 325–376. doi:10.1145/146370.146374.

[3] K. Diethelm and N. J. Ford, Analysis of Fractional Differential Equations, Springer, Berlin, Germany, 1999.

[4] M. A. Fischler and R. C. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Commun. ACM 24 (1981), 381–395. doi:10.1145/358669.358692.

[5] S. Kempfle, I. Schäfer and H. Beyer, Fractional calculus via functional calculus: theory and applications, Nonl. Dynam. 29 (2002), 99–127. doi:10.1023/A:1016595107471.

[6] B. T. Krishna, Studies on fractional order differentiators and integrators: a survey, Signal Process. 91 (2011), 386–426. doi:10.1016/j.sigpro.2010.06.022.

[7] V. Lepetit and P. Fua, Keypoint recognition using randomized trees, IEEE Trans. Pattern Anal. Mach. Intell. 28 (2006), 1465–1479. doi:10.1109/TPAMI.2006.188.

[8] D. G. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vision 60 (2004), 91–110. doi:10.1023/B:VISI.0000029664.99615.94.

[9] J. Matas, O. Chum, M. Urban and T. Pajdla, Robust wide baseline stereo from maximally stable extremal regions, in: P. L. Rosin and A. D. Marshall, eds., BMVC, British Machine Vision Association, Durham, UK, 2002. doi:10.5244/C.16.36.

[10] K. Mikolajczyk and C. Schmid, Scale & affine invariant interest point detectors, Int. J. Comput. Vision 60 (2004), 63–86. doi:10.1023/B:VISI.0000027790.02288.f2.

[11] K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir and L. Van Gool, A comparison of affine region detectors, Int. J. Comput. Vision 65 (2005), 43–72. doi:10.1007/s11263-005-3848-x.

[12] D. Mishkin, J. Matas and M. Perdoch, MODS: fast and robust method for two-view matching, Comput. Vis. Image Understand. 141 (2015), 81–93. doi:10.1016/j.cviu.2015.08.005.

[13] J.-M. Morel and G. Yu, ASIFT: a new framework for fully affine invariant image comparison, SIAM J. Imaging Sci. 2 (2009), 438–469. doi:10.1137/080732730.

[14] R. Scherer, S. L. Kalla, Y. Tang and J. Huang, The Grünwald-Letnikov method for fractional differential equations, Comput. Math. Appl. 62 (2011), 902–917. doi:10.1016/j.camwa.2011.03.054.

[15] B. Zitová and J. Flusser, Image registration methods: a survey, Image Vis. Comput. 21 (2003), 977–1000. doi:10.1016/S0262-8856(03)00137-9.

Received: 2017-07-30
Published Online: 2018-02-21

©2020 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 Public License.
