Article Open Access

Blind Restoration Algorithm Using Residual Measures for Motion-Blurred Noisy Images

  • Mayana Shah and U. D. Dalal
Published/Copyright: June 18, 2018

Abstract

Image de-blurring is an inverse problem whose aim is to recover an image from an observation that has been badly degraded by environmental conditions. Blurring can arise in various ways; de-blurring a motion-blurred image, with or without noise, is a difficult problem to solve at low computational cost. In iterative methods of blind motion de-blurring, the quality of the restored image depends on the regularization parameter and on the iteration number, which may be chosen automatically or manually. Blind de-blurring and restoration employing image de-blurring and whiteness measures are proposed in this paper to decide the number of iterations automatically. The technique has three modules, namely an image de-blurring module, a whiteness measures module, and an image estimation module. New whiteness measures of holoentropy and mean-square contingency coefficient are proposed in the whiteness measures module. Initially, the blurred image is de-blurred using edge responses and image priors with a point-spread function. Whiteness measures are then computed for the de-blurred images and, finally, the best image is selected. Results are obtained for all eight whiteness measures using the evaluation metrics of increase in signal-to-noise ratio (ISNR), mean-square error, and structural similarity index. The results are obtained on standard images, and a performance analysis is made by varying parameters. The results for synthetically blurred images are good even under noisy conditions, with an average ΔISNR value of 0.3066 dB. The proposed whiteness measures offer a powerful way to decide automatic stopping criteria for iterative de-blurring algorithms.

1 Introduction

Image blurring is one of the prime causes of poor image quality in digital imaging. Two main causes of blurry images are out-of-focus shots and camera shake. The image blurring process is commonly modeled as the convolution of a clear image with a shift-invariant kernel plus noise [13, 19]. The image recovery problem is to estimate an image from the blurred image by use of various methods [7]. Essentially, it tries to perform an operation on the image that is the inverse of the imperfections in the image formation system [10]. Unfortunately, the restoration of blurred and noisy images is an ill-posed inverse problem [22]. Linear inverse problems arise in a wide range of applications such as astrophysics, signal and image processing, statistical inference, and optics [3].

Generally, image de-blurring (ID) is the operation of taking a blurred image and estimating the clean original image. There are two tightly coupled sub-problems: estimating the blur kernel and estimating the clear image using the estimated blur kernel. Existing ID methods can thus be classified into two categories: blind ID (BID), which jointly solves the two sub-problems, and non-blind ID (NBID), which only solves the second [5, 8, 13]. In NBID the blur filter is assumed known, whereas in BID both the image and the blur filter are (totally or partially) unknown. The most popular NBID methods are the Wiener filter [25] and Lucy-Richardson [15, 18] methods. Despite its narrower applicability, NBID is already a challenging problem to which a large amount of research has been devoted, mainly due to the ill-conditioned nature of the blur operator: the observed image does not uniquely or stably determine the underlying original image. The problem becomes worse if there is even a slight mismatch between the assumed blur and the true one. Most NBID methods overcome this difficulty through the use of an image regularizer, or prior, the weight of which has to be tuned or adapted [4]. BID seeks the correct pair of unknown point-spread function (PSF) and original image from the many possible combinations of these two unknowns. Some models and algorithms estimate the blur kernel in parametric form [16]; this simplified approach cannot faithfully model true motion blur, and an iterative regularization-based approach is used instead to estimate complex blurs successfully.

The BID methods require proper selection of the iteration number and tuning of the regularization parameter. In existing methods, the iterations have to be manually stopped when a good image estimate with a high increase in signal-to-noise ratio (ISNR) is obtained. The method is iterative: it starts by estimating the main features of the image, using a large regularization weight, and gradually learns the image and filter details by slowly decreasing the regularization parameter. From an optimization point of view, this can be seen as a continuation method designed to obtain a good local minimum of the underlying non-convex objective function. The drawback of the method is that it requires manual stopping, which corresponds to choosing the final value of the regularization parameter. In fact, adjusting the regularization parameter and/or finding robust stopping criteria for iterative (blind or not) ID algorithms is a long-standing, but still open, research area [1]. Much work has been devoted to choosing an accurate regularization parameter. The discrepancy principle (DP) [21] selects the regularization parameter such that the variance of the residual image (the difference between the observed image and the blurred estimate) matches the variance of the noise. Extended versions of DP are based on residual moments [9]. Approaches based on Stein's unbiased risk estimate [11, 12] are not useful in BID, as they require full knowledge of the degradation model, but they are useful in NBID. Such an approach provides an estimate of the mean-square error (MSE) by assuming knowledge of the noise distribution and requiring an accurate estimate of its variance [26].

In this paper, blind de-blurring and restoration employing ID and whiteness measures are proposed. The proposed technique consists of modules of ID, whiteness measures, and image estimation. In the de-blurring module, the blurred image is de-blurred by the employment of edge responses and image priors using PSF. In the whiteness measures module, eight whiteness measures, including covariance, weighted co-variance, entropy, block covariance, block weighted co-variance, block entropy, mean-square contingency coefficient, and holoentropy, are computed for the de-blurred image. Finally, the best image is selected based on whiteness measures, minimum MSE (MMSE), and ISNR values to have the estimated image in the image estimation module.

The rest of the paper is organized as follows: Section 2 gives the literature review, and Section 3 gives the contribution of the paper. Section 4 describes the proposed technique, and Section 5 gives the results and discussion. The conclusion is summed up in Section 6.

2 Review of Related Works

Many researchers have developed approaches for ID and restoration. Among them, a handful of significant works are reviewed in this section.

Ji and Wang [13] presented a convex minimization model that explicitly takes account of errors in the blur kernel. The resulting minimization problem was efficiently solved by the so-called accelerated proximal gradient method. In addition, a new boundary extension scheme was incorporated in the proposed model to further improve the results. Tai et al. [19] investigated the role that non-linear camera response functions (CRFs) play in ID. They presented a comprehensive study analyzing the effects of CRFs on motion de-blurring. In particular, they showed how non-linear CRFs can cause a spatially invariant blur to behave as a spatially varying blur. They proved that such non-linearity can cause large errors around edges when de-convolution is applied directly to a motion-blurred image without CRF correction. These errors are inevitable even with a known PSF and with state-of-the-art regularization-based de-convolution algorithms. In addition, they showed how CRFs can adversely affect PSF estimation algorithms in the case of blind de-convolution.

Wang et al. [24] pointed out the weaknesses of the deterministic filter and unified the limitations latent in two kinds of Bayesian estimators. They further explained why the conjunctive de-blurring algorithm (CODA) is able to handle rather large blurs beyond Bayesian estimation. Finally, they proposed a method to overcome several unreported limitations of the CODA. Carlavan and Blanc-Féraud [6] presented two recent methods to estimate the regularizing parameter, and first proposed an improvement of these estimators that takes advantage of confocal images. Following these estimators, they then proposed to express the de-convolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain, as it is directly expressed using the log-likelihood of the Poisson distribution and therefore does not require any approximation. They showed how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and presented results on synthetic and real data using well-known priors, such as total variation and wavelet transforms. Among these wavelet transforms, they especially focused on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.

Ponti et al. [17] presented a restoration approach through band extrapolation and de-convolution that deals with the noise. An extrapolation algorithm using constraints on both the spatial and frequency domains with a smoothing operator was combined with the Richardson-Lucy iterative algorithm. The results of the method for simulated data were compared with those obtained by the original Richardson-Lucy algorithm and by its total-variation-regularized version. The extrapolation of frequencies was also analyzed both in synthetic and in real images. The method improved the results, with higher signal-to-noise ratio (SNR) and quality index values, performing band extrapolation and achieving a better visualization of three-dimensional (3D) structures. Takeda and Milanfar [20] developed de-blurring with a 3D space-time-invariant PSF, instead of removing the motion blur as a spatial blur. Instead of de-blurring video frames individually, a fully 3D de-blurring method was proposed to reduce motion blur from a single motion-blurred video and produce a high-resolution video in both space and time. Unlike other existing approaches, the proposed de-blurring kernel requires no knowledge of the local motions. Most importantly, due to its inherent locally adaptive nature, 3D de-blurring is capable of automatically de-blurring the portions of the sequence that are motion blurred, without segmentation and without adversely affecting the rest of the spatiotemporal domain, where such blur is not present. Their method is a two-step approach: first, the input video is upscaled in space and time without explicit estimates of local motions, and then 3D de-blurring is performed to obtain the restored sequence.

3 Contribution of the Paper

Blind restoration employing ID and whiteness measures is proposed in this paper. The contribution of the work lies in the addition of two whiteness measures, i.e. holoentropy and the mean-square contingency coefficient. These measures exhibit a clear minimum and help select the iteration number. The existing whiteness measures include covariance, weighted covariance, entropy, block covariance, block weighted covariance, and block entropy [2]. A major contribution has also been made in the image estimation module, where the best de-blurred image is selected based on the ISNR calculated for all eight whiteness measures. The proposed method selects the regularization value, or the final iteration of the algorithm, even under noisy conditions.

4 Proposed Blind Restoration Technique

Blind restoration using ID technique and whiteness measures is proposed in this paper. The block diagram of the proposed technique is given in Figure 1.

Figure 1: Block Diagram of the Proposed Technique.

4.1 Image De-blurring Module

The blurred image is de-blurred in this module with the use of image priors, image edge response, and power spectral density [1].

Initially, the blurred image Z is obtained with the use of a motion filter. The input image g is blurred with an unknown PSF h. The blurring process [1] can be given as

(1) Z = h ∗ g + η.

Here, "∗" is the convolution operator and η is the additive white Gaussian noise. The notion behind the de-blurring process is that most images tend to have sparse edges, whereas the edges of a blurred image are less sparse because they are spread over a larger area. The prior and the cost function are modified so that lower-frequency components are given more weight: the initial iterations estimate the lower frequencies, and later iterations estimate the higher frequencies. The modification also aims to reduce the noise encountered. The minimization cost function [1] is given as

(2) C_s(g, h) = (1/2)‖z − h ∗ g‖² + β r[f(g)].

Here, h is the PSF kernel to be found, β is a scaling factor, and r[f(g)] is the regularization function, where f(g) is the edge response. The cost function is minimized using a conjugate gradient method. The scaling parameter starts at a large value and is decreased over the iterations.
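The degradation model of Eq. (1) can be sketched as follows. This is a minimal Python/NumPy illustration, not the paper's MATLAB implementation; `motion_psf` is a hypothetical helper analogous to MATLAB's `fspecial('motion', ...)`, and the convolution is circular (via the FFT) for simplicity.

```python
import numpy as np

def motion_psf(length, theta_deg):
    # Hypothetical helper: a normalized line kernel of the given length
    # and angle, analogous to MATLAB's fspecial('motion', ...).
    n = int(np.ceil(length)) | 1          # odd support
    h = np.zeros((n, n))
    c = (n - 1) / 2.0
    t = np.deg2rad(theta_deg)
    for s in np.linspace(-length / 2.0, length / 2.0, 4 * n):
        x = int(round(c + s * np.cos(t)))
        y = int(round(c - s * np.sin(t)))
        if 0 <= x < n and 0 <= y < n:
            h[y, x] = 1.0
    return h / h.sum()

def blur_and_noise(g, h, snr_db, seed=0):
    # Z = h * g + eta (Eq. 1): circular convolution via the FFT,
    # plus white Gaussian noise at the requested SNR in dB.
    z = np.real(np.fft.ifft2(np.fft.fft2(g) * np.fft.fft2(h, s=g.shape)))
    sigma = np.sqrt(z.var() / 10.0 ** (snr_db / 10.0))
    return z + np.random.default_rng(seed).normal(0.0, sigma, z.shape)
```

In practice the boundary handling matters for de-blurring quality; the circular assumption here only serves to keep the sketch short.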

The edge responses [1] of the blurred image are defined by the function f(g) given by

(3) f(g) = Σ_ϕ q_ϕ(g)², where q_ϕ(g) = d_ϕ ∗ g.

Normally, ϕ values of 0°, 45°, 90°, and 135° are taken into consideration. The image prior assumes that edges are sparse and that edge intensities are independent of one another. The sparse prior [1], with density given the edge intensity f_j(g) at a pixel j, is defined by

(4) p[f_j(g)] ∝ exp(−k[f_j(g) + ε]^q).

Here, k adjusts for the scale of the edge intensities, q controls the prior's sparsity, and ε is a small parameter. Taking the noise into account, the Gaussian likelihood [1] is given by

(5) p(g, h | z) ∝ exp(−(1/2σ²)‖z − h ∗ g‖²) Π_j exp(−k[f_j(g) + ε]^q).

The corresponding log-likelihood is

(6) L(g, h | z) = −(1/2σ²)‖z − h ∗ g‖² − k Σ_j [f_j(g) + ε]^q.

Maximizing this likelihood is equivalent to minimizing the cost function [1]

(7) C_s(g, h) = (1/2)‖z − h ∗ g‖² + λ Σ_j [f_j(g) + ε]^q.

Here, λ is the regularization parameter acting on the edge response (comparing Eqs. (6) and (7), λ = σ²k); the regularizer is chosen to favor sharp edges, i.e. priors that yield a sparser edge response.
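The edge response of Eq. (3) can be sketched as below. The directional kernels d_ϕ are assumed to be simple first differences at 0°, 45°, 90°, and 135° (the exact filters of [1] may differ), and the convolution is circular for brevity.

```python
import numpy as np

# Assumed directional first-difference kernels d_phi at the four angles.
KERNELS = {
    0:   np.array([[1.0, -1.0]]),
    90:  np.array([[1.0], [-1.0]]),
    45:  np.array([[0.0, 1.0], [-1.0, 0.0]]),
    135: np.array([[1.0, 0.0], [0.0, -1.0]]),
}

def conv2_same(img, k):
    # Minimal same-size 2-D convolution via the FFT (circular boundary).
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))

def edge_response(g):
    # f(g) = sum_phi q_phi(g)^2, with q_phi = d_phi * g   (Eq. 3)
    return sum(conv2_same(g, k) ** 2 for k in KERNELS.values())
```

A flat image yields a zero edge response, while any intensity step contributes positive energy, which is what the sparse prior of Eq. (4) penalizes.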

4.2 Whiteness Measures Module

In this module, eight whiteness measures are computed from the de-blurred images. The existing whiteness measures are covariance, weighted covariance, entropy, block covariance, block weighted covariance, and block entropy [2]. In addition, the mean-square contingency coefficient and holoentropy whiteness measures are proposed in this paper. The selection of the stopping criterion is based on measures of the fitness of the image estimate and the blur estimate, analyzed through the residual image given by

(8) d = Z − ĥ ∗ ĝ.

The nature of the residual is then compared with that of the additive noise η. The motivation for employing whiteness measures to assess the adequacy of the image and blur estimates is that the noise is assumed to be spectrally white. First, the residual image d is normalized to zero mean and unit variance:

(9) d ← (d − d̄)/√var(d).

Here, d̄ is the sample mean and var(d) is the sample variance of d. The auto-covariance [2] of the normalized residual image at two-dimensional lag (x, y) can be estimated by

(10) R_dd(x, y) = C Σ_{g,k} d(g, k) d(g − x, k − y),

where the summation is carried out over the image pixels and C is an irrelevant constant. The auto-covariance of a spectrally white image is the δ function, mathematically represented as

(11) δ(x, y) = 1 for x = y = 0, and δ(x, y) = 0 otherwise.

Hence, the distance between the auto-covariance function and the δ function forms a whiteness measure and is termed covariance whiteness measure. Considering a (2M+1)×(2M+1) window, the covariance whiteness measure can be defined as

(12) W_R(d) = Σ_{(x,y)=(−M,−M), (x,y)≠(0,0)}^{(M,M)} (R_dd(x, y))².

As the auto-covariance for large lags is smaller than for small lags, weights are incorporated into the covariance whiteness measure to obtain a weighted version [1] given by

(13) W_RΔ(d) = Σ_{(x,y)=(−M,−M), (x,y)≠(0,0)}^{(M,M)} Δ(x, y)(R_dd(x, y))²,

where Δ is the weight matrix. The next whiteness measure is entropy. Let the power spectral density be represented by ψ_dd; the transformation can be expressed as

(14) ψ_dd = ϕ(R_dd).

Here, ϕ is the discrete Fourier transform. A white signal has a flat power spectral density because its auto-covariance is a δ function. The entropy measure [2] is employed to evaluate the flatness of the power spectral density. Hence, the entropy whiteness measure can be given by

(15) W_E(d) = −Σ_{(m,n)} ψ′(m, n) log ψ′(m, n), where ψ′(m, n) = ψ(m, n)/Σ_{m,n} ψ(m, n).
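A minimal sketch of the covariance and entropy whiteness measures follows. Lags are taken circularly (via `np.roll`), and the power spectral density is estimated directly from the residual's DFT, which is equivalent to transforming the auto-covariance (Wiener-Khinchin); both are simplifications of the procedure above.

```python
import numpy as np

def covariance_whiteness(d, M=4):
    # Normalize the residual to zero mean, unit variance (Eq. 9)
    d = (d - d.mean()) / d.std()
    W = 0.0
    for x in range(-M, M + 1):
        for y in range(-M, M + 1):
            if x == 0 and y == 0:
                continue  # zero lag excluded (Eq. 12)
            # Autocovariance at lag (x, y), circular shift for simplicity
            R = np.mean(d * np.roll(np.roll(d, x, axis=0), y, axis=1))
            W += R ** 2
    return W

def entropy_whiteness(d):
    d = (d - d.mean()) / d.std()
    psd = np.abs(np.fft.fft2(d)) ** 2   # power spectral density (Eq. 14)
    p = psd / psd.sum()                 # normalized psi' (Eq. 15)
    p = p[p > 0]
    return -np.sum(p * np.log(p))       # entropy whiteness W_E
```

A white residual gives a small covariance measure and a large entropy (flat spectrum); a correlated residual gives the opposite, which is what drives the stopping criterion.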

The above whiteness measures assume that the residual image is stationary. However, it may not be; in that case, the measures are modified using an image-block concept. The block-based auto-covariance is given by

(16) R_dd^b(x, y) = Σ_{(g,k)∈P_b} d(g, k) d(g − x, k − y).

Here, b is the index of the image block and P_b is the pixel set of the block. The difference is that the residual image is normalized to zero mean and unit variance block by block rather than over the whole image. The whiteness measures of covariance, weighted covariance, and entropy are computed block-wise to obtain the block covariance, block weighted covariance, and block entropy whiteness measures [2], represented by W_R^b(d), W_RΔ^b(d), and W_E^b(d).
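The block-wise evaluation can be sketched by normalizing each block on its own and averaging a per-block measure. This is a minimal illustration; the block size is an assumed parameter not specified above.

```python
import numpy as np

def blockwise(d, measure, block=16):
    # Apply a whiteness measure block by block and average the results;
    # each block is normalized to zero mean and unit variance on its own.
    vals = []
    H, W = d.shape
    for r in range(0, H - block + 1, block):
        for c in range(0, W - block + 1, block):
            b = d[r:r + block, c:c + block]
            b = (b - b.mean()) / b.std()
            vals.append(measure(b))
    return float(np.mean(vals))
```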

Holoentropy is another whiteness measure used in the proposed technique. Let there be objects o_i, 0 < i ≤ k, with attributes a_j, 0 < j ≤ m, and let entropy, mutual information, and correlation be represented by E_o, I_o, and R_o. The holoentropy E_ho(a) is defined as the sum of the entropy and the total correlation of the random vector a, which equals the sum of the entropies of all its attributes:

(17) E_ho(a) = E_o(a) + R_o(a) = Σ_{k=1}^{m} E_o(a_k).

The normalized power spectral density ψ′(m, n) is calculated in a block-by-block fashion and used as the vector a. Holoentropy as defined gives equal weight to all attributes; in real conditions, this may not give the best results, so weighting the attributes yields a weighted holoentropy with increased effectiveness. Weighting is carried out such that the more important attributes have small entropy values, using the reverse sigmoid function of the entropy:

(18) w_o(a_i) = 2(1 − 1/(1 + exp(−E_o(a_i)))).

The weighted holoentropy W_o(a) is the sum of the weighted entropies over the attributes of the random vector a:

(19) W_o(a) = Σ_{i=1}^{m} w_o(a_i) E_o(a_i).

The best image is the one with the minimum value of weighted holoentropy.
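Equations (18)-(19) can be sketched as follows, under the assumption that each attribute's entropy is estimated from a histogram of its values (a discretization choice not specified above):

```python
import numpy as np

def attribute_entropy(values, bins=16):
    # Discrete entropy of one attribute (e.g. a column of normalized
    # PSD values), estimated from a histogram. The bin count is assumed.
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def weighted_holoentropy(a):
    # a: (n_samples, m_attributes). Each attribute is weighted by the
    # reverse sigmoid of its entropy (Eq. 18), then summed (Eq. 19).
    E = np.array([attribute_entropy(a[:, i]) for i in range(a.shape[1])])
    w = 2.0 * (1.0 - 1.0 / (1.0 + np.exp(-E)))
    return float(np.sum(w * E))
```

The reverse sigmoid makes the weight decrease as an attribute's entropy grows, so low-entropy (more informative) attributes dominate the sum.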

The final whiteness measure is the mean-square contingency coefficient, a measure of correlation defined as

(20) φ² = (1/(min{d₁, d₂} − 1)) Σ_{i=1}^{d₁} Σ_{j=1}^{d₂} (ψ_ij − ψ_{i·} ψ_{·j})² / (ψ_{i·} ψ_{·j}),

where the normalized power spectral density ψ_ij is calculated in a block-by-block fashion, and d₁ and d₂ are the block dimensions. ψ_{i·} and ψ_{·j} are the marginal totals of ψ_ij, i.e. its sums along the rows and columns, calculated as

(21) ψ_{i·} = Σ_{j=1}^{d₂} ψ_ij,
(22) ψ_{·j} = Σ_{i=1}^{d₁} ψ_ij.

The contingency coefficient is computed in a block-by-block fashion and then averaged; the best image estimate is selected as the one with minimum mean-square contingency coefficient.
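The per-block coefficient can be sketched as a χ²-type statistic between the normalized PSD and the product of its marginal totals (a minimal illustration of Eqs. (20)-(22); averaging over blocks is omitted):

```python
import numpy as np

def contingency_coefficient(psi):
    # psi: one block of power spectral density values (all positive);
    # normalize so the entries sum to 1.
    psi = psi / psi.sum()
    row = psi.sum(axis=1, keepdims=True)   # marginals psi_{i.}  (Eq. 21)
    col = psi.sum(axis=0, keepdims=True)   # marginals psi_{.j}  (Eq. 22)
    indep = row @ col                      # expected values under independence
    chi2 = np.sum((psi - indep) ** 2 / indep)
    return chi2 / (min(psi.shape) - 1)     # mean-square contingency (Eq. 20)
```

When the rows and columns of the PSD block are independent (as for a flat, white spectrum), the coefficient is near zero; structured spectra give larger values.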

4.3 Image Estimation Module

In this module, image estimation is performed by choosing the best de-blurred image with the employment of whiteness measures and evaluation metrics. The evaluation metrics employed are ISNR and MMSE. After the de-blurring process, a set of de-blurred images is obtained based on the number of iterations (denoted by N) in the de-blurring process, represented as D={D1, D2, …, DN}. From this set, the best image is selected by calculating the whiteness measures for every image. The eight whiteness measures are covariance, weighted co-variance, entropy, block covariance, block weighted co-variance, block entropy, mean-square contingency coefficient, and holoentropy.

From the set of images, eight images are selected initially based on the whiteness measure. The images are selected based on the maximum values for the first six measures and minimum for the remaining two measures. Let the selected images be represented as I={I1, I2, …, I8} such that

(23) I₁ = Dᵢ having maximum W_R, 0 < i ≤ N,
(24) I₂ = Dᵢ having maximum W_RΔ, 0 < i ≤ N,
(25) I₃ = Dᵢ having maximum W_E, 0 < i ≤ N,
(26) I₄ = Dᵢ having maximum W_R^b, 0 < i ≤ N,
(27) I₅ = Dᵢ having maximum W_RΔ^b, 0 < i ≤ N,
(28) I₆ = Dᵢ having maximum W_E^b, 0 < i ≤ N,
(29) I₇ = Dᵢ having minimum W_o, 0 < i ≤ N,
(30) I₈ = Dᵢ having minimum W_s, 0 < i ≤ N.

That is, images are selected having maximum covariance W_R, weighted co-variance W_RΔ, entropy W_E, block covariance W_R^b, block weighted co-variance W_RΔ^b, and block entropy W_E^b. The last two images are those having minimum mean-square contingency coefficient W_s and holoentropy W_o. From these eight images, the best de-blurred image is selected based on ISNR values. For this, the ISNR is computed for all eight selected images, and the image with the maximum ISNR value is taken as the estimated de-blurred image Ie. This can be represented as

(31) I_e = I_k having maximum ISNR, 0 < k ≤ 8.

Hence, the de-blurred image is obtained using whiteness measures and evaluation metrics.
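The selection logic of Eqs. (23)-(31) can be sketched as follows. The measure names and dictionary layout are illustrative, not from the paper, and the ISNR callback needs the ground-truth image, so it is only available in simulation.

```python
import numpy as np

def select_best_image(deblurred, measures, isnr_of):
    # deblurred: list of N candidate images, one per iteration
    # measures:  dict of measure name -> length-N array of values
    # isnr_of:   function index -> ISNR of that candidate (requires the
    #            ground truth, hence simulation-only)
    maximize = ['WR', 'WRd', 'WE', 'WRb', 'WRdb', 'WEb']   # Eqs. (23)-(28)
    minimize = ['Wo', 'Ws']                                # Eqs. (29)-(30)
    picks = [int(np.argmax(measures[m])) for m in maximize]
    picks += [int(np.argmin(measures[m])) for m in minimize]
    best = max(picks, key=isnr_of)                         # Eq. (31)
    return deblurred[best]
```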

5 Results and Discussion

The results obtained for the proposed technique are given in this section. Section 5.1 gives the dataset description, experimental setup, and evaluation metrics employed. Section 5.2 gives the simulation results, and Section 5.3 gives the performance analysis.

5.1 Dataset Description, Experimental Setup, and Evaluation Metrics

The proposed technique is implemented in MATLAB on a system with 8 GB RAM and a 2.6 GHz Intel i7 processor. The evaluation metrics used are ISNR, MSE, and the structural similarity index (SSIM) [23, 14].
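For reference, ISNR and MSE can be computed as below (a minimal sketch; SSIM is more involved and is typically obtained from a library such as scikit-image's `structural_similarity`):

```python
import numpy as np

def mse(a, b):
    # Mean-square error between two images
    return float(np.mean((a - b) ** 2))

def isnr_db(original, blurred, restored):
    # ISNR in dB: how much closer the restored image is to the original
    # than the blurred observation was
    return 10.0 * np.log10(mse(original, blurred) / mse(original, restored))
```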

The dataset consists of various standard images such as Lena, Barbara, Goldhill, Baboon, Boats, Cameraman, and other images obtained from public databases. Figure 2 gives the sample images.

Figure 2: Sample Images from the Database.

5.2 Simulation Results

The simulation results obtained for the proposed technique are given in this section. Table 1 shows the simulation results for three images with degradation parameters length = 10, theta = 190°, and noise level = 40 dB. The images obtained for the whiteness measures covariance (M1), weighted co-variance (M2), entropy (M3), block covariance (M4), block weighted co-variance (M5), block entropy (M6), mean-square contingency coefficient (P1), and holoentropy (P2) are given along with the final restored image.

Table 1:

Simulation Results. (For each input image, the table shows the blurred image, the de-blurred images selected by measures M1–M6, P1, and P2, and the final restored image.)

Figure 3 shows graphs of the various whiteness measures vs. iteration, drawn for the cameraman image with degradation parameters length = 10, theta = 110°, and noise level = 30 dB. The maximum ISNR is obtained at iteration 22, as shown in the graph of ISNR vs. iteration. This matches the iteration selected by the whiteness measures: maximizing the existing measures (covariance, weighted co-variance, entropy, block covariance, block weighted co-variance, and block entropy) and minimizing the newly proposed mean-square contingency coefficient and holoentropy.

Figure 3: Performance of Various Whiteness Measures vs. Iteration.

5.3 Performance Analysis

In this section, the proposed technique is evaluated using the metrics ISNR, MSE, and SSIM with degradation parameters length = 10, theta = 190°, and noise level = 40 dB. The tables below give the metric values obtained for different images using the various whiteness measures.

Inferences from Tables 2–5:

Table 2:

ISNR Values.

ISNR M1 M2 M3 M4 M5 M6 P1 P2 ΔISNR
Lena 8.443142 8.443142 8.443142 9.060074 8.443142 9.060074 8.443142 9.060074 0.45
Barbara 6.926847 6.926847 6.926847 6.926847 6.926847 6.926847 6.482260 6.926847 1.06
Goldhill 8.186265 7.605211 7.605211 8.186265 8.186265 8.186265 8.186265 8.186265 0.45
Baboon 2.948940 2.948940 2.948940 2.830106 2.830106 2.830106 2.261107 2.830106 0.0
Boats 6.216329 6.216329 6.216329 6.216329 5.694853 6.216329 5.694853 6.216329 0.11
Cameraman 8.591647 8.896186 8.591647 8.591647 8.591647 8.591647 8.591647 8.591647 0.13
N1 6.086094 6.086094 6.086094 6.086094 6.086094 6.086094 6.086094 6.086094 0.56
N2 1.278102 1.278102 1.278102 1.278102 1.278102 1.278102 1.278102 1.278102 0.0
N3 13.06572 13.06572 13.06572 13.06572 12.37824 13.06572 12.37824 13.06572 0.0
Table 3:

MSE Values.

MSE M1 M2 M3 M4 M5 M6 P1 P2
Lena 0.017781 0.017781 0.017781 0.017934 0.017781 0.017934 0.017781 0.017934
Barbara 0.026958 0.026958 0.026958 0.026958 0.026958 0.026958 0.026787 0.026958
Goldhill 0.030533 0.030391 0.030391 0.030533 0.030533 0.030533 0.030533 0.030533
Baboon 0.017527 0.017527 0.017527 0.017234 0.017234 0.017234 0.016818 0.017234
Boats 0.020274 0.020274 0.020274 0.020274 0.019495 0.020274 0.019495 0.020274
Cameraman 0.015298 0.015494 0.015298 0.015298 0.015298 0.015298 0.015298 0.015298
N1 0.033993 0.033993 0.033993 0.033993 0.033993 0.033993 0.033993 0.033993
N2 0.013537 0.013537 0.013537 0.013537 0.013537 0.013537 0.013537 0.013537
N3 0.028189 0.028189 0.028189 0.028189 0.027328 0.028189 0.027328 0.028189
Table 4:

SSIM Values.

SSIM M1 M2 M3 M4 M5 M6 P1 P2
Lena 0.993980 0.993980 0.993980 0.993974 0.993980 0.993974 0.993980 0.993974
Barbara 0.991350 0.991350 0.991350 0.991350 0.991350 0.991350 0.991380 0.991350
Goldhill 0.991157 0.991157 0.991157 0.991157 0.991157 0.991157 0.991157 0.991157
Baboon 0.995878 0.995878 0.995878 0.995925 0.995925 0.995925 0.995991 0.995925
Boats 0.994407 0.994407 0.994407 0.994407 0.994485 0.994407 0.994485 0.994407
Cameraman 0.995577 0.995548 0.995577 0.995577 0.995577 0.995577 0.995577 0.995577
N1 0.989409 0.989409 0.989409 0.989409 0.989409 0.989409 0.989409 0.989409
N2 0.995833 0.995833 0.995833 0.995833 0.995833 0.995833 0.995833 0.995833
N3 0.990696 0.990696 0.990696 0.990696 0.990779 0.990696 0.990779 0.990696
Table 5:

ISNR Values for Boat Image by Varying Parameters.

ISNR M1 M2 M3 M4 M5 M6 P1 P2 ΔISNR
L=8, theta=120, noise=35 dB 5.03328 5.03328 5.03328 5.03328 5.03328 5.03328 5.27706 5.03328 0.02594
L=10, theta=190, noise=40 dB 6.21632 6.21632 6.21632 6.21632 5.69485 6.21632 5.69485 6.21632 0.10898
L=12, theta=230, noise=45 dB 7.81704 7.81704 7.81704 7.81704 7.81704 7.81704 7.81704 7.81704 0.55816
L=15, theta=210, noise=50 dB 9.80050 9.80050 9.80050 9.80050 9.80050 9.80050 9.80050 9.80050 0.1375
  • The tables give the performance evaluation of the proposed technique. Table 2 gives the ISNR values obtained and ΔISNR (the maximum ISNR along the iterations minus the maximum ISNR obtained from the eight whiteness measures). All the selected ISNR values are close to optimal; the largest ΔISNR (i.e. the largest loss) occurs for the Barbara image, 1.06 dB, and the minimum for Baboon, 0.0 dB, the value changing with the performance of particular images. Table 3 gives the MSE values, and Table 4 gives the SSIM values. The values are obtained for nine images: Lena, Barbara, Goldhill, Baboon, Boats, Cameraman, and three noisy images N1, N2, and N3.

  • The results are obtained for eight whiteness measures including covariance (represented as M1), weighted co-variance (represented as M2), entropy (represented as M3), block covariance (represented as M4), block weighted co-variance (represented as M5), block entropy (represented as M6), mean-square contingency coefficient (represented as P1), and holoentropy (represented as P2).

  • The highest ISNR value obtained is 13.06572 dB, and the average ΔISNR is a loss of 0.3066 dB with respect to the best ISNR. The obtained metric values confirm the effectiveness of the proposed technique.

  • Tables 5–7 give the performance analysis obtained by varying the length, theta, and noise level; four cases are considered, as given in the tables. ISNR, MSE, and SSIM are again calculated for all eight whiteness measures for the Boat image, and the average ΔISNR is 0.21685 dB.

Table 6:

MSE Values for Boat Image by Varying Parameters.

MSE M1 M2 M3 M4 M5 M6 P1 P2
L=8, theta=120, noise=35 dB 0.01968 0.01968 0.01968 0.01968 0.01968 0.01968 0.01973 0.01968
L=10, theta=190, noise=40 dB 0.02027 0.02027 0.02027 0.02027 0.01949 0.02027 0.01949 0.02027
L=12, theta=230, noise=45 dB 0.01949 0.01949 0.01949 0.01949 0.01949 0.01949 0.01949 0.01949
L=15, theta=210, noise=50 dB 0.03541 0.03541 0.03541 0.03541 0.03541 0.03541 0.03541 0.03541
Table 7:

SSIM Values for Boat Image by Varying Parameters.

SSIM M1 M2 M3 M4 M5 M6 P1 P2
L=8, theta=120, noise=35 dB 0.99441 0.99441 0.99441 0.99441 0.99441 0.99441 0.99441 0.99441
L=10, theta=190, noise=40 dB 0.994407 0.994407 0.994407 0.99440 0.994485 0.99440 0.994485 0.99440
L=12, theta=230, noise=45 dB 0.99448 0.99448 0.99448 0.99448 0.99448 0.99448 0.99448 0.99448
L=15, theta=210, noise=50 dB 0.99343 0.99343 0.99343 0.99343 0.99343 0.99343 0.99343 0.99343

Figure 4 shows the comparative analysis of six images based on ISNR values with respect to existing techniques. Among the filters compared, the proposed method reaches the maximum ISNR relative to the regularization filter, Wiener filter, and Kalman filter.

Figure 4: Comparison Analysis.

6 Conclusion

In this paper, blind de-blurring and restoration employing ID and whiteness measures of the residual image are proposed. The technique has three modules, namely the ID module, the whiteness measures module, and the image estimation module. New whiteness measures of holoentropy and mean-square contingency coefficient have been proposed in the whiteness measures module. The choice of the stopping condition depends on measures of the fitness of the image estimate and the blur estimate. Good estimates of both the image and the blurring operator are obtained by considering all eight whiteness measures, and the best one is selected based on ISNR. The results are obtained for all eight whiteness measures using the evaluation metrics ISNR, MSE, and SSIM, on standard images degraded with uniform motion blurs, with a performance analysis made by varying parameters. The best image is selected on the basis of the whiteness measures, which prove effective for various images, with ISNR losses with respect to the best ISNR of only 0.3066 dB on average. The quality of the restored image is good even when the blurred image is noisy, and the method promises an iteration-number selection that is the best compromise between image detail and artifacts. The disadvantage of the baseline strategy is that it requires manual stopping, which corresponds to choosing the final value of the regularization parameter; indeed, adjusting the regularization parameter and finding robust stopping criteria for iterative (blind or not) ID algorithms remains a long-standing, yet open, research area. In further research, we intend to extend our method to other ID applications, for example de-blurring video sequences or out-of-focus de-blurring. Our strategy can likewise be applied to a hybrid imaging system or combined with coded-exposure photography.


Received: 2018-04-08
Published Online: 2018-06-18

©2020 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 Public License.
