TGV-based multiplicative noise removal approach: Models and algorithms

• Yiming Gao and Xiaoping Yang
Published/Copyright: February 14, 2018

Abstract

Total variation (TV) based models have been widely used for multiplicative noise removal. However, these models often suffer from the unsatisfactory staircase effect, a consequence of working in the BV space. In this paper, we present two high-order variational models based on total generalized variation (TGV) for two kinds of multiplicative noise. The proposed models reduce the staircase effect while preserving edges. In addition, we develop an efficient algorithm, a Prediction–Correction proximal alternating direction method of multipliers (PADMM), to solve our models, and we show its convergence under certain conditions. Numerical experiments demonstrate that our high-order models outperform the classical TV-based models in terms of PSNR and SSIM values.

Award Identifier / Grant number: 11531005

Award Identifier / Grant number: 91330101

Funding statement: This work is supported by the National Natural Science Foundation of China (No. 11531005 and No. 91330101).

A Appendix

In this section we consider the convergence of our Prediction–Correction ADMM algorithm when it is applied to the proposed models; the notation and further details can be found in [18]. In general, our TGV multiplicative noise removal models can be written as the following constrained minimization problem:

(A.1)
\[
\begin{aligned}
&\min_{x,y,t,p}\ \theta_1(x)+\theta_2(y)+\theta_3(t)+\theta_4(p)\\
&\text{subject to}\quad Ax+By+Ct=b_1,\quad Ly+Ep=b_2,\\
&\phantom{\text{subject to}\quad} x\in\mathcal{X},\ y\in\mathcal{Y},\ t\in\mathcal{T},\ p\in\mathcal{P}.
\end{aligned}
\]

The augmented Lagrangian function is written as

\[
\begin{aligned}
L_{\mathcal{A}}(x,y,t,p)={}&\theta_1(x)+\theta_2(y)+\theta_3(t)+\theta_4(p)\\
&-\langle\lambda_1,\,Ax+By+Ct-b_1\rangle+\frac{\beta_1}{2}\|Ax+By+Ct-b_1\|^2\\
&-\langle\lambda_2,\,Ly+Ep-b_2\rangle+\frac{\beta_2}{2}\|Ly+Ep-b_2\|^2,\qquad\lambda_1\in\Lambda_1,\ \lambda_2\in\Lambda_2,
\end{aligned}
\]

and the Prediction–Correction method first generates a predictor.

Prediction.

(1) Given $(x^k,y^k,t^k,p^k,\lambda_1^k,\lambda_2^k)\in\Omega=\mathcal{X}\times\mathcal{Y}\times\mathcal{T}\times\mathcal{P}\times\Lambda_1\times\Lambda_2$, we obtain $\tilde{x}^k,\tilde{y}^k,\tilde{t}^k$ and $\tilde{p}^k$ in the following parallel manner:

(A.2)
\[
\begin{aligned}
\tilde{x}^k&=\arg\min_{x}\ \theta_1(x)-\langle\lambda_1^k,\,Ax+By^k+Ct^k-b_1\rangle+\frac{\beta_1}{2}\|Ax+By^k+Ct^k-b_1\|^2,\\
\tilde{y}^k&=\arg\min_{y}\ \theta_2(y)-\langle\lambda_1^k,\,Ax^k+By+Ct^k-b_1\rangle+\frac{\beta_1}{2}\|Ax^k+By+Ct^k-b_1\|^2\\
&\qquad\qquad\quad-\langle\lambda_2^k,\,Ly+Ep^k-b_2\rangle+\frac{\beta_2}{2}\|Ly+Ep^k-b_2\|^2,\\
\tilde{t}^k&=\arg\min_{t}\ \theta_3(t)-\langle\lambda_1^k,\,Ax^k+By^k+Ct-b_1\rangle+\frac{\beta_1}{2}\|Ax^k+By^k+Ct-b_1\|^2,\\
\tilde{p}^k&=\arg\min_{p}\ \theta_4(p)-\langle\lambda_2^k,\,Ly^k+Ep-b_2\rangle+\frac{\beta_2}{2}\|Ly^k+Ep-b_2\|^2.
\end{aligned}
\]

(2) Update $\tilde{\lambda}_1^k$, $\tilde{\lambda}_2^k$ by

(A.3)
\[
\begin{aligned}
\tilde{\lambda}_1^k&=\lambda_1^k-\beta_1(A\tilde{x}^k+B\tilde{y}^k+C\tilde{t}^k-b_1),\\
\tilde{\lambda}_2^k&=\lambda_2^k-\beta_2(L\tilde{y}^k+E\tilde{p}^k-b_2).
\end{aligned}
\]
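To make the structure of the prediction step concrete, the following Python sketch illustrates the order of the updates. It is only a schematic illustration under simplifying assumptions: the operators are represented as plain matrices, the iterates as NumPy arrays, and the routines solvers["theta1"], ..., solvers["theta4"] are hypothetical placeholders for whatever method is used to solve the four subproblems in (A.2).

def prediction_step(w, ops, b1, b2, beta1, beta2, solvers):
    # w: dict with the current iterate x, y, t, p, lam1, lam2 (NumPy arrays).
    # ops: dict with the matrices A, B, C, L, E.
    # solvers: dict of callables; solvers["theta1"](lam, c, beta) is assumed to return
    #   argmin_x theta_1(x) - <lam, A x + c> + (beta/2) ||A x + c||^2, and similarly for
    #   theta_3 and theta_4; solvers["theta2"] handles the two quadratic terms in y.
    A, B, C, L, E = ops["A"], ops["B"], ops["C"], ops["L"], ops["E"]
    x, y, t, p = w["x"], w["y"], w["t"], w["p"]
    lam1, lam2 = w["lam1"], w["lam2"]

    # The four minimizations in (A.2) are mutually independent ("parallel manner").
    x_t = solvers["theta1"](lam1, B @ y + C @ t - b1, beta1)
    y_t = solvers["theta2"](lam1, lam2, A @ x + C @ t - b1, E @ p - b2, beta1, beta2)
    t_t = solvers["theta3"](lam1, A @ x + B @ y - b1, beta1)
    p_t = solvers["theta4"](lam2, L @ y - b2, beta2)

    # Multiplier prediction (A.3), using the freshly predicted primal variables.
    lam1_t = lam1 - beta1 * (A @ x_t + B @ y_t + C @ t_t - b1)
    lam2_t = lam2 - beta2 * (L @ y_t + E @ p_t - b2)
    return {"x": x_t, "y": y_t, "t": t_t, "p": p_t, "lam1": lam1_t, "lam2": lam2_t}

In the actual models the subproblems have specific TGV structure; the sketch above only fixes the order of operations of the prediction step.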

Note that the solution $(\tilde{x}^k,\tilde{y}^k,\tilde{t}^k,\tilde{p}^k)$ of (A.2) satisfies the variational inequalities

(A.4)
\[
\begin{cases}
\theta_1(x)-\theta_1(\tilde{x}^k)+(x-\tilde{x}^k)^T\bigl(-A^T\lambda_1^k+\beta_1 A^T(A\tilde{x}^k+By^k+Ct^k-b_1)\bigr)\geq 0,\\[1mm]
\theta_2(y)-\theta_2(\tilde{y}^k)+(y-\tilde{y}^k)^T\bigl(-B^T\lambda_1^k-L^T\lambda_2^k+\beta_1 B^T(Ax^k+B\tilde{y}^k+Ct^k-b_1)+\beta_2 L^T(L\tilde{y}^k+Ep^k-b_2)\bigr)\geq 0,\\[1mm]
\theta_3(t)-\theta_3(\tilde{t}^k)+(t-\tilde{t}^k)^T\bigl(-C^T\lambda_1^k+\beta_1 C^T(Ax^k+By^k+C\tilde{t}^k-b_1)\bigr)\geq 0,\\[1mm]
\theta_4(p)-\theta_4(\tilde{p}^k)+(p-\tilde{p}^k)^T\bigl(-E^T\lambda_2^k+\beta_2 E^T(Ly^k+E\tilde{p}^k-b_2)\bigr)\geq 0
\end{cases}
\]

for all $(x,y,t,p)$, and formula (A.3) can be rewritten as

(A.5)
\[
\begin{aligned}
(\lambda_1-\tilde{\lambda}_1^k)^T\bigl(\tilde{\lambda}_1^k-\lambda_1^k+\beta_1(A\tilde{x}^k+B\tilde{y}^k+C\tilde{t}^k-b_1)\bigr)&\geq 0,\\
(\lambda_2-\tilde{\lambda}_2^k)^T\bigl(\tilde{\lambda}_2^k-\lambda_2^k+\beta_2(L\tilde{y}^k+E\tilde{p}^k-b_2)\bigr)&\geq 0
\end{aligned}
\]

for all $(\lambda_1,\lambda_2)$.

By denoting
\[
u=(x,y,t,p),\qquad \theta(u)=\theta_1(x)+\theta_2(y)+\theta_3(t)+\theta_4(p)
\]
and $w=(x,y,t,p,\lambda_1,\lambda_2)$, we can combine (A.4) and (A.5) into the following variational inequality:

(A.6)
\[
\theta(u)-\theta(\tilde{u}^k)+
\begin{pmatrix}x-\tilde{x}^k\\ y-\tilde{y}^k\\ t-\tilde{t}^k\\ p-\tilde{p}^k\\ \lambda_1-\tilde{\lambda}_1^k\\ \lambda_2-\tilde{\lambda}_2^k\end{pmatrix}^T
\left\{
\begin{pmatrix}-A^T\tilde{\lambda}_1^k\\ -B^T\tilde{\lambda}_1^k-L^T\tilde{\lambda}_2^k\\ -C^T\tilde{\lambda}_1^k\\ -E^T\tilde{\lambda}_2^k\\ A\tilde{x}^k+B\tilde{y}^k+C\tilde{t}^k-b_1\\ L\tilde{y}^k+E\tilde{p}^k-b_2\end{pmatrix}
+\beta_1\begin{pmatrix}A^T\\ B^T\\ C^T\\ 0\\ 0\\ 0\end{pmatrix}\bigl(A(x^k-\tilde{x}^k)+B(y^k-\tilde{y}^k)+C(t^k-\tilde{t}^k)\bigr)
+\beta_2\begin{pmatrix}0\\ L^T\\ 0\\ E^T\\ 0\\ 0\end{pmatrix}\bigl(L(y^k-\tilde{y}^k)+E(p^k-\tilde{p}^k)\bigr)
+\begin{pmatrix}
\beta_1A^TA & 0 & 0 & 0 & 0 & 0\\
0 & \beta_1B^TB+\beta_2L^TL & 0 & 0 & 0 & 0\\
0 & 0 & \beta_1C^TC & 0 & 0 & 0\\
0 & 0 & 0 & \beta_2E^TE & 0 & 0\\
0 & 0 & 0 & 0 & \tfrac{1}{\beta_1}I & 0\\
0 & 0 & 0 & 0 & 0 & \tfrac{1}{\beta_2}I
\end{pmatrix}
\begin{pmatrix}\tilde{x}^k-x^k\\ \tilde{y}^k-y^k\\ \tilde{t}^k-t^k\\ \tilde{p}^k-p^k\\ \tilde{\lambda}_1^k-\lambda_1^k\\ \tilde{\lambda}_2^k-\lambda_2^k\end{pmatrix}
\right\}\geq 0
\]

for all $w\in\Omega$. Here and in what follows, $F(w)$ denotes the mapping given by the first vector inside the braces of (A.6), with $\tilde{w}^k$ replaced by $w$; its linear part is skew-symmetric, so $F$ is monotone.

Lemma 1.

Given $w^k=(x^k,y^k,t^k,p^k,\lambda_1^k,\lambda_2^k)$, let $\tilde{w}^k=(\tilde{x}^k,\tilde{y}^k,\tilde{t}^k,\tilde{p}^k,\tilde{\lambda}_1^k,\tilde{\lambda}_2^k)\in\Omega$ be generated from $w^k$ by (A.2) and (A.3), and denote by $w^*=(x^*,y^*,t^*,p^*,\lambda_1^*,\lambda_2^*)$ the solution (the saddle point) of the Lagrange function of the primal minimization problem (A.1). Then we have

(A.7)
\[
(\tilde{w}^k-w^*)^T H(w^k-\tilde{w}^k)\geq(\tilde{w}^k-w^*)^T\bigl(F(\tilde{w}^k)+\eta_1(u_1^k,\tilde{u}_1^k)+\eta_2(u_2^k,\tilde{u}_2^k)\bigr),
\]

where

\[
\eta_1(u_1^k,\tilde{u}_1^k)=\beta_1\begin{pmatrix}A^T\\ B^T\\ C^T\\ 0\\ 0\\ 0\end{pmatrix}\bigl(A(x^k-\tilde{x}^k)+B(y^k-\tilde{y}^k)+C(t^k-\tilde{t}^k)\bigr),\qquad
\eta_2(u_2^k,\tilde{u}_2^k)=\beta_2\begin{pmatrix}0\\ L^T\\ 0\\ E^T\\ 0\\ 0\end{pmatrix}\bigl(L(y^k-\tilde{y}^k)+E(p^k-\tilde{p}^k)\bigr)
\]

and

\[
H=\begin{pmatrix}
\beta_1A^TA & 0 & 0 & 0 & 0 & 0\\
0 & \beta_1B^TB+\beta_2L^TL & 0 & 0 & 0 & 0\\
0 & 0 & \beta_1C^TC & 0 & 0 & 0\\
0 & 0 & 0 & \beta_2E^TE & 0 & 0\\
0 & 0 & 0 & 0 & \tfrac{1}{\beta_1}I & 0\\
0 & 0 & 0 & 0 & 0 & \tfrac{1}{\beta_2}I
\end{pmatrix}.
\]

Proof.

We set $(x,y,t,p,\lambda_1,\lambda_2)=(x^*,y^*,t^*,p^*,\lambda_1^*,\lambda_2^*)$ in (A.6); the assertion then follows directly. ∎

Since $F$ is monotone and $\tilde{w}^k\in\Omega$, it follows that

(A.8)
\[
(\tilde{w}^k-w^*)^T F(\tilde{w}^k)\geq(\tilde{w}^k-w^*)^T F(w^*)\geq 0.
\]

In addition, by using

\[
Ax^*+By^*+Ct^*=b_1,\qquad Ly^*+Ep^*=b_2
\]

and

\[
\beta_1(A\tilde{x}^k+B\tilde{y}^k+C\tilde{t}^k-b_1)=\lambda_1^k-\tilde{\lambda}_1^k,\qquad
\beta_2(L\tilde{y}^k+E\tilde{p}^k-b_2)=\lambda_2^k-\tilde{\lambda}_2^k,
\]

we have

(A.9)
\[
\begin{aligned}
(\tilde{w}^k-w^*)^T\bigl[\eta_1(u_1^k,\tilde{u}_1^k)+\eta_2(u_2^k,\tilde{u}_2^k)\bigr]
={}&\bigl[A(x^k-\tilde{x}^k)+B(y^k-\tilde{y}^k)+C(t^k-\tilde{t}^k)\bigr]^T\beta_1(A\tilde{x}^k+B\tilde{y}^k+C\tilde{t}^k-b_1)\\
&+\bigl[L(y^k-\tilde{y}^k)+E(p^k-\tilde{p}^k)\bigr]^T\beta_2(L\tilde{y}^k+E\tilde{p}^k-b_2)\\
={}&(\lambda_1^k-\tilde{\lambda}_1^k)^T\bigl[A(x^k-\tilde{x}^k)+B(y^k-\tilde{y}^k)+C(t^k-\tilde{t}^k)\bigr]\\
&+(\lambda_2^k-\tilde{\lambda}_2^k)^T\bigl[L(y^k-\tilde{y}^k)+E(p^k-\tilde{p}^k)\bigr].
\end{aligned}
\]

Lemma 2.

Given $w^k=(x^k,y^k,t^k,p^k,\lambda_1^k,\lambda_2^k)$, let $\tilde{w}^k=(\tilde{x}^k,\tilde{y}^k,\tilde{t}^k,\tilde{p}^k,\tilde{\lambda}_1^k,\tilde{\lambda}_2^k)\in\Omega$ be generated from $w^k$ by (A.2) and (A.3). Then we have

(A.10)
\[
(w^k-w^*)^T H(w^k-\tilde{w}^k)\geq\varphi(w^k,\tilde{w}^k),
\]
where $w^*$ is the saddle point from Lemma 1 and
\[
\varphi(w^k,\tilde{w}^k)=\|w^k-\tilde{w}^k\|_H^2+(\lambda_1^k-\tilde{\lambda}_1^k)^T\bigl[A(x^k-\tilde{x}^k)+B(y^k-\tilde{y}^k)+C(t^k-\tilde{t}^k)\bigr]+(\lambda_2^k-\tilde{\lambda}_2^k)^T\bigl[L(y^k-\tilde{y}^k)+E(p^k-\tilde{p}^k)\bigr].
\]

Proof.

Firstly, using (A.7), (A.8) and (A.9), we obtain

\[
(\tilde{w}^k-w^*)^T H(w^k-\tilde{w}^k)\geq(\lambda_1^k-\tilde{\lambda}_1^k)^T\bigl[A(x^k-\tilde{x}^k)+B(y^k-\tilde{y}^k)+C(t^k-\tilde{t}^k)\bigr]+(\lambda_2^k-\tilde{\lambda}_2^k)^T\bigl[L(y^k-\tilde{y}^k)+E(p^k-\tilde{p}^k)\bigr].
\]

Since $(w^k-w^*)^T H(w^k-\tilde{w}^k)=\|w^k-\tilde{w}^k\|_H^2+(\tilde{w}^k-w^*)^T H(w^k-\tilde{w}^k)$, the assertion of the lemma follows directly from the last inequality and the definition of $\varphi(w^k,\tilde{w}^k)$. ∎

Now we consider the right-hand side of (A.10). Note that

\[
\begin{aligned}
\varphi(w^k,\tilde{w}^k)={}&\|w^k-\tilde{w}^k\|_H^2+(\lambda_1^k-\tilde{\lambda}_1^k)^T\bigl[A(x^k-\tilde{x}^k)+B(y^k-\tilde{y}^k)+C(t^k-\tilde{t}^k)\bigr]\\
&+(\lambda_2^k-\tilde{\lambda}_2^k)^T\bigl[L(y^k-\tilde{y}^k)+E(p^k-\tilde{p}^k)\bigr]\\[1mm]
={}&\begin{pmatrix}x^k-\tilde{x}^k\\ y^k-\tilde{y}^k\\ t^k-\tilde{t}^k\\ p^k-\tilde{p}^k\\ \lambda_1^k-\tilde{\lambda}_1^k\\ \lambda_2^k-\tilde{\lambda}_2^k\end{pmatrix}^T
\begin{pmatrix}
\beta_1A^TA & 0 & 0 & 0 & \tfrac12A^T & 0\\
0 & \beta_1B^TB+\beta_2L^TL & 0 & 0 & \tfrac12B^T & \tfrac12L^T\\
0 & 0 & \beta_1C^TC & 0 & \tfrac12C^T & 0\\
0 & 0 & 0 & \beta_2E^TE & 0 & \tfrac12E^T\\
\tfrac12A & \tfrac12B & \tfrac12C & 0 & \tfrac{1}{\beta_1}I & 0\\
0 & \tfrac12L & 0 & \tfrac12E & 0 & \tfrac{1}{\beta_2}I
\end{pmatrix}
\begin{pmatrix}x^k-\tilde{x}^k\\ y^k-\tilde{y}^k\\ t^k-\tilde{t}^k\\ p^k-\tilde{p}^k\\ \lambda_1^k-\tilde{\lambda}_1^k\\ \lambda_2^k-\tilde{\lambda}_2^k\end{pmatrix}\\[1mm]
={}&\begin{pmatrix}x^k-\tilde{x}^k\\ y^k-\tilde{y}^k\\ t^k-\tilde{t}^k\\ \lambda_1^k-\tilde{\lambda}_1^k\end{pmatrix}^T
\begin{pmatrix}
\beta_1A^TA & 0 & 0 & \tfrac12A^T\\
0 & \beta_1B^TB & 0 & \tfrac12B^T\\
0 & 0 & \beta_1C^TC & \tfrac12C^T\\
\tfrac12A & \tfrac12B & \tfrac12C & \tfrac{1}{\beta_1}I
\end{pmatrix}
\begin{pmatrix}x^k-\tilde{x}^k\\ y^k-\tilde{y}^k\\ t^k-\tilde{t}^k\\ \lambda_1^k-\tilde{\lambda}_1^k\end{pmatrix}
+\begin{pmatrix}y^k-\tilde{y}^k\\ p^k-\tilde{p}^k\\ \lambda_2^k-\tilde{\lambda}_2^k\end{pmatrix}^T
\begin{pmatrix}
\beta_2L^TL & 0 & \tfrac12L^T\\
0 & \beta_2E^TE & \tfrac12E^T\\
\tfrac12L & \tfrac12E & \tfrac{1}{\beta_2}I
\end{pmatrix}
\begin{pmatrix}y^k-\tilde{y}^k\\ p^k-\tilde{p}^k\\ \lambda_2^k-\tilde{\lambda}_2^k\end{pmatrix}\\[1mm]
={}&\begin{pmatrix}\sqrt{\beta_1}\,A(x^k-\tilde{x}^k)\\ \sqrt{\beta_1}\,B(y^k-\tilde{y}^k)\\ \sqrt{\beta_1}\,C(t^k-\tilde{t}^k)\\ \tfrac{1}{\sqrt{\beta_1}}(\lambda_1^k-\tilde{\lambda}_1^k)\end{pmatrix}^T
\begin{pmatrix}
I & 0 & 0 & \tfrac12I\\
0 & I & 0 & \tfrac12I\\
0 & 0 & I & \tfrac12I\\
\tfrac12I & \tfrac12I & \tfrac12I & I
\end{pmatrix}
\begin{pmatrix}\sqrt{\beta_1}\,A(x^k-\tilde{x}^k)\\ \sqrt{\beta_1}\,B(y^k-\tilde{y}^k)\\ \sqrt{\beta_1}\,C(t^k-\tilde{t}^k)\\ \tfrac{1}{\sqrt{\beta_1}}(\lambda_1^k-\tilde{\lambda}_1^k)\end{pmatrix}
+\begin{pmatrix}\sqrt{\beta_2}\,L(y^k-\tilde{y}^k)\\ \sqrt{\beta_2}\,E(p^k-\tilde{p}^k)\\ \tfrac{1}{\sqrt{\beta_2}}(\lambda_2^k-\tilde{\lambda}_2^k)\end{pmatrix}^T
\begin{pmatrix}
I & 0 & \tfrac12I\\
0 & I & \tfrac12I\\
\tfrac12I & \tfrac12I & I
\end{pmatrix}
\begin{pmatrix}\sqrt{\beta_2}\,L(y^k-\tilde{y}^k)\\ \sqrt{\beta_2}\,E(p^k-\tilde{p}^k)\\ \tfrac{1}{\sqrt{\beta_2}}(\lambda_2^k-\tilde{\lambda}_2^k)\end{pmatrix}.
\end{aligned}
\]

Because the eigenvalues of the matrix
\[
\begin{pmatrix}1&0&0&\tfrac12\\ 0&1&0&\tfrac12\\ 0&0&1&\tfrac12\\ \tfrac12&\tfrac12&\tfrac12&1\end{pmatrix}
\]
are $1,\,1,\,1\pm\tfrac{\sqrt{3}}{2}$ and the eigenvalues of the matrix
\[
\begin{pmatrix}1&0&\tfrac12\\ 0&1&\tfrac12\\ \tfrac12&\tfrac12&1\end{pmatrix}
\]
are $1,\,1\pm\tfrac{\sqrt{2}}{2}$, the smallest eigenvalues of the two matrices are $1-\tfrac{\sqrt{3}}{2}$ and $1-\tfrac{\sqrt{2}}{2}$, respectively. Therefore,

(A.11)
\[
\begin{aligned}
\varphi(w^k,\tilde{w}^k)&\geq\frac{2-\sqrt{3}}{2}\left\|\begin{pmatrix}\sqrt{\beta_1}\,A(x^k-\tilde{x}^k)\\ \sqrt{\beta_1}\,B(y^k-\tilde{y}^k)\\ \sqrt{\beta_1}\,C(t^k-\tilde{t}^k)\\ \tfrac{1}{\sqrt{\beta_1}}(\lambda_1^k-\tilde{\lambda}_1^k)\end{pmatrix}\right\|^2
+\frac{2-\sqrt{2}}{2}\left\|\begin{pmatrix}\sqrt{\beta_2}\,L(y^k-\tilde{y}^k)\\ \sqrt{\beta_2}\,E(p^k-\tilde{p}^k)\\ \tfrac{1}{\sqrt{\beta_2}}(\lambda_2^k-\tilde{\lambda}_2^k)\end{pmatrix}\right\|^2\\
&\geq\frac{2-\sqrt{3}}{2}\,\|w^k-\tilde{w}^k\|_H^2.
\end{aligned}
\]
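The eigenvalue claim above is elementary to verify; as a numerical sanity check (not part of the argument, with the identity blocks $I$ replaced by scalars), the following short NumPy snippet reproduces the stated spectra:

import numpy as np

# 4x4 block pattern appearing in the first quadratic form.
M4 = np.array([[1.0, 0.0, 0.0, 0.5],
               [0.0, 1.0, 0.0, 0.5],
               [0.0, 0.0, 1.0, 0.5],
               [0.5, 0.5, 0.5, 1.0]])
# 3x3 block pattern appearing in the second quadratic form.
M3 = np.array([[1.0, 0.0, 0.5],
               [0.0, 1.0, 0.5],
               [0.5, 0.5, 1.0]])

print(np.linalg.eigvalsh(M4))  # approx [0.1340, 1.0, 1.0, 1.8660] = 1 - sqrt(3)/2, 1, 1, 1 + sqrt(3)/2
print(np.linalg.eigvalsh(M3))  # approx [0.2929, 1.0, 1.7071]      = 1 - sqrt(2)/2, 1, 1 + sqrt(2)/2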

Correction.

Based on the predictor generated by (A.2) and (A.3), we update the new iterate $w^{k+1}$ by

(A.12)
\[
w^{k+1}=w^k-\alpha_k(w^k-\tilde{w}^k),
\]

where

\[
\alpha_k=\frac{\varphi(w^k,\tilde{w}^k)}{\|w^k-\tilde{w}^k\|_H^2}.
\]
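A matching sketch of the correction step, under the same hypothetical data layout as the prediction sketch above (iterates as 1-D NumPy arrays, operators as explicit matrices, and the block matrix $H$ of Lemma 1), could look as follows:

import numpy as np

def correction_step(w, w_tilde, ops, beta1, beta2):
    # Correction (A.12): w^{k+1} = w^k - alpha_k (w^k - w~^k).
    A, B, C, L, E = ops["A"], ops["B"], ops["C"], ops["L"], ops["E"]
    dx = w["x"] - w_tilde["x"]
    dy = w["y"] - w_tilde["y"]
    dt = w["t"] - w_tilde["t"]
    dp = w["p"] - w_tilde["p"]
    dl1 = w["lam1"] - w_tilde["lam1"]
    dl2 = w["lam2"] - w_tilde["lam2"]

    # ||w^k - w~^k||_H^2 for the block matrix H of Lemma 1.
    h_norm_sq = (beta1 * (np.dot(A @ dx, A @ dx) + np.dot(B @ dy, B @ dy) + np.dot(C @ dt, C @ dt))
                 + beta2 * (np.dot(L @ dy, L @ dy) + np.dot(E @ dp, E @ dp))
                 + np.dot(dl1, dl1) / beta1 + np.dot(dl2, dl2) / beta2)

    # phi(w^k, w~^k) as defined after (A.10), and the resulting step size alpha_k.
    phi = (h_norm_sq
           + np.dot(dl1, A @ dx + B @ dy + C @ dt)
           + np.dot(dl2, L @ dy + E @ dp))
    alpha = phi / h_norm_sq

    return {key: w[key] - alpha * (w[key] - w_tilde[key]) for key in w}

By (A.11), the computed step size satisfies $\alpha_k\geq\frac{2-\sqrt{3}}{2}$, so it stays bounded away from zero.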

By using (A.10) and (A.12), we obtain

\[
\begin{aligned}
\|w^{k+1}-w^*\|_H^2&=\|(w^k-w^*)-\alpha_k(w^k-\tilde{w}^k)\|_H^2\\
&=\|w^k-w^*\|_H^2-2\alpha_k(w^k-w^*)^T H(w^k-\tilde{w}^k)+\alpha_k^2\|w^k-\tilde{w}^k\|_H^2\\
&\leq\|w^k-w^*\|_H^2-2\alpha_k\varphi(w^k,\tilde{w}^k)+\alpha_k^2\|w^k-\tilde{w}^k\|_H^2\\
&=\|w^k-w^*\|_H^2-\alpha_k\varphi(w^k,\tilde{w}^k).
\end{aligned}
\]

Furthermore, based on (A.11) and the definition of $\alpha_k$, we obtain the contraction inequality

\[
\|w^{k+1}-w^*\|_H^2\leq\|w^k-w^*\|_H^2-\frac{7-4\sqrt{3}}{4}\,\|w^k-\tilde{w}^k\|_H^2.
\]
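For concreteness, the constant $\tfrac{7-4\sqrt{3}}{4}$ arises from squaring the lower bound in (A.11) in the definition of $\alpha_k$:
\[
\alpha_k\,\varphi(w^k,\tilde{w}^k)=\frac{\varphi(w^k,\tilde{w}^k)^2}{\|w^k-\tilde{w}^k\|_H^2}\geq\Bigl(\frac{2-\sqrt{3}}{2}\Bigr)^2\|w^k-\tilde{w}^k\|_H^2=\frac{7-4\sqrt{3}}{4}\,\|w^k-\tilde{w}^k\|_H^2\approx 0.018\,\|w^k-\tilde{w}^k\|_H^2.
\]
Summing the contraction inequality over $k$ shows that $\sum_k\|w^k-\tilde{w}^k\|_H^2<\infty$, hence $\|w^k-\tilde{w}^k\|_H\to 0$ while $\|w^k-w^*\|_H$ is non-increasing, which is the standard Fejér-monotonicity argument used, e.g., in [18].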

References

[1] G. Aubert and J.-F. Aujol, A variational approach to removing multiplicative noise, SIAM J. Appl. Math. 68 (2008), no. 4, 925–946, doi:10.1137/060671814.

[2] K. Bredies, K. Kunisch and T. Pock, Total generalized variation, SIAM J. Imaging Sci. 3 (2010), no. 3, 492–526, doi:10.1137/090769521.

[3] K. Bredies and H. P. Sun, Preconditioned Douglas–Rachford algorithms for TV- and TGV-regularized variational imaging problems, J. Math. Imaging Vision 52 (2015), no. 3, 317–344, doi:10.1007/s10851-015-0564-1.

[4] K. Bredies and T. Valkonen, Inverse problems with second-order total generalized variation constraints, Ninth International Conference on Sampling Theory and Applications (Singapore 2011), Nanyang Technological University, Singapore (2011), https://imsc.uni-graz.at/bredies/papers/SampTA2011.pdf.

[5] A. Chambolle and P.-L. Lions, Image recovery via total variation minimization and related problems, Numer. Math. 76 (1997), no. 2, 167–188, doi:10.1007/s002110050258.

[6] A. Chambolle and T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vision 40 (2011), no. 1, 120–145, doi:10.1007/s10851-010-0251-1.

[7] R. H. Chan, M. Tao and X. Yuan, Linearized alternating direction method of multipliers for constrained linear least-squares problem, East Asian J. Appl. Math. 2 (2012), no. 4, 326–341, doi:10.4208/eajam.270812.161112a.

[8] R. H. Chan, J. Yang and X. Yuan, Alternating direction method for image inpainting in wavelet domains, SIAM J. Imaging Sci. 4 (2011), no. 3, 807–826, doi:10.1137/100807247.

[9] T. Chan, A. Marquina and P. Mulet, High-order total variation-based image restoration, SIAM J. Sci. Comput. 22 (2000), no. 2, 503–516, doi:10.1137/S1064827598344169.

[10] D.-Q. Chen and L.-Z. Cheng, Fast linearized alternating direction minimization algorithm with adaptive parameter selection for multiplicative noise removal, J. Comput. Appl. Math. 257 (2014), 29–45, doi:10.1016/j.cam.2013.08.012.

[11] Y. Dong and T. Zeng, A convex variational model for restoring blurred images with multiplicative noise, SIAM J. Imaging Sci. 6 (2013), no. 3, 1598–1625, doi:10.1137/120870621.

[12] V. Dutt and J. Greenleaf, Adaptive speckle reduction filter for log-compressed B-scan images, IEEE Trans. Med. Imag. 15 (1996), no. 6, 802–813, doi:10.1109/42.544498.

[13] J. Eckstein and D. P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators, Math. Program. 55 (1992), no. 3, 293–318, doi:10.1007/BF01581204.

[14] E. Esser, Applications of Lagrangian-based alternating direction methods and connections to split Bregman, UCLA CAM Report 09-31, UCLA, Los Angeles, 2009.

[15] T. Goldstein, B. O’Donoghue, S. Setzer and R. Baraniuk, Fast alternating direction optimization methods, preprint (2012), ftp://ftp.math.ucla.edu/pub/camreport/cam12-35.pdf, doi:10.1137/120896219.

[16] T. Goldstein and S. Osher, The split Bregman method for L1-regularized problems, SIAM J. Imaging Sci. 2 (2009), no. 2, 323–343, doi:10.1137/080725891.

[17] X. H. Hao, S. K. Gao and X. R. Gao, A novel multiscale nonlinear thresholding method for ultrasonic speckle suppressing, IEEE Trans. Med. Imag. 18 (1999), 787–794, doi:10.1109/42.802756.

[18] B. S. He, Parallel splitting augmented Lagrangian methods for monotone structured variational inequalities, Comput. Optim. Appl. 42 (2009), no. 2, 195–212, doi:10.1007/s10589-007-9109-x.

[19] B. S. He, L. Z. Liao, D. Han and H. Yang, A new inexact alternating directions method for monotone variational inequalities, Math. Program. 92 (2002), no. 1, 103–118, doi:10.1007/s101070100280.

[20] B. S. He, H. Yang and S. L. Wang, Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities, J. Optim. Theory Appl. 106 (2000), no. 2, 337–356, doi:10.1023/A:1004603514434.

[21] J. Huang and X. P. Yang, Fast reduction of speckle noise in real ultrasound images, Signal Process. 93 (2013), 684–694, doi:10.1016/j.sigpro.2012.09.005.

[22] Y.-M. Huang, D.-Y. Lu and T. Zeng, Two-step approach for the restoration of images corrupted by multiplicative noise, SIAM J. Sci. Comput. 35 (2013), no. 6, 2856–2873, doi:10.1137/120898693.

[23] Y.-M. Huang, M. K. Ng and Y.-W. Wen, A new total variation method for multiplicative noise removal, SIAM J. Imaging Sci. 2 (2009), no. 1, 20–40, doi:10.1137/080712593.

[24] Z. Jin and X. Yang, A variational model to remove the multiplicative noise in ultrasound images, J. Math. Imaging Vision 39 (2011), no. 1, 62–74, doi:10.1007/s10851-010-0225-3.

[25] D. Kaplan and Q. Ma, On the statistical characteristics of the log-compressed Rayleigh signals: Theoretical formulation and experimental results, J. Acoust. Soc. Amer. 95 (1994), 1396–1400, doi:10.1121/1.408579.

[26] K. Krissian, R. Kikinis, C. F. Westin and K. Vosburgh, Speckle constrained filtering of ultrasound images, IEEE Comput. Vis. Pattern Recogn. 2 (2005), 547–552, doi:10.1109/CVPR.2005.331.

[27] M. Lysaker, A. Lundervold and X. C. Tai, Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time, IEEE Trans. Image Process. 12 (2003), no. 12, 1579–1590, doi:10.1109/TIP.2003.819229.

[28] M. K. Ng, F. Wang and X. Yuan, Inexact alternating direction methods for image recovery, SIAM J. Sci. Comput. 33 (2011), no. 4, 1643–1668, doi:10.1137/100807697.

[29] M. Nikolova, Weakly constrained minimization: Application to the estimation of images and signals involving constant regions, J. Math. Imaging Vision 21 (2004), no. 2, 155–175, doi:10.1023/B:JMIV.0000035180.40477.bd.

[30] Y. Ouyang, Y. Chen, G. Lan and E. Pasiliao, Jr., An accelerated linearized alternating direction method of multipliers, SIAM J. Imaging Sci. 8 (2015), no. 1, 644–681, doi:10.1137/14095697X.

[31] L. I. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Phys. D 60 (1992), no. 1–4, 259–268, doi:10.1016/0167-2789(92)90242-F.

[32] J. Shi and S. Osher, A nonlinear inverse scale space method for a convex multiplicative noise model, SIAM J. Imaging Sci. 1 (2008), no. 3, 294–321, doi:10.1137/070689954.

[33] G. Steidl and T. Teuber, Removing multiplicative noise by Douglas–Rachford splitting methods, J. Math. Imaging Vision 36 (2010), no. 2, 168–184, doi:10.1007/s10851-009-0179-5.

[34] T. Valkonen, K. Bredies and F. Knoll, Total generalized variation in diffusion tensor imaging, SIAM J. Imaging Sci. 6 (2013), no. 1, 487–525, doi:10.1137/120867172.

[35] L. Vese, A study in the BV space of a denoising-deblurring variational problem, Appl. Math. Optim. 44 (2001), no. 2, 131–161, doi:10.1007/s00245-001-0017-7.

[36] H. Woo and S. Yun, Proximal linearized alternating direction method for multiplicative denoising, SIAM J. Sci. Comput. 35 (2013), no. 2, B336–B336, doi:10.1137/11083811X.

[37] C. Wu and X.-C. Tai, Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models, SIAM J. Imaging Sci. 3 (2010), no. 3, 300–339, doi:10.1137/090767558.

[38] C. Wu, J. Zhang, Y. Duan and X.-C. Tai, Augmented Lagrangian method for total variation based image restoration and segmentation over triangulated surfaces, J. Sci. Comput. 50 (2012), no. 1, 145–166, doi:10.1007/s10915-011-9477-3.

[39] Y.-H. Xiao and H.-N. Song, An inexact alternating directions algorithm for constrained total variation regularized compressive sensing problems, J. Math. Imaging Vision 44 (2012), no. 2, 114–127, doi:10.1007/s10851-011-0314-y.

[40] J. Yang and X. Yuan, Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization, Math. Comp. 82 (2013), no. 281, 301–329, doi:10.1090/S0025-5718-2012-02598-1.

[41] J. Yang and Y. Zhang, Alternating direction algorithms for ℓ1-problems in compressive sensing, SIAM J. Sci. Comput. 33 (2011), no. 1, 250–278, doi:10.1137/090777761.

[42] W. Zhu and T. Chan, Image denoising using mean curvature of image surface, SIAM J. Imaging Sci. 5 (2012), no. 1, 1–32, doi:10.1137/110822268.

Received: 2016-08-18
Revised: 2018-01-11
Accepted: 2018-01-29
Published Online: 2018-02-14
Published in Print: 2018-12-01

© 2018 Walter de Gruyter GmbH, Berlin/Boston
