
Robust signal recovery via ℓ1–2/ℓ𝑝 minimization with partially known support

Jing Zhang and Shuguang Zhang
Published/Copyright: October 20, 2022

Abstract

In this paper, we study the robust recovery of signals under general noise via ℓ1–2/ℓp (2 ≤ p < +∞) minimization with partially known support (PKS). A recovery condition for ℓ1–2/ℓp minimization incorporating prior support information is established, and an error estimate is obtained. In particular, the obtained results not only provide a new theoretical guarantee for robustly recovering signals under general noise, but also improve and generalize state-of-the-art results. In addition, a series of numerical experiments is carried out to confirm the validity of the proposed method; the experiments show that ℓ1–2/ℓp minimization with prior support information achieves better recovery performance than plain ℓ1–2/ℓp minimization.
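For readers who wish to experiment with the model, the following is a minimal numerical sketch (not the authors' code) of one standard way to handle the nonconvex ℓ1−ℓ2 objective: a difference-of-convex (DCA) loop that linearizes the −‖x_{T^c}‖₂ term and solves each convex subproblem with CVXPY. The constrained form min ‖x_{T^c}‖₁ − ‖x_{T^c}‖₂ subject to ‖Ax − b‖_p ≤ ε is assumed here, consistent with how problem (1.4) is used in the proof of Theorem 3.1; the problem sizes, noise level, tolerances and solver choice are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' implementation): l1-2/lp minimization
# with partially known support (PKS), solved by a DCA loop that linearizes the
# concave -||x_{T^c}||_2 term; each convex subproblem is handled by CVXPY.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, k, s_known, p = 64, 256, 10, 5, 2          # p = 2 keeps the constraint a simple cone

# Synthetic test problem: sparse x_true, noisy measurements b = A x_true + noise
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
supp = rng.choice(n, size=k, replace=False)
x_true[supp] = rng.standard_normal(k)
noise = 1e-2 * rng.standard_normal(m)
b = A @ x_true + noise
eps = 1.1 * np.linalg.norm(noise, ord=p)         # noise level assumed known

T = supp[:s_known]                               # prior (partially known) support
Tc = np.setdiff1d(np.arange(n), T)               # complement T^c

x_cur = np.zeros(n)
for it in range(20):
    z = x_cur[Tc]
    nz = np.linalg.norm(z)
    g = z / nz if nz > 1e-12 else np.zeros_like(z)   # subgradient of ||x_{T^c}||_2
    x = cp.Variable(n)
    objective = cp.norm(x[Tc], 1) - cp.sum(cp.multiply(g, x[Tc]))
    constraints = [cp.norm(A @ x - b, p) <= eps]
    cp.Problem(cp.Minimize(objective), constraints).solve()
    x_new = x.value
    step = np.linalg.norm(x_new - x_cur)
    x_cur = x_new
    if step <= 1e-6 * max(np.linalg.norm(x_cur), 1.0):
        break

print("relative recovery error:", np.linalg.norm(x_cur - x_true) / np.linalg.norm(x_true))
```

With an empty prior support T the objective reduces to the plain ℓ1–2/ℓp model, which is the natural baseline for the comparison described in the abstract.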

MSC 2010: 49M05; 65K05; 90C26; 90C90

Award Identifier / Grant number: 17LZXY13

Funding statement: This work was supported by the Longshan academic talent research supporting program of SWUST (No. 17LZXY13).

A Appendix

Proof of Lemma 2.3

Suppose $\|y\|_0 = l$; then $y$ is an $l$-sparse vector.

Case 1: For $l \le k_2$, by Lemma 2.2 and the fact that $\|y\|_2 \le \sqrt{l}\,\|y\|_\infty \le \sqrt{k_2}\,\eta$, we have

$|\langle J_p(Ax), Ay\rangle| \le (\mu_p)^2 C_p(A,k_1,l)\,\|x\|_2\|y\|_2 \le (\mu_p)^2 C_p(A,k_1,k_2)\,\sqrt{k_2}\,\eta\,\|x\|_2.$

Case 2: For $l > k_2$, we prove the claim by mathematical induction; the technique is similar to [2, Lemma 5.1]. Assume (2.1) holds for $l-1$. We can write $y$ as $y = \sum_{i=1}^{l} c_i z_i$, where $c_1 \ge c_2 \ge \cdots \ge c_l > 0$ and $\{z_i\}_{i=1}^{l}$ are distinct unit vectors, each with one entry equal to $\pm 1$ and all other entries zero. As we know, $\|y\|_1 = \sum_{i=1}^{l} c_i \le k_2\eta \le (l-1)\eta$. So

$1 \in \Omega := \{1 \le j \le l-1 : c_j + c_{j+1} + \cdots + c_l \le (l-j)\eta\},$

which implies $\Omega$ is not empty. We choose the largest element $j \in \Omega$, which means

(A.1) $c_j + c_{j+1} + \cdots + c_l \le (l-j)\eta$,
(A.2) $c_{j+1} + c_{j+2} + \cdots + c_l > (l-j-1)\eta$.
In order to decompose $y$ into a sum of $(l-1)$-sparse signals, we define

$\xi_w = \frac{\sum_{i=j}^{l} c_i}{l-j} - c_w, \quad j \le w \le l, \qquad \zeta_w = \frac{\xi_w}{\sum_{i=j}^{l}\xi_i}\sum_{i=1}^{j-1} c_i z_i + \xi_w\sum_{i=j}^{l} z_i - \xi_w z_w, \quad j \le w \le l.$

Then, for $j \le w \le l$, we have

$\xi_w \ge \xi_j = \frac{\sum_{i=j}^{l} c_i}{l-j} - c_j = \frac{\sum_{i=j+1}^{l} c_i - (l-j-1)c_j}{l-j} \ge \frac{\sum_{i=j+1}^{l} c_i - (l-j-1)\eta}{l-j} > 0,$

where the last inequality follows from (A.2). By simple computation, we also have

(A.3) $\sum_{i=j}^{l}\xi_i = \frac{(l-j+1)\sum_{i=j}^{l} c_i}{l-j} - \sum_{i=j}^{l} c_i = \frac{\sum_{i=j}^{l} c_i}{l-j},$
$\sum_{w=j}^{l}\zeta_w = \sum_{i=1}^{j-1} c_i z_i + \sum_{w=j}^{l}\xi_w\sum_{i=j}^{l} z_i - \sum_{w=j}^{l}\xi_w z_w = \sum_{i=1}^{j-1} c_i z_i + \frac{\sum_{i=j}^{l} c_i}{l-j}\sum_{i=j}^{l} z_i - \sum_{w=j}^{l}\Big(\frac{\sum_{i=j}^{l} c_i}{l-j} - c_w\Big) z_w$
$= \sum_{i=1}^{j-1} c_i z_i + \frac{\sum_{i=j}^{l} c_i}{l-j}\sum_{i=j}^{l} z_i - \frac{\sum_{i=j}^{l} c_i}{l-j}\sum_{w=j}^{l} z_w + \sum_{w=j}^{l} c_w z_w = \sum_{i=1}^{l} c_i z_i = y,$
where we use (A.3) and the definition of $\xi_w$ in the second equality. On the other hand, from the definition of $\zeta_w$, we can get

$\|\zeta_w\|_1 = \frac{\xi_w}{\sum_{i=j}^{l}\xi_i}\sum_{i=1}^{j-1} c_i + (l-j)\xi_w = \frac{\xi_w}{\sum_{i=j}^{l}\xi_i}\Big(\sum_{i=1}^{j-1} c_i + \sum_{i=j}^{l} c_i\Big) = \frac{\xi_w}{\sum_{i=j}^{l}\xi_i}\,\|y\|_1 \le \frac{\xi_w}{\sum_{i=j}^{l}\xi_i}\,k_2\eta,$

where the second equality is from (A.3), and

$\|\zeta_w\|_\infty = \max\Big\{\frac{\xi_w}{\sum_{i=j}^{l}\xi_i}\,c_1, \ldots, \frac{\xi_w}{\sum_{i=j}^{l}\xi_i}\,c_{j-1}, \xi_w\Big\} \le \max\Big\{\frac{\xi_w}{\sum_{i=j}^{l}\xi_i}\,\eta, \frac{\xi_w\sum_{i=j}^{l} c_i}{(l-j)\sum_{i=j}^{l}\xi_i}\Big\} \le \frac{\xi_w}{\sum_{i=j}^{l}\xi_i}\,\eta,$

where the first inequality is due to $\|y\|_\infty \le \eta$ and (A.3), and the last inequality follows from (A.1). From the definition of $\zeta_w$, we know that $\zeta_w$ is $(l-1)$-sparse, so the induction assumption yields

$|\langle J_p(Ax), Ay\rangle| = \Big|\Big\langle J_p(Ax), \sum_{w=j}^{l} A\zeta_w\Big\rangle\Big| \le \sum_{w=j}^{l}|\langle J_p(Ax), A\zeta_w\rangle| \le \sum_{w=j}^{l}(\mu_p)^2 C_p(A,k_1,k_2)\,\|x\|_2\,\sqrt{k_2}\,\frac{\xi_w}{\sum_{i=j}^{l}\xi_i}\,\eta = (\mu_p)^2 C_p(A,k_1,k_2)\,\sqrt{k_2}\,\eta\,\|x\|_2.$
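As a sanity check on the construction above, the following sketch numerically verifies, for one illustrative choice of $c_1 \ge \cdots \ge c_l$, that the vectors $\zeta_w$ are $(l-1)$-sparse, satisfy the $\ell_1$ bound used in the induction step, and sum back to $y$. The concrete values of $l$, $k_2$, $\eta$ and the coefficients are assumptions made purely for illustration, and the basis vectors $z_i$ are taken to be standard unit vectors.

```python
# Hypothetical numerical check of the decomposition used in the proof of
# Lemma 2.3: a sparse y with ||y||_1 <= k2*eta and ||y||_inf <= eta is split
# into (l-1)-sparse pieces zeta_w that sum to y.  All numbers are illustrative.
import numpy as np

l, k2, eta = 6, 4, 1.0
c = np.array([1.0, 0.9, 0.7, 0.5, 0.5, 0.3])             # c_1 >= ... >= c_l > 0
assert c.sum() <= k2 * eta and c.max() <= eta             # ||y||_1 <= k2*eta, ||y||_inf <= eta

# Largest j in Omega = { 1 <= j <= l-1 : c_j + ... + c_l <= (l-j)*eta }  (1-based j)
j = max(jj for jj in range(1, l) if c[jj - 1:].sum() <= (l - jj) * eta)

# xi_w and zeta_w as defined in the proof, with z_i = e_i (standard basis), so y = c
xi = c[j - 1:].sum() / (l - j) - c[j - 1:]                # xi_w for w = j, ..., l
zetas = []
for w in range(j, l + 1):
    z = np.zeros(l)
    z[: j - 1] = (xi[w - j] / xi.sum()) * c[: j - 1]      # first j-1 coordinates
    z[j - 1:] = xi[w - j]                                 # coordinates j, ..., l ...
    z[w - 1] = 0.0                                        # ... except the w-th one
    zetas.append(z)

assert np.allclose(np.sum(zetas, axis=0), c)              # sum_w zeta_w = y
assert all(np.count_nonzero(z) <= l - 1 for z in zetas)   # each zeta_w is (l-1)-sparse
assert all(np.linalg.norm(z, 1) <= xi[i] / xi.sum() * k2 * eta + 1e-12
           for i, z in enumerate(zetas))                  # l1 bound used in the induction
print("decomposition verified for this example")
```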

B Appendix

Proof of Theorem 3.1

Let $\hat{x} = x + h$, and let $T_1$ be the support of the $k$ largest entries in absolute value of $h_{T^c}$, where $h_{T^c}$ is the restriction of $h$ to the index set $T^c$. Denote $\bar{T}_0 = T \cup T_0$, $\bar{T}_1 = T \cup T_1$, $\bar{T}_0^c = (T \cup T_0)^c$, $\bar{T}_1^c = (T \cup T_1)^c$.

Since $\hat{x}$ is a solution of (1.4), we have

(B.1) $\|(x+h)_{T^c}\|_1 - \|(x+h)_{T^c}\|_2 \le \|x_{T^c}\|_1 - \|x_{T^c}\|_2.$

Due to the fact that $T^c = T_0 \cup \bar{T}_0^c$, it is easy to get

(B.2) $\|(x+h)_{T^c}\|_1 - \|(x+h)_{T^c}\|_2 = \|(x+h)_{T_0}\|_1 + \|(x+h)_{\bar{T}_0^c}\|_1 - \|(x+h)_{T^c}\|_2 \ge \|x_{T_0}\|_1 - \|h_{T_0}\|_1 + \|h_{\bar{T}_0^c}\|_1 - \|x_{\bar{T}_0^c}\|_1 - \|(x+h)_{T^c}\|_2,$
(B.3) $\|x_{T^c}\|_1 - \|x_{T^c}\|_2 = \|x_{T_0}\|_1 + \|x_{\bar{T}_0^c}\|_1 - \|x_{T^c}\|_2.$
Combining (B.1), (B.2) and (B.3), we can get

(B.4) $\|h_{\bar{T}_0^c}\|_1 \le \|h_{T_0}\|_1 + \|h_{T^c}\|_2 + 2\|x_{\bar{T}_0^c}\|_1.$

Then, based on the fact that

$\|h_{\bar{T}_0^c}\|_1 \ge \|h_{\bar{T}_1^c}\|_1, \qquad \|h_{T_0}\|_1 \le \|h_{T_1}\|_1, \qquad \|h_{T^c}\|_2 \le \|h\|_2,$

(B.4) becomes

(B.5) $\|h_{\bar{T}_1^c}\|_1 \le \|h_{T_1}\|_1 + 2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2.$

We know that

$\|h_{\bar{T}_1}\|_0 \le s + k, \qquad \|h_{\bar{T}_1^c}\|_\infty \le \frac{\|h_{T_1}\|_1}{k} \le \frac{\|h_{\bar{T}_1}\|_2}{\sqrt{k}} + \frac{2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2}{k}, \qquad \|h_{\bar{T}_1^c}\|_1 \le k\Big(\frac{\|h_{\bar{T}_1}\|_2}{\sqrt{k}} + \frac{2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2}{k}\Big).$

Then, applying Lemma 2.3 with

$k_1 = k + s, \qquad k_2 = k, \qquad \eta = \frac{\|h_{\bar{T}_1}\|_2}{\sqrt{k}} + \frac{2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2}{k},$

we get

$|\langle J_p(Ah_{\bar{T}_1}), Ah_{\bar{T}_1^c}\rangle| \le (\mu_p)^2 C_p(A,k+s,k)\,\sqrt{k}\,\|h_{\bar{T}_1}\|_2\Big(\frac{\|h_{\bar{T}_1}\|_2}{\sqrt{k}} + \frac{2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2}{k}\Big).$

Therefore, we have

(B.6) $|\langle J_p(Ah_{\bar{T}_1}), Ah\rangle| \ge |\langle J_p(Ah_{\bar{T}_1}), Ah_{\bar{T}_1}\rangle| - |\langle J_p(Ah_{\bar{T}_1}), Ah_{\bar{T}_1^c}\rangle| = \|Ah_{\bar{T}_1}\|_p^2 - |\langle J_p(Ah_{\bar{T}_1}), Ah_{\bar{T}_1^c}\rangle|$
$\ge (\mu_p)^2(1-\delta_{k+s})\|h_{\bar{T}_1}\|_2^2 - (\mu_p)^2 C_p(A,k+s,k)\,\|h_{\bar{T}_1}\|_2\Big(\|h_{\bar{T}_1}\|_2 + \frac{2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2}{\sqrt{k}}\Big).$

Additionally, by Hölder's inequality with $a = \frac{p}{p-1}$ and $b = p$,

(B.7) $|\langle J_p(Ah_{\bar{T}_1}), Ah\rangle| \le \|J_p(Ah_{\bar{T}_1})\|_{\frac{p}{p-1}}\,\|Ah\|_p = \|Ah_{\bar{T}_1}\|_p\,\|Ah\|_p \le \mu_p\sqrt{1+\delta_{k+s}}\,\|h_{\bar{T}_1}\|_2\big(\|A\hat{x}-b\|_p + \|Ax-b\|_p\big) \le 2\mu_p\varepsilon\sqrt{1+\delta_{k+s}}\,\|h_{\bar{T}_1}\|_2,$

so by combining (B.6) and (B.7), we get

$\mu_p\big(1-\delta_{k+s}-C_p(A,k+s,k)\big)\|h_{\bar{T}_1}\|_2 \le 2\sqrt{1+\delta_{k+s}}\,\varepsilon + \frac{\mu_p C_p(A,k+s,k)\big(2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2\big)}{\sqrt{k}},$

and condition (3.1) guarantees $1-\delta_{k+s}-C_p(A,k+s,k) > 0$, so that

$\|h_{\bar{T}_1}\|_2 \le \frac{2\sqrt{1+\delta_{k+s}}\,\varepsilon}{\mu_p\big(1-\delta_{k+s}-C_p(A,k+s,k)\big)} + \frac{C_p(A,k+s,k)\big(2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2\big)}{\sqrt{k}\big(1-\delta_{k+s}-C_p(A,k+s,k)\big)}.$

From (B.5), we can get

$\|h_{\bar{T}_1^c}\|_1 \le \|h_{\bar{T}_1}\|_1 + 2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2,$

and by applying Lemma 2.5 with $d = k+s$ and $\gamma = 2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2$, we further get

$\|h_{\bar{T}_1^c}\|_2 \le \|h_{\bar{T}_1}\|_2 + \frac{2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2}{\sqrt{k+s}}.$

Hence

$\|h\|_2 = \sqrt{\|h_{\bar{T}_1}\|_2^2 + \|h_{\bar{T}_1^c}\|_2^2} \le \sqrt{\|h_{\bar{T}_1}\|_2^2 + \Big(\|h_{\bar{T}_1}\|_2 + \frac{2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2}{\sqrt{k+s}}\Big)^2} \le \sqrt{2}\,\|h_{\bar{T}_1}\|_2 + \frac{2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2}{\sqrt{k+s}}$
$\le \sqrt{2}\,\Big(\frac{2\sqrt{1+\delta_{k+s}}\,\varepsilon}{\mu_p\big(1-\delta_{k+s}-C_p(A,k+s,k)\big)} + \frac{C_p(A,k+s,k)\big(2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2\big)}{\sqrt{k}\big(1-\delta_{k+s}-C_p(A,k+s,k)\big)}\Big) + \frac{2\|x_{\bar{T}_0^c}\|_1 + \|h\|_2}{\sqrt{k+s}}.$

Rearranging the above inequality, we have

$B(k,s)\,\|h\|_2 \le \frac{2\sqrt{2(k+s)(1+\delta_{k+s})}}{\mu_p}\,\varepsilon + \Big(2(1-\delta_{k+s}) + 2\big(\sqrt{2\big(1+\tfrac{s}{k}\big)} - 1\big)C_p(A,k+s,k)\Big)\|x_{\bar{T}_0^c}\|_1,$

where

$B(k,s) = \big(\sqrt{k+s}-1\big)(1-\delta_{k+s}) - \Big(\sqrt{k+s} + \sqrt{2\big(1+\tfrac{s}{k}\big)} - 1\Big)C_p(A,k+s,k).$

Using condition (3.1), we get $B(k,s) > 0$; therefore,

$\|h\|_2 \le \frac{2\sqrt{2(k+s)(1+\delta_{k+s})}}{\mu_p B(k,s)}\,\varepsilon + \frac{2(1-\delta_{k+s}) + 2\big(\sqrt{2\big(1+\tfrac{s}{k}\big)} - 1\big)C_p(A,k+s,k)}{B(k,s)}\,\|x_{\bar{T}_0^c}\|_1$
$= \frac{2\sqrt{2(k+s)(1+\delta_{k+s})}}{\mu_p B(k,s)}\,\varepsilon + \frac{2(1-\delta_{k+s}) + 2\big(\sqrt{2\big(1+\tfrac{s}{k}\big)} - 1\big)C_p(A,k+s,k)}{B(k,s)}\,\|r - r_{T_0}\|_1.$
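To get a feel for the size of the constants in this final estimate, the short sketch below evaluates $B(k,s)$ and the two coefficients multiplying $\varepsilon$ and $\|x_{\bar{T}_0^c}\|_1$ for illustrative values of $\delta_{k+s}$, $C_p(A,k+s,k)$ and $\mu_p$; these constants are not computable for a given matrix in general, so the numbers only indicate how the bound scales.

```python
# Hypothetical illustration of the error estimate in Theorem 3.1: computes
# B(k, s) and the coefficients of eps and ||x_{T0bar^c}||_1.  The values of
# delta_{k+s}, C_p(A, k+s, k) and mu_p are purely illustrative assumptions.
import numpy as np

k, s = 20, 10
delta, C_p, mu_p = 0.3, 0.2, 1.0      # illustrative RIP-type constants

B = ((np.sqrt(k + s) - 1) * (1 - delta)
     - (np.sqrt(k + s) + np.sqrt(2 * (1 + s / k)) - 1) * C_p)

coef_eps = 2 * np.sqrt(2 * (k + s) * (1 + delta)) / (mu_p * B)
coef_tail = (2 * (1 - delta) + 2 * (np.sqrt(2 * (1 + s / k)) - 1) * C_p) / B

print(f"B(k,s) = {B:.3f}  (must be positive for the bound to apply)")
print(f"||h||_2 <= {coef_eps:.3f} * eps + {coef_tail:.3f} * ||x_Tbar0c||_1")
```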

References

[1] S. Boyd, N. Parikh, E. Chu and B. Peleato, Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn. 3 (2011), no. 1, 1–122. 10.1561/9781601984616

[2] T. T. Cai and A. Zhang, Compressed sensing and affine rank minimization under restricted isometry, IEEE Trans. Signal Process. 61 (2013), no. 13, 3279–3290. 10.1109/TSP.2013.2259164

[3] T. T. Cai and A. Zhang, Sharp RIP bound for sparse signal and low-rank matrix recovery, Appl. Comput. Harmon. Anal. 35 (2013), no. 1, 74–93. 10.1016/j.acha.2012.07.010

[4] T. T. Cai and A. Zhang, Sparse representation of a polytope and recovery in sparse signals and low-rank matrices, IEEE Trans. Inform. Theory 60 (2014), no. 1, 122–132. 10.1109/TIT.2013.2288639

[5] E. J. Candès, The restricted isometry property and its implications for compressed sensing, C. R. Math. Acad. Sci. Paris 346 (2008), no. 9–10, 589–592. 10.1016/j.crma.2008.03.014

[6] E. J. Candès, J. Romberg and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory 52 (2006), no. 2, 489–509. 10.1109/TIT.2005.862083

[7] E. J. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory 51 (2005), no. 12, 4203–4215. 10.1109/TIT.2005.858979

[8] W. Chen and Y. Li, Recovery of signals under the condition on RIC and ROC via prior support information, Appl. Comput. Harmon. Anal. 46 (2019), no. 2, 417–430. 10.1016/j.acha.2018.02.003

[9] W. G. Chen, Y. L. Li and G. Q. Wu, Recovery of signals under the high order RIP condition via prior support information, Signal Process. 153 (2018), 83–94. 10.1016/j.sigpro.2018.06.027

[10] I. Daubechies, M. Defrise and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math. 57 (2004), no. 11, 1413–1457. 10.1002/cpa.20042

[11] I. Daubechies, R. DeVore, M. Fornasier and C. S. Güntürk, Iteratively reweighted least squares minimization for sparse recovery, Comm. Pure Appl. Math. 63 (2010), no. 1, 1–38. 10.1002/cpa.20303

[12] D. L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory 52 (2006), no. 4, 1289–1306. 10.1109/TIT.2006.871582

[13] M. P. Friedlander, H. Mansour, R. Saab and Ö. Yilmaz, Recovering compressively sampled signals using partial support information, IEEE Trans. Inform. Theory 58 (2012), no. 2, 1122–1134. 10.1109/TIT.2011.2167214

[14] J. Fuchs, Fast implementation of a ℓ1–ℓ1 regularized sparse representations algorithm, International Conference on Acoustics, Speech, and Signal Processing, IEEE Press, Piscataway (2009), 3329–3332. 10.1109/ICASSP.2009.4960337

[15] H. M. Ge, J. M. Wen and W. G. Chen, The null space property of the truncated ℓ1–2-minimization, IEEE Signal Process. Lett. 25 (2018), no. 8, 1261–1265. 10.1109/LSP.2018.2852138

[16] M. Grant and S. Boyd, CVX: Matlab software for disciplined convex programming, version 2.1, Online (2014), http://cvxr.com/cvx.

[17] T. Ince, A. Nacaroglu and N. Watsuji, Nonconvex compressed sensing with partially known signal support, Signal Process. 93 (2013), 338–344. 10.1016/j.sigpro.2012.07.011

[18] L. Jacques, A short note on compressed sensing with partially known signal support, Signal Process. 90 (2010), no. 12, 3308–3312. 10.1016/j.sigpro.2010.05.025

[19] L. Jacques, D. K. Hammond and J. M. Fadili, Dequantizing compressed sensing: when oversampling and non-Gaussian constraints combine, IEEE Trans. Inform. Theory 57 (2011), no. 1, 559–571. 10.1109/TIT.2010.2093310

[20] L. W. Kang and C. S. Lu, Distributed compressive video sensing, International Conference on Acoustics, Speech, and Signal Processing, IEEE Press, Piscataway (2009), 1169–1172. 10.1109/ICASSP.2009.4959797

[21] M. A. Khajehnejad, W. Xu, A. S. Avestimehr and B. Hassibi, Weighted ℓ1 minimization for sparse recovery with prior information, IEEE International Symposium on Information Theory, IEEE Press, Piscataway (2009), 483–487.

[22] M. A. Khajehnejad, W. Xu, A. S. Avestimehr and B. Hassibi, Analyzing weighted ℓ1 minimization for sparse recovery with nonuniform sparse models, IEEE Trans. Signal Process. 59 (2011), no. 5, 1985–2001. 10.1109/TSP.2011.2107904

[23] Y. Lou and M. Yan, Fast L1–L2 minimization via a proximal operator, J. Sci. Comput. 74 (2018), no. 2, 767–785. 10.1007/s10915-017-0463-2

[24] C. Lu, Z. Lin and S. Yan, Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization, IEEE Trans. Image Process. 24 (2015), no. 2, 646–654. 10.1109/TIP.2014.2380155

[25] T.-H. Ma, Y. Lou and T.-Z. Huang, Truncated ℓ1–2 models for sparse recovery and rank minimization, SIAM J. Imaging Sci. 10 (2017), no. 3, 1346–1380. 10.1137/16M1098929

[26] Q. Mo and S. Li, New bounds on the restricted isometry constant δ_{2k}, Appl. Comput. Harmon. Anal. 31 (2011), no. 3, 460–468. 10.1016/j.acha.2011.04.005

[27] A. A. Saleh, F. Alajaji and W. Y. Chan, Compressed sensing with non-Gaussian noise and partial support information, IEEE Signal Process. Lett. 22 (2015), no. 10, 1703–1707. 10.1109/LSP.2015.2426654

[28] N. Vaswani and W. Lu, Modified-CS: Modifying compressive sensing for problems with partially known support, IEEE Trans. Signal Process. 58 (2010), no. 9, 4595–4607. 10.1109/ISIT.2009.5205717

[29] W. D. Wang and J. J. Wang, An improved sufficient condition of ℓ1–2 minimisation for robust signal recovery, Electron. Lett. 55 (2019), no. 12, 1199–1201. 10.1049/el.2019.2205

[30] W. D. Wang, J. J. Wang and Z. L. Zhang, Robust signal recovery with highly coherent measurement matrices, IEEE Signal Process. Lett. 24 (2017), no. 3, 304–308. 10.1109/LSP.2016.2626308

[31] L. Weizman, Y. Eldar and D. Bashat, Compressed sensing for longitudinal MRI: An adaptive-weighted approach, Medical Phys. 42 (2015), no. 9, 5195–5208. 10.1118/1.4928148

[32] L. Yan, Y. Shin and D. Xiu, Sparse approximation using ℓ1–2 minimization and its application to stochastic collocation, SIAM J. Sci. Comput. 39 (2017), no. 1, A229–A254. 10.1137/15M103947X

[33] P. Yin, Y. Lou, Q. He and J. Xin, Minimization of ℓ1–2 for compressed sensing, SIAM J. Sci. Comput. 37 (2015), no. 1, A536–A563. 10.1137/140952363

[34] W. Yin, S. Osher, D. Goldfarb and J. Darbon, Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing, SIAM J. Imaging Sci. 1 (2008), no. 1, 143–168. 10.1137/070703983

[35] W.-J. Zeng, H. C. So and X. Jiang, Outlier-robust greedy pursuit algorithms in ℓp-space for sparse approximation, IEEE Trans. Signal Process. 64 (2016), no. 1, 60–75. 10.1109/TSP.2015.2477047

[36] R. Zhang and S. Li, A proof of conjecture on restricted isometry property constants δ_{tk} (0 < t < 4/3), IEEE Trans. Inform. Theory 64 (2018), no. 3, 1699–1705. 10.1109/TIT.2017.2705741

[37] Z. Zhou and J. Yu, Recovery analysis for weighted mixed ℓ2/ℓp minimization with 0 < p ≤ 1, J. Comput. Appl. Math. 352 (2019), 210–222. 10.1016/j.cam.2018.11.031

Received: 2020-05-02
Revised: 2022-03-04
Accepted: 2022-07-04
Published Online: 2022-10-20
Published in Print: 2023-02-01

© 2022 Walter de Gruyter GmbH, Berlin/Boston
