
Sparse signal recovery using a new class of random matrices

Enrico Au-Yeung
Published/Copyright: January 28, 2017

Abstract

We propose a new class of random matrices that enables the recovery of signals having a sparse representation in a known basis with overwhelmingly high probability. To construct a matrix in this class, we begin with a fixed, non-random matrix that satisfies two very general conditions. We then decompose this matrix into sparse pieces, and the final matrix is a random sum of these pieces, weighted by Bernoulli random variables; we call the result the randomized Bernoulli transform of the original matrix. Unlike a Gaussian or Bernoulli matrix, the random matrix is not created by filling every entry with an independent random variable, so far fewer random variables are needed to generate a matrix of this new type. We prove that the number of samples needed to recover a random signal is proportional to the sparsity of the signal, up to a logarithmic factor, and is therefore nearly optimal.
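The abstract describes the construction only at a high level. The following minimal numpy sketch shows one plausible reading of it: the fixed matrix is split into sparse row-block pieces, and each piece enters the sum with an independent Bernoulli weight. The row-block decomposition, the Bernoulli(p) 0/1 weights, and the name randomized_bernoulli_transform are assumptions made for this illustration, not the paper's exact definitions.

import numpy as np

def randomized_bernoulli_transform(A, num_pieces, p=0.5, rng=None):
    """Random Bernoulli-weighted sum of sparse pieces of A (illustrative sketch).

    Assumption: the pieces are row blocks of A, and each piece is kept or
    dropped according to an independent Bernoulli(p) variable. The paper's
    actual decomposition and weighting may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    m, _ = A.shape
    # Split the row indices into num_pieces groups; each group defines a
    # sparse piece that agrees with A on those rows and is zero elsewhere.
    groups = np.array_split(np.arange(m), num_pieces)
    # One independent Bernoulli(p) random variable per sparse piece,
    # so only num_pieces random variables are drawn (not one per entry).
    weights = rng.binomial(1, p, size=num_pieces)
    B = np.zeros_like(A, dtype=float)
    for w, rows in zip(weights, groups):
        piece = np.zeros_like(A, dtype=float)
        piece[rows, :] = A[rows, :]
        B += w * piece
    return B

# Example: randomize a fixed (non-random) cosine-type matrix.
n = 64
A = np.cos(2 * np.pi * np.outer(np.arange(n), np.arange(n)) / n)  # fixed matrix
Phi = randomized_bernoulli_transform(A, num_pieces=8)             # randomized transform

The point the abstract emphasizes is visible in the sketch: only num_pieces Bernoulli variables are drawn, rather than one random variable per matrix entry as for Gaussian or Bernoulli measurement matrices.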

MSC 2010: 42; 46

Acknowledgements

The author is grateful to his mentor, Professor Özgür Yılmaz, whose insight has substantially improved the quality of this paper. The author would also like to thank the anonymous referee, who provided many helpful comments that improved the manuscript.

Received: 2016-04-21
Revised: 2016-11-12
Accepted: 2016-12-05
Published Online: 2017-01-28
Published in Print: 2017-04-01

© 2017 by De Gruyter
