Startseite A perturbation analysis based on group sparse representation with orthogonal matching pursuit
Artikel Open Access

A perturbation analysis based on group sparse representation with orthogonal matching pursuit

  • Chunyan Liu , Feng Zhang EMAIL logo , Wei Qiu , Chuan Li und Zhenbei Leng
Veröffentlicht/Copyright: 17. November 2020

Abstract

In this paper, by exploiting orthogonal projection matrix and block Schur complement, we extend the study to a complete perturbation model. Based on the block-restricted isometry property (BRIP), we establish some sufficient conditions for recovering the support of the block 𝐾-sparse signals via block orthogonal matching pursuit (BOMP) algorithm. Under some constraints on the minimum magnitude of the nonzero elements of the block 𝐾-sparse signals, we prove that the support of the block 𝐾-sparse signals can be exactly recovered by the BOMP algorithm in the case of 2 and 2 / bounded total noise if 𝑨 satisfies the BRIP of order K + 1 with

δ K + 1 < 1 K + 1 ( 1 + ϵ A ( K + 1 ) ) 2 + 1 ( 1 + ϵ A ( K + 1 ) ) 2 - 1 .

In addition, we also show that this is a sharp condition for exactly recovering any block 𝐾-sparse signal with the BOMP algorithm. Moreover, we also give the reconstruction upper bound of the error between the recovered block-sparse signal and the original block-sparse signal. In the noiseless and perturbed case, we also prove that the BOMP algorithm can exactly recover the block 𝐾-sparse signal under some constraints on the block 𝐾-sparse signal and

δ K + 1 < 2 + 2 2 ( 1 + ϵ A ( K + 1 ) ) 2 - 1 .

Finally, we compare the actual performance of perturbed OMP and perturbed BOMP algorithm in the numerical study. We also present some numerical experiments to verify the main theorem by using the completely perturbed BOMP algorithm.

MSC 2010: 49N45; 47A55; 94A12

1 Introduction

Compressive Sensing (CS), pioneered by Donoho and Candès, Romberg and Tao [23, 6], is an efficient data acquisition paradigm which has captured lots of attention in different fields, including machine learning, signal processing, pattern recognition, information theory, image processing, engineering and communication [5, 2, 18, 21, 13].

In essence, a crucial concern in CS is how to recover a sparse or compressible sparse signal from an underdetermined system of linear equations. Generally, we consider the model y = A x , where y C n (ℂ denotes the complex field) are the observed measurements, A C n × m ( n m ) is a given measurement matrix and x C m is an unknown signal that needs to be recovered. In practice, we usually run into the case that the observation vector 𝒚 is disturbed by 𝒗, and consider the CS model

(1.1) y ^ = A x + v ,

where v C n is the additive noise independent of 𝒙. However, the measurement matrix is affected by various external environment and internal interference in the compression sampling so that the actual measurement matrix is perturbed more or less, and hence the measurement results have a large deviation. In practical application, the measurement matrix error is often unavoidable, such as the error in analog-to-digital signal conversion and the lattice error in signal sampling. The perturbed case can be realized in source separation [4], remote sensing [12], radar [14] and others. That is to say, not only the observation vector 𝒚 is perturbed by 𝒗, but also the measurement matrix 𝑨 is perturbed by 𝑬 in practical applications ( E C n × m is a perturbation matrix). In such a scenario, there will be an extra multiplicative noise E x in (1.1) when the perturbed sensing matrix is A ^ = A + E . In 2010, Herman and Strohmer [15] proposed the following completely perturbed 1 minimization model to recover stably the signal 𝒙 from the measurements y ^ :

(1.2) min x 1 such that A ^ x - y ^ 2 ϵ A , K , y ,

where ϵ A , K , y is the total noise. They showed that the signal 𝒙 could be reliably recovered by solving (1.2) under the condition δ 2 K 2 / ( 1 + ϵ A ( 2 K ) ) 2 - 1 . In [16], Ince and Nacaroglu established a sufficient condition to guarantee stable recovery of the signal by solving a completely perturbed p minimization problem.

However, nonzero elements of some real-world signals may appear in a few fixed blocks [20], which is different from the structure of a conventional sparse signal. The block-sparse signals appear in various applications, such as DNA microarrays [24], equalization of sparse communication channels [26], multiple measurement signals [7] and clustering of data in multiple subspaces [11]. From the mathematical viewpoint, a block-structured signal 𝒙 over the block index set I = { d 1 , , d N } can be modeled as follows:

x = [ x 1 x d 1 x [ 1 ] x d 1 + 1 x d 1 + d 2 x [ 2 ] x m - d N + 1 x m x [ N ] ] T ,

where x [ i ] denotes the 𝑖th block of 𝒙, d i is the block size for the 𝑖th block. If 𝒙 has at most 𝐾 nonzero blocks, i.e., x 2 , 0 = i = 1 N I ( x [ i ] 2 ) K , we call such 𝒙 block 𝐾-sparse signal. Next, the mixed 2 / 1 norm is defined as x 2 , 1 = i = 1 N x [ i ] 2 ; the mixed 2 / norm is x 2 , = max { x [ 1 ] 2 , , x [ N ] 2 } , and the x 2 , 2 is replaced by x 2 , i.e., x 2 , 2 = x 2 . Especially, the block-sparse signal is converted into a conventional sparse signal when the block size d 1 = = d N = 1 . In contrast to the block-sparse signal, the conventional sparse signal will be called sparse signal in this paper.

Block orthogonal matching pursuit (BOMP) is a widely used greedy algorithm which can quickly get a local optimal solution instead of seeking the global optimal solution. The basic idea of the BOMP algorithm is to select the column with the strongest linear correlation between the measurement matrix and the current residual, add all columns of the group to which the column belongs to the label set at a time, and finally determine the support of the block-sparse signal by iteration [8]. Compared with the standard OMP algorithm [27], Eldar and Kuppinger [8] have shown that the BOMP algorithm can reconstruct block-sparse signals better.

For any set Λ { 1 , 2 , , N } , let A ^ [ Λ ] denote the submatrix of A ^ that contains only the blocks indexed by Λ, and let x [ Λ ] be the subvector of 𝒙 that contains only the blocks indexed by Λ. For instance, if Λ = { 2 , 3 , 5 } , we have A ^ [ Λ ] = [ A ^ [ 2 ] , A ^ [ 3 ] , A ^ [ 5 ] ] and x [ Λ ] = ( x [ 2 ] , x [ 3 ] , x [ 5 ] ) T . Then the BOMP algorithm for A ^ is formally described in Algorithm 1 [25].

Algorithm 1

Algorithm 1 (The BOMP algorithm for A ^ )

Input: measurements y ^ , the perturbed sensing matrix A ^ and sparsity 𝐾.

Initialize: k = 0 , r 0 = y ^ , Λ 0 = .

Iterate: repeat until a stopping criterion is met.

  1. k = k + 1 ,

  2. λ k = arg max 1 i N A ^ [ i ] r k - 1 2 ,

  3. Λ k = Λ k - 1 { λ k } ,

  4. x ^ [ Λ k ] = arg min y ^ - A ^ [ Λ k ] x 2 , where supp ( x ) = { i x [ i ] 2 0 } = Λ k ,

  5. r k = y ^ - A ^ [ Λ k ] x ^ [ Λ k ] .

Output: x ^ = arg min supp ( x ) Λ K y ^ - A ^ x 2 .

In the following, the block-restricted isometry property (BRIP) of sensing matrix 𝑨 has been proposed to analyze the recovery performance of algorithms for block-sparse signals [9], that is, a matrix 𝑨 satisfies the BRIP of order 𝐾 if there exists a smallest positive constant δ B K ( 0 , 1 ) such that

( 1 - δ B K ) x 2 2 A x 2 2 ( 1 + δ B K ) x 2 2

for all block 𝐾-sparse 𝒙 over the block index set ℐ, where δ B K is the block-restricted isometry constant (BRIC). Whenever this causes no confusion, we simply denote δ B K by δ K in the rest of this paper. Without loss of generality, we assume that the block entries of a block 𝐾-sparse signal x C m are ordered by

x [ 1 ] 2 x [ 2 ] 2 x [ K ] 2 0 ,

with x [ K + 1 ] 2 = x [ K + 2 ] 2 = = x [ N ] 2 = 0 .

In this paper, we investigate the block-sparse compressed sensing with the BOMP algorithm for the completely perturbed model. As there are several types of noise, we only consider 2 bounded noise and 2 / bounded noise for the block-sparse compressed sensing of the completely perturbed model, respectively, i.e.,

(1.3) A ^ x - y ^ 2 ϵ A , K , y ,
(1.4) A ^ ( A ^ x - y ^ ) 2 , ϵ A , K , y ′′ ,
where ϵ A , K , y ′′ = A 2 ( 1 + ϵ A ) ϵ A , K , y (the proof is relegated to Section 2), and the matrix A ^ can be formed as follows:

A ^ = [ A ^ 1 A ^ d 1 A ^ [ 1 ] A ^ d 1 + 1 A ^ d 1 + d 2 A ^ [ 2 ] A ^ m - d N + 1 A ^ m A ^ [ N ] ] .

To investigate the theoretical analysis of the block-sparse compressed sensing with the BOMP algorithm for the completely perturbed model in (1.3) and (1.4), the restricted isometry property (RIP) for A ^ (see [15]) is extended to the BRIP for A ^ in Section 2 (see [29]).

In recent years, many researchers have been trying to find out the RIP-based condition for guaranteeing the exact or stable recovery of 𝐾-sparse signals by using the OMP algorithm. For the unperturbed case, if δ K + 1 < 1 / K + 1 , the 𝐾-sparse signal 𝒙 can be reliably recovered by using the OMP algorithm to solve (1.1) in 𝐾 iterations; see, e.g., [28]. Mo [22] has proved that δ K + 1 < 1 / K + 1 is a sharp condition for exact or stable recovery of 𝐾-sparse signal via the OMP algorithm. Especially, Wen and Zhu [32] have shown that δ K + 1 1 / K + 1 may fail to recover a 𝐾-sparse signal 𝒙 with the OMP algorithm in 𝐾 iterations. Wen, Zhou and Wang [31] established a sufficient condition to guarantee exact or stable recovery of supp ( x ) with the OMP algorithm in 𝐾 iterations. In particular, Liu and Fang have improved the sufficient condition in [19]. In addition, under some constraints on the minimum magnitude of the nonzero block elements of block 𝐾-sparse signal 𝒙 (i.e., min i supp ( x ) x [ i ] 2 ) and assuming 𝑨 satisfies the BRIP of order K + 1 with δ K + 1 < 1 / K + 1 , Wen, Zhou and Liu [30] have proved that the block 𝐾-sparse signal can be exactly or stably recovered by using the BOMP algorithm in 𝐾 iterations. If K 1 and 1 / K + 1 t < 1 , [30] shows that there exists a matrix 𝑨 satisfying the BRIP with δ K + 1 = t and a block 𝐾-sparse signal 𝒙 such that the BOMP may fail to recover 𝒙 in 𝐾 iterations.

From the applied viewpoint, it is important to recover the block-sparse signal in the setting of complete perturbation. In this paper, with the BRIP condition, we study exact support recovery of block-sparse signals for the block-sparse compressed sensing of the completely perturbed model with the BOMP algorithm in complex settings. The main results of this paper are summarized as follows. Firstly, if 𝑨 satisfies the BRIP of order K + 1 with

(1.5) δ K + 1 < 1 K + 1 ( 1 + ϵ A ( K + 1 ) ) 2 + 1 ( 1 + ϵ A ( K + 1 ) ) 2 - 1

and min i supp ( x ) x [ i ] 2 exceeds a certain lower bound in the block-sparse compressed sensing of the completely perturbed model, we show that the support of the block 𝐾-sparse signal can be exactly or stably recovered with a certain stopping criterion under either the 2 or 2 / bounded total noise. In addition, we provide the reconstruction upper bound of the error between the recovered block-sparse signal and the original block-sparse signal. Secondly, we show that (1.5) is a sharp condition for exactly recovering any block 𝐾-sparse signal with the BOMP algorithm. Thirdly, when the measurement matrix 𝑨 is perturbed by 𝑬 in the noiseless case, we also prove that the BOMP algorithm can exactly recover the block 𝐾-sparse signal under some constraints on the block 𝐾-sparse signal and

δ K + 1 < 2 + 2 2 ( 1 + ϵ A ( K + 1 ) ) 2 - 1 .

Finally, we conduct some numerical experiments to verify Theorem 3.5 by using the block-sparse algorithm with complete perturbation.

The rest of the paper is organized as follows. In Section 2, we give some notations and lemmas. Section 3 gives our main theoretical results. Section 4 provides a series of numerical experiments to support the theoretical results. Finally, we summarize this paper in Section 5.

2 Notations and preliminaries

In this section, we first introduce some symbols which will be used in the rest of this paper.

Notation

Let Ω k = { 1 , 2 , , k } with k N + and Ω 0 = . For two sets Λ and Γ, let Λ \ Γ = { i i Λ , i Γ } . Let Λ = { λ 1 , λ 2 , , λ | Λ | } . Let A ^ [ Λ ] be the submatrix of the block matrix A ^ to the index set Λ. Let x [ Λ ] be the subvector of the block vector 𝒙 to the index set Λ. Let A ^ [ Λ ] be the transpose of A ^ [ Λ ] , and let A ^ [ Λ ] denote the conjugate transposition of A ^ [ Λ ] , where | Λ | is the cardinality of Λ. Let H ^ denote a subspace of C n . Let P [ H ^ ] represent an orthogonal projection block matrix onto H ^ . Let P [ H ^ ] = I n - P [ H ^ ] be an orthogonal projection block matrix onto the orthogonal complement of H ^ , where I n denotes the identity block matrix. Let H ^ Λ = span { A ^ [ λ 1 ] , A ^ [ λ 2 ] , , A ^ [ λ | Λ | ] } represent the column space of A ^ [ Λ ] , H = 0 , and let

P [ H ^ Λ ] = A ^ [ Λ ] ( A ^ [ Λ ] A ^ [ Λ ] ) - 1 A ^ [ Λ ] .

Let the scalar x , y = y x denote the Euclidean inner product of the complex block vectors 𝒙 and 𝒚. For a block matrix B C n × n ,

B = [ B 11 B 12 B 21 B 22 ] ,

where the block B 11 is a 𝑘-order invertible block matrix with 0 < k < n . From [33], letting C k ( B ) denote the Schur complement of B 11 in 𝑩, we have C k ( B ) = B 22 - B 21 B 11 - 1 B 12 and C 0 ( B ) = B .

Similar to [15], we define some notations which will be used in the rest of this paper. We first quantify the perturbations 𝑬 and 𝒗 with the relative bounds

(2.1) E 2 A 2 ϵ A , E 2 ( K ) A 2 ( K ) ϵ A ( K ) , v 2 y 2 ϵ y ,

where 2 represents the spectral norm and 2 ( K ) stands for the largest spectral norm taken over all 𝐾-block submatrices. We also define the constants

γ K = x K c 2 x K 2 , s K = x K c 2 , 1 K x K 2 , α A = A 2 1 - δ K , κ A ( K ) = 1 + δ K 1 - δ K .

The x K is the best 𝐾-term approximation of 𝒙, and it contains the 𝐾-largest blocks of 𝒙 in index set ℐ.

Before presenting our main lemmas, we first give the BRIP [29] of matrix A ^ .

Definition 2.1

Definition 2.1 (BRIP for A ^ )

For the BRIC δ K ( K = 1 , 2 , ) for 𝑨 and ϵ A ( K ) associated with matrix 𝑬, fix the constant δ ^ K , max = ( 1 + δ K ) ( 1 + ϵ A ( K ) ) 2 - 1 . Then the matrix A ^ = A + E satisfies the BRIP of order 𝐾 if there exists a smallest nonnegative number δ ^ K δ ^ K , max such that ( 1 - δ ^ K ) x 2 2 A ^ x 2 2 ( 1 + δ ^ K ) x 2 2 for all x R N that are block 𝐾-sparse over the block index set ℐ. Similar to [15], we have the following inequality: 1 - ( 1 - δ K ) ( 1 - ϵ A ( K ) ) 2 δ ^ K δ ^ K , max = ( 1 + δ K ) ( 1 + ϵ A ( K ) ) 2 - 1 .

Next, we will give some evident properties of the block-sensing matrix A ^ , orthogonal projection matrix and the block Schur complement of matrices.

Lemma 2.2

For any complex vectors α , β C n , it holds that P [ H ] α , P [ H ] β = P [ H ] α , β = α , P [ H ] β and P [ H ] α 2 α 2 .

Lemma 2.3

If the matrix A ^ satisfies the BRIP of orders K 1 and K 2 with K 1 < K 2 , we have δ ^ K 1 δ ^ K 2 .

Lemma 2.4

Let the matrix A ^ satisfy the BRIP of order K + 1 , and let Λ = { λ 1 , λ 2 , , λ | Λ | } Ω N with | Λ | K + 1 . Then it holds that

( 1 - δ ^ K + 1 ) I | Λ | A ^ * [ Λ ] A ^ [ Λ ] ( 1 + δ ^ K + 1 ) I | Λ | ,
( 1 - δ ^ K + 1 ) ξ 2 2 i = 1 | Λ | ξ [ i ] A ^ [ Λ i ] 2 2 ( 1 + δ ^ K + 1 ) ξ 2 2 ,
where ξ = ( ξ [ 1 ] , ξ [ 2 ] , , ξ [ | Λ | ] ) T

Lemma 2.5

Lemma 2.5 ([33])

For any block matrices B , C , if B C > 0 , then C k ( B ) C k ( C ) .

Lemma 2.6

For any block vector x C n , x 2 , 1 n x 2 , x 2 n x 2 , and x 2 , x 2 x 2 , 1 .

Lemma 2.7

In (1.1), let the matrix A ^ = A + E be given. If the 2 bounded noise for the block-sparse compressed sensing of the completely perturbed model is such that A ^ x - y ^ 2 ϵ A , K , y , we have A ^ ( A ^ x - y ^ ) 2 , ϵ A , K , y ′′ , where the total noise parameter is

ϵ A , K , y = ( ϵ A K κ A ( K ) + ϵ A α A r K 1 - κ A ( K ) ( r K + s K ) + ϵ y ) y 2 0 , ϵ A , K , y ′′ = A 2 ( 1 + ϵ A ) ϵ A , K , y .

Proof

Using Lemma 2.6, we have

A ^ ( A ^ x - y ^ ) 2 , A ^ ( A ^ x - y ^ ) 2 A ^ 2 A ^ x - y ^ 2 ,

Applying A ^ 2 = A ^ 2 = A + E 2 and A ^ x - y ^ 2 ϵ A , K , y , we can easily get

A ^ ( A ^ x - y ^ ) 2 , A ^ 2 ϵ A , K , y = A + E 2 ϵ A , K , y ( A 2 + E 2 ) ϵ A , K , y ,

Using the inequality E 2 A 2 ϵ A , this leads to

A ^ ( A ^ x - y ^ ) 2 , ( A 2 + A 2 ϵ A ) ϵ A , K , y .

So the lemma is proved. ∎

Next, we will give some important lemmas, and the proof will be provided in Appendix A. All the following lemmas are based on the assumption that A ^ satisfies the BRIP of order K + 1 .

Lemma 2.8

In (1.1), let

P [ H ^ Ω s ] y ^ = i = 1 s A ^ [ i ] x ^ [ i ] = A ^ [ Ω s ] x ^ with x ^ = ( x ^ [ 1 ] , x ^ [ 2 ] , , x ^ [ s ] ) T .

Suppose A ^ x = A ^ [ Ω s ] x [ Ω s ] , where s K and x [ 1 ] 2 x [ 2 ] 2 x [ s ] 2 0 . For any s < j N and

Λ k = { , k = 0 , { λ 1 , λ 2 , , λ k } , 1 k < s ,

with { λ 1 , λ 2 , , λ k } Ω s , if we define

S 0 Λ k = max 1 i s A ^ [ i ] * P [ H ^ Λ k ] y ^ 2 , S j Λ k = A ^ [ j ] * P [ H ^ Λ k ] y ^ 2 ,

then there exists θ j [ 0 , 2 π ] such that

(2.2) S 0 Λ k - S j Λ k x ^ [ Ω s \ Λ k ] 2 2 x ^ [ Ω s \ Λ k ] 2 , 1 - δ ^ K + 1 x ^ [ Ω s \ Λ k ] 2 x ^ [ Ω s \ Λ k ] 2 , 1 x ^ [ Ω s \ Λ k ] 2 , 1 2 + x ^ [ Ω s \ Λ k ] 2 2 - Re A ^ ~ [ j ] , P [ H ^ Ω s ] ( A ^ x - y ^ ) ,

where A ^ ~ [ j ] = e i θ j A ^ [ j ] .

Lemma 2.9

In (1.1), if A ^ ( A ^ x - y ^ ) 2 , ϵ A , K , y ′′ , then we have

(2.3) ( P [ H ^ Ω s ] ( A ^ x - y ^ ) ) * A ^ [ j ] 2 = ( A ^ x - y ^ ) * ( P [ H ^ Ω s ] A ^ [ j ] ) 2 ( 1 + s δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′

for s K and 1 j N .

Lemma 2.10

In (1.1), let P [ H ^ Ω s ] y ^ = i = 1 s A ^ [ i ] x ^ [ i ] = A ^ [ Ω s ] x ^ , and suppose that A ^ x = A ^ [ Ω s ] x [ Ω s ] with s K . If A ^ x - y ^ 2 ϵ A , K , y , then we have

(2.4) min 1 i s x ^ [ i ] 2 min 1 i s x [ i ] 2 - ϵ A , K , y 1 - δ ^ K + 1 .

For A ^ ( A ^ x - y ^ ) 2 , ϵ A , K , y ′′ , we also have

(2.5) min 1 i s x ^ [ i ] 2 min 1 i s x [ i ] 2 - K ϵ A , K , y ′′ 1 - δ ^ K + 1 .

Lemma 2.11

In (1.1), suppose that

A ^ x = A ^ [ Ω s ] x [ Ω s ] with s K and x [ 1 ] 2 x [ 2 ] 2 x [ s ] 2 0 .

If A ^ x - y ^ 2 ϵ A , K , y , then we have

(2.6) P [ H ^ Ω 0 ] y ^ 2 P [ H ^ Ω 1 ] y ^ 2 P [ H ^ Ω s - 1 ] y ^ 2 1 - δ ^ K + 1 min 1 i s x [ i ] 2 - ϵ A , K , y .

For A ^ ( A ^ x - y ^ ) 2 , ϵ A , K , y ′′ , we also have

(2.7) A ^ P [ H ^ Ω s ] y ^ 2 , ( 1 + K δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ ,
(2.8) A ^ P [ H ^ Ω s ] y ^ 2 , ( 1 - δ ^ K + 1 ) min 1 i s x [ i ] 2 - ( 1 + K - 1 δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′
for all k < s

3 Main theoretical results

In this section, we give the following main results which will be proved in Appendix B. The following results are based on the assumption that supp ( x ) = { i x [ i ] 2 0 } = Ω s = { 1 , 2 , , s } with s K .

Theorem 3.1

For given relative perturbations ϵ A , ϵ A ( K ) , ϵ A ( K + 1 ) and ϵ y in (2.1), under A ^ x - y ^ 2 ϵ A , K , y , assume the BRIC for matrix 𝑨 satisfies

(3.1) δ K + 1 < 1 K + 1 ( 1 + ϵ A ( K + 1 ) ) 2 + 1 ( 1 + ϵ A ( K + 1 ) ) 2 - 1 .

Then the BOMP algorithm with the stopping criterion r k 2 ϵ A , K , y exactly recovers supp ( x ) in 𝐾 iterations provided that

(3.2) min i supp ( x ) x [ i ] 2 > ϵ A , K , y 1 - ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) + ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 ϵ A , K , y 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] K + 1 .

Remark 3.2

Theorem 3.1 provides a sufficient condition (3.2) for recovering the support of the block 𝐾-sparse signals via the BOMP algorithm in the setting of complete perturbation. It has been proven in [30] that if there exists a matrix 𝑨 satisfying the BRIP with 1 K + 1 δ K + 1 < 1 , the BOMP algorithm may fail to recover block 𝐾-sparse signal 𝒙 in 𝐾 iterations. Similar to the proof in [30], we can show that the BOMP algorithm may fail to recover 𝒙 in 𝐾 iterations when there exists a matrix A ^ satisfying the BRIP with 1 K + 1 δ ^ K + 1 < 1 . According to the above inequality, we have 1 K + 1 δ ^ K + 1 ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 . Namely, if there exists a matrix 𝑨 which satisfies the BRIP with

1 K + 1 ( 1 + ϵ A ( K + 1 ) ) 2 + 1 ( 1 + ϵ A ( K + 1 ) ) 2 - 1 δ K + 1 < 1 ,

the BOMP algorithm may fail to recover a block 𝐾-sparse signal 𝒙 in 𝐾 iterations. The results show that (3.1) is a sharp sufficient condition for exact recovery of the block 𝐾-sparse signal within completely perturbed model with the BOMP algorithm in 𝐾 iterations. Condition (3.1) of Theorem 3.1 generalizes the condition of [30] to the block-sparse compressed sensing of the completely perturbed model.

In addition, as a special case of Theorem 3.1, E = 0 , which means that there is no perturbation in the measurement matrix 𝑨; then δ ^ K + 1 = δ K + 1 . Obviously, when d i = 1 ( i = 1 , 2 , , N ), E = 0 , and we obtain a sufficient condition for recovering 𝐾-sparse signals by the OMP algorithm in 𝐾 iterations [19]. Theorem 1 in [19] is included in our theorem.

Theorem 3.3

For given relative perturbations ϵ A , ϵ A ( K ) , ϵ A ( K + 1 ) and ϵ y in (2.1), under A ^ ( A ^ x - y ^ ) 2 , ϵ A , K , y ′′ , assume the BRIC for matrix 𝑨 satisfies

δ K + 1 < 1 K + 1 ( 1 + ϵ A ( K + 1 ) ) 2 + 1 ( 1 + ϵ A ( K + 1 ) ) 2 - 1 .

Then the BOMP algorithm with the stopping criterion

A ^ r k 2 , ( 1 + K δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′

exactly recovers supp ( x ) in 𝐾 iterations provided that

(3.3) min i supp ( x ) x [ i ] 2 > K ϵ A , K , y ′′ 1 - ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) + [ 1 + K ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) 1 - ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) ] ϵ A , K , y ′′ 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] K + 1 .

Remark 3.4

From Theorem 3.3, we can see that (3.3) is a sufficient condition for the recovery of the block 𝐾-sparse signals with the BOMP algorithm. Obviously, when d i = 1 ( i = 1 , 2 , , N ), E = 0 , we obtain a sufficient condition for recovering 𝐾-sparse signals by the OMP algorithm in 𝐾 iterations [19]. Theorem 2 in [19] is included in our theorem.

In the following, we will provide the approximation precision between the output signal and the original signal, which is characterized by a total noise and BRIC when supp ( x ) = { i x [ i ] 2 0 } can be exactly and stably recovered in the presence of total noise.

Theorem 3.5

In (1.1), let | supp ( x ) | = s K . If the matrix 𝑨 satisfies the BRIP of order 𝐾 and the BOMP algorithm can recover exactly supp ( x ) in 𝑠 iterations, i.e., Λ s = supp ( x ) , then, for A ^ x - y ^ 2 ϵ A , K , y ,

(3.4) x - x [ Λ s ] 2 ϵ A , K , y 2 - ( 1 + δ K ) ( 1 + ϵ A ( K ) ) 2 .

For A ^ ( A ^ x - y ^ ) 2 , ϵ A , K , y ′′ ,

(3.5) x - x [ Λ s ] 2 s 2 - ( 1 + δ K ) ( 1 + ϵ A ( K ) ) 2 ϵ A , K , y ′′ .

Remark 3.6

Under the 2 and 2 / bounded total noise, (3.4) and (3.5) in Theorem 3.5 offer upper bound estimations, which explicitly characterize the relationship between the output block-sparse signal and the original block-sparse signal. The results show that the approximation precision is controlled by the total noise and BRIC. Particularly, they reveal that the support of the block-sparse signal can be exactly recovered by the block-sparse compressed sensing of the completely perturbed model with the BOMP algorithm without the noise.

To simplify the notation, for any x C m , we define

g ( x ) = max { x [ Λ ] 2 , 1 x [ Λ ] 2 | Λ Ω N , Λ } .

Using Lemma 2.6, if 𝒙 is a block 𝐾-sparse signal, we have 1 g ( x ) K . In the noiseless case, when the measurement matrix 𝑨 is perturbed by 𝑬, we prove that the BOMP algorithm can also exactly recover the block 𝐾-sparse signal under some constraints on the block signal and

0 < δ K + 1 < 2 + 2 2 ( 1 + ϵ A ( K + 1 ) ) 2 - 1 .

Theorem 3.7

In (1.1), in the noiseless case and when the measurement matrix 𝑨 is perturbed by 𝑬, if the matrix 𝑨 satisfies the BRIP of order K + 1 with

δ K + 1 < 2 + 2 2 ( 1 + ϵ A ( K + 1 ) ) 2 - 1

and

(3.6) g ( x ) < 2 ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - ( 1 + δ K + 1 ) 2 ( 1 + ϵ A ( K + 1 ) ) 4 ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ,

then the BOMP algorithm can exactly recover the block 𝐾-sparse signal in 𝐾 iterations.

Remark 3.8

Since g ( x ) 1 for any block 𝐾-sparse signal 𝒙. To ensure that the condition (3.6) is holds, it is necessary to ensure

2 ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - ( 1 + δ K + 1 ) 2 ( 1 + ϵ A ( K + 1 ) ) 4 ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 > 1

According to the above inequality, we have

( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 < 2 2 , i.e., δ K + 1 < 2 + 2 2 ( 1 + ϵ A ( K + 1 ) ) 2 - 1 .

4 Numerical experiments

In this section, we conduct some numerical experiments to verify our theoretical results by using the block-sparse algorithm with complete perturbation. We also investigate the performance of perturbed OMP and perturbed BOMP algorithm in processing block-sparse signals [3, 10]. Without loss of generality, we consider the block-sparse signal 𝒙 with even block size, i.e., d i = d ( i = 1 , , m ), and let the signal length be N = 256 . In each experiment, we randomly generate a block-sparse signal 𝒙, where the elements of each 𝒙 are drawn from the standard Gaussian distribution, and randomly generate a 128 × 256 measurement matrix 𝑨 from Gaussian distribution [1, 17]. We generate the measurements y = A x + v by using 𝒙 and 𝐴, where 𝒗 is the Gaussian noise. In addition, 𝐸 is a random Gaussian matrix, where E 2 = ϵ A A 2 (the value of ϵ A is not fixed). The perturbed matrix is A ^ = A + E . In each experiment, we select the average value of 200 independent trails as the final statistical results.

As the first experiment, we compare the performance of perturbed OMP and perturbed BOMP algorithm in processing block-sparse signals. Some reconstruction errors x * - x 2 are listed in Table 1. Without loss of generality, we choose signals whose block size 𝑑 ranges from 2 to 32 with the block sparsity K = 32 , 16 , 8 , 4 , 2 as the test signals and use the reconstruction error to measure the algorithm performance for perturbation level ϵ A = 0 , 0.05 , 0.1 , 0.2 . It is easy to see that the reconstruction error decreases with the decrease of the perturbation level ϵ A . Table 1 shows that the perturbed BOMP algorithm is more effective than the perturbed OMP algorithm in reconstructing block-sparse signals. In other words, signal structure is very important in signal reconstruction. Specifically, when the perturbation level is ϵ A = 0 and block sparsity is K = 2 , the reconstruction performance of the perturbed BOMP algorithm is better than other cases.

Table 1

Restorative comparison of reconstruction errors between perturbed OMP and BOMP algorithm.

ϵ A d = 2 d = 4 d = 8 d = 16 d = 32
OMP 0 4.69 4.72 4.53 4.69 4.83
0.05 4.71 5.01 4.82 4.71 4.96
0.1 5.32 5.38 5.39 5.54 5.32
0.2 6.38 6.53 6.23 6.50 6.34

Block-OMP 0 2.18 0.89 0.59 0.57 0.56
0.05 2.20 0.91 0.76 0.69 0.69
0.1 2.94 1.41 1.09 0.99 0.96
0.2 4.51 2.47 1.70 1.68 1.66

In this part, we conduct some numerical experiments to verify our theoretical results by using the completely perturbed BOMP algorithm for Theorem 3.5. With

δ K = 0.1 ( < 1 K ( 1 + ϵ A ( K ) ) 2 + 1 ( 1 + ϵ A ( K ) ) 2 - 1 ) ,

we produce the signals with 64 blocks uniformly at random (i.e., d = 4 ) and choose 10 as the best block number for the sparse approximation. Table 2 and Table 3 show the theoretical error bound x - x * 2 and x - x * 2 , for various number of samples, where the perturbation level is ϵ A = 0 , 0.05 , 0.1 , 0.15 . The above tables show that the smaller the perturbation and the more samples, the better the performance of the perturbed BOMP algorithm. As the results show, the reconstruction error bound is lower than the theoretical error bound when the number of samples is M = 112 , 128 , 144 . However, the reconstruction performance of the perturbed BOMP algorithm is poor when the number of samples is M = 96 .

Table 2

Theoretical verification of x - x * 2 for various number of samples.

ϵ A M = 96 M = 112 M = 128 M = 144
x - x * 2 0 0.0961 0.4425 0.7674 1.1725
0.05 0.0659 0.4021 0.7725 1.1320
0.1 0.0722 0.4047 0.7941 1.1739
0.15 0.0775 0.4095 0.8062 1.1926

Theoretical threshold 0 0.0516 0.4470 0.9076 1.4541
0.05 0.0556 0.4569 0.9124 1.4640
0.1 0.0595 0.4550 0.9040 1.4684
0.15 0.0631 0.4553 0.9137 1.4728
Table 3

Theoretical verification of x - x * 2 , for various number of samples.

ϵ A M = 96 M = 112 M = 128 M = 144
x - x * 2 , 0 0.0961 0.4425 0.7674 1.1725
0.05 0.0659 0.4021 0.7725 1.1320
0.1 0.0722 0.4047 0.7941 1.1739
0.15 0.0775 0.4095 0.8062 1.1926

Theoretical threshold 0 0.0516 0.4470 0.9076 1.4541
0.05 0.0556 0.4569 0.9124 1.4640
0.1 0.0595 0.4550 0.9040 1.4684
0.15 0.0631 0.4553 0.9137 1.4728

In Figure 1, we compare the theoretical error bound and the reconstruction error of Theorem 3.5 for various block-sparsity levels with perturbation level ϵ A = 0 , 0.05 , 0.1 , 0.15 . Figure 1 (a) and (c) show that decreasing ϵ A improves the recovery performance of the perturbed BOMP algorithm for a smaller block-sparsity level. Figure 1 (a) and (b) indicate that x - x * 2 is lower than the theoretical error bound. Similarly, Figure 1 (c) and (d) present the relationship between x - x * 2 , and the theoretical error bound in different block-sparsity levels and ϵ A = 0 , 0.05 , 0.1 , 0.15 , respectively.

Figure 1

The figure plots theoretical error bound, x - x * 2 and x - x * 2 , versus the perturbation level ϵ A for various block-sparsity levels. (a) x - x * 2 versus block-sparsity level for the block size d = 4 . (b) The theoretical error bound of (3.4) for δ K = 0.1 . (c) x - x * 2 , versus block-sparsity level for the block size d = 4 . (d) The theoretical error bound of (3.5) for δ K = 0.1 .

(a)
(a)
(b)
(b)
(c)
(c)
(d)
(d)

5 Conclusion

In this paper, we discuss the sufficient conditions for recovering the support of a block 𝐾-sparse signal under the BRIP using the BOMP algorithm in both the 2 and 2 / bounded total noise. Under some constraints on the minimum magnitude of the nonzero elements of the block 𝐾-sparse signals, we prove that the support of the block 𝐾-sparse signals can be exactly recovered by the BOMP algorithm in both the 2 and 2 / bounded total noise if 𝑨 satisfies the BRIP of order K + 1 with

δ K + 1 < 1 K + 1 ( 1 + ϵ A ( K + 1 ) ) 2 + 1 ( 1 + ϵ A ( K + 1 ) ) 2 - 1 .

In addition, we also give the reconstruction upper bound of the error between the recovered block-sparse signal and the original block-sparse signal. In the noiseless case, when the measurement matrix 𝑨 is perturbed by 𝑬, we also prove that the BOMP algorithm can exactly recover the block 𝐾-sparse signal under some constraints on the block 𝐾-sparse signal and

δ K + 1 < 2 + 2 2 ( 1 + ϵ A ( K + 1 ) ) 2 - 1 .

Finally, a series of numerical experiments are carried out to verify our theoretical results. The results obtained are helpful in further discussing and studying the perturbed BOMP algorithm.

Award Identifier / Grant number: 61673015

Award Identifier / Grant number: KJQN201802002

Award Identifier / Grant number: KJQN201802001

Funding statement: This research was supported by Natural Science Foundation of China (No. 61673015), the Science and Technology Research Program of Chongqing Municipal Education Commission (No. KJQN201802002, No. KJQN201802001).

Appendix A Appendix

Proof of Lemma 2.8

Let Λ k = Ω k . Then we have H ^ Λ k = H ^ Ω k . If P [ H ^ Ω k ] y ^ , A ^ [ j ] = e i θ j A ^ [ j ] * P [ H ^ Ω k ] y ^ 2 , then we have S j Ω k = A ^ ~ [ j ] * P [ H ^ Ω k ] y ^ 2 , where A ^ ~ [ j ] = e i θ j A ^ [ j ] . For any J a = { j 1 , j 2 , , j a } Ω s , let

A ^ ~ J a , j = [ A ^ [ j 1 ] , A ^ [ j 2 ] , , A ^ [ j a ] , A ^ ~ [ j ] ] .

Let A ^ ~ [ J a , j ] = A ^ [ J a { j } ] U with U = diag { I [ j 1 ] , I [ j 2 ] , , I [ j a ] , e i θ j I [ j ] } . Thus, we have

A ^ ~ * [ J a , j ] A ^ ~ [ J a , j ] = U * A ^ * [ J a { j } ] A ^ [ J a { j } ] U .

It is clear that | J a { j } | K + 1 and U U = I m + 1 . Then, by Lemma 2.4, we obtain

(A.1) ( 1 - δ ^ K + 1 ) I [ a + 1 ] A ^ ~ * [ J a , j ] A ^ ~ [ J a , j ] ( 1 + δ ^ K + 1 ) I [ a + 1 ] .

From Lemma 2.2, we can easily get

P [ H ^ Ω k ] P [ H ^ Ω s ] = P [ H ^ Ω s ] P [ H ^ Ω k ] = P [ H ^ Ω s ] ,
P [ H ^ Ω k ] A ^ [ i ] = 0 , with  1 i k ,
and

P [ H ^ Ω s ] y ^ = P [ H ^ Ω s ] [ A ^ x ^ + ( y ^ - A ^ x ^ ) ] = P [ H ^ Ω s ] [ A ^ [ Ω s ] x ^ [ Ω s ] + ( y ^ - A ^ x ^ ) ] = P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) .

For any t > 0 , we have

( t + 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω k ] y ^ - P [ H ^ Ω k ] A ^ ~ [ j ] 2 2 - ( t - 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω k ] y ^ + P [ H ^ Ω k ] A ^ ~ [ j ] 2 2
= 4 t x ^ [ Ω s \ Ω k ] 1 P [ H ^ Ω k ] y ^ 2 2 - 4 t Re A ^ ~ [ j ] , P [ H ^ Ω s ] v ^
= 4 t x ^ [ Ω s \ Ω k ] 1 P [ H ^ Ω k ] y ^ , P [ H ^ Ω k ] y ^ - 4 t S j Ω k
= 4 t x ^ [ Ω s \ Ω k ] 1 P [ H ^ Ω k ] y ^ , P [ H ^ Ω k ] ( P [ H ^ Ω s ] y ^ )
    + 4 t x ^ [ Ω s \ Ω k ] 1 P [ H ^ Ω k ] y ^ , P [ H ^ Ω k ] ( P [ H ^ Ω s ] y ^ ) - 4 t S j Ω k
= 4 t x ^ [ Ω s \ Ω k ] 1 P [ H ^ Ω k ] y ^ , P [ H ^ Ω k ] ( i = 1 s x ^ [ i ] A ^ [ i ] )
    + 4 t x ^ [ Ω s \ Ω k ] 1 P [ H ^ Ω k ] y ^ , P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) - 4 t S j Ω k
= 4 t x ^ [ Ω s \ Ω k ] 1 P [ H ^ Ω k ] y ^ , i = k + 1 s x ^ [ i ] A ^ [ i ]
    + 4 t x ^ [ Ω s \ Ω k ] 1 y ^ , P [ H ^ Ω k ] ( P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) ) - 4 t S j Ω k
(A.2) = 4 t x ^ [ Ω s \ Ω k ] 1 P [ H ^ Ω k ] y ^ , i = k + 1 s x ^ [ i ] A ^ [ i ] + 4 t x ^ [ Ω s \ Ω k ] 1 y ^ , P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) - 4 t S j Ω k .
Since

(A.3) ( i = k + 1 s x ^ [ i ] A ^ [ i ] ) * P [ H ^ Ω k ] y ^ 2 x ^ [ Ω s \ Ω k ] 1 S 0 Ω k

and

(A.4) y ^ , P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) = P [ H ^ Ω s ] y ^ , P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) = P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) 2 2 ,

by (A.2), (A.3), (A.4), this leads to

(A.5) ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω k ] y ^ - P [ H ^ Ω k ] A ^ ~ [ j ] 2 2 - ( t - 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω k ] y ^ + P [ H ^ Ω k ] A ^ ~ [ j ] 2 2 4 t ( S 0 Ω k - S j Ω k ) + 4 t x ^ [ Ω s \ Ω k ] 1 P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) 2 2 .

Let

ξ = ( ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) x ^ [ k + 1 ] , , ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) x ^ [ s ] , - 1 ) T ,
η = ( ( t - 1 x ^ [ Ω s \ Ω k ] 1 ) x ^ [ k + 1 ] , , ( t - 1 x ^ [ Ω s \ Ω k ] 1 ) x ^ [ s ] , 1 ) T .
According to the above equality, we have

( t + 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω k ] y ^ - P [ H ^ Ω k ] A ^ ~ [ j ] = ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω k ] ( P [ H ^ Ω s ] y ^ ) - P [ H ^ Ω k ] A ^ ~ [ j ] + ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω k ] ( P [ H ^ Ω s ] y ^ ) = P [ H ^ Ω k ] ( i = k + 1 s ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) x ^ [ i ] A ^ [ i ] - A ^ ~ [ j ] ) + ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) = P [ H ^ Ω k ] A ^ ~ [ Ω s \ Ω k , j ] ξ + ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) .

It is easy to see that

P [ H ^ Ω k ] A ^ ~ [ Ω s \ Ω k , j ] ξ , ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) = ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω k ] A ^ ~ [ Ω s \ Ω k , j ] , ( y ^ - A ^ x ^ ) = - ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω k ] A ^ ~ [ j ] , ( y ^ - A ^ x ^ ) .

From the above equality, we thus have

(A.6) ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω k ] y ^ - P [ H ^ Ω k ] A ^ ~ [ j ] 2 2 = P [ H ^ Ω k ] A ^ ~ [ Ω s \ Ω k , j ] ξ + ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) 2 2 = P [ H ^ Ω k ] A ^ ~ [ Ω s \ Ω k , j ] ξ 2 2 + ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) 2 P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) 2 2 - 2 ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) Re A ^ ~ [ j ] , P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) = ξ * A ^ ~ * [ Ω s \ Ω k , j ] P [ H ^ Ω k ] A ^ ~ [ Ω s \ Ω k , j ] ξ + ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) 2 P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) 2 2 - 2 ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) Re A ^ ~ [ j ] , P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) .

Similar to the proof of (A.6), we can easily get

(A.7) ( t - 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω k ] y ^ + P [ H ^ Ω k ] A ^ ~ [ j ] 2 2 = η * A ^ ~ * [ Ω s \ Ω k , j ] P [ H ^ Ω k ] A ^ ~ [ Ω s \ Ω k , j ] η + ( t - 1 x ^ [ Ω s \ Ω k ] 1 ) 2 P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) 2 2 + 2 ( t - 1 x ^ [ Ω s \ Ω k ] 1 ) Re A ^ ~ [ j ] , P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) .

By using (A.6) and (A.7), this leads to

(A.8) ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω k ] y ^ - P [ H ^ Ω k ] A ^ ~ [ j ] 2 2 - ( t - 1 x ^ [ Ω s \ Ω k ] 1 ) P [ H ^ Ω k ] y ^ + P [ H ^ Ω k ] A ^ ~ [ j ] 2 2 = ξ * A ^ ~ * [ Ω s \ Ω k , j ] P [ H ^ Ω k ] A ^ ~ [ Ω s \ Ω k , j ] ξ - η * A ^ ~ * [ Ω s \ Ω k , j ] P [ H ^ Ω k ] A ^ ~ [ Ω s \ Ω k , j ] η + 4 t x ^ [ Ω s \ Ω k ] 1 P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) 2 2 - 4 t Re A ^ ~ [ j ] , P [ H ^ Ω s ] ( y ^ - A ^ x ^ ) .

Matrix A ^ ~ [ Ω s , j ] = [ A ^ [ 1 ] , A ^ [ 2 ] , , A ^ [ s ] , A ^ [ j ] ] is divided into matrix A ^ ~ [ Ω s , j ] = [ A ^ [ Ω k ] , A ^ ~ [ Ω s \ Ω k , j ] ] . According to some simple calculations, we have

A ^ ~ * [ Ω s , j ] A ^ ~ [ Ω s , j ] = [ A ^ * [ Ω k ] A ^ [ Ω k ] A ^ * [ Ω k ] A ^ ~ [ Ω s \ Ω k , j ] A ^ ~ * [ Ω s \ Ω k , j ] A ^ [ Ω k ] A ^ ~ * [ Ω s \ Ω k , j ] A ^ ~ [ Ω s \ Ω k , j ] ] .

Then the Schur complement of A ^ ~ * [ Ω s , j ] A ^ ~ [ Ω s , j ] in A ^ * [ Ω k ] A ^ [ Ω k ] is defined as

(A.9) C k ( A ^ ~ * [ Ω s , j ] A ^ ~ [ Ω s , j ] ) = A ^ ~ * [ Ω s \ Ω k , j ] A ^ ~ [ Ω s \ Ω k , j ] - A ^ ~ * [ Ω s \ Ω k , j ] A ^ [ Ω k ] ( A ^ * [ Ω k ] A ^ [ Ω k ] ) - 1 A ^ * [ Ω k ] A ^ ~ [ Ω s \ Ω k , j ] = A ^ ~ * [ Ω s \ Ω k , j ] P [ H ^ Ω k ] A ^ ~ [ Ω s \ Ω k , j ]

From (2.5) and (A.1), this leads to

(A.10) ( 1 - δ ^ K + 1 ) I s - k + 1 C k ( A ^ ~ * [ Ω s , j ] A ^ ~ [ Ω s , j ] ) ( 1 + δ ^ K + 1 ) I s - k + 1

By combining (2.4), (A.9) and (A.10), we obtain

(A.11) ξ * A ^ ~ * [ Ω s \ Ω k , j ] P [ H ^ Ω k ] A ^ ~ [ Ω s \ Ω k , j ] ξ = ξ * A ^ ~ * [ Ω s \ Ω k , j ] ( I n - A ^ [ Ω k ] ( A ^ [ Ω k ] A ^ [ Ω k ] ) - 1 A ^ [ Ω k ] ) A ^ ~ [ Ω s \ Ω k , j ] ξ = ξ * C k ( A ^ ~ * [ Ω s , j ] A ^ ~ [ Ω s , j ] ) ξ ( 1 - δ ^ K + 1 ) ξ * ξ = ( 1 - δ ^ K + 1 ) [ ( t + 1 x ^ [ Ω s \ Ω k ] 1 ) 2 x ^ [ Ω s \ Ω k ] 2 2 + 1 ] .

Similarly, we obtain

(A.12) η * A ^ ~ * [ Ω s \ Ω k , j ] P [ H ^ Ω k ] A ^ ~ [ Ω s \ Ω k , j ] η = η * C k ( A ^ ~ * [ Ω s , j ] A ^ ~ [ Ω s , j ] ) η ( 1 + δ ^ K + 1 ) η * η = ( 1 + δ ^ K + 1 ) [ ( t - 1 x ^ [ Ω s \ Ω k ] 1 ) 2 x ^ [ Ω s \ Ω k ] 2 2 + 1 ] .

By using (A.5), (A.8), (A.11) and (A.12), we can easily get

(A.13) S 0 Ω k - S j Ω k x ^ [ Ω s \ Ω k ] 2 2 x ^ [ Ω s \ Ω k ] 1 - δ K + 1 2 ( t x ^ [ Ω s \ Ω k ] 2 2 + 1 t ( x ^ [ Ω s \ Ω k ] 2 2 x ^ [ Ω s \ Ω k ] 1 2 + 1 ) - Re A ^ ~ [ j ] , P [ H ^ Ω s ] ( A ^ x - y ^ ) ,

According to the arithmetic-geometric inequality and (A.13), we have

S 0 Ω k - S j Ω k max t > 0 { x ^ [ Ω s \ Ω k ] 2 2 x ^ [ Ω s \ Ω k ] 1 - δ ^ K + 1 2 ( t x ^ [ Ω s \ Ω k ] 2 2 + 1 t ( x ^ [ Ω s \ Ω k ] 2 2 x ^ [ Ω s \ Ω k ] 1 2 + 1 ) ) - Re A ^ ~ [ j ] , P [ H ^ Ω s ] ( A ^ x - y ^ ) } = x ^ [ Ω s \ Ω k ] 2 2 x ^ [ Ω s \ Ω k ] 1 - δ ^ K + 1 x ^ [ Ω s \ Ω k ] 2 x ^ [ Ω s \ Ω k ] 1 x ^ [ Ω s \ Ω k ] 2 2 + x ^ [ Ω s \ Ω k ] 1 2 - Re A ^ ~ [ j ] , P [ H ^ Ω s ] ( A ^ x - y ^ ) .

So the lemma is proved. ∎

Proof of Lemma 2.9

When j s , we have

( P [ H ^ Ω s ] ( A ^ x - y ^ ) ) * A ^ [ j ] 2 = ( A ^ x - y ^ ) * ( P [ H ^ Ω s ] A ^ [ j ] ) 2 = 0 ,

so the lemma holds evidently.

When j > s , let P [ H ^ Ω s ] A ^ [ j ] = i = 1 s A ^ [ i ] z [ i ] = A ^ [ Ω s ] Z with Z = ( z [ 1 ] , z [ 2 ] , , z [ s ] ) T . From Lemma 2.4, this leads to

(A.14) P [ H ^ Ω s ] A ^ [ j ] 2 2 = A ^ [ Ω s ] Z 2 2 = Z * A ^ * [ Ω s ] A ^ [ Ω s ] Z ( 1 - δ ^ K + 1 ) Z 2 2 .

For any t 0 , we then have

(A.15) P [ H ^ Ω s ] A ^ [ j ] 2 2 = P [ H ^ Ω s ] A ^ [ j ] , P [ H ^ Ω s ] A ^ [ j ] = Re P [ H ^ Ω s ] A ^ [ j ] , i = 1 s A ^ [ i ] z [ i ] = Re A ^ [ j ] , P [ H ^ Ω s ] ( i = 1 s A ^ [ i ] z [ i ] ) = Re A ^ [ j ] , i = 1 s A ^ [ i ] z [ i ] = 1 4 t ( i = 1 s A ^ [ i ] z [ i ] + t A ^ [ j ] 2 2 - i = 1 s A ^ [ i ] z [ i ] - t A ^ [ j ] 2 2 ) 1 4 t ( ( 1 + δ ^ K + 1 ) ( Z 2 2 + t 2 ) - ( 1 - δ ^ K + 1 ) ( Z 2 2 + t 2 ) ) = δ ^ K + 1 2 t ( Z 2 2 + t 2 ) = δ ^ K + 1 2 ( 1 t Z 2 2 + t ) .

By using the arithmetic-geometric inequality, (A.14) and (A.15), then

( 1 - δ ^ K + 1 ) Z 2 2 min t > 0 { δ ^ K + 1 2 ( 1 t Z 2 2 + t ) } = δ ^ K + 1 Z 2 .

This implies

Z 2 δ ^ K + 1 1 - δ ^ K + 1 .

From Lemma 2.6 and the inequality

( A ^ x - y ^ ) A ^ [ j ] 2 = ( A ^ [ j ] ) ( A ^ x - y ^ ) 2 A ^ ( A ^ x - y ^ ) 2 , = max 1 j N ( A ^ [ j ] ) ( A ^ x - y ^ ) 2 ϵ A , K , y ′′ ,

we have

( A ^ x - y ^ ) * ( P [ H ^ Ω s ] A ^ [ j ] ) 2 = ( A ^ x - y ^ ) * ( A ^ [ j ] - P [ H ^ Ω s ] A ^ [ j ] ) 2 ( A ^ x - y ^ ) * A ^ [ j ] 2 + ( A ^ x - y ^ ) * P [ H ^ Ω s ] A ^ [ j ] 2 ϵ A , K , y ′′ + ( A ^ x - y ^ ) * i = 1 s A ^ [ i ] z [ i ] 2 ϵ A , K , y ′′ + Z 1 ϵ A , K , y ′′ ϵ A , K , y ′′ + s Z 2 ϵ A , K , y ′′ ( 1 + s δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ .

So the lemma is proved. ∎

Proof of Lemma 2.10

Using the equality

P [ H ^ Ω s ] y ^ = i = 1 s A ^ [ i ] x ^ [ i ] = A ^ [ Ω s ] x ^ ,

we have

(A.16) P [ H ^ Ω s ] ( A ^ x - y ^ ) = A ^ [ Ω s ] x - A ^ [ Ω s ] x ^ = A ^ [ Ω s ] ( x - x ^ ) .

Let V = x - x ^ , i.e., V [ i ] = x [ i ] - x ^ [ i ] for any 1 i s . When V = 0 , (2.4), (2.5) hold evidently.

When V 0 , for A ^ x - y ^ 2 ϵ A , K , y , by Lemma 2.2, Lemma 2.4 and (A.16), we obtain

(A.17) ( ϵ A , K , y ) 2 P [ H ^ Ω s ] ( A ^ x - y ^ ) 2 2 = A ^ [ Ω s ] V 2 2 = V * A ^ * [ Ω s ] A ^ [ Ω s ] V ( 1 - δ ^ K + 1 ) V 2 2 .

From Lemma 2.6 and (A.17), this leads to

(A.18) V 2 , V 2 ϵ A , K , y 1 - δ ^ K + 1 .

Thus, it follows from (A.18) that

min 1 i s x ^ [ i ] 2 = min 1 i s x [ i ] + ( x ^ [ i ] - x [ i ] ) 2 min 1 i s x [ i ] 2 - max 1 i s x ^ [ i ] - x [ i ] 2 = min 1 i s x [ i ] 2 - V 2 , min 1 i s x [ i ] 2 - ϵ A , K , y 1 - δ ^ K + 1 .

So (2.4) holds.

For A ^ ( A ^ x - y ^ ) 2 , ϵ A , K , y ′′ , we have

(A.19) P [ H ^ Ω s ] ( A ^ x - y ^ ) 2 2 = P [ H ^ Ω s ] ( A ^ x - y ^ ) , P [ H ^ Ω s ] ( A ^ x - y ^ ) 2 = P [ H ^ Ω s ] ( A ^ x - y ^ ) , i = 1 s A ^ [ i ] ( x [ i ] - x ^ [ i ] ) 2 = ( A ^ x - y ^ ) , P [ H ^ Ω s ] ( i = 1 s A ^ [ i ] ( x [ i ] - x ^ [ i ] ) ) 2 ( A ^ [ Ω s ] ) * ( A ^ x - y ^ ) 2 , V 2 , 1 V 2 , 1 ϵ A , K , y ′′ .

From Lemma 2.6, (A.17) and (A.19), we can easily get

(A.20) V 2 , V 2 ϵ A , K , y 1 - δ ^ K + 1 V 2 , 1 V 2 s ϵ A , K , y 1 - δ ^ K + 1 K ϵ A , K , y 1 - δ ^ K + 1 .

Then it follows from (A.20) that

min 1 i s x ^ [ i ] 2 = min 1 i s x [ i ] + ( x ^ [ i ] - x [ i ] ) 2 min 1 i s x [ i ] 2 - max 1 i s x ^ [ i ] - x [ i ] 2 = min 1 i s x [ i ] 2 - V 2 , min 1 i s x [ i ] 2 - K ϵ A , K , y 1 - δ ^ K + 1 .

Therefore, (2.5) holds. So the lemma is proved. ∎

Proof of Lemma 2.11

For A ^ x - y ^ 2 ϵ A , K , y and A ^ x = A ^ [ Ω s ] x [ Ω s ] , we have

P [ H ^ Ω j ] ( A ^ x - y ^ ) 2 A ^ x - y ^ 2 ϵ A , K , y with  0 j N .

Since H ^ Ω 0 H ^ Ω 1 H ^ Ω s - 1 , we can easily get H ^ Ω 0 H ^ Ω 1 H ^ Ω s - 1 . Then we have

P [ H ^ Ω 0 ] y ^ 2 P [ H ^ Ω 1 ] y ^ 2 P [ H ^ Ω s - 1 ] y ^ 2 .

And also noting that

(A.21) P [ H ^ Ω s - 1 ] y ^ 2 = P [ H ^ Ω s - 1 ] ( A ^ [ Ω s ] x [ Ω s ] - A ^ x + y ^ ) 2 P [ H ^ Ω s - 1 ] A ^ [ Ω s ] x [ Ω s ] 2 - P [ H ^ Ω s - 1 ] ( A ^ x - y ^ ) 2 P [ H ^ Ω s - 1 ] A ^ [ Ω s ] x [ Ω s ] 2 - ϵ A , K , y ,

we can easily get

(A.22) P [ H ^ Ω s - 1 ] A ^ [ Ω s ] x [ Ω s ] = P [ H ^ Ω s - 1 ] A ^ [ Ω s \ Ω s - 1 ] x [ Ω s \ Ω s - 1 ] = x [ s ] P [ H ^ Ω s - 1 ] A ^ [ s ]

and

(A.23) C s - 1 ( A ^ * [ Ω s ] A ^ [ Ω s ] ) = A ^ * [ s ] A ^ [ s ] - A ^ * [ s ] A ^ [ Ω s - 1 ] ( A ^ * [ Ω s - 1 ] A ^ [ Ω s - 1 ] ) - 1 A ^ * [ Ω s - 1 ] A ^ [ s ] = A ^ * [ Ω s ] P [ H ^ Ω s - 1 ] A ^ [ s ] .

From Lemma 2.4, Lemma 2.5, (A.22) and (A.23), we have

(A.24) P [ H ^ Ω s - 1 ] A ^ [ Ω s ] x [ Ω s ] 2 2 = x [ s ] 2 2 A ^ * [ Ω s ] P [ H ^ Ω s - 1 ] A ^ [ s ] = x [ s ] 2 2 C s - 1 ( A ^ * [ Ω s ] A ^ [ Ω s ] ) ( 1 - δ ^ K + 1 ) ( min 1 i s x [ i ] 2 ) 2 .

By combining (A.21) and (A.24), therefore, (2.6) holds.

For A ^ ( A ^ x - y ^ ) 2 , ϵ A , K , y ′′ and A ^ x = A ^ [ Ω s ] x [ Ω s ] , then it follows from Lemma 2.9 that

A ^ P [ H ^ Ω s ] y ^ 2 , = A ^ P [ H ^ Ω s ] ( A ^ [ Ω s ] x [ Ω s ] - A ^ x + y ^ ) 2 , A ^ P [ H ^ Ω s ] A ^ [ Ω s ] x [ Ω s ] 2 , + A ^ P [ H ^ Ω s ] ( A ^ x - y ^ ) 2 , = A ^ P [ H ^ Ω s ] ( A ^ x - y ^ ) 2 , = max s < j N ( P [ H ^ Ω s ] ( A ^ x - y ^ ) ) * A ^ [ j ] 2 ( 1 + K δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ ,

so (2.7) holds.

Note that, for k < s , we have

(A.25) A ^ P [ H ^ Ω k ] y ^ 2 , = A ^ P [ H ^ Ω k ] ( A ^ [ Ω s ] x [ Ω s ] - A ^ x + y ^ ) 2 , A ^ P [ H ^ Ω k ] A ^ [ Ω s ] x [ Ω s ] 2 , - A ^ P [ H ^ Ω k ] ( A ^ x - y ^ ) 2 , .

Using Lemma 2.4 and Lemma 2.5, we have

(A.26) P [ H ^ Ω k ] A ^ [ Ω s ] x [ Ω s ] 2 2 = P [ H ^ Ω k ] A ^ [ Ω s \ Ω k ] x [ Ω s \ Ω k ] 2 2 = x * [ Ω s \ Ω k ] A ^ * [ Ω s \ Ω k ] P [ H ^ Ω k ] A ^ [ Ω s \ Ω k ] x [ Ω s \ Ω k ] = x * [ Ω s \ Ω k ] A ^ * [ Ω s \ Ω k ] ( I n - A ^ [ Ω k ] ( A ^ [ Ω k ] A ^ [ Ω k ] ) - 1 A ^ [ Ω k ] ) A ^ [ Ω s \ Ω k ] x [ Ω s \ Ω k ] = x * [ Ω s \ Ω k ] C k ( A ^ * [ Ω s ] A ^ [ Ω s ] ) x [ Ω s \ Ω k ] ( 1 - δ ^ K + 1 ) x [ Ω s \ Ω k ] 2 2 .

On the other hand, we have

(A.27) P [ H ^ Ω k ] A ^ [ Ω s ] x [ Ω s ] 2 2 = P [ H ^ Ω k ] A ^ x 2 2 = P [ H ^ Ω k ] A ^ x , P [ H ^ Ω k ] i = 1 s A ^ [ i ] x [ i ] 2 = P [ H ^ Ω k ] A ^ x , i = k + 1 s A ^ [ i ] x [ i ] 2 = i = k + 1 s x [ i ] P [ H ^ Ω k ] A ^ x , A ^ [ i ] 2 x [ Ω s \ Ω k ] 2 , 1 max 1 i s A ^ [ i ] * ( P [ H ^ Ω k ] A ^ x ) 2 = x [ Ω s \ Ω k ] 2 , 1 A ^ [ Ω s ] * ( P [ H ^ Ω k ] A ^ x ) 2 , x [ Ω s \ Ω k ] 2 , 1 A ^ * ( P [ H ^ Ω k ] A ^ x ) 2 , .

From Lemma 2.6, (A.26) and (A.27), we get

(A.28) A ^ P [ H ^ Ω k ] A ^ [ Ω s ] x [ Ω s ] 2 , ( 1 - δ ^ K + 1 ) x [ Ω s \ Ω k ] 2 2 x [ Ω s \ Ω k ] 2 , 1 1 - δ ^ K + 1 s - k x [ Ω s \ Ω k ] 2 1 - δ ^ K + 1 s - k s - k min 1 i s x [ i ] 2 ( 1 - δ ^ K + 1 ) min 1 i s x [ i ] 2 .

In the following, by Lemma 2.9 and a simple calculation, we have

(A.29) A ^ P [ H ^ Ω k ] ( A ^ x - y ^ ) 2 , = max 1 i N ( P [ H ^ Ω k ] ( A ^ x - y ^ ) ) * A ^ [ i ] 2 = max 1 i N ( A ^ x - y ^ ) * ( P [ H ^ Ω k ] A ^ [ i ] ) 2 ( 1 + k δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ ( 1 + s - 1 δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ ( 1 + K - 1 δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ .

That is, the above (A.25), (A.28) and (A.29) show that (2.8) holds. So the lemma holds. ∎

Appendix B Appendix

Proof of Theorem 3.1

The proof can be divided into two steps in the first part of Theorem 3.1. We first prove that the BOMP algorithm selects correct indexes in all iterations. Next, we prove that BOMP performs exactly 𝑠 iterations. Induction is the first step in proof. Suppose that the BOMP algorithm selects correct indexes in the first 𝑘 iterations, i.e., Λ k Ω s with k < s . The assumption that we have Λ 0 = Ω s when k = 0 holds evidently. Next, we need to show that the BOMP algorithm selects a correct index in the k + 1 -th iteration, i.e., λ k + 1 Ω s .

Since r k = P [ H ^ Ω 0 ] y ^ , by Lemma 2.8, we have S 0 Λ k - S j Λ k > 0 for any k = 0 , 1 , , s - 1 with s < j N . Let P [ H ^ Ω s ] y ^ = i = 1 s A ^ [ i ] x ^ [ i ] = A ^ [ Ω s ] x ^ and A ^ x = i = 1 s A ^ [ i ] x [ i ] . Thus, from (2.4) and (3.2), we have

min 1 i s x ^ [ i ] 2 min 1 i s x [ i ] 2 - ϵ A , K , y 1 - δ ^ K + 1 > ϵ A , K , y 1 - ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) + ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 ϵ A , K , y 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] K + 1 - ϵ A , K , y 1 - δ ^ K + 1 > ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 ϵ A , K , y 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] K + 1 > 0 .

Thus, it suffices to show that x [ 2 ] 2 x [ s ] 2 0 . According to the definition of BRIP, it is easy to check that

A ^ ~ [ j ] 2 = A ^ [ j ] 2 1 + δ ^ K + 1 .

Thus, we obtain

(B.1) ( P [ H ^ Ω s ] ( A ^ x - y ^ ) ) * A ^ ~ [ j ] 2 P [ H ^ Ω s ] ( A ^ x - y ^ ) 2 A ^ ~ [ j ] 2 1 + δ ^ K + 1 ϵ A , K , y .

By (2.2) and (B.1), we have

(B.2) S 0 Λ k - S j Λ k x ^ [ Ω s \ Λ k ] 2 2 x ^ [ Ω s \ Λ k ] 2 , 1 - δ ^ K + 1 x ^ [ Ω s \ Λ k ] 2 x ^ [ Ω s \ Λ k ] 2 , 1 x ^ [ Ω s \ Λ k ] 2 , 1 2 + x ^ [ Ω s \ Λ k ] 2 2 - 1 + δ ^ K + 1 ϵ A , K , y = x ^ [ Ω s \ Λ k ] 2 ( x ^ [ Ω s \ Λ k ] 2 x ^ [ Ω s \ Λ k ] 2 , 1 - δ ^ K + 1 1 + ( x ^ [ Ω s \ Λ k ] 2 x ^ [ Ω s \ Λ k ] 2 , 1 ) 2 - 1 + δ ^ K + 1 ϵ A , K , y x ^ [ Ω s \ Λ k ] 2 ) .

Let f ( x ) = x - δ ^ K + 1 1 + x 2 with x [ 0 , + ) . By simple calculations, we have f ( x ) = 1 - δ ^ K + 1 x 1 + x 2 > 0 , where x [ 0 , + ) and 0 < δ ^ K + 1 1 , i.e., the function f ( x ) = x - δ ^ K + 1 1 + x 2 is monotonously increasing on the interval [ 0 , + ) . It is easy to verify that

x ^ [ Ω s \ Λ k ] 2 x ^ [ Ω s \ Λ k ] 2 , 1 1 s - k and x ^ [ Ω s \ Λ k ] 2 s - k min 1 i s x ^ [ i ] 2

by Lemma 2.6. Using (B.2), we obtain

S 0 Λ k - S j Λ k x ^ [ Ω s \ Λ k ] 2 ( 1 s - k - δ ^ K + 1 s - k + 1 s - k - 1 + δ ^ K + 1 ϵ A , K , y s - k min 1 i s x ^ [ i ] 2 ) .

We use (2.4), (3.2) and (B.2) to get

S 0 Λ k - S j Λ k , x ^ [ Ω s \ Λ k ] 2 s - k ( 1 - δ ^ K + 1 s - k + 1 - 1 + δ ^ K + 1 ϵ A , K , y min 1 i s x ^ [ i ] 2 ) > x ^ [ Ω s \ Λ k ] 2 s - k [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] ( K + 1 - s - k + 1 ) > x ^ [ Ω s \ Λ k ] 2 s - k [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] ( K + 1 - s + 1 ) 0 .

Then the BOMP algorithm selects a correct index in the k + 1 -th iteration. Next, we show that the BOMP algorithm with the stopping criterion r k 2 ϵ A , K , y performs exactly 𝑠 iterations, which is equivalent to show that r k 2 > ϵ A , K , y for k = 0 , 1 , 2 , , s - 1 and r s 2 ϵ A , K , y . Since the BOMP algorithm selects a correct index in each iteration under (3.2), we assume that Λ k = Ω k for 0 k s . It follows from (2.6) and (3.2) that

r 0 2 r 1 2 r s - 1 2 1 - δ ^ K + 1 min 1 i s x [ i ] 2 - ϵ A , K , y 1 - ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) min 1 i s x [ i ] 2 - ϵ A , K , y > 1 - ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 ϵ A , K , y 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] K + 1 1 - ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) ϵ A , K , y 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] K + 1 [ 1 - ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) ] ϵ A , K , y 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] K + 1 ϵ A , K , y ,

i.e., the BOMP algorithm does not terminate before performing the 𝑠-th iteration.

By Lemma 2.2, we have

r s 2 = P [ H ^ Ω s ] y ^ 2 = P [ H ^ Ω s ] ( A ^ [ Ω s ] x [ Ω s ] - A ^ x + y ^ ) 2 = P [ H ^ Ω s ] ( A ^ x - y ^ ) 2 A ^ x - y ^ 2 ϵ A , K , y .

Thus, the BOMP algorithm terminates after performing the 𝑠-th iteration, i.e., the BOMP algorithm performs 𝑠 iterations. So Theorem 3.1 is proved. ∎

Proof of Theorem 3.3

For any A ^ [ j ] with s < j N and any k N with 0 k < s , we first prove S 0 Λ k - S j Λ k > 0 for Λ k Ω s . It follows from (2.5) and (3.3) that

min 1 i s x ^ [ i ] 2 min 1 i s x [ i ] 2 - K ϵ A , K , y ′′ 1 - δ ^ K + 1 > K ϵ A , K , y ′′ 1 - ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) + [ 1 + K ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) 1 - ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) ] ϵ A , K , y ′′ 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] K + 1 - K ϵ A , K , y ′′ 1 - δ ^ K + 1 > [ 1 + K ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) 1 - ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) ] ϵ A , K , y ′′ 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] K + 1 > 0 .

Thus, it suffices to show that x [ 2 ] 2 x [ s ] 2 0 . Then, by (2.3), we obtain

( P [ H ^ Ω s ] ( A ^ x - y ^ ) ) * A ^ ~ [ j ] 2 = ( A ^ x - y ^ ) * ( P [ H ^ Ω s ] A ^ [ j ] ) 2 ( 1 + s δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ ( 1 + K δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ .

By (2.2) and (B.1), we have

(B.3) S 0 Λ k - S j Λ k x ^ [ Ω s \ Λ k ] 2 2 x ^ [ Ω s \ Λ k ] 2 , 1 - δ ^ K + 1 x ^ [ Ω s \ Λ k ] 2 x ^ [ Ω s \ Λ k ] 2 , 1 x ^ [ Ω s \ Λ k ] 2 , 1 2 + x ^ [ Ω s \ Λ k ] 2 2 - ( 1 + K δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ = x ^ [ Ω s \ Λ k ] 2 ( x ^ [ Ω s \ Λ k ] 2 x ^ [ Ω s \ Λ k ] 2 , 1 - δ ^ K + 1 1 + ( x ^ [ Ω s \ Λ k ] 2 x ^ [ Ω s \ Λ k ] 2 , 1 ) 2 - ( 1 + K δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ x ^ [ Ω s \ Λ k ] 2 ) x ^ [ Ω s \ Λ k ] 2 ( 1 s - k - δ ^ K + 1 s - k + 1 s - k - ( 1 + K δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ s - k min 1 i s x ^ [ i ] 2 ) .

By combining (2.5), (3.3) and (B.3), we have

S 0 Λ k - S j Λ k x ^ [ Ω s \ Λ k ] 2 s - k ( 1 - δ ^ K + 1 s - k + 1 - ( 1 + K δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ min 1 i s x ^ [ i ] 2 ) > x ^ [ Ω s \ Λ k ] 2 s - k [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] ( K + 1 - s - k + 1 ) > x ^ [ Ω s \ Λ k ] 2 s - k [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] ( K + 1 - s + 1 ) 0 .

Thus, the BOMP algorithm selects a correct index at each iteration. Next, we show that the BOMP algorithm with the stopping criterion

A ^ r k 2 , ( 1 + K δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′

performs exactly 𝑠 iterations, which is equivalent to show that

A ^ r k 2 , > ( 1 + K δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ for k = 0 , 1 , 2 , , s - 1 ,
A ^ r s 2 , ( 1 + K δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ .
Similarly, we assume that Λ k = Ω k for 0 k s . It follows from (2.8) and (3.3) that, for any 0 k < s ,
A ^ r k 2 , = A ^ P [ H ^ Ω k ] y ^ 2 ,
[ 1 - ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) ] min 1 i s x [ i ] 2 - ( 1 + K - 1 δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′
> K ϵ A , K , y ′′ + [ 1 - ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) + K ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) ] ϵ A , K , y ′′ 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] K + 1
- ( 1 + K - 1 [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] ) ϵ A , K , y ′′
= K ϵ A , K , y ′′ + ϵ A , K , y ′′ + [ ( K + K + 1 - 1 ) ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) ] ϵ A , K , y ′′ 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] K + 1
- ( 1 + K - 1 [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] ) ϵ A , K , y ′′
K ϵ A , K , y ′′ + ϵ A , K , y ′′ + [ ( K + K + 1 - 1 ) ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) ] ϵ A , K , y ′′ 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ]
- ( 1 + K - 1 [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] ) ϵ A , K , y ′′
= K ϵ A , K , y ′′ + [ ( K + K + 1 - K - 1 - 1 ) ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) ] ϵ A , K , y ′′ 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ]
= { 1 + K [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] } ϵ A , K , y ′′
+ { K - 1 + ( K + 1 - K - 1 - 1 ) ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] } ϵ A , K , y ′′ .
When

δ K + 1 < 1 K + 1 ( 1 + ϵ A ( K + 1 ) ) 2 + 1 ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ,

if K = 1 , it is obvious that

K - 1 + ( K + 1 - K - 1 - 1 ) ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] 0

for K > 1 . It is easy to check that

K + 1 - K - 1 - 1 < 0 , ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] 1 ( K + 1 - 1 ) , K > K - 1 K + 1 - 1 .

Thus,

K - 1 + ( K + 1 - K - 1 - 1 ) ( ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ) 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] K - K - 1 K + 1 - 1 > 0 .

So the BOMP algorithm does not terminate before performing the 𝑠-th iteration.

In the following, by (2.7), we can easily get

A ^ r s 2 , = A ^ P [ H ^ Ω s ] y ^ 2 , ( 1 + K δ ^ K + 1 1 - δ ^ K + 1 ) ϵ A , K , y ′′ { 1 + K [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] 1 - [ ( 1 + δ K + 1 ) ( 1 + ϵ A ( K + 1 ) ) 2 - 1 ] } ϵ A , K , y ′′ .

Therefore, the BOMP algorithm terminates after performing the 𝑠-th iteration, i.e., the BOMP algorithm performs 𝑠 iterations. So Theorem 3.3 is proved. ∎

Proof of Theorem 3.5

Since

(B.4) A ^ [ Ω s ] x = P [ H ^ Ω s ] y ^ = P [ H ^ Ω s ] ( A ^ x - ( A ^ x - y ^ ) ) = P [ H ^ Ω s ] ( A ^ [ Ω s ] x [ Ω s ] - ( A ^ x - y ^ ) ) = A ^ [ Ω s ] x [ Ω s ] - P [ H ^ Ω s ] ( A ^ x - y ^ ) ,

from Lemma 2.4, (B.4) and the definition of BRIP, for A ^ x - y ^ 2 ϵ A , K , y , we have

x - x [ Λ s ] 2 A ^ [ Ω s ] ( x - x [ Λ s ] ) 2 1 - δ ^ K = P [ H ^ Ω s ] ( A ^ x - y ^ ) 2 1 - δ ^ K
A ^ x - y ^ 2 1 - δ ^ K ϵ A , K , y 1 - δ ^ K ϵ A , K , y 2 - ( 1 + δ K ) ( 1 + ϵ A ( K ) ) 2 ,
and for A ^ ( A ^ x - y ^ ) 2 , ϵ A , K , y ′′ , we also have

(B.5) ( 1 - δ ^ K ) x - x [ Λ s ] 2 2 A ^ [ Ω s ] ( x - x [ Λ s ] ) 2 2 = P [ H ^ Ω s ] ( A ^ x - y ^ ) 2 2 = A ^ [ Ω s ] ( A ^ * [ Ω s ] A ^ [ Ω s ] ) - 1 A ^ * [ Ω s ] ( A ^ x - y ^ ) 2 2 = ( A ^ x - y ^ ) * A ^ [ Ω s ] ( A ^ * [ Ω s ] A ^ [ Ω s ] ) - 1 A ^ * [ Ω s ] ( A ^ x - y ^ ) .

Using ( 1 - δ ^ K ) I s A ^ * [ Ω s ] A ^ [ Ω s ] ( 1 + δ ^ K ) I s , we have

(B.6) 1 1 + δ ^ K I s ( A ^ * [ Ω s ] A ^ [ Ω s ] ) - 1 1 1 - δ ^ K I s .

It follows from (B.5) and (B.6) that

( 1 - δ ^ K ) x - x [ Λ s ] 2 2 1 1 - δ ^ K A ^ * [ Ω s ] ( A ^ x - y ^ ) 2 2 .

According to the above inequality, we have

$$\|\hat{x}-x_{[\Lambda^{s}]}\|_{2}\le\frac{1}{1-\hat{\delta}_{K}}\|\hat{A}^{*}_{[\Omega^{s}]}(\hat{A}x-\hat{y})\|_{2}\le\frac{\sqrt{s}}{1-\hat{\delta}_{K}}\|\hat{A}^{*}_{[\Omega^{s}]}(\hat{A}x-\hat{y})\|_{2,\infty}\le\frac{\sqrt{s}}{1-\hat{\delta}_{K}}\|\hat{A}^{*}(\hat{A}x-\hat{y})\|_{2,\infty}\le\frac{\sqrt{s}}{1-\hat{\delta}_{K}}\epsilon''_{A,K,y}\le\frac{\sqrt{s}}{2-(1+\delta_{K})(1+\epsilon_A^{(K)})^{2}}\epsilon''_{A,K,y}.$$

So Theorem 3.5 is proved. ∎
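Two elementary facts do the real work in this proof: the eigenvalue bounds invert as in (B.6), and a block vector $u$ with $s$ blocks satisfies $\|u\|_{2}\le\sqrt{s}\,\|u\|_{2,\infty}$. Both are easy to confirm numerically; the snippet below is our own illustration with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

# (i) Passage to (B.6): if the eigenvalues of G = A^*A lie in [1-dh, 1+dh],
#     those of G^{-1} lie in [1/(1+dh), 1/(1-dh)].
dh, k = 0.4, 8
Q, _ = np.linalg.qr(rng.standard_normal((k, k)))    # random orthonormal basis
G = Q @ np.diag(rng.uniform(1 - dh, 1 + dh, size=k)) @ Q.T
inv_eigs = np.linalg.eigvalsh(np.linalg.inv(G))
assert inv_eigs.min() >= 1 / (1 + dh) - 1e-10
assert inv_eigs.max() <= 1 / (1 - dh) + 1e-10

# (ii) ||u||_2 <= sqrt(s) * ||u||_{2,inf} for a vector of s blocks of length d,
#      the step that introduces the sqrt(s) factor in the final chain.
s, d = 6, 4
u = rng.standard_normal((s, d))
assert np.linalg.norm(u) <= np.sqrt(s) * np.linalg.norm(u, axis=1).max() + 1e-12
print("both facts verified")
```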

Proof of Theorem 3.7

We first suppose that the BOMP algorithm selects correct indices in the first $k$ iterations, i.e., $\Lambda^{k}\subseteq\Omega^{s}$ with $k<s$. Since $\Lambda^{0}=\emptyset\subseteq\Omega^{s}$, the assumption holds trivially for $k=0$. Next, we prove $S_{0}^{\Lambda^{k}}-S_{j}^{\Lambda^{k}}>0$ for all $s<j\le N$. When $\|\hat{A}x-\hat{y}\|_{2}=0$, we have $\hat{x}=x_{[\Omega^{s}]}$. It follows from (2.2) that

$$
\begin{aligned}
S_{0}^{\Lambda^{k}}-S_{j}^{\Lambda^{k}}
&\ge\frac{\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2}^{2}\,\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2,1}}{\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2,1}^{2}+\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2}^{2}}
-\hat{\delta}_{K+1}\,\frac{\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2}\,\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2,1}}{\sqrt{\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2,1}^{2}+\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2}^{2}}}\\
&=\frac{\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2}\,\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2,1}}{\sqrt{\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2,1}^{2}+\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2}^{2}}}
\Biggl(\frac{1}{\sqrt{1+\bigl(\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2,1}/\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2}\bigr)^{2}}}-\hat{\delta}_{K+1}\Biggr).
\end{aligned}
$$

According to the definition of $g(x)$, we have $g(x)\ge\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2,1}/\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2}$. Since $u\mapsto 1/\sqrt{1+u^{2}}$ is decreasing, it follows from (3.6) that

$$
\begin{aligned}
S_{0}^{\Lambda^{k}}-S_{j}^{\Lambda^{k}}
&\ge\frac{\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2}\,\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2,1}}{\sqrt{\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2,1}^{2}+\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2}^{2}}}
\Bigl(\frac{1}{\sqrt{1+g^{2}(x)}}-\hat{\delta}_{K+1}\Bigr)\\
&\ge\frac{\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2}\,\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2,1}}{\sqrt{\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2,1}^{2}+\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2}^{2}}}
\Bigl[\frac{1}{\sqrt{1+g^{2}(x)}}-\bigl((1+\delta_{K+1})(1+\epsilon_A^{(K+1)})^{2}-1\bigr)\Bigr]>0.
\end{aligned}
$$

Hence the $(k+1)$-th iteration also selects a correct index. By induction, the BOMP algorithm can exactly recover the block $K$-sparse signal in $K$ iterations, so Theorem 3.7 is proved. ∎
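The factorization used in the display above is easy to confirm numerically; writing $a$ and $b$ as shorthand (ours) for $\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2}$ and $\|x_{[\Omega^{s}\setminus\Lambda^{k}]}\|_{2,1}$:

```python
import numpy as np

# Verify: a^2*b/(a^2 + b^2) - dh*a*b/sqrt(a^2 + b^2)
#           = a*b/sqrt(a^2 + b^2) * (1/sqrt(1 + (b/a)^2) - dh).
rng = np.random.default_rng(3)
for _ in range(1000):
    a, b, dh = rng.uniform(0.1, 5.0, size=3)
    lhs = a**2 * b / (a**2 + b**2) - dh * a * b / np.sqrt(a**2 + b**2)
    rhs = a * b / np.sqrt(a**2 + b**2) * (1 / np.sqrt(1 + (b / a) ** 2) - dh)
    assert np.isclose(lhs, rhs)
print("factorization identity verified")
```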

References

[1] W. U. Bajwa, M. F. Duarte and R. Calderbank, Conditioning of random block subdictionaries with applications to block-sparse recovery and regression, IEEE Trans. Inform. Theory 61 (2015), no. 7, 4060–4079. doi:10.1109/TIT.2015.2429632.

[2] R. G. Baraniuk, Single-pixel imaging via compressive sampling, IEEE Signal Process. Mag. 25 (2008), no. 2, 83–91. doi:10.1109/MSP.2007.914730.

[3] Z. Ben-Haim and Y. C. Eldar, Near-oracle performance of greedy block-sparse estimation techniques from noisy measurements, IEEE J. Selected Topics Signal Process. 5 (2011), no. 5, 1032–1047. doi:10.1109/JSTSP.2011.2160250.

[4] T. Blumensath and M. Davies, Compressed sensing and source separation, Independent Component Analysis and Signal Separation, Lecture Notes in Comput. Sci. 4666, Springer, Berlin (2007), 341–348. doi:10.1007/978-3-540-74494-8_43.

[5] E. J. Candes, Compressive sampling, Proceedings of the International Congress of Mathematicians (Madrid 2006), Vol. III, Eur. Math. Soc., Zürich (2006), 1433–1452. doi:10.4171/022-3/69.

[6] E. J. Candes, J. Romberg and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory 52 (2006), no. 2, 489–509. doi:10.1109/TIT.2005.862083.

[7] J. Chen and X. Huo, Theoretical results on sparse representations of multiple-measurement vectors, IEEE Trans. Signal Process. 54 (2006), no. 12, 4634–4643. doi:10.1109/TSP.2006.881263.

[8] Y. C. Eldar, P. Kuppinger and H. Bölcskei, Block-sparse signals: Uncertainty relations and efficient recovery, IEEE Trans. Signal Process. 58 (2010), no. 6, 3042–3054. doi:10.1109/TSP.2010.2044837.

[9] Y. C. Eldar and M. Mishali, Robust recovery of signals from a structured union of subspaces, IEEE Trans. Inform. Theory 55 (2009), no. 11, 5302–5316. doi:10.1109/TIT.2009.2030471.

[10] E. Elhamifar and R. Vidal, Block-sparse recovery via convex optimization, IEEE Trans. Signal Process. 60 (2012), no. 8, 4094–4107. doi:10.1109/TSP.2012.2196694.

[11] E. Elhamifar and R. Vidal, Sparse subspace clustering: Algorithm, theory, and applications, IEEE Trans. Pattern Anal. Machine Intell. 35 (2013), no. 11, 2765–2781. doi:10.1109/TPAMI.2013.57.

[12] A. C. Fannjiang, T. Strohmer and P. Yan, Compressed remote sensing of sparse objects, SIAM J. Imaging Sci. 3 (2010), no. 3, 595–618. doi:10.1137/090757034.

[13] N. C. Feng, J. J. Wang and W. W. Wang, Sparse signal recovery with prior information by iterative reweighted least squares algorithm, J. Inverse Ill-Posed Probl. 26 (2017), no. 2, 171–184. doi:10.1515/jiip-2016-0087.

[14] M. A. Herman and T. Strohmer, High-resolution radar via compressed sensing, IEEE Trans. Signal Process. 57 (2009), no. 6, 2275–2284. doi:10.1109/TSP.2009.2014277.

[15] M. A. Herman and T. Strohmer, General deviants: An analysis of perturbations in compressed sensing, IEEE J. Selected Topics Signal Process. 4 (2010), no. 2, 342–349. doi:10.1109/JSTSP.2009.2039170.

[16] T. Ince and A. Nacaroglu, On the perturbation of measurement matrix in non-convex compressed sensing, Signal Process. 98 (2014), no. 5, 143–149. doi:10.1016/j.sigpro.2013.11.025.

[17] Y. Jiao, B. Jin and X. Lu, Group sparse recovery via the $\ell_0(\ell_2)$ penalty: Theory and algorithm, IEEE Trans. Signal Process. 65 (2017), no. 4, 998–1012. doi:10.1109/TSP.2016.2630028.

[18] J. Lei and S. Liu, Inversion algorithm based on the generalized objective functional for compressed sensing, Appl. Math. Model. 37 (2013), no. 6, 4407–4429. doi:10.1016/j.apm.2012.09.049.

[19] C. Liu, Y. Fang and J. Liu, Some new results about sufficient conditions for exact support recovery of sparse signals via orthogonal matching pursuit, IEEE Trans. Signal Process. 65 (2017), no. 17, 4511–4524. doi:10.1109/TSP.2017.2711543.

[20] C. Y. Liu, J. J. Wang and W. W. Wang, Non-convex block-sparse compressed sensing with redundant dictionaries, IET Signal Process. 11 (2017), no. 2, 171–180. doi:10.1049/iet-spr.2016.0272.

[21] M. Lustig, D. L. Donoho and J. M. Pauly, Sparse MRI: The application of compressed sensing for rapid MR imaging, Magnetic Resonance Med. 58 (2007), no. 6, 1182–1195. doi:10.1002/mrm.21391.

[22] Q. Mo, A sharp restricted isometry constant bound of orthogonal matching pursuit, preprint (2015), https://arxiv.org/abs/1501.01708.

[23] H. Nyquist, Certain topics in telegraph transmission theory, Trans. Amer. Inst. Electr. Eng. 47 (1928), no. 2, 617–644. doi:10.1109/T-AIEE.1928.5055024.

[24] F. Parvaresh, H. Vikalo and H. Misra, Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays, IEEE J. Selected Topics Signal Process. 2 (2008), no. 3, 275–285. doi:10.1109/JSTSP.2008.924384.

[25] G. Swirszcz, N. Abe and A. C. Lozano, Grouped orthogonal matching pursuit for variable selection and prediction, Adv. Neural Inform. Process. Syst. 22 (2009), 1150–1158.

[26] R. Vidal and Y. Ma, A unified algebraic approach to 2-D and 3-D motion segmentation and estimation, J. Math. Imaging Vision 25 (2006), no. 3, 403–421. doi:10.1007/s10851-006-8286-z.

[27] J. Wang, Support recovery with orthogonal matching pursuit in the presence of noise, IEEE Trans. Signal Process. 63 (2015), no. 21, 5868–5877. doi:10.1109/TSP.2015.2468676.

[28] J. Wang and B. Shim, On the recovery limit of sparse signals using orthogonal matching pursuit, IEEE Trans. Signal Process. 60 (2012), no. 9, 4973–4976. doi:10.1109/TSP.2012.2203124.

[29] J. J. Wang, J. Zhang and W. W. Wang, A perturbation analysis of nonconvex block-sparse compressed sensing, Commun. Nonlinear Sci. Numer. Simul. 29 (2015), no. 1–3, 416–426. doi:10.1016/j.cnsns.2015.05.022.

[30] J. Wen, Z. Zhou and Z. Liu, Sharp sufficient conditions for stable recovery of block sparse signals by block orthogonal matching pursuit, Appl. Comput. Harmon. Anal. 47 (2019), no. 3, 948–974. doi:10.1016/j.acha.2018.02.002.

[31] J. Wen, Z. Zhou and J. Wang, A sharp condition for exact support recovery of sparse signals with orthogonal matching pursuit, IEEE International Symposium on Information Theory, IEEE Press, Piscataway (2016), 2364–2368. doi:10.1109/ISIT.2016.7541722.

[32] J. Wen, X. Zhu and D. Li, Improved bounds on restricted isometry constant for orthogonal matching pursuit, Electron. Lett. 49 (2013), no. 23, 1487–1489. doi:10.1049/el.2013.2222.

[33] C. Y. Zhang, Y. T. Li and F. Chen, On Schur complement of block diagonally dominant matrices, Linear Algebra Appl. 414 (2006), no. 2–3, 533–546. doi:10.1016/j.laa.2005.10.046.

Received: 2019-06-24
Revised: 2020-08-14
Accepted: 2020-09-25
Published Online: 2020-11-17
Published in Print: 2021-10-01

© 2020 Liu, Zhang, Qiu, Li and Leng, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
