Article (Open Access)

A perturbation analysis based on group sparse representation with orthogonal matching pursuit

Chunyan Liu, Feng Zhang, Wei Qiu, Chuan Li and Zhenbei Leng
Published/Copyright: November 17, 2020

Abstract

In this paper, by exploiting the orthogonal projection matrix and the block Schur complement, we extend the study to a completely perturbed model. Based on the block restricted isometry property (BRIP), we establish sufficient conditions for recovering the support of block K-sparse signals via the block orthogonal matching pursuit (BOMP) algorithm. Under some constraints on the minimum magnitude of the nonzero elements of the block K-sparse signals, we prove that the support of the block K-sparse signals can be exactly recovered by the BOMP algorithm in the case of ℓ_2 and ℓ_2/ℓ_∞ bounded total noise if A satisfies the BRIP of order K + 1 with

\delta_{K+1} < \frac{1}{\sqrt{K+1}\,(1+\epsilon_A^{(K+1)})^2} + \frac{1}{(1+\epsilon_A^{(K+1)})^2} - 1.

In addition, we show that this is a sharp condition for exactly recovering any block K-sparse signal with the BOMP algorithm. Moreover, we give an upper bound on the reconstruction error between the recovered block-sparse signal and the original block-sparse signal. In the noiseless but perturbed case, we also prove that the BOMP algorithm can exactly recover the block K-sparse signal under some constraints on the block K-sparse signal and

\delta_{K+1} < \frac{2+\sqrt{2}}{2(1+\epsilon_A^{(K+1)})^2} - 1.

Finally, we compare the empirical performance of the perturbed OMP and perturbed BOMP algorithms in a numerical study, and we present numerical experiments that verify the main theorem using the completely perturbed BOMP algorithm.

MSC 2010: 49N45; 47A55; 94A12

1 Introduction

Compressive Sensing (CS), pioneered by Donoho and Candès, Romberg and Tao [23, 6], is an efficient data acquisition paradigm which has captured lots of attention in different fields, including machine learning, signal processing, pattern recognition, information theory, image processing, engineering and communication [5, 2, 18, 21, 13].

In essence, a crucial concern in CS is how to recover a sparse or compressible signal from an underdetermined system of linear equations. Generally, we consider the model y = Ax, where y ∈ ℂ^n (ℂ denotes the complex field) is the vector of observed measurements, A ∈ ℂ^{n×m} (n ≪ m) is a given measurement matrix and x ∈ ℂ^m is an unknown signal that needs to be recovered. In practice, we usually run into the case that the observation vector y is disturbed by noise v, and consider the CS model

(1.1) \hat{y} = Ax + v,

where v ∈ ℂ^n is additive noise independent of x. However, the measurement matrix is affected by various kinds of external and internal interference during compressive sampling, so the actual measurement matrix is more or less perturbed, and the measurement results can deviate substantially. In practical applications, measurement matrix error is often unavoidable, such as the error in analog-to-digital conversion and the lattice error in signal sampling. The perturbed case arises in source separation [4], remote sensing [12], radar [14] and other settings. That is to say, not only is the observation vector y perturbed by v, but the measurement matrix A is also perturbed by E in practical applications (E ∈ ℂ^{n×m} is a perturbation matrix). In such a scenario, there is an extra multiplicative noise term Ex in (1.1) when the perturbed sensing matrix is Â = A + E. In 2010, Herman and Strohmer [15] proposed the following completely perturbed ℓ_1 minimization model to stably recover the signal x from the measurements ŷ:

(1.2) \min \|x\|_1 \quad \text{such that} \quad \|\hat{A}x - \hat{y}\|_2 \le \epsilon'_{A,K,y},

where ε'_{A,K,y} is the total noise. They showed that the signal x can be reliably recovered by solving (1.2) under the condition \delta_{2K} \le \sqrt{2}/(1+\epsilon_A^{(2K)})^2 - 1. In [16], Ince and Nacaroglu established a sufficient condition that guarantees stable recovery of the signal by solving a completely perturbed ℓ_p minimization problem.

However, the nonzero elements of some real-world signals may appear in a few fixed blocks [20], a structure different from that of a conventional sparse signal. Block-sparse signals appear in various applications, such as DNA microarrays [24], equalization of sparse communication channels [26], multiple measurement signals [7] and clustering of data in multiple subspaces [11]. From the mathematical viewpoint, a block-structured signal x over the block index set I = {d_1, …, d_N} can be modeled as follows:

x = [\underbrace{x_1 \cdots x_{d_1}}_{x[1]}\ \underbrace{x_{d_1+1} \cdots x_{d_1+d_2}}_{x[2]}\ \cdots\ \underbrace{x_{m-d_N+1} \cdots x_m}_{x[N]}]^T,

where x[i] denotes the i-th block of x and d_i is the block size of the i-th block. If x has at most K nonzero blocks, i.e., ‖x‖_{2,0} = ∑_{i=1}^N I(‖x[i]‖_2 > 0) ≤ K, we call x a block K-sparse signal. Next, the mixed ℓ_2/ℓ_1 norm is defined as ‖x‖_{2,1} = ∑_{i=1}^N ‖x[i]‖_2, the mixed ℓ_2/ℓ_∞ norm is ‖x‖_{2,∞} = max{‖x[1]‖_2, …, ‖x[N]‖_2}, and ‖x‖_{2,2} is written simply as ‖x‖_2, i.e., ‖x‖_{2,2} = ‖x‖_2. In particular, a block-sparse signal reduces to a conventional sparse signal when the block sizes are d_1 = ⋯ = d_N = 1. In contrast with block-sparse signals, conventional sparse signals will simply be called sparse signals in this paper.
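To make the block structure and mixed norms concrete, here is a small numerical sketch (NumPy, uniform block size d for simplicity; the helper name block_norms is ours, not from the paper):

```python
import numpy as np

def block_norms(x, d):
    """Return (||x||_{2,0}, ||x||_{2,1}, ||x||_{2,inf}) for uniform block size d."""
    b = np.linalg.norm(x.reshape(-1, d), axis=1)   # per-block norms ||x[i]||_2
    return int(np.count_nonzero(b)), float(b.sum()), float(b.max())

# N = 4 blocks of size d = 3; only two blocks are nonzero, so x is block 2-sparse.
x = np.array([3.0, 4.0, 0.0,  0.0, 0.0, 0.0,  0.0, 12.0, 5.0,  0.0, 0.0, 0.0])
l20, l21, l2inf = block_norms(x, 3)
print(l20, l21, l2inf)   # 2 18.0 13.0
```

The active blocks have norms 5 and 13, so ‖x‖_{2,0} = 2, ‖x‖_{2,1} = 18 and ‖x‖_{2,∞} = 13.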

Block orthogonal matching pursuit (BOMP) is a widely used greedy algorithm which quickly finds a locally optimal solution instead of seeking the global optimum. The basic idea of the BOMP algorithm is, at each iteration, to select the block of columns of the measurement matrix most strongly correlated with the current residual, add that block's index to the label set, and finally determine the support of the block-sparse signal by iterating [8]. Compared with the standard OMP algorithm [27], Eldar and Kuppinger [8] have shown that the BOMP algorithm reconstructs block-sparse signals better.

For any set Λ ⊂ {1, 2, …, N}, let Â[Λ] denote the submatrix of Â that contains only the blocks indexed by Λ, and let x[Λ] be the subvector of x that contains only the blocks indexed by Λ. For instance, if Λ = {2, 3, 5}, we have Â[Λ] = [Â[2], Â[3], Â[5]] and x[Λ] = (x[2], x[3], x[5])^T. The BOMP algorithm for Â is formally described in Algorithm 1 [25].

Algorithm 1 (The BOMP algorithm for Â)

Input: measurements ŷ, the perturbed sensing matrix Â and sparsity K.

Initialize: k = 0, r^0 = ŷ, Λ_0 = ∅.

Iterate: repeat until a stopping criterion is met.

  1. k = k + 1,

  2. λ_k = arg max_{1≤i≤N} ‖Â*[i] r^{k-1}‖_2,

  3. Λ_k = Λ_{k-1} ∪ {λ_k},

  4. x̂[Λ_k] = arg min_x ‖ŷ − Â[Λ_k] x‖_2, where supp(x) = {i : ‖x[i]‖_2 ≠ 0} = Λ_k,

  5. r^k = ŷ − Â[Λ_k] x̂[Λ_k].

Output: x̂ = arg min_{supp(x) ⊂ Λ_K} ‖ŷ − Âx‖_2.
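The steps of Algorithm 1 can be sketched in Python as follows (a minimal NumPy implementation for uniform block size d, run for exactly K iterations; the function and variable names are ours):

```python
import numpy as np

def bomp(A_hat, y_hat, d, K):
    """Minimal BOMP (Algorithm 1) for uniform block size d, run for K iterations."""
    n, m = A_hat.shape
    N = m // d
    blocks = [A_hat[:, i * d:(i + 1) * d] for i in range(N)]
    r, support = y_hat.copy(), []
    for _ in range(K):
        corr = [np.linalg.norm(B.conj().T @ r) for B in blocks]  # step 2
        support.append(int(np.argmax(corr)))                      # step 3
        cols = np.concatenate([blocks[i] for i in support], axis=1)
        coef, *_ = np.linalg.lstsq(cols, y_hat, rcond=None)       # step 4 (least squares)
        r = y_hat - cols @ coef                                   # step 5 (residual update)
    x_hat = np.zeros(m)
    for j, i in enumerate(support):
        x_hat[i * d:(i + 1) * d] = coef[j * d:(j + 1) * d]
    return sorted(support), x_hat

# Noiseless sanity check on a block 2-sparse signal (blocks with indices 2 and 10 active).
rng = np.random.default_rng(0)
A = rng.standard_normal((48, 64)) / np.sqrt(48)
x = np.zeros(64)
x[8:12] = [1.0, -2.0, 1.5, 0.5]
x[40:44] = [-1.0, 2.5, 0.5, 1.0]
support, x_hat = bomp(A, A @ x, d=4, K=2)
print(support)
```

With 48 Gaussian measurements of an 8-nonzero signal this sketch typically recovers the two active blocks exactly in the noiseless case.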

In the following, the block restricted isometry property (BRIP) of the sensing matrix A has been proposed to analyze the recovery performance of algorithms for block-sparse signals [9]; that is, a matrix A satisfies the BRIP of order K if there exists a smallest positive constant δ_{BK} ∈ (0, 1) such that

(1 − δ_{BK}) ‖x‖_2^2 ≤ ‖Ax‖_2^2 ≤ (1 + δ_{BK}) ‖x‖_2^2

for all block K-sparse x over the block index set ℐ, where δ_{BK} is the block restricted isometry constant (BRIC). Whenever this causes no confusion, we simply write δ_K for δ_{BK} in the rest of this paper. Without loss of generality, we assume that the blocks of a block K-sparse signal x ∈ ℂ^m are ordered so that

‖x[1]‖_2 ≥ ‖x[2]‖_2 ≥ ⋯ ≥ ‖x[K]‖_2 ≥ 0,

with ‖x[K+1]‖_2 = ‖x[K+2]‖_2 = ⋯ = ‖x[N]‖_2 = 0.
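For very small problems, the BRIC δ_K in the definition above can be computed by exhaustive search over block supports; a brute-force sketch (our own helper, exponential in the number of blocks, so for illustration only):

```python
import itertools
import numpy as np

def bric(A, d, K):
    """delta_K of A for uniform block size d, by brute force over all K-block supports."""
    N = A.shape[1] // d
    delta = 0.0
    for S in itertools.combinations(range(N), K):
        cols = np.concatenate([A[:, i * d:(i + 1) * d] for i in S], axis=1)
        eig = np.linalg.eigvalsh(cols.conj().T @ cols)   # spectrum of the Gram matrix
        delta = max(delta, 1.0 - eig[0], eig[-1] - 1.0)
    return delta

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 16)) / np.sqrt(30)   # N = 8 blocks of size d = 2
d1, d2 = bric(A, 2, 1), bric(A, 2, 2)
print(d1 <= d2)   # True: the BRIC is monotone in the order (cf. Lemma 2.3)
```

Monotonicity holds because every 1-block Gram matrix is a principal submatrix of some 2-block Gram matrix, so its eigenvalues interlace.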

In this paper, we investigate block-sparse compressed sensing with the BOMP algorithm for the completely perturbed model. Among several possible noise types, we consider only ℓ_2 bounded noise and ℓ_2/ℓ_∞ bounded noise for the block-sparse compressed sensing of the completely perturbed model, i.e.,

(1.3) ‖Âx − ŷ‖_2 ≤ ε'_{A,K,y},
(1.4) ‖Â*(Âx − ŷ)‖_{2,∞} ≤ ε''_{A,K,y},

where ε''_{A,K,y} = ‖A‖_2 (1 + ε_A) ε'_{A,K,y} (the proof is relegated to Section 2), and the matrix Â can be written blockwise as

\hat{A} = [\underbrace{\hat{A}_1 \cdots \hat{A}_{d_1}}_{\hat{A}[1]}\ \underbrace{\hat{A}_{d_1+1} \cdots \hat{A}_{d_1+d_2}}_{\hat{A}[2]}\ \cdots\ \underbrace{\hat{A}_{m-d_N+1} \cdots \hat{A}_m}_{\hat{A}[N]}].

To carry out the theoretical analysis of block-sparse compressed sensing with the BOMP algorithm for the completely perturbed model in (1.3) and (1.4), the restricted isometry property (RIP) for Â (see [15]) is extended to the BRIP for Â in Section 2 (see [29]).

In recent years, many researchers have been trying to find RIP-based conditions guaranteeing the exact or stable recovery of K-sparse signals with the OMP algorithm. For the unperturbed case, if δ_{K+1} < 1/√(K+1), a K-sparse signal x can be reliably recovered by using the OMP algorithm to solve (1.1) in K iterations; see, e.g., [28]. Mo [22] has proved that δ_{K+1} < 1/√(K+1) is a sharp condition for exact or stable recovery of K-sparse signals via the OMP algorithm. In particular, Wen and Zhu [32] have shown that the OMP algorithm may fail to recover a K-sparse signal x in K iterations when δ_{K+1} ≥ 1/√(K+1). Wen, Zhou and Wang [31] established a sufficient condition that guarantees exact or stable recovery of supp(x) with the OMP algorithm in K iterations, and Liu and Fang improved this sufficient condition in [19]. In addition, under some constraints on the minimum magnitude of the nonzero block elements of a block K-sparse signal x (i.e., min_{i ∈ supp(x)} ‖x[i]‖_2) and assuming A satisfies the BRIP of order K + 1 with δ_{K+1} < 1/√(K+1), Wen, Zhou and Liu [30] have proved that the block K-sparse signal can be exactly or stably recovered by using the BOMP algorithm in K iterations. If K ≥ 1 and 1/√(K+1) ≤ t < 1, [30] shows that there exist a matrix A satisfying the BRIP with δ_{K+1} = t and a block K-sparse signal x such that BOMP may fail to recover x in K iterations.

From the applied viewpoint, it is important to recover block-sparse signals in the setting of complete perturbation. In this paper, under the BRIP condition, we study exact support recovery of block-sparse signals for the block-sparse compressed sensing of the completely perturbed model with the BOMP algorithm in the complex setting. The main results of this paper are summarized as follows. Firstly, if A satisfies the BRIP of order K + 1 with

(1.5) \delta_{K+1} < \frac{1}{\sqrt{K+1}\,(1+\epsilon_A^{(K+1)})^2} + \frac{1}{(1+\epsilon_A^{(K+1)})^2} - 1

and min_{i ∈ supp(x)} ‖x[i]‖_2 exceeds a certain lower bound in the block-sparse compressed sensing of the completely perturbed model, we show that the support of the block K-sparse signal can be exactly or stably recovered with a certain stopping criterion under either ℓ_2 or ℓ_2/ℓ_∞ bounded total noise. In addition, we provide an upper bound on the reconstruction error between the recovered block-sparse signal and the original block-sparse signal. Secondly, we show that (1.5) is a sharp condition for exactly recovering any block K-sparse signal with the BOMP algorithm. Thirdly, when the measurement matrix A is perturbed by E in the noiseless case, we also prove that the BOMP algorithm can exactly recover the block K-sparse signal under some constraints on the block K-sparse signal and

\delta_{K+1} < \frac{2+\sqrt{2}}{2(1+\epsilon_A^{(K+1)})^2} - 1.

Finally, we conduct numerical experiments to verify Theorem 3.5 by using the completely perturbed BOMP algorithm.

The rest of the paper is organized as follows. In Section 2, we give some notations and lemmas. Section 3 gives our main theoretical results. Section 4 provides a series of numerical experiments to support the theoretical results. Finally, we summarize this paper in Section 5.

2 Notations and preliminaries

In this section, we first introduce some symbols which will be used in the rest of this paper.

Notation

Let Ω_k = {1, 2, …, k} with k ∈ ℕ^+ and Ω_0 = ∅. For two sets Λ and Γ, let Λ∖Γ = {i : i ∈ Λ, i ∉ Γ}. Let Λ = {λ_1, λ_2, …, λ_{|Λ|}}, where |Λ| is the cardinality of Λ. Let Â[Λ] be the submatrix of the block matrix Â corresponding to the index set Λ, and let x[Λ] be the subvector of the block vector x corresponding to the index set Λ. Let Â^⊤[Λ] denote the transpose of Â[Λ], and let Â*[Λ] denote the conjugate transpose of Â[Λ]. Let Ĥ denote a subspace of ℂ^n. Let P[Ĥ] represent the orthogonal projection matrix onto Ĥ, and let P^⊥[Ĥ] = I_n − P[Ĥ] be the orthogonal projection matrix onto the orthogonal complement of Ĥ, where I_n denotes the identity matrix. Let Ĥ_Λ = span{Â[λ_1], Â[λ_2], …, Â[λ_{|Λ|}]} represent the column space of Â[Λ], Ĥ_∅ = {0}, and let

P[\hat{H}_\Lambda] = \hat{A}[\Lambda]\,(\hat{A}^*[\Lambda]\,\hat{A}[\Lambda])^{-1}\,\hat{A}^*[\Lambda].
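The projection matrix defined above is idempotent and self-adjoint, and it fixes the columns of Â[Λ]; a quick numerical check (a random full-column-rank matrix stands in for Â[Λ]):

```python
import numpy as np

rng = np.random.default_rng(2)
Ah = rng.standard_normal((10, 4))            # stand-in for A_hat[Lambda]
P = Ah @ np.linalg.inv(Ah.T @ Ah) @ Ah.T     # orthogonal projection onto its column space
P_perp = np.eye(10) - P                      # projection onto the orthogonal complement

# P is idempotent and self-adjoint, and P_perp annihilates the columns of A_hat[Lambda].
print(np.allclose(P @ P, P), np.allclose(P, P.T), np.allclose(P_perp @ Ah, 0))
# True True True
```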

Let the scalar ⟨x, y⟩ = y^* x denote the Euclidean inner product of the complex block vectors x and y. For a block matrix B ∈ ℂ^{n×n},

B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix},

where the block B_{11} is a k-th order invertible matrix with 0 < k < n. Following [33], letting C_k(B) denote the Schur complement of B_{11} in B, we have C_k(B) = B_{22} − B_{21} B_{11}^{-1} B_{12} and C_0(B) = B.
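The block Schur complement can be computed directly from this partition; a small sketch (the helper schur is ours) that also checks the classical fact that the Schur complement of a positive definite matrix is again positive definite:

```python
import numpy as np

def schur(B, k):
    """C_k(B) = B22 - B21 B11^{-1} B12; C_0(B) = B."""
    if k == 0:
        return B
    return B[k:, k:] - B[k:, :k] @ np.linalg.inv(B[:k, :k]) @ B[:k, k:]

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
B = M @ M.T + 5.0 * np.eye(5)        # symmetric positive definite
C = schur(B, 2)

# C_2(B) is positive definite, and det(B) = det(B11) * det(C_2(B)).
print(np.linalg.eigvalsh(C).min() > 0)   # True
```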

Similar to [15], we define some notation used in the rest of this paper. We first quantify the perturbations E and v with the relative bounds

(2.1) \frac{\|E\|_2}{\|A\|_2} \le \epsilon_A, \quad \frac{\|E\|_2^{(K)}}{\|A\|_2^{(K)}} \le \epsilon_A^{(K)}, \quad \frac{\|v\|_2}{\|y\|_2} \le \epsilon_y,

where ‖·‖_2 represents the spectral norm and ‖·‖_2^{(K)} stands for the largest spectral norm taken over all K-block submatrices. We also define the constants

\gamma_K = \frac{\|x_{K^c}\|_2}{\|x_K\|_2}, \quad s_K = \frac{\|x_{K^c}\|_{2,1}}{\sqrt{K}\,\|x_K\|_2}, \quad \alpha_A = \frac{\|A\|_2}{\sqrt{1-\delta_K}}, \quad \kappa_A^{(K)} = \sqrt{\frac{1+\delta_K}{1-\delta_K}}.

Here x_K is the best K-term block approximation of x; it contains the K largest blocks of x over the index set ℐ, and x_{K^c} = x − x_K.
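The relative bounds in (2.1) are easy to realize numerically by rescaling a random perturbation; a sketch (the chosen levels 0.05 and 0.01 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((20, 40))
x = np.zeros(40); x[4:8] = 1.0               # a block 1-sparse signal (block size 4)
y = A @ x

E = rng.standard_normal((20, 40))
E *= 0.05 * np.linalg.norm(A, 2) / np.linalg.norm(E, 2)   # scale so that eps_A = 0.05
v = rng.standard_normal(20)
v *= 0.01 * np.linalg.norm(y) / np.linalg.norm(v)         # scale so that eps_y = 0.01

eps_A = np.linalg.norm(E, 2) / np.linalg.norm(A, 2)       # ||E||_2 / ||A||_2
eps_y = np.linalg.norm(v) / np.linalg.norm(y)             # ||v||_2 / ||y||_2
print(round(eps_A, 6), round(eps_y, 6))   # 0.05 0.01
```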

Before presenting our main lemmas, we first give the BRIP [29] of matrix A ^ .

Definition 2.1 (BRIP for Â)

For the BRIC δ_K (K = 1, 2, …) of A and the constant ε_A^{(K)} associated with the matrix E, fix δ̂_{K,max} = (1 + δ_K)(1 + ε_A^{(K)})^2 − 1. Then the matrix Â = A + E satisfies the BRIP of order K if there exists a smallest nonnegative number δ̂_K ≤ δ̂_{K,max} such that

(1 − δ̂_K) ‖x‖_2^2 ≤ ‖Âx‖_2^2 ≤ (1 + δ̂_K) ‖x‖_2^2

for all x ∈ ℂ^m that are block K-sparse over the block index set ℐ. Similar to [15], we have the inequality

1 − (1 − δ_K)(1 − ε_A^{(K)})^2 ≤ δ̂_K ≤ δ̂_{K,max} = (1 + δ_K)(1 + ε_A^{(K)})^2 − 1.

Next, we will give some evident properties of the block-sensing matrix A ^ , orthogonal projection matrix and the block Schur complement of matrices.

Lemma 2.2

For any complex vectors α, β ∈ ℂ^n, it holds that ⟨P[H]α, P[H]β⟩ = ⟨P[H]α, β⟩ = ⟨α, P[H]β⟩ and ‖P[H]α‖_2 ≤ ‖α‖_2.

Lemma 2.3

If the matrix Â satisfies the BRIP of orders K_1 and K_2 with K_1 < K_2, then δ̂_{K_1} ≤ δ̂_{K_2}.

Lemma 2.4

Let the matrix Â satisfy the BRIP of order K + 1, and let Λ = {λ_1, λ_2, …, λ_{|Λ|}} ⊂ Ω_N with |Λ| ≤ K + 1. Then it holds that

(1 − δ̂_{K+1}) I_{|Λ|} ≤ Â*[Λ] Â[Λ] ≤ (1 + δ̂_{K+1}) I_{|Λ|},
(1 − δ̂_{K+1}) ‖ξ‖_2^2 ≤ ‖∑_{i=1}^{|Λ|} ξ[i] Â[λ_i]‖_2^2 ≤ (1 + δ̂_{K+1}) ‖ξ‖_2^2,

where ξ = (ξ[1], ξ[2], …, ξ[|Λ|])^T.

Lemma 2.5 ([33])

For any block matrices B, C, if B ≥ C > 0, then C_k(B) ≥ C_k(C).

Lemma 2.6

For any block vector x ∈ ℂ^n, ‖x‖_{2,1} ≤ √n ‖x‖_2, ‖x‖_2 ≤ √n ‖x‖_{2,∞} and ‖x‖_{2,∞} ≤ ‖x‖_2 ≤ ‖x‖_{2,1}.
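These norm inequalities can be checked numerically (here the count is read as the number of blocks N, which gives the tightest form; the inequalities then hold a fortiori with the ambient dimension):

```python
import numpy as np

rng = np.random.default_rng(6)
d, N = 3, 7
b = np.linalg.norm(rng.standard_normal((N, d)), axis=1)  # per-block norms ||x[i]||_2
l2, l21, l2inf = np.linalg.norm(b), b.sum(), b.max()

# ||x||_{2,1} <= sqrt(N) ||x||_2, ||x||_2 <= sqrt(N) ||x||_{2,inf},
# and ||x||_{2,inf} <= ||x||_2 <= ||x||_{2,1}.
print(l21 <= np.sqrt(N) * l2 and l2 <= np.sqrt(N) * l2inf and l2inf <= l2 <= l21)   # True
```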

Lemma 2.7

In (1.1), let the matrix Â = A + E be given. If the ℓ_2 bounded noise for the block-sparse compressed sensing of the completely perturbed model satisfies ‖Âx − ŷ‖_2 ≤ ε'_{A,K,y}, then ‖Â*(Âx − ŷ)‖_{2,∞} ≤ ε''_{A,K,y}, where the total noise parameters are

\epsilon'_{A,K,y} = \Bigl(\frac{\epsilon_A^{(K)} \kappa_A^{(K)} + \epsilon_A \alpha_A \gamma_K}{1 - \kappa_A^{(K)}(\gamma_K + s_K)} + \epsilon_y\Bigr) \|y\|_2 \ge 0, \qquad \epsilon''_{A,K,y} = \|A\|_2 (1 + \epsilon_A)\, \epsilon'_{A,K,y}.

Proof

Using Lemma 2.6, we have

‖Â*(Âx − ŷ)‖_{2,∞} ≤ ‖Â*(Âx − ŷ)‖_2 ≤ ‖Â*‖_2 ‖Âx − ŷ‖_2.

Applying ‖Â*‖_2 = ‖Â‖_2 = ‖A + E‖_2 and ‖Âx − ŷ‖_2 ≤ ε'_{A,K,y}, we easily get

‖Â*(Âx − ŷ)‖_{2,∞} ≤ ‖Â‖_2 ε'_{A,K,y} = ‖A + E‖_2 ε'_{A,K,y} ≤ (‖A‖_2 + ‖E‖_2) ε'_{A,K,y}.

Using the inequality ‖E‖_2 ≤ ‖A‖_2 ε_A, this leads to

‖Â*(Âx − ŷ)‖_{2,∞} ≤ (‖A‖_2 + ‖A‖_2 ε_A) ε'_{A,K,y} = ‖A‖_2 (1 + ε_A) ε'_{A,K,y}.

So the lemma is proved. ∎
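The chain of inequalities in this proof is easy to verify numerically; a sketch with an arbitrary perturbed system (all sizes and noise levels here are illustrative):

```python
import numpy as np

# Check: ||A_hat*(A_hat x - y_hat)||_{2,inf} <= ||A_hat||_2 ||A_hat x - y_hat||_2
#                                            <= (||A||_2 + ||E||_2) ||A_hat x - y_hat||_2.
rng = np.random.default_rng(5)
d, N = 3, 10
A = rng.standard_normal((12, d * N))
E = 0.05 * rng.standard_normal((12, d * N))
A_hat = A + E
x = np.zeros(d * N); x[0:3] = [1.0, -2.0, 0.5]
y_hat = A @ x + 0.01 * rng.standard_normal(12)      # perturbed measurements

res = A_hat @ x - y_hat
corr = A_hat.T @ res
lhs = max(np.linalg.norm(corr[i * d:(i + 1) * d]) for i in range(N))  # mixed l2/linf norm
mid = np.linalg.norm(A_hat, 2) * np.linalg.norm(res)
rhs = (np.linalg.norm(A, 2) + np.linalg.norm(E, 2)) * np.linalg.norm(res)
print(lhs <= mid <= rhs)   # True
```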

Next, we give some further important lemmas whose proofs are provided in Appendix A. All of the following lemmas are based on the assumption that Â satisfies the BRIP of order K + 1.

Lemma 2.8

In (1.1), let

P[Ĥ_{Ω_s}] ŷ = ∑_{i=1}^s Â[i] x̂[i] = Â[Ω_s] x̂  with x̂ = (x̂[1], x̂[2], …, x̂[s])^T.

Suppose Âx = Â[Ω_s] x[Ω_s], where s ≤ K and ‖x[1]‖_2 ‖x[2]‖_2 ⋯ ‖x[s]‖_2 ≠ 0. For any s < j ≤ N and

Λ_k = ∅ for k = 0,  Λ_k = {λ_1, λ_2, …, λ_k} for 1 ≤ k < s,

with {λ_1, λ_2, …, λ_k} ⊂ Ω_s, if we define

S_{0,Λ_k} = max_{1≤i≤s} ‖Â[i]* P^⊥[Ĥ_{Λ_k}] ŷ‖_2,  S_{j,Λ_k} = ‖Â[j]* P^⊥[Ĥ_{Λ_k}] ŷ‖_2,

then there exists θ_j ∈ [0, 2π] such that

(2.2) S_{0,\Lambda_k} - S_{j,\Lambda_k} \ge \frac{\|\hat{x}[\Omega_s \setminus \Lambda_k]\|_2^2}{\|\hat{x}[\Omega_s \setminus \Lambda_k]\|_{2,1}} - \hat{\delta}_{K+1}\,\frac{\|\hat{x}[\Omega_s \setminus \Lambda_k]\|_2}{\|\hat{x}[\Omega_s \setminus \Lambda_k]\|_{2,1}} \sqrt{\|\hat{x}[\Omega_s \setminus \Lambda_k]\|_{2,1}^2 + \|\hat{x}[\Omega_s \setminus \Lambda_k]\|_2^2} - \mathrm{Re}\langle \tilde{\hat{A}}[j],\, P^\perp[\hat{H}_{\Omega_s}](\hat{A}x - \hat{y}) \rangle,

where \tilde{\hat{A}}[j] = e^{i\theta_j} \hat{A}[j].

Lemma 2.9

In (1.1), if ‖Â*(Âx − ŷ)‖_{2,∞} ≤ ε''_{A,K,y}, then we have

(2.3) \|(P^\perp[\hat{H}_{\Omega_s}](\hat{A}x - \hat{y}))^* \hat{A}[j]\|_2 = \|(\hat{A}x - \hat{y})^* (P^\perp[\hat{H}_{\Omega_s}] \hat{A}[j])\|_2 \le \Bigl(1 + \frac{\sqrt{s}\,\hat{\delta}_{K+1}}{1 - \hat{\delta}_{K+1}}\Bigr) \epsilon''_{A,K,y}

for s ≤ K and 1 ≤ j ≤ N.

Lemma 2.10

In (1.1), let P[Ĥ_{Ω_s}] ŷ = ∑_{i=1}^s Â[i] x̂[i] = Â[Ω_s] x̂, and suppose that Âx = Â[Ω_s] x[Ω_s] with s ≤ K. If ‖Âx − ŷ‖_2 ≤ ε'_{A,K,y}, then we have

(2.4) \min_{1 \le i \le s} \|\hat{x}[i]\|_2 \ge \min_{1 \le i \le s} \|x[i]\|_2 - \frac{\epsilon'_{A,K,y}}{\sqrt{1 - \hat{\delta}_{K+1}}}.

For ‖Â*(Âx − ŷ)‖_{2,∞} ≤ ε''_{A,K,y}, we also have

(2.5) \min_{1 \le i \le s} \|\hat{x}[i]\|_2 \ge \min_{1 \le i \le s} \|x[i]\|_2 - \frac{\sqrt{K}\,\epsilon''_{A,K,y}}{1 - \hat{\delta}_{K+1}}.

Lemma 2.11

In (1.1), suppose that

Âx = Â[Ω_s] x[Ω_s]  with s ≤ K and ‖x[1]‖_2 ‖x[2]‖_2 ⋯ ‖x[s]‖_2 ≠ 0.

If ‖Âx − ŷ‖_2 ≤ ε'_{A,K,y}, then we have

(2.6) \|P^\perp[\hat{H}_{\Omega_0}] \hat{y}\|_2 \ge \|P^\perp[\hat{H}_{\Omega_1}] \hat{y}\|_2 \ge \cdots \ge \|P^\perp[\hat{H}_{\Omega_{s-1}}] \hat{y}\|_2 \ge \sqrt{1 - \hat{\delta}_{K+1}}\, \min_{1 \le i \le s} \|x[i]\|_2 - \epsilon'_{A,K,y}.

For ‖Â*(Âx − ŷ)‖_{2,∞} ≤ ε''_{A,K,y}, we also have

(2.7) \|\hat{A}^* P^\perp[\hat{H}_{\Omega_s}] \hat{y}\|_{2,\infty} \le \Bigl(1 + \frac{\sqrt{K}\,\hat{\delta}_{K+1}}{1 - \hat{\delta}_{K+1}}\Bigr) \epsilon''_{A,K,y},
(2.8) \|\hat{A}^* P^\perp[\hat{H}_{\Omega_k}] \hat{y}\|_{2,\infty} \ge (1 - \hat{\delta}_{K+1}) \min_{1 \le i \le s} \|x[i]\|_2 - \Bigl(1 + \frac{\sqrt{K-1}\,\hat{\delta}_{K+1}}{1 - \hat{\delta}_{K+1}}\Bigr) \epsilon''_{A,K,y} \quad \text{for all } k < s.

3 Main theoretical results

In this section, we give our main results, which are proved in Appendix B. The following results are based on the assumption that supp(x) = {i : ‖x[i]‖_2 ≠ 0} = Ω_s = {1, 2, …, s} with s ≤ K.

Theorem 3.1

For given relative perturbations ε_A, ε_A^{(K)}, ε_A^{(K+1)} and ε_y in (2.1), under ‖Âx − ŷ‖_2 ≤ ε'_{A,K,y}, assume the BRIC of the matrix A satisfies

(3.1) \delta_{K+1} < \frac{1}{\sqrt{K+1}\,(1+\epsilon_A^{(K+1)})^2} + \frac{1}{(1+\epsilon_A^{(K+1)})^2} - 1.

Then the BOMP algorithm with the stopping criterion ‖r^k‖_2 ≤ ε'_{A,K,y} exactly recovers supp(x) in K iterations provided that

(3.2) \min_{i \in \mathrm{supp}(x)} \|x[i]\|_2 > \frac{\epsilon'_{A,K,y}}{\sqrt{1 - \bigl((1+\delta_{K+1})(1+\epsilon_A^{(K+1)})^2 - 1\bigr)}} + \frac{\sqrt{(1+\delta_{K+1})(1+\epsilon_A^{(K+1)})^2}\;\epsilon'_{A,K,y}}{1 - \bigl[(1+\delta_{K+1})(1+\epsilon_A^{(K+1)})^2 - 1\bigr]\sqrt{K+1}}.

Remark 3.2

Theorem 3.1 provides a sufficient condition (3.2) for recovering the support of block K-sparse signals via the BOMP algorithm in the setting of complete perturbation. It has been proven in [30] that if there exists a matrix A satisfying the BRIP with 1/√(K+1) ≤ δ_{K+1} < 1, the BOMP algorithm may fail to recover a block K-sparse signal x in K iterations. Similar to the proof in [30], we can show that the BOMP algorithm may fail to recover x in K iterations when there exists a matrix Â satisfying the BRIP with 1/√(K+1) ≤ δ̂_{K+1} < 1. Combining this with Definition 2.1, we have 1/√(K+1) ≤ δ̂_{K+1} ≤ (1 + δ_{K+1})(1 + ε_A^{(K+1)})^2 − 1. Namely, if there exists a matrix A which satisfies the BRIP with

\frac{1}{\sqrt{K+1}\,(1+\epsilon_A^{(K+1)})^2} + \frac{1}{(1+\epsilon_A^{(K+1)})^2} - 1 \le \delta_{K+1} < 1,

then the BOMP algorithm may fail to recover a block K-sparse signal x in K iterations. This shows that (3.1) is a sharp sufficient condition for exact recovery of block K-sparse signals within the completely perturbed model with the BOMP algorithm in K iterations. Condition (3.1) of Theorem 3.1 generalizes the condition of [30] to the block-sparse compressed sensing of the completely perturbed model.

In addition, consider the special case of Theorem 3.1 with E = 0, which means that there is no perturbation in the measurement matrix A; then δ̂_{K+1} = δ_{K+1}. Obviously, when d_i = 1 (i = 1, 2, …, N) and E = 0, we obtain a sufficient condition for recovering K-sparse signals by the OMP algorithm in K iterations [19]; Theorem 1 in [19] is thus included in our theorem.
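The BRIC threshold in condition (3.1) can be evaluated as a function of the perturbation level; a small sketch using the equivalent form δ_{K+1} < (1 + 1/√(K+1))/(1 + ε_A^{(K+1)})² − 1, which reduces to the classical sharp bound 1/√(K+1) when ε_A^{(K+1)} = 0 (the helper name is ours):

```python
import numpy as np

def delta_threshold(K, eps):
    """Largest admissible delta_{K+1}: (1+delta)(1+eps)^2 - 1 < 1/sqrt(K+1)."""
    return (1.0 + 1.0 / np.sqrt(K + 1)) / (1.0 + eps) ** 2 - 1.0

K = 8
print(round(delta_threshold(K, 0.0), 4))   # 0.3333  (the classical bound 1/3 for K = 8)
for eps in (0.05, 0.1, 0.2):
    print(eps, round(delta_threshold(K, eps), 4))   # threshold shrinks as eps grows
```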

Theorem 3.3

For given relative perturbations ε_A, ε_A^{(K)}, ε_A^{(K+1)} and ε_y in (2.1), under ‖Â*(Âx − ŷ)‖_{2,∞} ≤ ε''_{A,K,y}, assume the BRIC of the matrix A satisfies

\delta_{K+1} < \frac{1}{\sqrt{K+1}\,(1+\epsilon_A^{(K+1)})^2} + \frac{1}{(1+\epsilon_A^{(K+1)})^2} - 1.

Then the BOMP algorithm with the stopping criterion

\|\hat{A}^* r^k\|_{2,\infty} \le \Bigl(1 + \frac{\sqrt{K}\,\hat{\delta}_{K+1}}{1 - \hat{\delta}_{K+1}}\Bigr) \epsilon''_{A,K,y}

exactly recovers supp(x) in K iterations provided that

(3.3) \min_{i \in \mathrm{supp}(x)} \|x[i]\|_2 > \frac{\sqrt{K}\,\epsilon''_{A,K,y}}{1 - \bigl((1+\delta_{K+1})(1+\epsilon_A^{(K+1)})^2 - 1\bigr)} + \Bigl[1 + \frac{\sqrt{K}\bigl((1+\delta_{K+1})(1+\epsilon_A^{(K+1)})^2 - 1\bigr)}{1 - \bigl((1+\delta_{K+1})(1+\epsilon_A^{(K+1)})^2 - 1\bigr)}\Bigr] \frac{\epsilon''_{A,K,y}}{1 - \bigl[(1+\delta_{K+1})(1+\epsilon_A^{(K+1)})^2 - 1\bigr]\sqrt{K+1}}.

Remark 3.4

From Theorem 3.3, we can see that (3.3) is a sufficient condition for recovery of block K-sparse signals with the BOMP algorithm. Obviously, when d_i = 1 (i = 1, 2, …, N) and E = 0, we obtain a sufficient condition for recovering K-sparse signals by the OMP algorithm in K iterations [19]; Theorem 2 in [19] is thus included in our theorem.

In the following, we provide the approximation accuracy between the output signal and the original signal, characterized by the total noise and the BRIC, when supp(x) = {i : ‖x[i]‖_2 ≠ 0} can be exactly and stably recovered in the presence of total noise.

Theorem 3.5

In (1.1), let |supp(x)| = s ≤ K. If the matrix A satisfies the BRIP of order K and the BOMP algorithm exactly recovers supp(x) in s iterations, i.e., Λ_s = supp(x), then, for ‖Âx − ŷ‖_2 ≤ ε'_{A,K,y},

(3.4) \|x^\star - x[\Lambda_s]\|_2 \le \frac{\epsilon'_{A,K,y}}{\sqrt{2 - (1+\delta_K)(1+\epsilon_A^{(K)})^2}}.

For ‖Â*(Âx − ŷ)‖_{2,∞} ≤ ε''_{A,K,y},

(3.5) \|x^\star - x[\Lambda_s]\|_2 \le \frac{\sqrt{s}}{2 - (1+\delta_K)(1+\epsilon_A^{(K)})^2}\, \epsilon''_{A,K,y}.

Remark 3.6

Under ℓ_2 and ℓ_2/ℓ_∞ bounded total noise, (3.4) and (3.5) in Theorem 3.5 offer upper bounds which explicitly characterize the gap between the output block-sparse signal and the original block-sparse signal. The results show that the approximation accuracy is controlled by the total noise and the BRIC. In particular, they reveal that the support of the block-sparse signal can be exactly recovered by the block-sparse compressed sensing of the completely perturbed model with the BOMP algorithm in the noiseless case.

To simplify the notation, for any x ∈ ℂ^m, we define

g(x) = \max\Bigl\{ \frac{\|x[\Lambda]\|_{2,1}}{\|x[\Lambda]\|_2} \;\Big|\; \Lambda \subset \Omega_N,\ \Lambda \ne \emptyset \Bigr\}.

Using Lemma 2.6, if x is a block K-sparse signal, we have 1 ≤ g(x) ≤ √K. In the noiseless case, when the measurement matrix A is perturbed by E, we prove that the BOMP algorithm can still exactly recover the block K-sparse signal under some constraints on the block signal and

0 < \delta_{K+1} < \frac{2+\sqrt{2}}{2(1+\epsilon_A^{(K+1)})^2} - 1.

Theorem 3.7

In (1.1), in the noiseless case and when the measurement matrix 𝑨 is perturbed by 𝑬, if the matrix 𝑨 satisfies the BRIP of order K + 1 with

\delta_{K+1} < \frac{2+\sqrt{2}}{2(1+\epsilon_A^{(K+1)})^2} - 1

and

(3.6) g(x) < \frac{\sqrt{2(1+\delta_{K+1})(1+\epsilon_A^{(K+1)})^2 - (1+\delta_{K+1})^2(1+\epsilon_A^{(K+1)})^4}}{(1+\delta_{K+1})(1+\epsilon_A^{(K+1)})^2 - 1},

then the BOMP algorithm can exactly recover the block 𝐾-sparse signal in 𝐾 iterations.

Remark 3.8

Since g(x) ≥ 1 for any block K-sparse signal x, to ensure that condition (3.6) can hold it is necessary that

\frac{\sqrt{2(1+\delta_{K+1})(1+\epsilon_A^{(K+1)})^2 - (1+\delta_{K+1})^2(1+\epsilon_A^{(K+1)})^4}}{(1+\delta_{K+1})(1+\epsilon_A^{(K+1)})^2 - 1} > 1.

Solving this inequality, we obtain

(1+\delta_{K+1})(1+\epsilon_A^{(K+1)})^2 - 1 < \frac{\sqrt{2}}{2}, \quad \text{i.e.,} \quad \delta_{K+1} < \frac{2+\sqrt{2}}{2(1+\epsilon_A^{(K+1)})^2} - 1.

4 Numerical experiments

In this section, we conduct numerical experiments to verify our theoretical results by using the completely perturbed block-sparse algorithm. We also investigate the performance of the perturbed OMP and perturbed BOMP algorithms in processing block-sparse signals [3, 10]. Without loss of generality, we consider a block-sparse signal x with uniform block size, i.e., d_i = d for all i, and let the signal length be N = 256. In each experiment, we randomly generate a block-sparse signal x whose nonzero elements are drawn from the standard Gaussian distribution, and we randomly generate a 128 × 256 measurement matrix A from the Gaussian distribution [1, 17]. We generate the measurements y = Ax + v by using x and A, where v is Gaussian noise. In addition, E is a random Gaussian matrix scaled so that ‖E‖_2 = ε_A ‖A‖_2 (the value of ε_A is not fixed), and the perturbed matrix is Â = A + E. In each experiment, we report the average over 200 independent trials as the final statistical result.
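The experimental setup described above can be reproduced along the following lines (a sketch; the seed, noise level and block sparsity shown are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n, length, d, K, eps_A = 128, 256, 4, 8, 0.1
N = length // d                                            # number of blocks

A = rng.standard_normal((n, length))
E = rng.standard_normal((n, length))
E *= eps_A * np.linalg.norm(A, 2) / np.linalg.norm(E, 2)   # enforce ||E||_2 = eps_A ||A||_2
A_hat = A + E

x = np.zeros(length)
for i in rng.choice(N, size=K, replace=False):             # K active blocks
    x[i * d:(i + 1) * d] = rng.standard_normal(d)
y = A @ x + 0.01 * rng.standard_normal(n)                  # noisy measurements

print(round(np.linalg.norm(E, 2) / np.linalg.norm(A, 2), 6))   # 0.1
```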

As the first experiment, we compare the performance of the perturbed OMP and perturbed BOMP algorithms in processing block-sparse signals. The reconstruction errors ‖x* − x‖_2 are listed in Table 1. Without loss of generality, we choose test signals whose block size d ranges from 2 to 32 with block sparsity K = 32, 16, 8, 4, 2, and we use the reconstruction error to measure the algorithm performance for the perturbation levels ε_A = 0, 0.05, 0.1, 0.2. It is easy to see that the reconstruction error decreases as the perturbation level ε_A decreases. Table 1 shows that the perturbed BOMP algorithm is more effective than the perturbed OMP algorithm in reconstructing block-sparse signals; in other words, signal structure is very important in signal reconstruction. Specifically, the reconstruction performance of the perturbed BOMP algorithm is best when the perturbation level is ε_A = 0 and the block sparsity is K = 2.

Table 1

Restorative comparison of reconstruction errors between perturbed OMP and BOMP algorithm.

            ε_A     d = 2   d = 4   d = 8   d = 16  d = 32
OMP         0       4.69    4.72    4.53    4.69    4.83
            0.05    4.71    5.01    4.82    4.71    4.96
            0.1     5.32    5.38    5.39    5.54    5.32
            0.2     6.38    6.53    6.23    6.50    6.34

Block-OMP   0       2.18    0.89    0.59    0.57    0.56
            0.05    2.20    0.91    0.76    0.69    0.69
            0.1     2.94    1.41    1.09    0.99    0.96
            0.2     4.51    2.47    1.70    1.68    1.66

In this part, we conduct numerical experiments to verify our theoretical results for Theorem 3.5 by using the completely perturbed BOMP algorithm. With

\delta_K = 0.1 \;\Bigl(< \frac{1}{\sqrt{K}\,(1+\epsilon_A^{(K)})^2} + \frac{1}{(1+\epsilon_A^{(K)})^2} - 1\Bigr),

we produce signals with 64 blocks uniformly at random (i.e., d = 4) and choose 10 as the best block number for the sparse approximation. Table 2 and Table 3 compare the reconstruction errors ‖x − x*‖_2 and ‖x − x*‖_{2,∞} with the theoretical error bounds for various numbers of samples, where the perturbation level is ε_A = 0, 0.05, 0.1, 0.15. The tables show that the smaller the perturbation and the larger the number of samples, the better the performance of the perturbed BOMP algorithm. As the results show, the reconstruction error stays below the theoretical bound when the number of samples is M = 112, 128, 144, whereas the reconstruction performance of the perturbed BOMP algorithm is poor when the number of samples is M = 96.

Table 2

Theoretical verification of βˆ₯ x - x * βˆ₯ 2 for various number of samples.

              ε_A   M = 96  M = 112 M = 128 M = 144
‖x − x*‖_2    0     0.0961  0.4425  0.7674  1.1725
0.05 0.0659 0.4021 0.7725 1.1320
0.1 0.0722 0.4047 0.7941 1.1739
0.15 0.0775 0.4095 0.8062 1.1926

Theoretical threshold 0 0.0516 0.4470 0.9076 1.4541
0.05 0.0556 0.4569 0.9124 1.4640
0.1 0.0595 0.4550 0.9040 1.4684
0.15 0.0631 0.4553 0.9137 1.4728
Table 3

Theoretical verification of βˆ₯ x - x * βˆ₯ 2 , ∞ for various number of samples.

              ε_A   M = 96  M = 112 M = 128 M = 144
‖x − x*‖_{2,∞} 0    0.0961  0.4425  0.7674  1.1725
0.05 0.0659 0.4021 0.7725 1.1320
0.1 0.0722 0.4047 0.7941 1.1739
0.15 0.0775 0.4095 0.8062 1.1926

Theoretical threshold 0 0.0516 0.4470 0.9076 1.4541
0.05 0.0556 0.4569 0.9124 1.4640
0.1 0.0595 0.4550 0.9040 1.4684
0.15 0.0631 0.4553 0.9137 1.4728

In Figure 1, we compare the theoretical error bound and the reconstruction error of Theorem 3.5 for various block-sparsity levels with perturbation levels ε_A = 0, 0.05, 0.1, 0.15. Figure 1 (a) and (c) show that decreasing ε_A improves the recovery performance of the perturbed BOMP algorithm for smaller block-sparsity levels. Figure 1 (a) and (b) indicate that ‖x − x*‖_2 is lower than the theoretical error bound. Similarly, Figure 1 (c) and (d) present the relationship between ‖x − x*‖_{2,∞} and the theoretical error bound for different block-sparsity levels and ε_A = 0, 0.05, 0.1, 0.15, respectively.

Figure 1

The figure plots theoretical error bound, βˆ₯ x - x * βˆ₯ 2 and βˆ₯ x - x * βˆ₯ 2 , ∞ versus the perturbation level Ο΅ A for various block-sparsity levels. (a) βˆ₯ x - x * βˆ₯ 2 versus block-sparsity level for the block size d = 4 . (b) The theoretical error bound of (3.4) for Ξ΄ K = 0.1 . (c) βˆ₯ x - x * βˆ₯ 2 , ∞ versus block-sparsity level for the block size d = 4 . (d) The theoretical error bound of (3.5) for Ξ΄ K = 0.1 .


5 Conclusion

In this paper, we discuss sufficient conditions for recovering the support of a block 𝐾-sparse signal under the BRIP via the BOMP algorithm in both the β„“ 2 and β„“ 2 / β„“ ∞ bounded total noise settings. Under some constraints on the minimum magnitude of the nonzero elements of the block 𝐾-sparse signals, we prove that the support of the block 𝐾-sparse signals can be exactly recovered by the BOMP algorithm in both settings if 𝑨 satisfies the BRIP of order K + 1 with

δ K + 1 < 1 K + 1 ⁒ ( 1 + ϡ A ( K + 1 ) ) 2 + 1 ( 1 + ϡ A ( K + 1 ) ) 2 - 1 .

In addition, we give an upper bound on the reconstruction error between the recovered block-sparse signal and the original block-sparse signal. In the noiseless case, when the measurement matrix 𝑨 is perturbed by 𝑬, we also prove that the BOMP algorithm can exactly recover the block 𝐾-sparse signal under some constraints on the block 𝐾-sparse signal and

δ K + 1 < 2 + 2 2 ⁒ ( 1 + ϡ A ( K + 1 ) ) 2 - 1 .

Finally, a series of numerical experiments is carried out to verify our theoretical results. The results obtained are helpful for further study of the perturbed BOMP algorithm.

Award Identifier / Grant number: 61673015

Award Identifier / Grant number: KJQN201802002

Award Identifier / Grant number: KJQN201802001

Funding statement: This research was supported by the National Natural Science Foundation of China (No. 61673015) and the Science and Technology Research Program of Chongqing Municipal Education Commission (No. KJQN201802002, No. KJQN201802001).

Appendix A Appendix

Proof of Lemma 2.8

Let Ξ› k = Ξ© k . Then we have H ^ Ξ› k = H ^ Ξ© k . If ⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ , A ^ ⁒ [ j ] ⟩ = e i ⁒ ΞΈ j ⁒ βˆ₯ A ^ ⁒ [ j ] * ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ βˆ₯ 2 , then we have S j ⁒ Ξ© k = βˆ₯ A ^ ~ ⁒ [ j ] * ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ βˆ₯ 2 , where A ^ ~ ⁒ [ j ] = e i ⁒ ΞΈ j ⁒ A ^ ⁒ [ j ] . For any J a = { j 1 , j 2 , … , j a } βŠ‚ Ξ© s , let

A ^ ~ ⁒ [ J a , j ] = [ A ^ ⁒ [ j 1 ] , A ^ ⁒ [ j 2 ] , … , A ^ ⁒ [ j a ] , A ^ ~ ⁒ [ j ] ] .

Then A ^ ~ ⁒ [ J a , j ] = A ^ ⁒ [ J a βˆͺ { j } ] ⁒ U with U = diag ⁑ { I ⁒ [ j 1 ] , I ⁒ [ j 2 ] , … , I ⁒ [ j a ] , e i ⁒ ΞΈ j ⁒ I ⁒ [ j ] } . Thus, we have

A ^ ~ * ⁒ [ J a , j ] ⁒ A ^ ~ ⁒ [ J a , j ] = U * ⁒ A ^ * ⁒ [ J a βˆͺ { j } ] ⁒ A ^ ⁒ [ J a βˆͺ { j } ] ⁒ U .

It is clear that | J a βˆͺ { j } | ≀ K + 1 and U βˆ— ⁒ U = I ⁒ [ a + 1 ] . Then, by Lemma 2.4, we obtain

(A.1) ( 1 - Ξ΄ ^ K + 1 ) ⁒ I ⁒ [ a + 1 ] ≀ A ^ ~ * ⁒ [ J a , j ] ⁒ A ^ ~ ⁒ [ J a , j ] ≀ ( 1 + Ξ΄ ^ K + 1 ) ⁒ I ⁒ [ a + 1 ] .

From Lemma 2.2, we can easily get

P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ P βŠ₯ ⁒ [ H ^ Ξ© s ] = P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] = P βŠ₯ ⁒ [ H ^ Ξ© s ] ,
P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ⁒ [ i ] = 0 , with ⁒ β€…1 ≀ i ≀ k ,
and

P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ y ^ = P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ [ A ^ ⁒ x ^ + ( y ^ - A ^ ⁒ x ^ ) ] = P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ [ A ^ ⁒ [ Ξ© s ] ⁒ x ^ ⁒ [ Ξ© s ] + ( y ^ - A ^ ⁒ x ^ ) ] = P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) .

For any t > 0 , we have

βˆ₯ ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ - P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ j ] βˆ₯ 2 2 - βˆ₯ ( t - 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ + P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ j ] βˆ₯ 2 2
= 4 ⁒ t βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ βˆ₯ 2 2 - 4 ⁒ t ⁒ Re ⁑ ⟨ A ^ ~ ⁒ [ j ] , P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ ⟩
= 4 ⁒ t βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ ⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ , P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ ⟩ - 4 ⁒ t ⁒ S j ⁒ Ξ© k
= 4 ⁒ t βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ ⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ , P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ ( P ⁒ [ H ^ Ξ© s ] ⁒ y ^ ) ⟩
    + 4 ⁒ t βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ ⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ , P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ ( P βŸ‚ ⁒ [ H ^ Ξ© s ] ⁒ y ^ ) ⟩ - 4 ⁒ t ⁒ S j ⁒ Ξ© k
= 4 ⁒ t βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ ⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ , P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ ( βˆ‘ i = 1 s x ^ ⁒ [ i ] ⁒ A ^ ⁒ [ i ] ) ⟩
    + 4 ⁒ t βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ ⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ , P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) ⟩ - 4 ⁒ t ⁒ S j ⁒ Ξ© k
= 4 ⁒ t βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ ⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ , βˆ‘ i = k + 1 s x ^ ⁒ [ i ] ⁒ A ^ ⁒ [ i ] ⟩
    + 4 ⁒ t βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ ⟨ y ^ , P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ ( P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) ) ⟩ - 4 ⁒ t ⁒ S j ⁒ Ξ© k
(A.2) = 4 ⁒ t βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ ⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ , βˆ‘ i = k + 1 s x ^ ⁒ [ i ] ⁒ A ^ ⁒ [ i ] ⟩ + 4 ⁒ t βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ ⟨ y ^ , P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) ⟩ - 4 ⁒ t ⁒ S j ⁒ Ξ© k .
Since

(A.3) βˆ₯ ( βˆ‘ i = k + 1 s x ^ ⁒ [ i ] ⁒ A ^ ⁒ [ i ] ) * ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ βˆ₯ 2 ≀ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ S 0 ⁒ Ξ© k

and

(A.4) ⟨ y ^ , P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) ⟩ = ⟨ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ y ^ , P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) ⟩ = βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) βˆ₯ 2 2 ,

combining (A.2), (A.3) and (A.4) leads to

(A.5) βˆ₯ ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ - P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ j ] βˆ₯ 2 2 - βˆ₯ ( t - 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ + P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ j ] βˆ₯ 2 2 ≀ 4 ⁒ t ⁒ ( S 0 ⁒ Ξ© k - S j ⁒ Ξ© k ) + 4 ⁒ t βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) βˆ₯ 2 2 .

Let

ΞΎ = ( ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ x ^ ⁒ [ k + 1 ] , … , ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ x ^ ⁒ [ s ] , - 1 ) T ,
Ξ· = ( ( t - 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ x ^ ⁒ [ k + 1 ] , … , ( t - 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ x ^ ⁒ [ s ] , 1 ) T .
According to the above definitions, we have

( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ - P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ j ] = ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ ( P ⁒ [ H ^ Ξ© s ] ⁒ y ^ ) - P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ j ] + ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ ( P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ y ^ ) = P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ ( βˆ‘ i = k + 1 s ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ x ^ ⁒ [ i ] ⁒ A ^ ⁒ [ i ] - A ^ ~ ⁒ [ j ] ) + ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) = P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ ΞΎ + ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) .

It is easy to see that

⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ ΞΎ , ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) ⟩ = ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ ⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] , ( y ^ - A ^ ⁒ x ^ ) ⟩ = - ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ ⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ j ] , ( y ^ - A ^ ⁒ x ^ ) ⟩ .

From the above equality, we thus have

(A.6) βˆ₯ ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ - P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ j ] βˆ₯ 2 2 = βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ ΞΎ + ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) βˆ₯ 2 2 = βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ ΞΎ βˆ₯ 2 2 + ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) 2 ⁒ βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) βˆ₯ 2 2 - 2 ⁒ ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ Re ⁑ ⟨ A ^ ~ ⁒ [ j ] , P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) ⟩ = ΞΎ * ⁒ A ^ ~ * ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ ΞΎ + ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) 2 ⁒ βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) βˆ₯ 2 2 - 2 ⁒ ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ Re ⁑ ⟨ A ^ ~ ⁒ [ j ] , P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) ⟩ .

Similar to the proof of (A.6), we can easily get

(A.7) βˆ₯ ( t - 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ + P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ j ] βˆ₯ 2 2 = Ξ· * ⁒ A ^ ~ * ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ Ξ· + ( t - 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) 2 ⁒ βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) βˆ₯ 2 2 + 2 ⁒ ( t - 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ Re ⁑ ⟨ A ^ ~ ⁒ [ j ] , P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) ⟩ .

By using (A.6) and (A.7), this leads to

(A.8) βˆ₯ ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ - P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ j ] βˆ₯ 2 2 - βˆ₯ ( t - 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ + P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ j ] βˆ₯ 2 2 = ΞΎ * ⁒ A ^ ~ * ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ ΞΎ - Ξ· * ⁒ A ^ ~ * ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ Ξ· + 4 ⁒ t βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) βˆ₯ 2 2 - 4 ⁒ t ⁒ Re ⁑ ⟨ A ^ ~ ⁒ [ j ] , P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( y ^ - A ^ ⁒ x ^ ) ⟩ .

The matrix A ^ ~ ⁒ [ Ξ© s , j ] = [ A ^ ⁒ [ 1 ] , A ^ ⁒ [ 2 ] , … , A ^ ⁒ [ s ] , A ^ ~ ⁒ [ j ] ] is partitioned as A ^ ~ ⁒ [ Ξ© s , j ] = [ A ^ ⁒ [ Ξ© k ] , A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] ] . By some simple calculations, we have

A ^ ~ * ⁒ [ Ω s , j ] ⁒ A ^ ~ ⁒ [ Ω s , j ] = [ A ^ * ⁒ [ Ω k ] ⁒ A ^ ⁒ [ Ω k ] A ^ * ⁒ [ Ω k ] ⁒ A ^ ~ ⁒ [ Ω s \ Ω k , j ] A ^ ~ * ⁒ [ Ω s \ Ω k , j ] ⁒ A ^ ⁒ [ Ω k ] A ^ ~ * ⁒ [ Ω s \ Ω k , j ] ⁒ A ^ ~ ⁒ [ Ω s \ Ω k , j ] ] .

Then the Schur complement of A ^ * ⁒ [ Ξ© k ] ⁒ A ^ ⁒ [ Ξ© k ] in A ^ ~ * ⁒ [ Ξ© s , j ] ⁒ A ^ ~ ⁒ [ Ξ© s , j ] is defined as

(A.9) C k ⁒ ( A ^ ~ * ⁒ [ Ξ© s , j ] ⁒ A ^ ~ ⁒ [ Ξ© s , j ] ) = A ^ ~ * ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] - A ^ ~ * ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ A ^ ⁒ [ Ξ© k ] ⁒ ( A ^ * ⁒ [ Ξ© k ] ⁒ A ^ ⁒ [ Ξ© k ] ) - 1 ⁒ A ^ * ⁒ [ Ξ© k ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] = A ^ ~ * ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] .
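The identity used in (A.9), namely that this Schur complement coincides with the projected Gram matrix, is easy to check numerically. The following Python sketch is illustrative only: generic random matrices X1 and X2 stand in for A ^ [ Ξ© k ] and A ^ ~ [ Ξ© s \ Ξ© k , j ].

```python
import numpy as np

rng = np.random.default_rng(3)

# Schur complement of X1*X1 in [X1 X2]*[X1 X2] versus X2* P_perp X2,
# where P_perp projects onto the orthogonal complement of span(X1).
X1 = rng.standard_normal((20, 3))
X2 = rng.standard_normal((20, 4))
G11, G12 = X1.T @ X1, X1.T @ X2

# Block formula: G22 - G21 G11^{-1} G12.
schur = X2.T @ X2 - G12.T @ np.linalg.solve(G11, G12)

# Projector formula: X2^T (I - P) X2 with P the projector onto span(X1).
P = X1 @ np.linalg.solve(G11, X1.T)
direct = X2.T @ (np.eye(20) - P) @ X2

assert np.allclose(schur, direct)
```

The same algebra, applied blockwise, yields the second equality in (A.9).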

From (2.5) and (A.1), this leads to

(A.10) ( 1 - Ξ΄ ^ K + 1 ) ⁒ I s - k + 1 ≀ C k ⁒ ( A ^ ~ * ⁒ [ Ξ© s , j ] ⁒ A ^ ~ ⁒ [ Ξ© s , j ] ) ≀ ( 1 + Ξ΄ ^ K + 1 ) ⁒ I s - k + 1 .

By combining (2.4), (A.9) and (A.10), we obtain

(A.11) ΞΎ * ⁒ A ^ ~ * ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ ΞΎ = ΞΎ * ⁒ A ^ ~ * ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ ( I n - A ^ ⁒ [ Ξ© k ] ⁒ ( A ^ βˆ— ⁒ [ Ξ© k ] ⁒ A ^ ⁒ [ Ξ© k ] ) - 1 ⁒ A ^ βˆ— ⁒ [ Ξ© k ] ) ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ ΞΎ = ΞΎ * ⁒ C k ⁒ ( A ^ ~ * ⁒ [ Ξ© s , j ] ⁒ A ^ ~ ⁒ [ Ξ© s , j ] ) ⁒ ΞΎ β‰₯ ( 1 - Ξ΄ ^ K + 1 ) ⁒ ΞΎ * ⁒ ΞΎ = ( 1 - Ξ΄ ^ K + 1 ) ⁒ [ ( t + 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) 2 ⁒ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 2 + 1 ] .

Similarly, we obtain

(A.12) Ξ· * ⁒ A ^ ~ * ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ~ ⁒ [ Ξ© s \ Ξ© k , j ] ⁒ Ξ· = Ξ· * ⁒ C k ⁒ ( A ^ ~ * ⁒ [ Ξ© s , j ] ⁒ A ^ ~ ⁒ [ Ξ© s , j ] ) ⁒ Ξ· ≀ ( 1 + Ξ΄ ^ K + 1 ) ⁒ Ξ· * ⁒ Ξ· = ( 1 + Ξ΄ ^ K + 1 ) ⁒ [ ( t - 1 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ) 2 ⁒ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 2 + 1 ] .

By using (A.5), (A.8), (A.11) and (A.12), we can easily get

(A.13) S 0 ⁒ Ξ© k - S j ⁒ Ξ© k β‰₯ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 - Ξ΄ ^ K + 1 2 ⁒ ( t ⁒ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 2 + 1 t ⁒ ( βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 2 + 1 ) ) - Re ⁑ ⟨ A ^ ~ ⁒ [ j ] , P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) ⟩ .

According to the arithmetic-geometric inequality and (A.13), we have

S 0 ⁒ Ξ© k - S j ⁒ Ξ© k β‰₯ max t > 0 { βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 - Ξ΄ ^ K + 1 2 ( t βˆ₯ x ^ [ Ξ© s \ Ξ© k ] βˆ₯ 2 2 + 1 t ( βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 2 + 1 ) ) - Re ⟨ A ^ ~ [ j ] , P βŠ₯ [ H ^ Ξ© s ] ( A ^ x - y ^ ) ⟩ } = βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 - Ξ΄ ^ K + 1 ⁒ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 ⁒ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 2 + βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 1 2 - Re ⁑ ⟨ A ^ ~ ⁒ [ j ] , P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) ⟩ .

So the lemma is proved. ∎

Proof of Lemma 2.9

When j ≀ s , we have

βˆ₯ ( P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) ) * ⁒ A ^ ⁒ [ j ] βˆ₯ 2 = βˆ₯ ( A ^ ⁒ x - y ^ ) * ⁒ ( P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ A ^ ⁒ [ j ] ) βˆ₯ 2 = 0 ,

so the lemma holds evidently.

When j > s , let P ⁒ [ H ^ Ξ© s ] ⁒ A ^ ⁒ [ j ] = βˆ‘ i = 1 s A ^ ⁒ [ i ] ⁒ z ⁒ [ i ] = A ^ ⁒ [ Ξ© s ] ⁒ Z with Z = ( z ⁒ [ 1 ] , z ⁒ [ 2 ] , … , z ⁒ [ s ] ) T . From Lemma 2.4, this leads to

(A.14) βˆ₯ P ⁒ [ H ^ Ξ© s ] ⁒ A ^ ⁒ [ j ] βˆ₯ 2 2 = βˆ₯ A ^ ⁒ [ Ξ© s ] ⁒ Z βˆ₯ 2 2 = Z * ⁒ A ^ * ⁒ [ Ξ© s ] ⁒ A ^ ⁒ [ Ξ© s ] ⁒ Z β‰₯ ( 1 - Ξ΄ ^ K + 1 ) ⁒ βˆ₯ Z βˆ₯ 2 2 .

For any t > 0 , we then have

(A.15) βˆ₯ P ⁒ [ H ^ Ξ© s ] ⁒ A ^ ⁒ [ j ] βˆ₯ 2 2 = ⟨ P ⁒ [ H ^ Ξ© s ] ⁒ A ^ ⁒ [ j ] , P ⁒ [ H ^ Ξ© s ] ⁒ A ^ ⁒ [ j ] ⟩ = Re ⁑ ⟨ P ⁒ [ H ^ Ξ© s ] ⁒ A ^ ⁒ [ j ] , βˆ‘ i = 1 s A ^ ⁒ [ i ] ⁒ z ⁒ [ i ] ⟩ = Re ⁑ ⟨ A ^ ⁒ [ j ] , P ⁒ [ H ^ Ξ© s ] ⁒ ( βˆ‘ i = 1 s A ^ ⁒ [ i ] ⁒ z ⁒ [ i ] ) ⟩ = Re ⁑ ⟨ A ^ ⁒ [ j ] , βˆ‘ i = 1 s A ^ ⁒ [ i ] ⁒ z ⁒ [ i ] ⟩ = 1 4 ⁒ t ⁒ ( βˆ₯ βˆ‘ i = 1 s A ^ ⁒ [ i ] ⁒ z ⁒ [ i ] + t ⁒ A ^ ⁒ [ j ] βˆ₯ 2 2 - βˆ₯ βˆ‘ i = 1 s A ^ ⁒ [ i ] ⁒ z ⁒ [ i ] - t ⁒ A ^ ⁒ [ j ] βˆ₯ 2 2 ) ≀ 1 4 ⁒ t ⁒ ( ( 1 + Ξ΄ ^ K + 1 ) ⁒ ( βˆ₯ Z βˆ₯ 2 2 + t 2 ) - ( 1 - Ξ΄ ^ K + 1 ) ⁒ ( βˆ₯ Z βˆ₯ 2 2 + t 2 ) ) = Ξ΄ ^ K + 1 2 ⁒ t ⁒ ( βˆ₯ Z βˆ₯ 2 2 + t 2 ) = Ξ΄ ^ K + 1 2 ⁒ ( 1 t ⁒ βˆ₯ Z βˆ₯ 2 2 + t ) .

By using the arithmetic-geometric inequality, (A.14) and (A.15), then

( 1 - Ξ΄ ^ K + 1 ) ⁒ βˆ₯ Z βˆ₯ 2 2 ≀ min t > 0 ⁑ { Ξ΄ ^ K + 1 2 ⁒ ( 1 t ⁒ βˆ₯ Z βˆ₯ 2 2 + t ) } = Ξ΄ ^ K + 1 ⁒ βˆ₯ Z βˆ₯ 2 .

This implies

βˆ₯ Z βˆ₯ 2 ≀ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 .
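For completeness, the minimization over 𝑑 used above is the arithmetic-geometric mean step made explicit:

```latex
\min_{t>0}\ \frac{\hat\delta_{K+1}}{2}\left(\frac{1}{t}\,\lVert Z\rVert_2^{2}+t\right)
  = \frac{\hat\delta_{K+1}}{2}\cdot 2\,\lVert Z\rVert_2
  = \hat\delta_{K+1}\,\lVert Z\rVert_2 ,
```

with the minimum attained at t = βˆ₯ Z βˆ₯ 2 ; combining this with (A.14) and dividing by βˆ₯ Z βˆ₯ 2 yields the bound displayed above.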

From Lemma 2.6 and the inequality

βˆ₯ ( A ^ x - y ^ ) βˆ— A ^ [ j ] βˆ₯ 2 = βˆ₯ ( A ^ [ j ] ) βˆ— ( A ^ x - y ^ ) βˆ₯ 2 ≀ βˆ₯ A ^ βˆ— ( A ^ x - y ^ ) βˆ₯ 2 , ∞ = max 1 ≀ j ≀ N βˆ₯ ( A ^ [ j ] ) βˆ— ( A ^ x - y ^ ) βˆ₯ 2 ≀ Ο΅ A , K , y β€²β€² ,

we have

βˆ₯ ( A ^ ⁒ x - y ^ ) * ⁒ ( P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ A ^ ⁒ [ j ] ) βˆ₯ 2 = βˆ₯ ( A ^ ⁒ x - y ^ ) * ⁒ ( A ^ ⁒ [ j ] - P ⁒ [ H ^ Ξ© s ] ⁒ A ^ ⁒ [ j ] ) βˆ₯ 2 ≀ βˆ₯ ( A ^ ⁒ x - y ^ ) * ⁒ A ^ ⁒ [ j ] βˆ₯ 2 + βˆ₯ ( A ^ ⁒ x - y ^ ) * ⁒ P ⁒ [ H ^ Ξ© s ] ⁒ A ^ ⁒ [ j ] βˆ₯ 2 ≀ Ο΅ A , K , y β€²β€² + βˆ₯ ( A ^ ⁒ x - y ^ ) * ⁒ βˆ‘ i = 1 s A ^ ⁒ [ i ] ⁒ z ⁒ [ i ] βˆ₯ 2 ≀ Ο΅ A , K , y β€²β€² + βˆ₯ Z βˆ₯ 1 ⁒ Ο΅ A , K , y β€²β€² ≀ Ο΅ A , K , y β€²β€² + s ⁒ βˆ₯ Z βˆ₯ 2 ⁒ Ο΅ A , K , y β€²β€² ≀ ( 1 + s ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€² .

So the lemma is proved. ∎

Proof of Lemma 2.10

Using the equality

P ⁒ [ H ^ Ξ© s ] ⁒ y ^ = βˆ‘ i = 1 s A ^ ⁒ [ i ] ⁒ x ^ ⁒ [ i ] = A ^ ⁒ [ Ξ© s ] ⁒ x ^ ,

we have

(A.16) P ⁒ [ H ^ Ω s ] ⁒ ( A ^ ⁒ x - y ^ ) = A ^ ⁒ [ Ω s ] ⁒ x - A ^ ⁒ [ Ω s ] ⁒ x ^ = A ^ ⁒ [ Ω s ] ⁒ ( x - x ^ ) .

Let V = x - x ^ , i.e., V ⁒ [ i ] = x ⁒ [ i ] - x ^ ⁒ [ i ] for any 1 ≀ i ≀ s . When V = 0 , (2.4), (2.5) hold evidently.

When V β‰  0 , for βˆ₯ A ^ ⁒ x - y ^ βˆ₯ 2 ≀ Ο΅ A , K , y β€² , by Lemma 2.2, Lemma 2.4 and (A.16), we obtain

(A.17) ( Ο΅ A , K , y β€² ) 2 β‰₯ βˆ₯ P ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) βˆ₯ 2 2 = βˆ₯ A ^ ⁒ [ Ξ© s ] ⁒ V βˆ₯ 2 2 = V * ⁒ A ^ * ⁒ [ Ξ© s ] ⁒ A ^ ⁒ [ Ξ© s ] ⁒ V β‰₯ ( 1 - Ξ΄ ^ K + 1 ) ⁒ βˆ₯ V βˆ₯ 2 2 .

From Lemma 2.6 and (A.17), this leads to

(A.18) βˆ₯ V βˆ₯ 2 , ∞ ≀ βˆ₯ V βˆ₯ 2 ≀ Ο΅ A , K , y β€² 1 - Ξ΄ ^ K + 1 .

Thus, it follows from (A.18) that

min 1 ≀ i ≀ s βˆ₯ x ^ [ i ] βˆ₯ 2 = min 1 ≀ i ≀ s βˆ₯ x [ i ] + ( x ^ [ i ] - x [ i ] ) βˆ₯ 2 β‰₯ min 1 ≀ i ≀ s βˆ₯ x [ i ] βˆ₯ 2 - max 1 ≀ i ≀ s βˆ₯ x ^ [ i ] - x [ i ] βˆ₯ 2 = min 1 ≀ i ≀ s βˆ₯ x [ i ] βˆ₯ 2 - βˆ₯ V βˆ₯ 2 , ∞ β‰₯ min 1 ≀ i ≀ s βˆ₯ x [ i ] βˆ₯ 2 - Ο΅ A , K , y β€² 1 - Ξ΄ ^ K + 1 .

So (2.4) holds.

For βˆ₯ A ^ βˆ— ⁒ ( A ^ ⁒ x - y ^ ) βˆ₯ 2 , ∞ ≀ Ο΅ A , K , y β€²β€² , we have

(A.19) βˆ₯ P ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) βˆ₯ 2 2 = βˆ₯ ⟨ P ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) , P ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) ⟩ βˆ₯ 2 = βˆ₯ ⟨ P ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) , βˆ‘ i = 1 s A ^ ⁒ [ i ] ⁒ ( x ⁒ [ i ] - x ^ ⁒ [ i ] ) ⟩ βˆ₯ 2 = βˆ₯ ⟨ ( A ^ ⁒ x - y ^ ) , P ⁒ [ H ^ Ξ© s ] ⁒ ( βˆ‘ i = 1 s A ^ ⁒ [ i ] ⁒ ( x ⁒ [ i ] - x ^ ⁒ [ i ] ) ) ⟩ βˆ₯ 2 ≀ βˆ₯ ( A ^ ⁒ [ Ξ© s ] ) * ⁒ ( A ^ ⁒ x - y ^ ) βˆ₯ 2 , ∞ ⁒ βˆ₯ V βˆ₯ 2 , 1 ≀ βˆ₯ V βˆ₯ 2 , 1 ⁒ Ο΅ A , K , y β€²β€² .

From Lemma 2.6, (A.17) and (A.19), we can easily get

(A.20) βˆ₯ V βˆ₯ 2 , ∞ ≀ βˆ₯ V βˆ₯ 2 ≀ Ο΅ A , K , y β€²β€² 1 - Ξ΄ ^ K + 1 ⁒ βˆ₯ V βˆ₯ 2 , 1 βˆ₯ V βˆ₯ 2 ≀ s ⁒ Ο΅ A , K , y β€²β€² 1 - Ξ΄ ^ K + 1 ≀ K ⁒ Ο΅ A , K , y β€²β€² 1 - Ξ΄ ^ K + 1 .

Then it follows from (A.20) that

min 1 ≀ i ≀ s βˆ₯ x ^ [ i ] βˆ₯ 2 = min 1 ≀ i ≀ s βˆ₯ x [ i ] + ( x ^ [ i ] - x [ i ] ) βˆ₯ 2 β‰₯ min 1 ≀ i ≀ s βˆ₯ x [ i ] βˆ₯ 2 - max 1 ≀ i ≀ s βˆ₯ x ^ [ i ] - x [ i ] βˆ₯ 2 = min 1 ≀ i ≀ s βˆ₯ x [ i ] βˆ₯ 2 - βˆ₯ V βˆ₯ 2 , ∞ β‰₯ min 1 ≀ i ≀ s βˆ₯ x [ i ] βˆ₯ 2 - K ⁒ Ο΅ A , K , y β€²β€² 1 - Ξ΄ ^ K + 1 .

Therefore, (2.5) holds. So the lemma is proved. ∎

Proof of Lemma 2.11

For βˆ₯ A ^ ⁒ x - y ^ βˆ₯ 2 ≀ Ο΅ A , K , y β€² and A ^ ⁒ x = A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] , we have

βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© j ] ⁒ ( A ^ ⁒ x - y ^ ) βˆ₯ 2 ≀ βˆ₯ A ^ ⁒ x - y ^ βˆ₯ 2 ≀ Ο΅ A , K , y β€²   with ⁒ β€…0 ≀ j ≀ N .

Since H ^ Ξ© 0 βŠ‚ H ^ Ξ© 1 βŠ‚ β‹― βŠ‚ H ^ Ξ© s - 1 , we can easily get H ^ Ξ© 0 βŸ‚ βŠƒ H ^ Ξ© 1 βŸ‚ βŠƒ β‹― βŠƒ H ^ Ξ© s - 1 βŸ‚ . Then we have

βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© 0 ] ⁒ y ^ βˆ₯ 2 β‰₯ βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© 1 ] ⁒ y ^ βˆ₯ 2 β‰₯ β‹― β‰₯ βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s - 1 ] ⁒ y ^ βˆ₯ 2 .
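This chain of inequalities reflects a general property of nested least-squares fits: enlarging the projection subspace can only shrink the residual. A quick numerical illustration (random data, hypothetical names):

```python
import numpy as np

rng = np.random.default_rng(1)

# For nested column sets Ξ©_1 βŠ‚ Ξ©_2 βŠ‚ β‹―, the least-squares residual
# ||P^βŠ₯[H_Ξ©] y||_2 is nonincreasing as the set grows.
A = rng.standard_normal((30, 12))
y = rng.standard_normal(30)

def residual_norm(cols):
    """Norm of the component of y orthogonal to span(A[:, cols])."""
    coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
    return np.linalg.norm(y - A[:, cols] @ coef)

norms = [residual_norm(list(range(1, k + 1))) for k in range(1, 7)]
# Each enlargement of the span can only shrink the residual.
assert all(norms[i] >= norms[i + 1] - 1e-10 for i in range(len(norms) - 1))
```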

And also noting that

(A.21) βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s - 1 ] ⁒ y ^ βˆ₯ 2 = βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s - 1 ] ⁒ ( A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] - A ^ ⁒ x + y ^ ) βˆ₯ 2 β‰₯ βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s - 1 ] ⁒ A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] βˆ₯ 2 - βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s - 1 ] ⁒ ( A ^ ⁒ x - y ^ ) βˆ₯ 2 β‰₯ βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s - 1 ] ⁒ A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] βˆ₯ 2 - Ο΅ A , K , y β€² ,

we can easily get

(A.22) P βŠ₯ ⁒ [ H ^ Ξ© s - 1 ] ⁒ A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] = P βŠ₯ ⁒ [ H ^ Ξ© s - 1 ] ⁒ A ^ ⁒ [ Ξ© s \ Ξ© s - 1 ] ⁒ x ⁒ [ Ξ© s \ Ξ© s - 1 ] = x ⁒ [ s ] ⁒ P βŠ₯ ⁒ [ H ^ Ξ© s - 1 ] ⁒ A ^ ⁒ [ s ]

and

(A.23) C s - 1 ⁒ ( A ^ * ⁒ [ Ξ© s ] ⁒ A ^ ⁒ [ Ξ© s ] ) = A ^ * ⁒ [ s ] ⁒ A ^ ⁒ [ s ] - A ^ * ⁒ [ s ] ⁒ A ^ ⁒ [ Ξ© s - 1 ] ⁒ ( A ^ * ⁒ [ Ξ© s - 1 ] ⁒ A ^ ⁒ [ Ξ© s - 1 ] ) - 1 ⁒ A ^ * ⁒ [ Ξ© s - 1 ] ⁒ A ^ ⁒ [ s ] = A ^ * ⁒ [ s ] ⁒ P βŠ₯ ⁒ [ H ^ Ξ© s - 1 ] ⁒ A ^ ⁒ [ s ] .

From Lemma 2.4, Lemma 2.5, (A.22) and (A.23), we have

(A.24) βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s - 1 ] ⁒ A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] βˆ₯ 2 2 = x * ⁒ [ s ] ⁒ A ^ * ⁒ [ s ] ⁒ P βŠ₯ ⁒ [ H ^ Ξ© s - 1 ] ⁒ A ^ ⁒ [ s ] ⁒ x ⁒ [ s ] = x * ⁒ [ s ] ⁒ C s - 1 ⁒ ( A ^ * ⁒ [ Ξ© s ] ⁒ A ^ ⁒ [ Ξ© s ] ) ⁒ x ⁒ [ s ] β‰₯ ( 1 - Ξ΄ ^ K + 1 ) ⁒ βˆ₯ x ⁒ [ s ] βˆ₯ 2 2 β‰₯ ( 1 - Ξ΄ ^ K + 1 ) ⁒ ( min 1 ≀ i ≀ s βˆ₯ x [ i ] βˆ₯ 2 ) 2 .

Therefore, by combining (A.21) and (A.24), (2.6) holds.

For βˆ₯ A ^ βˆ— ⁒ ( A ^ ⁒ x - y ^ ) βˆ₯ 2 , ∞ ≀ Ο΅ A , K , y β€²β€² and A ^ ⁒ x = A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] , it follows from Lemma 2.9 that

βˆ₯ A ^ βˆ— ⁒ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ y ^ βˆ₯ 2 , ∞ = βˆ₯ A ^ βˆ— ⁒ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] - A ^ ⁒ x + y ^ ) βˆ₯ 2 , ∞ ≀ βˆ₯ A ^ βˆ— ⁒ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] βˆ₯ 2 , ∞ + βˆ₯ A ^ βˆ— ⁒ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) βˆ₯ 2 , ∞ = βˆ₯ A ^ βˆ— ⁒ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) βˆ₯ 2 , ∞ = max s < j ≀ N βˆ₯ ( P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) ) * ⁒ A ^ ⁒ [ j ] βˆ₯ 2 ≀ ( 1 + K ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€² ,

so (2.7) holds.

Note that, for k < s , we have

(A.25) βˆ₯ A ^ βˆ— ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ y ^ βˆ₯ 2 , ∞ = βˆ₯ A ^ βˆ— ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ ( A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] - A ^ ⁒ x + y ^ ) βˆ₯ 2 , ∞ β‰₯ βˆ₯ A ^ βˆ— ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] βˆ₯ 2 , ∞ - βˆ₯ A ^ βˆ— ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ ( A ^ ⁒ x - y ^ ) βˆ₯ 2 , ∞ .

Using Lemma 2.4 and Lemma 2.5, we have

(A.26) βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] βˆ₯ 2 2 = βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ⁒ [ Ξ© s \ Ξ© k ] ⁒ x ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 2 = x * ⁒ [ Ξ© s \ Ξ© k ] ⁒ A ^ * ⁒ [ Ξ© s \ Ξ© k ] ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ⁒ [ Ξ© s \ Ξ© k ] ⁒ x ⁒ [ Ξ© s \ Ξ© k ] = x * ⁒ [ Ξ© s \ Ξ© k ] ⁒ A ^ * ⁒ [ Ξ© s \ Ξ© k ] ⁒ ( I n - A ^ ⁒ [ Ξ© k ] ⁒ ( A ^ βˆ— ⁒ [ Ξ© k ] ⁒ A ^ ⁒ [ Ξ© k ] ) - 1 ⁒ A ^ βˆ— ⁒ [ Ξ© k ] ) ⁒ A ^ ⁒ [ Ξ© s \ Ξ© k ] ⁒ x ⁒ [ Ξ© s \ Ξ© k ] = x * ⁒ [ Ξ© s \ Ξ© k ] ⁒ C k ⁒ ( A ^ * ⁒ [ Ξ© s ] ⁒ A ^ ⁒ [ Ξ© s ] ) ⁒ x ⁒ [ Ξ© s \ Ξ© k ] β‰₯ ( 1 - Ξ΄ ^ K + 1 ) ⁒ βˆ₯ x ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 2 .

On the other hand, we have

(A.27) βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] βˆ₯ 2 2 = βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ⁒ x βˆ₯ 2 2 = βˆ₯ ⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ⁒ x , P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ βˆ‘ i = 1 s A ^ ⁒ [ i ] ⁒ x ⁒ [ i ] ⟩ βˆ₯ 2 = βˆ₯ ⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ⁒ x , βˆ‘ i = k + 1 s A ^ ⁒ [ i ] ⁒ x ⁒ [ i ] ⟩ βˆ₯ 2 = βˆ₯ βˆ‘ i = k + 1 s x ⁒ [ i ] ⁒ ⟨ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ⁒ x , A ^ ⁒ [ i ] ⟩ βˆ₯ 2 ≀ βˆ₯ x [ Ξ© s \ Ξ© k ] βˆ₯ 2 , 1 max 1 ≀ i ≀ s βˆ₯ A ^ [ i ] * ( P βŠ₯ [ H ^ Ξ© k ] A ^ x ) βˆ₯ 2 = βˆ₯ x ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 , 1 ⁒ βˆ₯ A ^ ⁒ [ Ξ© s ] * ⁒ ( P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ⁒ x ) βˆ₯ 2 , ∞ ≀ βˆ₯ x ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 , 1 ⁒ βˆ₯ A ^ * ⁒ ( P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ⁒ x ) βˆ₯ 2 , ∞ .

From Lemma 2.6, (A.26) and (A.27), we get

(A.28) βˆ₯ A ^ βˆ— ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] βˆ₯ 2 , ∞ β‰₯ ( 1 - Ξ΄ ^ K + 1 ) ⁒ βˆ₯ x ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 2 βˆ₯ x ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 , 1 β‰₯ 1 - Ξ΄ ^ K + 1 s - k ⁒ βˆ₯ x ⁒ [ Ξ© s \ Ξ© k ] βˆ₯ 2 β‰₯ 1 - Ξ΄ ^ K + 1 s - k s - k min 1 ≀ i ≀ s βˆ₯ x [ i ] βˆ₯ 2 β‰₯ ( 1 - Ξ΄ ^ K + 1 ) min 1 ≀ i ≀ s βˆ₯ x [ i ] βˆ₯ 2 .

In the following, by Lemma 2.9 and a simple calculation, we have

(A.29) βˆ₯ A ^ βˆ— ⁒ P βŠ₯ ⁒ [ H ^ Ξ© k ] ⁒ ( A ^ ⁒ x - y ^ ) βˆ₯ 2 , ∞ = max 1 ≀ i ≀ N βˆ₯ ( P βŠ₯ [ H ^ Ξ© k ] ( A ^ x - y ^ ) ) * A ^ [ i ] βˆ₯ 2 = max 1 ≀ i ≀ N βˆ₯ ( A ^ x - y ^ ) * ( P βŠ₯ [ H ^ Ξ© k ] A ^ [ i ] ) βˆ₯ 2 ≀ ( 1 + k ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€² ≀ ( 1 + s - 1 ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€² ≀ ( 1 + K - 1 ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€² .

That is, the above (A.25), (A.28) and (A.29) show that (2.8) holds. So the lemma holds. ∎

Appendix B Appendix

Proof of Theorem 3.1

The proof of the first part of Theorem 3.1 is divided into two steps. We first prove that the BOMP algorithm selects correct indexes in all iterations. Next, we prove that BOMP performs exactly 𝑠 iterations. The first step is proved by induction. Suppose that the BOMP algorithm selects correct indexes in the first π‘˜ iterations, i.e., Ξ› k βŠ‚ Ξ© s with k < s . The base case k = 0 holds evidently, since Ξ› 0 = βˆ… βŠ‚ Ξ© s . Next, we need to show that the BOMP algorithm selects a correct index in the ( k + 1 ) -th iteration, i.e., Ξ» k + 1 ∈ Ξ© s .

Since r k = P βŠ₯ ⁒ [ H ^ Ξ© 0 ] ⁒ y ^ , by Lemma 2.8, we have S 0 ⁒ Ξ› k - S j ⁒ Ξ› k > 0 for any k = 0 , 1 , … , s - 1 with s < j ≀ N . Let P ⁒ [ H ^ Ξ© s ] ⁒ y ^ = βˆ‘ i = 1 s A ^ ⁒ [ i ] ⁒ x ^ ⁒ [ i ] = A ^ ⁒ [ Ξ© s ] ⁒ x ^ and A ^ ⁒ x = βˆ‘ i = 1 s A ^ ⁒ [ i ] ⁒ x ⁒ [ i ] . Thus, from (2.4) and (3.2), we have

min 1 ≀ i ≀ s βˆ₯ x ^ [ i ] βˆ₯ 2 β‰₯ min 1 ≀ i ≀ s βˆ₯ x [ i ] βˆ₯ 2 - Ο΅ A , K , y β€² 1 - Ξ΄ ^ K + 1 > Ο΅ A , K , y β€² 1 - ( ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ) + ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 ⁒ Ο΅ A , K , y β€² 1 - [ ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ] ⁒ K + 1 - Ο΅ A , K , y β€² 1 - Ξ΄ ^ K + 1 > ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 ⁒ Ο΅ A , K , y β€² 1 - [ ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ] ⁒ K + 1 > 0 .

Thus, it suffices to show that βˆ₯ x ⁒ [ 2 ] βˆ₯ 2 ⁒ β‹― ⁒ βˆ₯ x ⁒ [ s ] βˆ₯ 2 β‰  0 . According to the definition of BRIP, it is easy to check that

βˆ₯ A ^ ~ ⁒ [ j ] βˆ₯ 2 = βˆ₯ A ^ ⁒ [ j ] βˆ₯ 2 ≀ 1 + Ξ΄ ^ K + 1 .

Thus, we obtain

(B.1) βˆ₯ ( P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) ) * ⁒ A ^ ~ ⁒ [ j ] βˆ₯ 2 ≀ βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) βˆ₯ 2 ⁒ βˆ₯ A ^ ~ ⁒ [ j ] βˆ₯ 2 ≀ 1 + Ξ΄ ^ K + 1 ⁒ Ο΅ A , K , y β€² .

By (2.2) and (B.1), we have

(B.2) S 0 ⁒ Ξ› k - S j ⁒ Ξ› k β‰₯ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 , 1 - Ξ΄ ^ K + 1 ⁒ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 , 1 ⁒ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 , 1 2 + βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 2 - 1 + Ξ΄ ^ K + 1 ⁒ Ο΅ A , K , y β€² = βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 ⁒ ( βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 , 1 - Ξ΄ ^ K + 1 ⁒ 1 + ( βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 , 1 ) 2 - 1 + Ξ΄ ^ K + 1 ⁒ Ο΅ A , K , y β€² βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 ) .

Let f ⁒ ( x ) = x - Ξ΄ ^ K + 1 ⁒ 1 + x 2 with x ∈ [ 0 , + ∞ ) . By simple calculations, we have f β€² ⁒ ( x ) = 1 - Ξ΄ ^ K + 1 ⁒ x 1 + x 2 > 0 , where x ∈ [ 0 , + ∞ ) and 0 < Ξ΄ ^ K + 1 ≀ 1 , i.e., the function f ⁒ ( x ) = x - Ξ΄ ^ K + 1 ⁒ 1 + x 2 is monotonously increasing on the interval [ 0 , + ∞ ) . It is easy to verify that

βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 , 1 β‰₯ 1 s - k   and   βˆ₯ x ^ [ Ξ© s \ Ξ› k ] βˆ₯ 2 β‰₯ s - k min 1 ≀ i ≀ s βˆ₯ x ^ [ i ] βˆ₯ 2

by Lemma 2.6. Using (B.2), we obtain

S 0 ⁒ Ξ› k - S j ⁒ Ξ› k β‰₯ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 ⁒ ( 1 s - k - Ξ΄ ^ K + 1 ⁒ s - k + 1 s - k - 1 + Ξ΄ ^ K + 1 ⁒ Ο΅ A , K , y β€² s - k min 1 ≀ i ≀ s βˆ₯ x ^ [ i ] βˆ₯ 2 ) .

We use (2.4), (3.2) and (B.2) to get

S 0 ⁒ Ξ› k - S j ⁒ Ξ› k β‰₯ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 s - k ⁒ ( 1 - Ξ΄ ^ K + 1 ⁒ s - k + 1 - 1 + Ξ΄ ^ K + 1 ⁒ Ο΅ A , K , y β€² min 1 ≀ i ≀ s βˆ₯ x ^ [ i ] βˆ₯ 2 ) > βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 s - k ⁒ [ ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ] ⁒ ( K + 1 - s - k + 1 ) > βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 s - k ⁒ [ ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ] ⁒ ( K + 1 - s + 1 ) β‰₯ 0 .

Then the BOMP algorithm selects a correct index in the ( k + 1 ) -th iteration. Next, we show that the BOMP algorithm with the stopping criterion βˆ₯ r k βˆ₯ 2 ≀ Ο΅ A , K , y β€² performs exactly 𝑠 iterations, which is equivalent to showing that βˆ₯ r k βˆ₯ 2 > Ο΅ A , K , y β€² for k = 0 , 1 , 2 , … , s - 1 and βˆ₯ r s βˆ₯ 2 ≀ Ο΅ A , K , y β€² . Since the BOMP algorithm selects a correct index in each iteration under (3.2), we assume that Ξ› k = Ξ© k for 0 ≀ k ≀ s . It follows from (2.6) and (3.2) that

βˆ₯ r 0 βˆ₯ 2 β‰₯ βˆ₯ r 1 βˆ₯ 2 β‰₯ β‹― β‰₯ βˆ₯ r s - 1 βˆ₯ 2 β‰₯ 1 - Ξ΄ ^ K + 1 min 1 ≀ i ≀ s βˆ₯ x [ i ] βˆ₯ 2 - Ο΅ A , K , y β€² β‰₯ 1 - ( ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ) min 1 ≀ i ≀ s βˆ₯ x [ i ] βˆ₯ 2 - Ο΅ A , K , y β€² > 1 - ( ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ) ⁒ ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 ⁒ Ο΅ A , K , y β€² 1 - [ ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ] ⁒ K + 1 β‰₯ 1 - ( ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ) ⁒ Ο΅ A , K , y β€² 1 - [ ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ] ⁒ K + 1 β‰₯ [ 1 - ( ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ) ] ⁒ Ο΅ A , K , y β€² 1 - [ ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ] ⁒ K + 1 β‰₯ Ο΅ A , K , y β€² ,

i.e., the BOMP algorithm does not terminate before performing the 𝑠-th iteration.

By Lemma 2.2, we have

βˆ₯ r s βˆ₯ 2 = βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ y ^ βˆ₯ 2 = βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ [ Ξ© s ] ⁒ x ⁒ [ Ξ© s ] - A ^ ⁒ x + y ^ ) βˆ₯ 2 = βˆ₯ P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) βˆ₯ 2 ≀ βˆ₯ A ^ ⁒ x - y ^ βˆ₯ 2 ≀ Ο΅ A , K , y β€² .

Thus, the BOMP algorithm terminates after performing the 𝑠-th iteration, i.e., the BOMP algorithm performs 𝑠 iterations. So Theorem 3.1 is proved. ∎

Proof of Theorem 3.3

For any A ^ ⁒ [ j ] with s < j ≀ N and any k ∈ N with 0 ≀ k < s , we first prove S 0 ⁒ Ξ› k - S j ⁒ Ξ› k > 0 for Ξ› k βŠ‚ Ξ© s . It follows from (2.5) and (3.3) that

min 1 ≀ i ≀ s βˆ₯ x ^ [ i ] βˆ₯ 2 β‰₯ min 1 ≀ i ≀ s βˆ₯ x [ i ] βˆ₯ 2 - K ⁒ Ο΅ A , K , y β€²β€² 1 - Ξ΄ ^ K + 1 > K ⁒ Ο΅ A , K , y β€²β€² 1 - ( ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ) + [ 1 + K ⁒ ( ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ) 1 - ( ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ) ] ⁒ Ο΅ A , K , y β€²β€² 1 - [ ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ] ⁒ K + 1 - K ⁒ Ο΅ A , K , y β€²β€² 1 - Ξ΄ ^ K + 1 > [ 1 + K ⁒ ( ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ) 1 - ( ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ) ] ⁒ Ο΅ A , K , y β€²β€² 1 - [ ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ] ⁒ K + 1 > 0 .

Thus, it suffices to show that βˆ₯ x ⁒ [ 2 ] βˆ₯ 2 ⁒ β‹― ⁒ βˆ₯ x ⁒ [ s ] βˆ₯ 2 β‰  0 . Then, by (2.3), we obtain

βˆ₯ ( P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ ( A ^ ⁒ x - y ^ ) ) * ⁒ A ^ ~ ⁒ [ j ] βˆ₯ 2 = βˆ₯ ( A ^ ⁒ x - y ^ ) * ⁒ ( P βŠ₯ ⁒ [ H ^ Ξ© s ] ⁒ A ^ ⁒ [ j ] ) βˆ₯ 2 ≀ ( 1 + s ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€² ≀ ( 1 + K ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€² .

By (2.2) and (B.1), we have

(B.3) S 0 ⁒ Ξ› k - S j ⁒ Ξ› k β‰₯ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 , 1 - Ξ΄ ^ K + 1 ⁒ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 , 1 ⁒ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 , 1 2 + βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 2 - ( 1 + K ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€² = βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 ⁒ ( βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 , 1 - Ξ΄ ^ K + 1 ⁒ 1 + ( βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 , 1 ) 2 - ( 1 + K ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€² βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 ) β‰₯ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 ⁒ ( 1 s - k - Ξ΄ ^ K + 1 ⁒ s - k + 1 s - k - ( 1 + K ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€² s - k min 1 ≀ i ≀ s βˆ₯ x ^ [ i ] βˆ₯ 2 ) .

By combining (2.5), (3.3) and (B.3), we have

S 0 ⁒ Ξ› k - S j ⁒ Ξ› k β‰₯ βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 s - k ⁒ ( 1 - Ξ΄ ^ K + 1 ⁒ s - k + 1 - ( 1 + K ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€² min 1 ≀ i ≀ s βˆ₯ x ^ [ i ] βˆ₯ 2 ) > βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 s - k ⁒ [ ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ] ⁒ ( K + 1 - s - k + 1 ) > βˆ₯ x ^ ⁒ [ Ξ© s \ Ξ› k ] βˆ₯ 2 s - k ⁒ [ ( 1 + Ξ΄ K + 1 ) ⁒ ( 1 + Ο΅ A ( K + 1 ) ) 2 - 1 ] ⁒ ( K + 1 - s + 1 ) β‰₯ 0 .

Thus, the BOMP algorithm selects a correct index at each iteration. Next, we show that the BOMP algorithm with the stopping criterion

βˆ₯ A ^ βˆ— ⁒ r k βˆ₯ 2 , ∞ ≀ ( 1 + K ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€²

performs exactly 𝑠 iterations, which is equivalent to showing that

βˆ₯ A ^ βˆ— ⁒ r k βˆ₯ 2 , ∞ > ( 1 + K ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€²   for ⁒ k = 0 , 1 , 2 , … , s - 1 ,
βˆ₯ A ^ βˆ— ⁒ r s βˆ₯ 2 , ∞ ≀ ( 1 + K ⁒ Ξ΄ ^ K + 1 1 - Ξ΄ ^ K + 1 ) ⁒ Ο΅ A , K , y β€²β€² .
Similarly, we assume that Λ^k = Ω_k for 0 ≤ k ≤ s. It follows from (2.8) and (3.3) that, for any 0 ≤ k < s,

\[
\begin{aligned}
\|\hat{A}^{*}r^{k}\|_{2,\infty}
&=\bigl\|\hat{A}^{*}P^{\perp}[\hat{H}_{\Omega_{k}}]\hat{y}\bigr\|_{2,\infty}\\
&\ge\Bigl[1-\Bigl(\sqrt{(1+\delta_{K+1})\bigl(1+\epsilon_{A}^{(K+1)}\bigr)^{2}}-1\Bigr)\Bigr]\min_{1\le i\le s}\|x[i]\|_{2}-\Bigl(1+\frac{\sqrt{K-1}\,\hat{\delta}_{K+1}}{1-\hat{\delta}_{K+1}}\Bigr)\epsilon''_{A,K,y}\\
&>\sqrt{K}\,\epsilon''_{A,K,y}+\frac{\Bigl[1-\Bigl(\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr)+\sqrt{K}\Bigl(\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr)\Bigr]\epsilon''_{A,K,y}}{1-\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]\sqrt{K+1}}\\
&\quad-\Biggl(1+\frac{\sqrt{K-1}\,\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}{1-\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}\Biggr)\epsilon''_{A,K,y}\\
&=\sqrt{K}\,\epsilon''_{A,K,y}+\epsilon''_{A,K,y}+\frac{\bigl(\sqrt{K}+\sqrt{K+1}-1\bigr)\Bigl(\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr)\epsilon''_{A,K,y}}{1-\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]\sqrt{K+1}}\\
&\quad-\Biggl(1+\frac{\sqrt{K-1}\,\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}{1-\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}\Biggr)\epsilon''_{A,K,y}\\
&\ge\sqrt{K}\,\epsilon''_{A,K,y}+\epsilon''_{A,K,y}+\frac{\bigl(\sqrt{K}+\sqrt{K+1}-1\bigr)\Bigl(\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr)\epsilon''_{A,K,y}}{1-\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}\\
&\quad-\Biggl(1+\frac{\sqrt{K-1}\,\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}{1-\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}\Biggr)\epsilon''_{A,K,y}\\
&=\sqrt{K}\,\epsilon''_{A,K,y}+\frac{\bigl(\sqrt{K}+\sqrt{K+1}-\sqrt{K-1}-1\bigr)\Bigl(\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr)\epsilon''_{A,K,y}}{1-\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}\\
&=\Biggl\{1+\frac{\sqrt{K}\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}{1-\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}\Biggr\}\epsilon''_{A,K,y}\\
&\quad+\Biggl\{\sqrt{K}-1+\frac{\bigl(\sqrt{K+1}-\sqrt{K-1}-1\bigr)\Bigl(\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr)}{1-\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}\Biggr\}\epsilon''_{A,K,y}.
\end{aligned}
\]
When

\[
\delta_{K+1}<\frac{1}{\sqrt{K+1}\,\bigl(1+\epsilon_{A}^{(K+1)}\bigr)^{2}}+\frac{1}{\bigl(1+\epsilon_{A}^{(K+1)}\bigr)^{2}}-1,
\]

if K = 1, it is obvious that

\[
\sqrt{K}-1+\frac{\bigl(\sqrt{K+1}-\sqrt{K-1}-1\bigr)\Bigl(\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr)}{1-\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}\ge 0.
\]

For K > 1, it is easy to check that

\[
\sqrt{K+1}-\sqrt{K-1}-1<0,\qquad
\frac{\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1}{1-\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}\le\frac{1}{\sqrt{K+1}-1},\qquad
\sqrt{K}>\frac{\sqrt{K-1}}{\sqrt{K+1}-1}.
\]
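The first and third of these elementary facts depend only on K and can be confirmed numerically (the middle bound additionally uses the assumed BRIP condition on δ_{K+1}); a quick illustrative check over a range of K:

```python
import math

# Verify sqrt(K+1) - sqrt(K-1) - 1 < 0 and sqrt(K) > sqrt(K-1)/(sqrt(K+1) - 1)
# for K > 1; the remaining inequality also requires the BRIP bound on delta.
for K in range(2, 10001):
    assert math.sqrt(K + 1) - math.sqrt(K - 1) - 1 < 0
    assert math.sqrt(K) > math.sqrt(K - 1) / (math.sqrt(K + 1) - 1)
```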

Thus,

\[
\sqrt{K}-1+\frac{\bigl(\sqrt{K+1}-\sqrt{K-1}-1\bigr)\Bigl(\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr)}{1-\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}\ge\sqrt{K}-\frac{\sqrt{K-1}}{\sqrt{K+1}-1}>0.
\]

So the BOMP algorithm does not terminate before performing the 𝑠-th iteration.

In the following, by (2.7), we can easily get

\[
\|\hat{A}^{*}r^{s}\|_{2,\infty}=\bigl\|\hat{A}^{*}P^{\perp}[\hat{H}_{\Omega_{s}}]\hat{y}\bigr\|_{2,\infty}\le\Bigl(1+\frac{\sqrt{K}\,\hat{\delta}_{K+1}}{1-\hat{\delta}_{K+1}}\Bigr)\epsilon''_{A,K,y}\le\Biggl\{1+\frac{\sqrt{K}\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}{1-\Bigl[\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr]}\Biggr\}\epsilon''_{A,K,y}.
\]

Therefore, the BOMP algorithm terminates after performing the 𝑠-th iteration, i.e., the BOMP algorithm performs 𝑠 iterations. So Theorem 3.3 is proved. ∎

Proof of Theorem 3.5

Since

(B.4)
\[
\hat{A}[\Omega_{s}]x^{\star}=P[\hat{H}_{\Omega_{s}}]\hat{y}=P[\hat{H}_{\Omega_{s}}]\bigl(\hat{A}x-(\hat{A}x-\hat{y})\bigr)=P[\hat{H}_{\Omega_{s}}]\bigl(\hat{A}[\Omega_{s}]x[\Omega_{s}]-(\hat{A}x-\hat{y})\bigr)=\hat{A}[\Omega_{s}]x[\Omega_{s}]-P[\hat{H}_{\Omega_{s}}](\hat{A}x-\hat{y}),
\]

from Lemma 2.4, (B.4) and the definition of the BRIP, for ‖Âx − ŷ‖₂ ≤ ϵ′_{A,K,y}, we have

\[
\begin{aligned}
\|x^{\star}-x[\Lambda^{s}]\|_{2}
&\le\frac{\|\hat{A}[\Omega_{s}](x^{\star}-x[\Lambda^{s}])\|_{2}}{\sqrt{1-\hat{\delta}_{K}}}
=\frac{\|P[\hat{H}_{\Omega_{s}}](\hat{A}x-\hat{y})\|_{2}}{\sqrt{1-\hat{\delta}_{K}}}\\
&\le\frac{\|\hat{A}x-\hat{y}\|_{2}}{\sqrt{1-\hat{\delta}_{K}}}
\le\frac{\epsilon'_{A,K,y}}{\sqrt{1-\hat{\delta}_{K}}}
\le\frac{\epsilon'_{A,K,y}}{\sqrt{2-\sqrt{(1+\delta_{K})(1+\epsilon_{A}^{(K)})^{2}}}},
\end{aligned}
\]
and for ‖Â∗(Âx − ŷ)‖_{2,∞} ≤ ϵ″_{A,K,y}, we also have

(B.5)
\[
\begin{aligned}
(1-\hat{\delta}_{K})\|x^{\star}-x[\Lambda^{s}]\|_{2}^{2}
&\le\|\hat{A}[\Omega_{s}](x^{\star}-x[\Lambda^{s}])\|_{2}^{2}
=\|P[\hat{H}_{\Omega_{s}}](\hat{A}x-\hat{y})\|_{2}^{2}\\
&=\|\hat{A}[\Omega_{s}](\hat{A}^{*}[\Omega_{s}]\hat{A}[\Omega_{s}])^{-1}\hat{A}^{*}[\Omega_{s}](\hat{A}x-\hat{y})\|_{2}^{2}\\
&=(\hat{A}x-\hat{y})^{*}\hat{A}[\Omega_{s}](\hat{A}^{*}[\Omega_{s}]\hat{A}[\Omega_{s}])^{-1}\hat{A}^{*}[\Omega_{s}](\hat{A}x-\hat{y}).
\end{aligned}
\]
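The two projection facts used in (B.4) and (B.5) — that P[Ĥ_{Ω_s}] fixes every vector in the span of the selected columns, and that ‖Pv‖₂² = v∗Pv for the orthogonal projector P = Â[Ω_s](Â∗[Ω_s]Â[Ω_s])⁻¹Â∗[Ω_s] — can be checked on a random example (the matrix sizes below are arbitrary assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A_S = rng.standard_normal((30, 8))                  # stands in for A[Omega_s]
P = A_S @ np.linalg.inv(A_S.T @ A_S) @ A_S.T        # orthogonal projector onto range(A_S)
z, e = rng.standard_normal(8), rng.standard_normal(30)
v = A_S @ z
assert np.linalg.norm(P @ v - v) < 1e-9             # P fixes range(A_S), as in (B.4)
assert abs(np.linalg.norm(P @ e) ** 2 - e @ P @ e) < 1e-9   # ||P e||^2 = e^T P e, as in (B.5)
```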

Using (1 − δ̂_K)I_s ≤ Â∗[Ω_s]Â[Ω_s] ≤ (1 + δ̂_K)I_s, we have

(B.6)
\[
\frac{1}{1+\hat{\delta}_{K}}I_{s}\le(\hat{A}^{*}[\Omega_{s}]\hat{A}[\Omega_{s}])^{-1}\le\frac{1}{1-\hat{\delta}_{K}}I_{s}.
\]
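Inequality (B.6) says that inverting a symmetric positive definite matrix inverts its eigenvalue bounds. A numerical illustration with a random near-isometry (the dimensions and scaling are assumptions chosen so the Gram matrix is well conditioned):

```python
import numpy as np

rng = np.random.default_rng(2)
A_S = rng.standard_normal((200, 10)) / np.sqrt(200)   # columns have near-unit norm
G = A_S.T @ A_S
eig = np.linalg.eigvalsh(G)
delta = max(1 - eig.min(), eig.max() - 1)             # smallest delta with (1-d)I <= G <= (1+d)I
inv_eig = np.linalg.eigvalsh(np.linalg.inv(G))
assert inv_eig.min() >= 1 / (1 + delta) - 1e-10       # lower bound in (B.6)
assert inv_eig.max() <= 1 / (1 - delta) + 1e-10       # upper bound in (B.6)
```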

It follows from (B.5) and (B.6) that

\[
(1-\hat{\delta}_{K})\|x^{\star}-x[\Lambda^{s}]\|_{2}^{2}\le\frac{1}{1-\hat{\delta}_{K}}\|\hat{A}^{*}[\Omega_{s}](\hat{A}x-\hat{y})\|_{2}^{2}.
\]

According to the above inequality, we have

\[
\begin{aligned}
\|x^{\star}-x[\Lambda^{s}]\|_{2}
&\le\frac{1}{1-\hat{\delta}_{K}}\|\hat{A}^{*}[\Omega_{s}](\hat{A}x-\hat{y})\|_{2}
\le\frac{\sqrt{s}}{1-\hat{\delta}_{K}}\|\hat{A}^{*}[\Omega_{s}](\hat{A}x-\hat{y})\|_{2,\infty}\\
&\le\frac{\sqrt{s}}{1-\hat{\delta}_{K}}\|\hat{A}^{*}(\hat{A}x-\hat{y})\|_{2,\infty}
\le\frac{\sqrt{s}}{1-\hat{\delta}_{K}}\,\epsilon''_{A,K,y}
\le\frac{\sqrt{s}\,\epsilon''_{A,K,y}}{2-\sqrt{(1+\delta_{K})(1+\epsilon_{A}^{(K)})^{2}}}.
\end{aligned}
\]

So Theorem 3.5 is proved. ∎
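The step from the block ℓ₂-norm to the mixed ℓ₂/ℓ_∞ block norm in the proof above rests on ‖v‖₂ ≤ √s‖v‖_{2,∞} for a vector of s blocks, since ‖v‖₂² = Σᵢ‖v[i]‖₂² ≤ s·maxᵢ‖v[i]‖₂². A quick illustrative check (block count and size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
s, d = 6, 4
v = rng.standard_normal(s * d)
block_norms = np.linalg.norm(v.reshape(s, d), axis=1)   # ||v[i]||_2 for each block
# ||v||_2^2 = sum_i ||v[i]||_2^2 <= s * max_i ||v[i]||_2^2
assert np.linalg.norm(v) <= np.sqrt(s) * block_norms.max() + 1e-12
```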

Proof of Theorem 3.7

We first suppose that the BOMP algorithm selects correct indices in the first k iterations, i.e., Λ^k ⊂ Ω_s with k < s. Since Λ⁰ = ∅ ⊂ Ω_s, the assumption holds trivially when k = 0. Next, we prove S_{0,Λ^k} − S_{j,Λ^k} > 0 for all s < j ≤ N. When ‖Âx − ŷ‖₂ = 0, we have x̂ = x[Ω_s]. It follows from (2.2) that

\[
\begin{aligned}
S_{0,\Lambda^{k}}-S_{j,\Lambda^{k}}
&\ge\frac{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2}^{2}}{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2,1}}
-\hat{\delta}_{K+1}\,\frac{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2}}{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2,1}}\sqrt{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2,1}^{2}+\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2}^{2}}\\
&=\frac{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2}}{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2,1}}\sqrt{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2,1}^{2}+\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2}^{2}}
\Biggl(\frac{1}{\sqrt{1+\bigl(\frac{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2,1}}{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2}}\bigr)^{2}}}-\hat{\delta}_{K+1}\Biggr).
\end{aligned}
\]

According to the definition of g(x), we have g(x) ≥ ‖x[Ω_s∖Λ^k]‖_{2,1}/‖x[Ω_s∖Λ^k]‖₂. It follows from (3.6) that

\[
\begin{aligned}
S_{0,\Lambda^{k}}-S_{j,\Lambda^{k}}
&\ge\frac{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2}}{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2,1}}\sqrt{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2,1}^{2}+\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2}^{2}}\left(\frac{1}{\sqrt{1+g^{2}(x)}}-\hat{\delta}_{K+1}\right)\\
&\ge\frac{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2}}{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2,1}}\sqrt{\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2,1}^{2}+\|x[\Omega_{s}\setminus\Lambda^{k}]\|_{2}^{2}}\left[\frac{1}{\sqrt{1+g^{2}(x)}}-\Bigl(\sqrt{(1+\delta_{K+1})(1+\epsilon_{A}^{(K+1)})^{2}}-1\Bigr)\right]\\
&>0.
\end{aligned}
\]

Then the BOMP algorithm can exactly recover the block 𝐾-sparse signal in 𝐾 iterations, so Theorem 3.7 is proved. ∎
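The algebraic regrouping in the first display of this proof can be confirmed numerically; writing a = ‖x[Ω_s∖Λ^k]‖₂, b = ‖x[Ω_s∖Λ^k]‖_{2,1} and δ for δ̂_{K+1} (the sample values below are arbitrary assumptions for illustration):

```python
import math

# a^2/b - delta*(a/b)*sqrt(b^2 + a^2)
#   == (a/b)*sqrt(b^2 + a^2)*(1/sqrt(1 + (b/a)^2) - delta),
# since sqrt(b^2 + a^2)/sqrt(1 + (b/a)^2) = a whenever a > 0.
for a, b, delta in [(1.0, 2.5, 0.1), (3.0, 4.0, 0.3), (0.7, 0.9, 0.05)]:
    lhs = a**2 / b - delta * (a / b) * math.sqrt(b**2 + a**2)
    rhs = (a / b) * math.sqrt(b**2 + a**2) * (1 / math.sqrt(1 + (b / a)**2) - delta)
    assert abs(lhs - rhs) < 1e-12
```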

References

[1] W. U. Bajwa, M. F. Duarte and R. Calderbank, Conditioning of random block subdictionaries with applications to block-sparse recovery and regression, IEEE Trans. Inform. Theory 61 (2015), no. 7, 4060–4079. doi:10.1109/TIT.2015.2429632.

[2] R. G. Baraniuk, Single-pixel imaging via compressive sampling, IEEE Signal Process. Mag. 25 (2008), no. 2, 83–91. doi:10.1109/MSP.2007.914730.

[3] Z. Ben-Haim and Y. C. Eldar, Near-oracle performance of greedy block-sparse estimation techniques from noisy measurements, IEEE J. Sel. Topics Signal Process. 5 (2011), no. 5, 1032–1047. doi:10.1109/JSTSP.2011.2160250.

[4] T. Blumensath and M. Davies, Compressed sensing and source separation, Independent Component Analysis and Signal Separation, Lecture Notes in Comput. Sci. 4666, Springer, Berlin (2007), 341–348. doi:10.1007/978-3-540-74494-8_43.

[5] E. J. Candès, Compressive sampling, Proceedings of the International Congress of Mathematicians (Madrid 2006), European Mathematical Society, Zürich (2006), 1433–1452. doi:10.4171/022-3/69.

[6] E. J. Candès, J. Romberg and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory 52 (2006), no. 2, 489–509. doi:10.1109/TIT.2005.862083.

[7] J. Chen and X. Huo, Theoretical results on sparse representations of multiple-measurement vectors, IEEE Trans. Signal Process. 54 (2006), no. 12, 4634–4643. doi:10.1109/TSP.2006.881263.

[8] Y. C. Eldar, P. Kuppinger and H. Bölcskei, Block-sparse signals: Uncertainty relations and efficient recovery, IEEE Trans. Signal Process. 58 (2010), no. 6, 3042–3054. doi:10.1109/TSP.2010.2044837.

[9] Y. C. Eldar and M. Mishali, Robust recovery of signals from a structured union of subspaces, IEEE Trans. Inform. Theory 55 (2009), no. 11, 5302–5316. doi:10.1109/TIT.2009.2030471.

[10] E. Elhamifar and R. Vidal, Block-sparse recovery via convex optimization, IEEE Trans. Signal Process. 60 (2012), no. 8, 4094–4107. doi:10.1109/TSP.2012.2196694.

[11] E. Elhamifar and R. Vidal, Sparse subspace clustering: Algorithm, theory, and applications, IEEE Trans. Pattern Anal. Mach. Intell. 35 (2013), no. 11, 2765–2781. doi:10.1109/TPAMI.2013.57.

[12] A. C. Fannjiang, T. Strohmer and P. Yan, Compressed remote sensing of sparse objects, SIAM J. Imaging Sci. 3 (2010), no. 3, 595–618. doi:10.1137/090757034.

[13] N. C. Feng, J. J. Wang and W. W. Wang, Sparse signal recovery with prior information by iterative reweighted least squares algorithm, J. Inverse Ill-Posed Probl. 26 (2017), no. 2, 171–184. doi:10.1515/jiip-2016-0087.

[14] M. A. Herman and T. Strohmer, High-resolution radar via compressed sensing, IEEE Trans. Signal Process. 57 (2009), no. 6, 2275–2284. doi:10.1109/TSP.2009.2014277.

[15] M. A. Herman and T. Strohmer, General deviants: An analysis of perturbations in compressed sensing, IEEE J. Sel. Topics Signal Process. 4 (2010), no. 2, 342–349. doi:10.1109/JSTSP.2009.2039170.

[16] T. Ince and A. Nacaroglu, On the perturbation of measurement matrix in non-convex compressed sensing, Signal Process. 98 (2014), 143–149. doi:10.1016/j.sigpro.2013.11.025.

[17] Y. Jiao, B. Jin and X. Lu, Group sparse recovery via the ℓ₀(ℓ₂) penalty: Theory and algorithm, IEEE Trans. Signal Process. 65 (2016), no. 4, 998–1012. doi:10.1109/TSP.2016.2630028.

[18] J. Lei and S. Liu, Inversion algorithm based on the generalized objective functional for compressed sensing, Appl. Math. Model. 37 (2013), no. 6, 4407–4429. doi:10.1016/j.apm.2012.09.049.

[19] C. Liu, Y. Fang and J. Liu, Some new results about sufficient conditions for exact support recovery of sparse signals via orthogonal matching pursuit, IEEE Trans. Signal Process. 65 (2017), no. 17, 4511–4524. doi:10.1109/TSP.2017.2711543.

[20] C. Y. Liu, J. J. Wang and W. W. Wang, Non-convex block-sparse compressed sensing with redundant dictionaries, IET Signal Process. 11 (2017), no. 2, 171–180. doi:10.1049/iet-spr.2016.0272.

[21] M. Lustig, D. L. Donoho and J. M. Pauly, Sparse MRI: The application of compressed sensing for rapid MR imaging, Magn. Reson. Med. 58 (2007), no. 6, 1182–1195. doi:10.1002/mrm.21391.

[22] Q. Mo, A sharp restricted isometry constant bound of orthogonal matching pursuit, preprint (2015), https://arxiv.org/abs/1501.01708.

[23] H. Nyquist, Certain topics in telegraph transmission theory, Trans. Amer. Inst. Electr. Eng. 47 (1928), no. 2, 617–644. doi:10.1109/T-AIEE.1928.5055024.

[24] F. Parvaresh, H. Vikalo and H. Misra, Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays, IEEE J. Sel. Topics Signal Process. 2 (2008), no. 3, 275–285. doi:10.1109/JSTSP.2008.924384.

[25] G. Swirszcz, N. Abe and A. C. Lozano, Grouped orthogonal matching pursuit for variable selection and prediction, Adv. Neural Inf. Process. Syst. 22 (2009), 1150–1158.

[26] R. Vidal and Y. Ma, A unified algebraic approach to 2-D and 3-D motion segmentation and estimation, J. Math. Imaging Vision 25 (2006), no. 3, 403–421. doi:10.1007/s10851-006-8286-z.

[27] J. Wang, Support recovery with orthogonal matching pursuit in the presence of noise, IEEE Trans. Signal Process. 63 (2015), no. 21, 5868–5877. doi:10.1109/TSP.2015.2468676.

[28] J. Wang and B. Shim, On the recovery limit of sparse signals using orthogonal matching pursuit, IEEE Trans. Signal Process. 60 (2012), no. 9, 4973–4976. doi:10.1109/TSP.2012.2203124.

[29] J. J. Wang, J. Zhang and W. W. Wang, A perturbation analysis of nonconvex block-sparse compressed sensing, Commun. Nonlinear Sci. Numer. Simul. 29 (2015), no. 1–3, 416–426. doi:10.1016/j.cnsns.2015.05.022.

[30] J. Wen, Z. Zhou and Z. Liu, Sharp sufficient conditions for stable recovery of block sparse signals by block orthogonal matching pursuit, Appl. Comput. Harmon. Anal. 47 (2019), no. 3, 948–974. doi:10.1016/j.acha.2018.02.002.

[31] J. Wen, Z. Zhou and J. Wang, A sharp condition for exact support recovery of sparse signals with orthogonal matching pursuit, IEEE International Symposium on Information Theory, IEEE Press, Piscataway (2016), 2364–2368. doi:10.1109/ISIT.2016.7541722.

[32] J. Wen, X. Zhu and D. Li, Improved bounds on restricted isometry constant for orthogonal matching pursuit, Electron. Lett. 49 (2013), no. 23, 1487–1489. doi:10.1049/el.2013.2222.

[33] C. Y. Zhang, Y. T. Li and F. Chen, On Schur complement of block diagonally dominant matrices, Linear Algebra Appl. 414 (2006), no. 2–3, 533–546. doi:10.1016/j.laa.2005.10.046.

Received: 2019-06-24
Revised: 2020-08-14
Accepted: 2020-09-25
Published Online: 2020-11-17
Published in Print: 2021-10-01

Β© 2020 Liu, Zhang, Qiu, Li and Leng, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
