Binary sparse signal recovery with binary least squares

  • Haifeng Li and Qi Chen

Abstract

The problem of recovering a $K$-sparse binary signal $\mathbf{x}$ from measurements $y = \Phi x + \nu$ is often encountered in communications and signal processing. Based on orthogonal least squares (OLS), this paper presents a greedy algorithm that guarantees support recovery of a $K$-sparse binary signal. Using the mutual coherence and the restricted isometry property (RIP) of $\Phi$, we give sufficient conditions ensuring exact recovery of the support of $\mathbf{x}$. Finally, simulation experiments demonstrate the advantage of the proposed algorithm over binary matching pursuit (BMP) and OLS.
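For concreteness, the measurement model above can be instantiated as follows. This is a minimal numpy sketch, assuming a Gaussian $\Phi$ with normalized columns; all names and dimensions are illustrative and are not taken from the paper's experiments.

```python
import numpy as np

# Hypothetical instance of the model y = Phi x + nu with a binary K-sparse x.
rng = np.random.default_rng(0)
m, n, K = 64, 256, 8

Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)        # column-normalized sensing matrix

T = rng.choice(n, size=K, replace=False)  # true support, |T| = K
x = np.zeros(n)
x[T] = 1.0                                # binary K-sparse signal

nu = 0.01 * rng.standard_normal(m)        # noise with small ||nu||_2
y = Phi @ x + nu
```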

MSC 2020: 94A12; 65F22; 65J22

Funding statement: This work was supported in part by the Natural Science Foundation of Henan Province (grant no. 242300420252), in part by the Key Scientific Research Project of Colleges and Universities in Henan Province (grant no. 24A120007), and in part by the Program for Young Backbone Teachers in Henan Province (grant no. 2023GGJS037).

A Some lemmas

Lemma 1 ([29])

Consider the $(k+1)$-th step of the BLS algorithm. Then BLS selects the index

\[
s^{k+1} = \arg\max_{i \in T \setminus \Lambda^{k}} \frac{|\langle r^{k}, \Phi_i \rangle|}{\|P_{\Lambda^{k}}^{\perp} \Phi_i\|_2}.
\]
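Lemma 1 characterizes the index chosen at the $(k+1)$-th step; in an implementation the candidate set is all unselected columns, since $T$ is unknown at run time. A minimal numpy sketch of this selection ratio (the function name and structure are ours, not a reproduction of the paper's Algorithm 1):

```python
import numpy as np

def bls_select(Phi, r, Lambda):
    """One BLS-style selection step: maximize
    |<r, Phi_i>| / ||P_Lambda^perp Phi_i||_2 over unselected columns."""
    n = Phi.shape[1]
    if Lambda:
        A = Phi[:, Lambda]
        # Orthogonal projection onto span(Phi_Lambda), via least squares.
        project = lambda z: A @ np.linalg.lstsq(A, z, rcond=None)[0]
    else:
        project = lambda z: np.zeros_like(z)
    best_score, best_index = -np.inf, None
    for i in range(n):
        if i in Lambda:
            continue
        p_perp = Phi[:, i] - project(Phi[:, i])   # P^perp_Lambda Phi_i
        score = abs(r @ Phi[:, i]) / np.linalg.norm(p_perp)
        if score > best_score:
            best_score, best_index = score, i
    return best_index
```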

Lemma 2 ([29])

Suppose that $S_1 \subset \Omega$ and that $\Phi$ is a column-normalized matrix satisfying the RIP of order $|S_1|+1$. Then, for any $i \in \Omega \setminus S_1$, we have

\[
\|P_{S_1}^{\perp} \Phi_i\|_2 \geq \sqrt{1 - \delta_{|S_1|+1}^2}.
\]

Lemma 3 ([17])

Suppose that $S_1, S_2 \subset \Omega$ satisfy $|S_2 \setminus S_1| \geq 1$, and that $\Phi$ is a column-normalized matrix satisfying the RIP of order $|S_1 \cup S_2|$. Let $w \in \mathbb{R}^n$ with $\operatorname{supp}(w) = S_2$. One has

\[
(1 - \delta_{|S_1 \cup S_2|}) \|w\|_2^2 \leq \|P_{S_1}^{\perp} \Phi w\|_2^2 \leq (1 + \delta_{|S_1 \cup S_2|}) \|w\|_2^2.
\]

Lemma 4

Consider $y = \Phi x$. For any constant $\alpha > 0$ and $k \in \{0, 1, \ldots, K-1\}$, one has

\[
\|\Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_2^2 > \alpha \, \|x_{T \setminus \Lambda^{k-1}}\|_2 \, |\langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_j \rangle|
\]

for all $j \in \Omega \setminus T$, provided that $\Phi$ satisfies $\delta_{K-k+1} < 1/\sqrt{\alpha^2+1}$.

Proof

The proof of Lemma 4 is similar to that in [29], except for the form of the residual. Define

\[
\beta = \frac{1 - \sqrt{\alpha^2 + 1}}{\alpha}.
\]

Let

\[
\Psi = \frac{1}{\sqrt{1 - \beta^4}}\,[\, \Phi_{T \setminus \Lambda^{k-1}} \;\; \Phi_j \,], \qquad
u = \begin{bmatrix} x_{T \setminus \Lambda^{k-1}} \\ 0 \end{bmatrix} \in \mathbb{R}^{|T \setminus \Lambda^{k-1}|+1}, \qquad
v = \begin{bmatrix} 0_{|T \setminus \Lambda^{k-1}| \times 1} \\ \lambda \beta \|x_{T \setminus \Lambda^{k-1}}\|_2 \end{bmatrix} \in \mathbb{R}^{|T \setminus \Lambda^{k-1}|+1},
\]

where

\[
\lambda =
\begin{cases}
1 & \text{if } \langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_j \rangle \geq 0, \\
-1 & \text{if } \langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_j \rangle < 0.
\end{cases}
\]

Then one has

\[
\tag{A.1}
\begin{aligned}
\|\Psi(u+v)\|_2^2 - \|\Psi(\beta^2 u - v)\|_2^2
&= \|\Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_2^2 + \frac{2(1+\beta^2)\beta}{1-\beta^4} \, \|x_{T \setminus \Lambda^{k-1}}\|_2 \, |\langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_j \rangle| \\
&= \|\Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_2^2 - \alpha \, \|x_{T \setminus \Lambda^{k-1}}\|_2 \, |\langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_j \rangle|,
\end{aligned}
\]

where (A.1) follows from

\[
\frac{2\beta}{1-\beta^2} = -\alpha.
\]

On the other hand, applying Lemma 3, we have

\[
\tag{A.2}
\begin{aligned}
\|\Psi(u+v)\|_2^2 - \|\Psi(\beta^2 u - v)\|_2^2
&\geq \frac{1-\delta_{K-k+1}}{1-\beta^4} \|u+v\|_2^2 - \frac{1+\delta_{K-k+1}}{1-\beta^4} \|\beta^2 u - v\|_2^2 \\
&\overset{\text{(a)}}{=} \bigl(1 - \delta_{K-k+1}\sqrt{\alpha^2+1}\bigr) \|x_{T \setminus \Lambda^{k-1}}\|_2^2 > 0,
\end{aligned}
\]

where (a) is from

\[
\frac{1+\beta^2}{1-\beta^2} = \sqrt{\alpha^2+1}.
\]
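These two identities for $\beta = (1-\sqrt{\alpha^2+1})/\alpha$ are easy to check numerically; the following quick sanity check is ours and is not part of the proof.

```python
import numpy as np

# Verify 2*beta/(1-beta^2) = -alpha and (1+beta^2)/(1-beta^2) = sqrt(alpha^2+1)
# for beta = (1 - sqrt(alpha^2 + 1)) / alpha, as used in (A.1) and (A.2).
for alpha in (0.5, 1.0, 3.7):
    beta = (1 - np.sqrt(alpha**2 + 1)) / alpha
    assert np.isclose(2 * beta / (1 - beta**2), -alpha)
    assert np.isclose((1 + beta**2) / (1 - beta**2), np.sqrt(alpha**2 + 1))
```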

By combining (A.1) and (A.2), we get

\[
\|\Psi(u+v)\|_2^2 - \|\Psi(\beta^2 u - v)\|_2^2 = \|\Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_2^2 - \alpha \, \|x_{T \setminus \Lambda^{k-1}}\|_2 \, |\langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_j \rangle| > 0.
\]

Therefore $\|\Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_2^2 > \alpha \, \|x_{T \setminus \Lambda^{k-1}}\|_2 \, |\langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_j \rangle|$. ∎

Lemma 5 ([8, 25])

Let $S_1 \subset T$ and $w \in \mathbb{R}^{|S_1|}$. Suppose that $\Phi$ satisfies $\delta_{|S_1|} < 1$. One has $\delta_{|S_1|} \leq (|S_1|-1)\mu$.
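The bound $\delta_{|S_1|} \leq (|S_1|-1)\mu$ can be seen via a standard Gershgorin-type argument on the sub-Gram matrix, and it is easy to observe numerically. The following small demonstration is our own construction (the tolerance guards floating-point error):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, s = 32, 48, 4
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)          # column-normalized

G = Phi.T @ Phi
mu = np.max(np.abs(G - np.eye(n)))          # mutual coherence of Phi

# For random supports S1 of size s, the eigenvalues of the sub-Gram matrix
# stay within (s - 1) * mu of 1, illustrating delta_{|S1|} <= (|S1| - 1) * mu.
for _ in range(100):
    S1 = rng.choice(n, size=s, replace=False)
    eig = np.linalg.eigvalsh(G[np.ix_(S1, S1)])
    assert np.max(np.abs(eig - 1.0)) <= (s - 1) * mu + 1e-12
```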

B Proof of Theorem 1

Proof

The proof of Theorem 1 is similar to that in [29], except for the form of the residual.

Our proof is based on mathematical induction. Assume that the BLS algorithm selects a correct index in each of the first $k-1$ iterations, i.e., $\Lambda^{k-1} \subset T$. According to Algorithm 1, it then suffices to prove that $s^k \in T$ in the $k$-th iteration, i.e.,

\[
\tag{B.1}
\max_{i \in T \setminus \Lambda^{k-1}} \frac{|\langle r^{k-1}, \Phi_i \rangle|}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2} > \max_{i \in \Omega \setminus T} \frac{|\langle r^{k-1}, \Phi_i \rangle|}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2}.
\]

According to Algorithm 1, we have

\[
r^{k-1} = y - \sum_{i \in \Lambda^{k-1}} \Phi_i = \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}} + \nu.
\]
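In passing, this residual is the point where BLS departs from standard OLS: because the nonzero entries of $x$ are known to equal 1, no least-squares coefficient solve is needed. A minimal sketch (the helper name is ours):

```python
import numpy as np

def binary_residual(y, Phi, Lambda):
    # Binary model: fitted coefficients are fixed at 1, so the residual is
    # y minus the plain column sum over the current support estimate,
    # rather than an OLS coefficient fit.
    return y - Phi[:, list(Lambda)].sum(axis=1)
```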

Because $\Phi$ is column-normalized, we have $\|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2 \leq \|\Phi_i\|_2 = 1$. Then

\[
\begin{aligned}
\max_{i \in T \setminus \Lambda^{k-1}} \frac{|\langle r^{k-1}, \Phi_i \rangle|}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2}
&\geq \max_{i \in T \setminus \Lambda^{k-1}} |\langle r^{k-1}, \Phi_i \rangle| \\
&\geq \max_{i \in T \setminus \Lambda^{k-1}} |\langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_i \rangle| - \|\Phi_{T \setminus \Lambda^{k-1}}' \nu\|_{\infty} \\
&\overset{\text{(a)}}{\geq} \frac{\|\Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_2^2}{\sqrt{|T \setminus \Lambda^{k-1}|}\, \|x_{T \setminus \Lambda^{k-1}}\|_2} - \|\nu\|_2,
\end{aligned}
\]

where (a) follows from the Cauchy–Schwarz inequality.

On the right side of (B.1), one has

\[
\tag{B.2}
\max_{j \in \Omega \setminus T} \frac{|\langle r^{k-1}, \Phi_j \rangle|}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_j\|_2}
\leq \max_{j \in \Omega \setminus T} \frac{|\langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_j \rangle| + |\langle \nu, \Phi_j \rangle|}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_j\|_2}
\overset{\text{(a)}}{\leq} \frac{|\langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_{j_0} \rangle|}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_{j_0}\|_2} + \frac{\|\nu\|_2}{\sqrt{1-\delta_{K+1}^2}},
\]

where

\[
j_0 = \arg\max_{j \in \Omega \setminus T} \frac{|\langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_j \rangle|}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_j\|_2},
\]

and (a) follows from Lemma 2 and the monotonicity of the RIP constant.

Next, to prove (B.1), it suffices to show

\[
\tag{B.3}
\frac{\|\Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_2^2}{\sqrt{|T \setminus \Lambda^{k-1}|}\, \|x_{T \setminus \Lambda^{k-1}}\|_2} - \frac{|\langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_{j_0} \rangle|}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_{j_0}\|_2} > \eta + \frac{\eta}{\sqrt{1-\delta_{K+1}^2}} \geq \|\nu\|_2 + \frac{\|\nu\|_2}{\sqrt{1-\delta_{K+1}^2}}.
\]

According to Lemma 4, letting

\[
\alpha = \frac{\sqrt{|T \setminus \Lambda^{k-1}|}}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_{j_0}\|_2},
\]

from (A.1) and (A.2) we have

\[
\|\Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_2^2 - \alpha \, \|x_{T \setminus \Lambda^{k-1}}\|_2 \, |\langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_{j_0} \rangle| \geq \bigl(1 - \delta_{K-k+1}\sqrt{\alpha^2+1}\bigr) \|x_{T \setminus \Lambda^{k-1}}\|_2^2.
\]

So

\[
\tag{B.4}
\frac{\|\Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_2^2}{\sqrt{|T \setminus \Lambda^{k-1}|}\, \|x_{T \setminus \Lambda^{k-1}}\|_2} - \frac{\alpha \, \|x_{T \setminus \Lambda^{k-1}}\|_2 \, |\langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_{j_0} \rangle|}{\sqrt{|T \setminus \Lambda^{k-1}|}\, \|x_{T \setminus \Lambda^{k-1}}\|_2}
\geq \bigl(1 - \delta_{K+1}\sqrt{\alpha^2+1}\bigr) \frac{\|x_{T \setminus \Lambda^{k-1}}\|_2}{\sqrt{|T \setminus \Lambda^{k-1}|}}.
\]

Note that

\[
\alpha = \frac{\sqrt{|T \setminus \Lambda^{k-1}|}}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_{j_0}\|_2};
\]

then we have that

\[
\tag{B.5}
\delta_{K+1}\sqrt{\alpha^2+1} = \sqrt{\frac{|T \setminus \Lambda^{k-1}|}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_{j_0}\|_2^2} + 1}\; \delta_{K+1} \overset{(k=1)}{\leq} \sqrt{K+1}\, \delta_{K+1} < 1,
\]

and

\[
\tag{B.6}
\delta_{K+1}\sqrt{\alpha^2+1} = \sqrt{\frac{|T \setminus \Lambda^{k-1}|}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_{j_0}\|_2^2} + 1}\; \delta_{K+1}
\overset{(1 < k \leq K)}{\leq} \sqrt{\frac{K-1}{1-\delta_{|\Lambda^{k-1}|+1}^2} + 1}\; \delta_{K+1}
\leq \sqrt{\frac{K-1}{1-\delta_{K+1}^2} + 1}\; \delta_{K+1}
\leq \sqrt{K+1}\, \delta_{K+1} < 1.
\]

So

\[
\tag{B.7}
1 - \delta_{K+1}\sqrt{\alpha^2+1} > 0.
\]

Thus (B.4) can be changed into

\[
\tag{B.8}
\frac{\|\Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_2^2}{\sqrt{|T \setminus \Lambda^{k-1}|}\, \|x_{T \setminus \Lambda^{k-1}}\|_2} - \frac{|\langle \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}, \Phi_{j_0} \rangle|}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_{j_0}\|_2}
\geq \bigl(1 - \delta_{K+1}\sqrt{\alpha^2+1}\bigr) \frac{\|x_{T \setminus \Lambda^{k-1}}\|_2}{\sqrt{|T \setminus \Lambda^{k-1}|}}
\overset{\text{(B.5),(B.6)}}{>} \bigl(1 - \delta_{K+1}\sqrt{K+1}\bigr) \min_{i \in T \setminus \Lambda^{k-1}} |x_i|.
\]

In order to prove (B.3), by (B.8) and (B.7), we only need to show

\[
\tag{B.9}
\bigl(1 - \delta_{K+1}\sqrt{K+1}\bigr) \min_{i \in T \setminus \Lambda^{k-1}} |x_i| > \frac{\sqrt{1-\delta_{K+1}^2} + 1}{\sqrt{1-\delta_{K+1}^2}}\, \eta.
\]

Through (B.2)–(B.9), it can be easily obtained that, when

\[
\delta_{K+1} < \frac{1}{\sqrt{K+1}} \quad \text{and} \quad \eta < \frac{\sqrt{1-\delta_{K+1}^2}\,\bigl(1 - \delta_{K+1}\sqrt{K+1}\bigr)}{2} \min_{i \in T} |x_i|,
\]

(B.3) holds. So, for $1 \leq k \leq |T|$, we have $s^k \in T$.

In the following, we prove that the BLS algorithm iterates $K$ steps. Firstly, for $1 \leq k \leq K-1$, one has

\[
\begin{aligned}
\|r^k\|_2 = \|\Phi_{T \setminus \Lambda^k} x_{T \setminus \Lambda^k} + \nu\|_2
&\geq \|\Phi_{T \setminus \Lambda^k} x_{T \setminus \Lambda^k}\|_2 - \eta
\geq \sqrt{1-\delta_{|T|+1}}\, \|x_{T \setminus \Lambda^k}\|_2 - \eta
\geq \sqrt{1-\delta_{K+1}} - \eta \\
&\overset{\text{(a)}}{\geq} \sqrt{1-\delta_{K+1}^2}\,\bigl(1 - \delta_{K+1}\sqrt{K+1}\bigr) - \eta
\overset{\text{(b)}}{>} 2\eta - \eta = \eta,
\end{aligned}
\]

where (a) is from $\sqrt{1-\delta_{K+1}^2}\,\bigl(1 - \delta_{K+1}\sqrt{K+1}\bigr) \leq 1 - \sqrt{K+1}\,\delta_{K+1} \leq 1 - \delta_{K+1} \leq \sqrt{1-\delta_{K+1}}$, and (b) is from (3.2). So the BLS algorithm iterates at least $|T|$ steps. When $k = K$, $\|r^K\|_2 = \|\Phi_{T \setminus \Lambda^K} x_{T \setminus \Lambda^K} + \nu\|_2 = \|\nu\|_2 \leq \eta$. To sum up, the BLS algorithm iterates $K$ steps. ∎
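Putting the pieces together, the iteration just analyzed can be sketched as follows, reusing the helpers `bls_select` and `binary_residual` from the earlier snippets. This is our reconstruction from Lemma 1 and the residual form, not a reproduction of Algorithm 1; under the conditions of Theorem 1, the loop runs exactly $K$ iterations and returns $T$.

```python
import numpy as np

def bls(y, Phi, eta, max_iter=None):
    """Greedy BLS-style loop: select by the Lemma 1 ratio, fix the new
    coefficient to 1, and stop once the residual satisfies ||r||_2 <= eta."""
    Lambda, r = [], y.copy()
    while np.linalg.norm(r) > eta:
        if max_iter is not None and len(Lambda) >= max_iter:
            break
        s = bls_select(Phi, r, Lambda)
        Lambda.append(s)
        r = binary_residual(y, Phi, Lambda)
    return sorted(Lambda), r
```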

C Proof of Theorem 2

Proof

We divide the proof into two steps. First, we prove that, for $1 \leq k \leq K$, we have $s^k \in T$; then we prove that the BLS algorithm iterates $K$ steps. As before, assuming that a correct index is selected in each of the first $k-1$ steps, it is only necessary to show that (B.1) holds in the $k$-th iteration.

For the left side of (B.1), we have

\[
\max_{i \in T \setminus \Lambda^{k-1}} \frac{|\langle r^{k-1}, \Phi_i \rangle|}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2}
\geq \max_{i \in T \setminus \Lambda^{k-1}} |\langle r^{k-1}, \Phi_i \rangle|
\geq \|\Phi_{T \setminus \Lambda^{k-1}}' \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_{\infty} - \|\Phi_{T \setminus \Lambda^{k-1}}' \nu\|_{\infty}.
\]

For the right side of (B.1), we have

\[
\max_{i \in \Omega \setminus T} \frac{|\langle r^{k-1}, \Phi_i \rangle|}{\|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2}
\leq \frac{\|\Phi_{\Omega \setminus T}' \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_{\infty}}{\min_{i \in \Omega \setminus T} \|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2} + \frac{\|\Phi_{\Omega \setminus T}' \nu\|_{\infty}}{\min_{i \in \Omega \setminus T} \|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2}.
\]

At this point, we need to prove

\[
\tag{C.1}
\|\Phi_{T \setminus \Lambda^{k-1}}' \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_{\infty} - \frac{\|\Phi_{\Omega \setminus T}' \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_{\infty}}{\min_{i \in \Omega \setminus T} \|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2}
\geq \|\Phi_{T \setminus \Lambda^{k-1}}' \nu\|_{\infty} + \frac{\|\Phi_{\Omega \setminus T}' \nu\|_{\infty}}{\min_{i \in \Omega \setminus T} \|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2}.
\]

Let

\[
p = \|\Phi_{T \setminus \Lambda^{k-1}}' \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_{\infty}
\quad \text{and} \quad
q = \frac{\|\Phi_{\Omega \setminus T}' \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_{\infty}}{\min_{i \in \Omega \setminus T} \|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2}.
\]

A lower bound for $p$ is

\[
\tag{C.2}
p = \|\Phi_{T \setminus \Lambda^{k-1}}' \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_{\infty}
\geq \frac{\|\Phi_{T \setminus \Lambda^{k-1}}' \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_2}{\sqrt{|T \setminus \Lambda^{k-1}|}}
\overset{\text{(a)}}{\geq} \frac{\bigl(1 - (|T \setminus \Lambda^{k-1}|-1)\mu\bigr) \|x_{T \setminus \Lambda^{k-1}}\|_2}{\sqrt{|T \setminus \Lambda^{k-1}|}},
\]

where (a) is from Lemma 5. An upper bound for $q$ is

\[
\tag{C.3}
q = \frac{\|\Phi_{\Omega \setminus T}' \Phi_{T \setminus \Lambda^{k-1}} x_{T \setminus \Lambda^{k-1}}\|_{\infty}}{\min_{i \in \Omega \setminus T} \|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2}
\leq \frac{\sqrt{|T \setminus \Lambda^{k-1}|}\,\mu\, \|x_{T \setminus \Lambda^{k-1}}\|_2}{\min_{i \in \Omega \setminus T} \|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2}
\overset{\text{(a)}}{\leq} \frac{\sqrt{|T \setminus \Lambda^{k-1}|}\,\mu\, \|x_{T \setminus \Lambda^{k-1}}\|_2}{\sqrt{1-\delta_{|\Lambda^{k-1}|+1}^2}}
\leq \frac{\sqrt{|T \setminus \Lambda^{k-1}|}\,\mu\, \|x_{T \setminus \Lambda^{k-1}}\|_2}{1-\delta_{|\Lambda^{k-1}|+1}}
\overset{\text{(b)}}{\leq} \frac{\sqrt{|T \setminus \Lambda^{k-1}|}\,\mu\, \|x_{T \setminus \Lambda^{k-1}}\|_2}{1-|\Lambda^{k-1}|\,\mu},
\]

where (a) follows from Lemma 2, and (b) follows from Lemma 5.

Thus, according to (C.2) and (C.3), one has

\[
\tag{C.4}
\begin{aligned}
p - q
&\geq \left( \frac{1 - (|T \setminus \Lambda^{k-1}|-1)\mu}{\sqrt{|T \setminus \Lambda^{k-1}|}} - \frac{\sqrt{|T \setminus \Lambda^{k-1}|}\,\mu}{1-|\Lambda^{k-1}|\,\mu} \right) \sqrt{|T \setminus \Lambda^{k-1}|}\, \min_{i \in T \setminus \Lambda^{k-1}} |x_i| \\
&= \left( 1 - (|T \setminus \Lambda^{k-1}|-1)\mu - \frac{|T \setminus \Lambda^{k-1}|\,\mu}{1-|\Lambda^{k-1}|\,\mu} \right) \min_{i \in T \setminus \Lambda^{k-1}} |x_i| \\
&= \left( \frac{1 - |\Lambda^{k-1}|\,\mu - 2|T \setminus \Lambda^{k-1}|\,\mu + \mu + \bigl(|\Lambda^{k-1}|\,|T \setminus \Lambda^{k-1}| - |\Lambda^{k-1}|\bigr)\mu^2}{1-|\Lambda^{k-1}|\,\mu} \right) \min_{i \in T \setminus \Lambda^{k-1}} |x_i| \\
&\overset{\text{(a)}}{\geq} \left( \frac{1 - |\Lambda^{k-1}|\,\mu - 2|T \setminus \Lambda^{k-1}|\,\mu + \mu}{1-|\Lambda^{k-1}|\,\mu} \right) \min_{i \in T \setminus \Lambda^{k-1}} |x_i| \\
&= \left( 1 - \frac{(2|T \setminus \Lambda^{k-1}|-1)\mu}{1-|\Lambda^{k-1}|\,\mu} \right) \min_{i \in T \setminus \Lambda^{k-1}} |x_i| \\
&\overset{\text{(b)}}{\geq} \bigl(1 - (2K-1)\mu\bigr) \min_{i \in T \setminus \Lambda^{k-1}} |x_i|,
\end{aligned}
\]

where (a) is from $\bigl(|\Lambda^{k-1}|\,|T \setminus \Lambda^{k-1}| - |\Lambda^{k-1}|\bigr)\mu^2 \geq 0$, and (b) is from

\[
\frac{(2|T \setminus \Lambda^{k-1}|-1)\mu}{1-|\Lambda^{k-1}|\,\mu} < 1
\]

together with the fact that this quantity

On the other hand,

\[
\|\Phi_{T \setminus \Lambda^{k-1}}' \nu\|_{\infty} + \frac{\|\Phi_{\Omega \setminus T}' \nu\|_{\infty}}{\min_{i \in \Omega \setminus T} \|P_{\Lambda^{k-1}}^{\perp} \Phi_i\|_2}
\leq \eta + \frac{\eta}{1-|\Lambda^{k-1}|\,\mu}
\leq \eta + \frac{\eta}{1-K\mu}
= \frac{2-K\mu}{1-K\mu}\, \eta.
\]

In order to prove (C.1), according to (3.3) and (C.4), we need to show

\[
\tag{C.5}
p - q \geq \bigl(1 - (2K-1)\mu\bigr) \min_{i \in T \setminus \Lambda^{k-1}} |x_i| > \frac{2-K\mu}{1-K\mu}\, \eta.
\]

Inequality (3.4) can guarantee that (C.5) holds.
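In contrast to the RIP constant $\delta_{K+1}$, the mutual coherence $\mu$ is cheap to compute, so the guarantee behind (C.5) can be checked directly for a given matrix. The following sketch is ours; the precise statements of (3.3) and (3.4) appear in the body of the paper, not in this appendix, so the thresholds below are reconstructed from (C.4) and (C.5).

```python
import numpy as np

def coherence_guarantee(Phi, K, eta, x_min=1.0):
    """Check the coherence-based sufficient condition suggested by (C.5):
    (1 - (2K-1) mu) * x_min > (2 - K mu) / (1 - K mu) * eta."""
    n = Phi.shape[1]
    G = Phi.T @ Phi
    mu = np.max(np.abs(G - np.eye(n)))      # mutual coherence
    if (2 * K - 1) * mu >= 1:               # analogue of condition (3.3)
        return False
    return (1 - (2 * K - 1) * mu) * x_min > (2 - K * mu) / (1 - K * mu) * eta
```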

Now we prove that BLS iterates $K$ steps under the stopping rule $\|r^k\|_2 \leq \eta$. For $1 \leq k \leq K-1$, one has

\[
\|r^k\|_2 = \|\Phi_{T \setminus \Lambda^k} x_{T \setminus \Lambda^k} + \nu\|_2
\overset{\text{(a)}}{\geq} \sqrt{1 - (|T \setminus \Lambda^k|-1)\mu}\, \min_{i \in T} |x_i| - \eta
\overset{\text{(b)}}{>} \sqrt{1 - (|T \setminus \Lambda^k|-1)\mu}\, \frac{(2-K\mu)\,\eta}{(1-K\mu)\bigl(1-(2K-1)\mu\bigr)} - \eta
\geq 2\eta - \eta = \eta,
\]

where (a) is from Lemmas 3 and 5, and (b) is from (3.4). That is, the BLS algorithm iterates at least $|T|$ steps. In particular,

\[
\|r^K\|_2 = \|\Phi_{T \setminus \Lambda^K} x_{T \setminus \Lambda^K} + \nu\|_2 = \|\nu\|_2 \leq \eta,
\]

so BLS iterates $K$ steps. ∎

References

[1] W. Bajwa, J. Haupt and A. Sayeed, Compressive wireless sensing, Proceedings of the 5th International Conference on Information Processing in Sensor Networks, IEEE Press, Piscataway (2006), 134–142. doi:10.1109/IPSN.2006.244128.

[2] T. Blumensath and M. E. Davies, Iterative hard thresholding for compressed sensing, Appl. Comput. Harmon. Anal. 27 (2009), no. 3, 265–274. doi:10.1016/j.acha.2009.04.002.

[3] E. Candes, The restricted isometry property and its implications for compressed sensing, C. R. Math. Acad. Sci. Paris 346 (2008), no. 9–10, 589–592. doi:10.1016/j.crma.2008.03.014.

[4] E. Candes and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory 51 (2005), no. 12, 4203–4215. doi:10.1109/TIT.2005.858979.

[5] E. Candes and M. Wakin, An introduction to compressive sampling, IEEE Signal Process. Mag. 25 (2008), no. 2, 21–30. doi:10.1109/MSP.2007.914731.

[6] S. Chen, S. A. Billings and W. Luo, Orthogonal least squares methods and their application to nonlinear system identification, Internat. J. Control 50 (1989), no. 5, 1873–1896. doi:10.1080/00207178908953472.

[7] W. Chen and H. Ge, Recovery of block sparse signals under the conditions on block RIC and ROC by BOMP and BOMMP, Inverse Probl. Imaging 12 (2018), no. 1, 153–174. doi:10.3934/ipi.2018006.

[8] W. Dai and O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction, IEEE Trans. Inform. Theory 55 (2009), no. 5, 2230–2249. doi:10.1109/TIT.2009.2016006.

[9] D. L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory 52 (2006), no. 4, 1289–1306. doi:10.1109/TIT.2006.871582.

[10] D. L. Donoho, M. Elad and V. N. Temlyakov, Stable recovery of sparse overcomplete representations in the presence of noise, IEEE Trans. Inform. Theory 52 (2006), no. 1, 6–18. doi:10.1109/TIT.2005.860430.

[11] S. Foucart, Hard thresholding pursuit: An algorithm for compressive sensing, SIAM J. Numer. Anal. 49 (2011), no. 6, 2543–2563. doi:10.1137/100806278.

[12] J. J. Fuchs, Recovery of exact sparse representations in the presence of bounded noise, IEEE Trans. Inform. Theory 51 (2005), no. 10, 3601–3608. doi:10.1109/TIT.2005.855614.

[13] V. Goyal, A. Fletcher and S. Rangan, Compressive sampling and lossy compression, IEEE Signal Process. Mag. 25 (2008), no. 2, 48–56. doi:10.1109/MSP.2007.915001.

[14] Q. Hao, F. Hu and J. Lu, Distributed multiple human tracking with wireless binary pyroelectric infrared (PIR) sensor networks, IEEE Sensors, IEEE Press, Piscataway (2010), 946–950. doi:10.1109/ICSENS.2010.5690895.

[15] C. Herzet, A. Drémeau and C. Soussen, Relaxed recovery conditions for OMP/OLS by exploiting both coherence and decay, IEEE Trans. Inform. Theory 62 (2016), no. 1, 459–470. doi:10.1109/TIT.2015.2490660.

[16] C. Herzet, C. Soussen, J. Idier and R. Gribonval, Exact recovery conditions for sparse representations with partial support information, IEEE Trans. Inform. Theory 59 (2013), no. 11, 7509–7524. doi:10.1109/TIT.2013.2278179.

[17] B. Li, Y. Shen, Z. Wu and J. Li, Sufficient conditions for generalized orthogonal matching pursuit in noisy case, Signal Process. 108 (2015), 111–123. doi:10.1016/j.sigpro.2014.09.006.

[18] P. Li, W. Chen, H. Ge and M. K. Ng, $\ell_1 - \alpha \ell_2$ minimization methods for signal and image reconstruction with impulsive noise removal, Inverse Problems 36 (2020), no. 5, Article ID 055009. doi:10.1088/1361-6420/ab750c.

[19] J. Lu, J. Gong, Q. Hao and F. Hu, Space encoding based compressive multiple human tracking with distributed binary pyroelectric infrared sensor networks, IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, IEEE Press, Piscataway (2012), 180–185. doi:10.1109/MFI.2012.6342997.

[20] J. Lu, J. Gong, Q. Hao and F. Hu, Multi-agent based wireless pyroelectric infrared sensor networks for multi-human tracking and self-calibration, IEEE Sensors, IEEE Press, Piscataway (2013), 1–4. doi:10.1109/ICSENS.2013.6688356.

[21] M. Lustig, D. Donoho and J. Pauly, Sparse MRI: The application of compressed sensing for rapid MR imaging, Magn. Reson. Med. 58 (2007), no. 6, 1182–1195. doi:10.1002/mrm.21391.

[22] J. Ma and M. Davies, Single-pixel remote sensing, IEEE Geosci. Remote Sensing Lett. 6 (2009), no. 2, 199–203. doi:10.1109/LGRS.2008.2010959.

[23] T. Nguyen, C. Soussen, J. Idier and E. Djermoune, K-step analysis of orthogonal greedy algorithms for non-negative sparse representations, Signal Process. 188 (2021), Article ID 108185. doi:10.1016/j.sigpro.2021.108185.

[24] Y. Pati, R. Rezaiifar and P. Krishnaprasad, Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition, Proc. 27th Asilomar Conference on Signals, Systems and Computers (1993), 40–44. doi:10.1109/ACSSC.1993.342465.

[25] J. A. Tropp, Greed is good: Algorithmic results for sparse approximation, IEEE Trans. Inform. Theory 50 (2004), no. 10, 2231–2242. doi:10.1109/TIT.2004.834793.

[26] J. A. Tropp and A. C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Trans. Inform. Theory 53 (2007), no. 12, 4655–4666. doi:10.1109/TIT.2007.909108.

[27] Y. Tsaig and D. L. Donoho, Extensions of compressed sensing, Signal Process. 86 (2006), no. 3, 549–571. doi:10.1016/j.sigpro.2005.05.029.

[28] J. Wen and H. Li, Binary sparse signal recovery with binary matching pursuit, Inverse Problems 37 (2021), no. 6, Paper No. 065014. doi:10.1088/1361-6420/abf903.

[29] J. Wen, J. Wang and Q. Zhang, Nearly optimal bounds for orthogonal least squares, IEEE Trans. Signal Process. 65 (2017), no. 20, 5347–5356. doi:10.1109/TSP.2017.2728502.

[30] J. Wright, A. Yang, A. Ganesh, S. Sastry and Y. Ma, Robust face recognition via sparse representation, IEEE Trans. Pattern Anal. Mach. Intell. 31 (2009), no. 2, 210–227. doi:10.1109/TPAMI.2008.79.

[31] R. Zheng, K. Vu, A. Pendharkar and G. Song, Obstacle discovery in distributed actuator and sensor networks, ACM Trans. Sensor Networks 7 (2010), no. 3, 1–24. doi:10.1145/1807048.1807051.

Received: 2023-01-26
Revised: 2024-03-27
Accepted: 2025-08-29
Published Online: 2025-09-16

Β© 2025 Walter de Gruyter GmbH, Berlin/Boston
