
Convergence analysis of M-iteration for 𝒢-nonexpansive mappings with directed graphs applicable in image deblurring and signal recovering problems

  • Chonjaroen Chairatsiripong, Damrongsak Yambangwai and Tanakit Thianwan
Published/Copyright: July 5, 2023

Abstract

In this article, weak and strong convergence theorems of the M-iteration method for 𝒢-nonexpansive mappings in a uniformly convex Banach space endowed with a directed graph are established. Moreover, a weak convergence theorem is proved without making use of Opial's condition. The rate of convergence of the M-iteration is also compared with that of some other iteration processes in the literature; specifically, our main result shows that the M-iteration converges faster than the Noor and SP iterations. Finally, numerical examples comparing the convergence behavior of the M-iteration with the three-step Noor iteration and the SP-iteration are given. As applications, some numerical experiments on real-world problems are provided, focused on image deblurring and signal recovering problems.

MSC 2010: 46T99; 47H09; 47H10; 47J25; 49M37; 54H25

1 Introduction

The renowned fixed-point result for contractive mappings in complete metric spaces, known as Banach's contraction principle, was introduced in 1922 by Banach [1]; it is an important instrument for solving existence problems for nonlinear mappings. Since then, this principle has been generalized and studied in many directions.

Jachymski [2] proposed a novel notion of $G$-contraction in 2008, establishing that it is a genuine generalization of the Banach contraction principle in a metric space endowed with a directed graph. Using this notion, he gave a simple proof of the Kelisky-Rivlin theorem [3]. By combining graph theory and fixed-point theory, Aleomraninejad et al. [4] provided some iterative scheme results for $G$-contractive and $G$-nonexpansive mappings on graphs. In [5], Alfuraidan and Khamsi introduced the concept of $G$-monotone nonexpansive multivalued mappings on a metric space endowed with a graph. The existence of fixed points of monotone nonexpansive mappings on a Banach space endowed with a directed graph was investigated by Alfuraidan [6].

In 2015, Tiammee et al. [7] presented Browder's convergence theorem and the Halpern iteration process for $G$-nonexpansive mappings in a Hilbert space endowed with a directed graph. After that, Tripak [8] introduced the Ishikawa iterative scheme to approximate common fixed points of $G$-nonexpansive mappings defined on nonempty closed convex subsets of a uniformly convex Banach space endowed with a graph. Recently, various fixed-point iteration processes for $G$-nonexpansive mappings have been studied extensively by many authors (see, e.g., [9,10] and the references cited therein).

In 2000, Noor [11] studied the convergence criteria of the following three-step iteration method for solving general variational inequalities and related problems. The three-step Noor iteration is defined by:

(1) $l_n = (1-\eta_n)v_n + \eta_n Tv_n, \quad t_n = (1-\varrho_n)v_n + \varrho_n Tl_n, \quad v_{n+1} = (1-\xi_n)v_n + \xi_n Tt_n, \quad n \ge 0,$

where $\{\eta_n\}$, $\{\varrho_n\}$, and $\{\xi_n\}$ are sequences in $(0,1)$.

Glowinski and Le Tallec [12] used three-step iterative approaches to find solutions of problems in elastoviscoplasticity, eigenvalue computation, and the theory of liquid crystals. In [12], it was shown that the three-step iterative process yields better numerical results than the corresponding one-step and two-step approximate iterations. In 1998, Haubruge et al. [13] studied the convergence analysis of the three-step methods of Glowinski and Le Tallec [12] and applied these methods to obtain new splitting-type algorithms for solving variational inequalities, separable convex programming, and the minimization of a sum of convex functions. They also proved that three-step iterations lead to highly parallelized algorithms under certain conditions. As a result, we conclude that the three-step approach plays an important and substantial role in the solution of numerous problems in pure and applied sciences.

In 2011, Phuengrattana and Suantai [14] introduced the following new three-step iteration process known as the SP-iteration:

(2) $h_n = (1-\eta_n)u_n + \eta_n Tu_n, \quad q_n = (1-\varrho_n)h_n + \varrho_n Th_n, \quad u_{n+1} = (1-\xi_n)q_n + \xi_n Tq_n, \quad n \ge 0,$

where $\{\eta_n\}$, $\{\varrho_n\}$, and $\{\xi_n\}$ are sequences in $(0,1)$.

In addition, they showed that the SP-iteration (2) converges faster than the Noor iteration (1) for the class of continuous nondecreasing functions.

Recently in 2018, Ullah and Arshad [15] introduced an iteration, called M-iteration, defined by:

(3) $r_n = (1-\xi_n)w_n + \xi_n Tw_n, \quad s_n = Tr_n, \quad w_{n+1} = Ts_n, \quad n \ge 0,$

where $\{\xi_n\}$ is a sequence in $(0,1)$.

Ullah and Arshad [15] showed that the iteration process (3) is faster than the Picard-S iteration [16] and the S-iteration [17] for Suzuki generalized nonexpansive mappings. In this direction, a number of notable studies have extended and refined these results; see, e.g., [18–20].
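To make the three schemes concrete, the following minimal Python sketch (our illustration; the paper's experiments were coded in Matlab) implements one step of each of the iterations (1), (2), and (3) for a generic mapping $T$ and control parameters in $(0,1)$:

```python
import numpy as np

def noor_step(v, T, eta, rho, xi):
    """One step of the three-step Noor iteration (1)."""
    l = (1 - eta) * v + eta * T(v)
    t = (1 - rho) * v + rho * T(l)
    return (1 - xi) * v + xi * T(t)

def sp_step(u, T, eta, rho, xi):
    """One step of the SP-iteration (2)."""
    h = (1 - eta) * u + eta * T(u)
    q = (1 - rho) * h + rho * T(h)
    return (1 - xi) * q + xi * T(q)

def m_step(w, T, xi):
    """One step of the M-iteration (3): one convex combination, then two T-steps."""
    r = (1 - xi) * w + xi * T(w)
    return T(T(r))

# Toy usage (our example): drive a contraction to its fixed point.
T = np.cos                      # contraction on [0, 1]; fixed point ~0.739
w = 0.5
for n in range(30):
    w = m_step(w, T, xi=0.5)
print(w)
```

Structurally, the M-iteration uses a single control sequence $\{\xi_n\}$ and applies $T$ directly twice per step instead of averaging; this is the source of the $\psi^2$ factor that drives the rate comparison in Theorem 4.6 below.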

The main purpose of this article is to prove some weak and strong convergence theorems of the M-iteration method (3) for $G$-nonexpansive mappings in a uniformly convex Banach space endowed with a graph. We also present numerical experiments supporting our main results and comparing the rate of convergence of the M-iteration (3) with the three-step Noor iteration (1) and the SP-iteration (2). Furthermore, we apply the M-iteration method to solve image deblurring and signal recovering problems.

2 Preliminaries

In this section, we recall a few basic notions concerning the connectivity of graphs. All of them can be found, e.g., in [21].

Let $C$ be a nonempty subset of a real Banach space $X$. We identify the graph $G$ with the pair $(V(G), E(G))$, where the set $V(G)$ of its vertices coincides with $C$ and the set of edges $E(G)$ contains $\{(w,w) : w \in C\}$. Also, $G$ is such that no two of its edges are parallel. A mapping $T : C \to C$ is said to be a $G$-contraction if $T$ preserves the edges of $G$ (or $T$ is edge-preserving), i.e.,

$(w,s) \in E(G) \Rightarrow (Tw, Ts) \in E(G),$

and $T$ decreases the weights of the edges of $G$ in the following way: there exists $\psi \in (0,1)$ such that

$(w,s) \in E(G) \Rightarrow \|Tw - Ts\| \le \psi\|w - s\|.$

A mapping $T : C \to C$ is said to be $G$-nonexpansive (see [5], Definition 2.3 (iii)) if $T$ preserves the edges of $G$, i.e.,

$(w,s) \in E(G) \Rightarrow (Tw, Ts) \in E(G),$

and $T$ does not increase the weights of the edges of $G$ in the following way:

$(w,s) \in E(G) \Rightarrow \|Tw - Ts\| \le \|w - s\|.$

If $w$ and $s$ are vertices of a graph $G$, then a path in $G$ from $w$ to $s$ of length $N$ ($N \in \mathbb{N} \cup \{0\}$) is a sequence $\{w_i\}_{i=0}^{N}$ of $N+1$ vertices such that $w_0 = w$, $w_N = s$, and $(w_i, w_{i+1}) \in E(G)$ for $i = 0, 1, \ldots, N-1$. A graph $G$ is connected if there is a path between any two of its vertices. A directed graph $G = (V(G), E(G))$ is said to be transitive if, for any $w, s, r \in V(G)$ such that $(w,s)$ and $(s,r)$ are in $E(G)$, we have $(w,r) \in E(G)$. We denote by $G^{-1}$ the graph obtained from $G$ by reversing the direction of its edges, that is,

$E(G^{-1}) = \{(w,s) \in X \times X : (s,w) \in E(G)\}.$

Let $w_0 \in V(G)$ and let $A$ be a subset of $V(G)$. We say that $A$ is dominated by $w_0$ if $(w_0, w) \in E(G)$ for all $w \in A$, and that $A$ dominates $w_0$ if $(w, w_0) \in E(G)$ for each $w \in A$.

In this article, we use $\to$ and $\rightharpoonup$ to denote strong convergence and weak convergence, respectively.

A mapping $T : C \to C$ is said to be $G$-demiclosed at $0$ if, for any sequence $\{w_n\}$ in $C$ such that $(w_n, w_{n+1}) \in E(G)$, $w_n \rightharpoonup w$ and $Tw_n \to 0$ imply $Tw = 0$.

A Banach space $X$ is said to satisfy Opial's condition [22] if, whenever $w_n \rightharpoonup w$ and $w \ne s$, we have

$\limsup_{n\to\infty} \|w_n - w\| < \limsup_{n\to\infty} \|w_n - s\|.$

Let $C$ be a nonempty closed convex subset of a real uniformly convex Banach space $X$. Recall that a mapping $T : C \to X$ with $\mathcal{F}(T) \ne \emptyset$ is said to satisfy Condition (A) [23] if there is a nondecreasing function $f : [0,\infty) \to [0,\infty)$ with $f(0) = 0$ and $f(\hat{r}) > 0$ for all $\hat{r} \in (0,\infty)$ such that

$\|w - Tw\| \ge f(d(w, \mathcal{F}(T)))$

for all $w \in C$, where $\mathcal{F}(T)$ denotes the set of fixed points of $T$. Let $C$ be a subset of a metric space $(X,d)$. A mapping $T : C \to C$ is semi-compact [24] if, for any sequence $\{w_n\}$ in $C$ with $\lim_{n\to\infty} d(w_n, Tw_n) = 0$, there exists a subsequence $\{w_{n_j}\}$ of $\{w_n\}$ such that $w_{n_j} \to p \in C$.

Let $C$ be a nonempty subset of a normed space $X$ and let $G = (V(G), E(G))$ be a directed graph such that $V(G) = C$. Then $C$ is said to have Property G (see [25]) if, for each sequence $\{w_n\}$ in $C$ converging weakly to $w \in C$ with $(w_n, w_{n+1}) \in E(G)$, there is a subsequence $\{w_{n_j}\}$ of $\{w_n\}$ such that $(w_{n_j}, w) \in E(G)$ for all $j \in \mathbb{N}$.

Remark 2.1

If $G$ is transitive, then Property G is equivalent to the following property: if $\{w_n\}$ is a sequence in $C$ with $(w_n, w_{n+1}) \in E(G)$ such that some subsequence $\{w_{n_j}\}$ of $\{w_n\}$ converges weakly to $w \in X$, then $(w_n, w) \in E(G)$ for all $n \in \mathbb{N}$.

In the sequel, the following lemmas are needed to prove our main results.

Lemma 2.2

[25] Suppose that $X$ is a Banach space satisfying Opial's condition, $C$ has Property G, and $T : C \to C$ is a $G$-nonexpansive mapping. Then $I - T$ is $G$-demiclosed at $0$; i.e., if $w_n \rightharpoonup w$ and $w_n - Tw_n \to 0$, then $w \in \mathcal{F}(T)$, where $\mathcal{F}(T)$ is the set of fixed points of $T$.

Lemma 2.3

[26] Let $X$ be a uniformly convex Banach space and let $\{\xi_n\}$ be a sequence in $[\delta, 1-\delta]$ for some $\delta \in (0,1)$. Suppose that the sequences $\{w_n\}$ and $\{s_n\}$ in $X$ are such that $\limsup_{n\to\infty} \|w_n\| \le c$, $\limsup_{n\to\infty} \|s_n\| \le c$, and $\lim_{n\to\infty} \|\xi_n w_n + (1-\xi_n)s_n\| = c$ for some $c \ge 0$. Then $\lim_{n\to\infty} \|w_n - s_n\| = 0$.

Lemma 2.4

[27] Let $X$ be a Banach space that satisfies Opial's condition and let $\{w_n\}$ be a sequence in $X$. Let $u, v \in X$ be such that $\lim_{n\to\infty} \|w_n - u\|$ and $\lim_{n\to\infty} \|w_n - v\|$ exist. If $\{w_{n_j}\}$ and $\{w_{n_k}\}$ are subsequences of $\{w_n\}$ that converge weakly to $u$ and $v$, respectively, then $u = v$.

Lemma 2.5

[28] Let $C$ be a nonempty closed convex subset of a uniformly convex Banach space $X$ and suppose that $C$ has Property G. Let $T$ be a $G$-nonexpansive mapping on $C$. Then $I - T$ is $G$-demiclosed at $0$.

Lemma 2.6

[29] Let $\{w_n\}$ be a bounded sequence in a reflexive Banach space $X$. If, for any weakly convergent subsequence $\{w_{n_j}\}$ of $\{w_n\}$, both $\{w_{n_j}\}$ and $\{w_{n_j+1}\}$ converge weakly to the same point in $X$, then the sequence $\{w_n\}$ is weakly convergent.

3 Main results

Throughout this section, let $C$ be a nonempty closed convex subset of a Banach space $X$ endowed with a directed graph $G$ such that $V(G) = C$ and $E(G)$ is convex. We also suppose that the graph $G$ is transitive. The mapping $T : C \to C$ is $G$-nonexpansive with $\mathcal{F}(T) \ne \emptyset$. For an arbitrary $w_0 \in C$, define the sequence $\{w_n\}$ by (3).

We start by proving the following useful results.

Proposition 3.1

Let $\tilde{q}$ be such that $(w_0, \tilde{q})$ and $(\tilde{q}, w_0)$ are in $E(G)$. Then $(w_n, \tilde{q})$, $(r_n, \tilde{q})$, $(s_n, \tilde{q})$, $(\tilde{q}, w_n)$, $(\tilde{q}, r_n)$, $(\tilde{q}, s_n)$, $(s_n, r_n)$, $(w_n, r_n)$, and $(w_n, w_{n+1})$ are in $E(G)$.

Proof

We proceed by induction. Since $T$ is edge-preserving and $(w_0, \tilde{q}) \in E(G)$, we have $(Tw_0, \tilde{q}) \in E(G)$, and so $(r_0, \tilde{q}) \in E(G)$, since $E(G)$ is convex. Again, since $T$ is edge-preserving and $(r_0, \tilde{q}) \in E(G)$, we have $(Tr_0, \tilde{q}) \in E(G)$, and so $(s_0, \tilde{q}) \in E(G)$. Then, since $T$ is edge-preserving and $(s_0, \tilde{q}) \in E(G)$, we obtain $(Ts_0, \tilde{q}) \in E(G)$, and hence $(w_1, \tilde{q}) \in E(G)$. Thus, by the edge-preservation of $T$, $(Tw_1, \tilde{q}) \in E(G)$. Again, by the convexity of $E(G)$ and $(Tw_1, \tilde{q}), (w_1, \tilde{q}) \in E(G)$, we have $(r_1, \tilde{q}) \in E(G)$. Then, since $T$ is edge-preserving and $(r_1, \tilde{q}) \in E(G)$, we have $(Tr_1, \tilde{q}) \in E(G)$, so we obtain $(s_1, \tilde{q}) \in E(G)$, and hence $(Ts_1, \tilde{q}) \in E(G)$.

Next, we assume that $(w_k, \tilde{q}) \in E(G)$. Since $T$ is edge-preserving, we obtain $(Tw_k, \tilde{q}) \in E(G)$, so $(r_k, \tilde{q}) \in E(G)$, since $E(G)$ is convex. Hence, by the edge-preservation of $T$ and $(r_k, \tilde{q}) \in E(G)$, we have $(Tr_k, \tilde{q}) \in E(G)$, and then $(s_k, \tilde{q}) \in E(G)$. Since $T$ is edge-preserving, we have $(Ts_k, \tilde{q}) \in E(G)$. Thus, $(w_{k+1}, \tilde{q}) \in E(G)$. Hence, by the edge-preservation of $T$, we obtain $(Tw_{k+1}, \tilde{q}) \in E(G)$, and so $(r_{k+1}, \tilde{q}) \in E(G)$, since $E(G)$ is convex. Again, by the edge-preservation of $T$, we obtain $(Tr_{k+1}, \tilde{q}) \in E(G)$, and so $(s_{k+1}, \tilde{q}) \in E(G)$. Therefore, $(w_n, \tilde{q}), (r_n, \tilde{q}), (s_n, \tilde{q}) \in E(G)$ for all $n \ge 1$.

Since $T$ is edge-preserving and $(\tilde{q}, w_0) \in E(G)$, we have $(\tilde{q}, Tw_0) \in E(G)$, and so $(\tilde{q}, r_0) \in E(G)$, since $E(G)$ is convex. Again, since $T$ is edge-preserving and $(\tilde{q}, r_0) \in E(G)$, we have $(\tilde{q}, Tr_0) \in E(G)$, and so $(\tilde{q}, s_0) \in E(G)$. Using a similar argument, we can show that $(\tilde{q}, w_n), (\tilde{q}, r_n), (\tilde{q}, s_n) \in E(G)$ for all $n \ge 1$. By the transitivity of $G$, we obtain $(s_n, r_n), (w_n, r_n), (w_n, w_{n+1}) \in E(G)$. This completes the proof.□

Lemma 3.2

Let $X$ be a uniformly convex Banach space. Suppose that $\{\xi_n\}$ is a real sequence in $[\delta, 1-\delta]$ for some $\delta \in (0,1)$ and that $(w_0, \tilde{q}), (\tilde{q}, w_0) \in E(G)$ for arbitrary $w_0 \in C$ and $\tilde{q} \in \mathcal{F}(T)$. Then,

  1. $\lim_{n\to\infty} \|w_n - \tilde{q}\|$ exists,

  2. $\lim_{n\to\infty} \|w_n - Tw_n\| = 0$.

Proof

(i) Let $\tilde{q} \in \mathcal{F}(T)$. By Proposition 3.1, we have $(w_n, \tilde{q}), (s_n, \tilde{q}), (r_n, \tilde{q}), (s_n, r_n), (w_n, r_n) \in E(G)$. Then, by the $G$-nonexpansiveness of $T$ and using (3), we have

(4) $\|r_n - \tilde{q}\| = \|(1-\xi_n)w_n + \xi_n Tw_n - \tilde{q}\| = \|(1-\xi_n)(w_n - \tilde{q}) + \xi_n(Tw_n - \tilde{q})\| \le (1-\xi_n)\|w_n - \tilde{q}\| + \xi_n\|Tw_n - \tilde{q}\| \le (1-\xi_n)\|w_n - \tilde{q}\| + \xi_n\|w_n - \tilde{q}\| = \|w_n - \tilde{q}\|,$

(5) $\|s_n - \tilde{q}\| = \|Tr_n - \tilde{q}\| \le \|r_n - \tilde{q}\| \le \|w_n - \tilde{q}\|,$

and so

(6) $\|w_{n+1} - \tilde{q}\| = \|Ts_n - \tilde{q}\| \le \|s_n - \tilde{q}\| \le \|w_n - \tilde{q}\|.$

Therefore,

$\|w_{n+1} - \tilde{q}\| \le \|w_n - \tilde{q}\| \le \cdots \le \|w_1 - \tilde{q}\|, \quad n \in \mathbb{N}.$

Since $\{\|w_n - \tilde{q}\|\}$ is monotonically decreasing and bounded below, the sequence $\{\|w_n - \tilde{q}\|\}$ is convergent. In particular, the sequence $\{w_n\}$ is bounded.

(ii) Assume that $\lim_{n\to\infty} \|w_n - \tilde{q}\| = c$. If $c = 0$, then by the $G$-nonexpansiveness of $T$, we obtain

$\|w_n - Tw_n\| \le \|w_n - \tilde{q}\| + \|\tilde{q} - Tw_n\| \le \|w_n - \tilde{q}\| + \|\tilde{q} - w_n\| \to 0.$

Therefore, the result follows. Suppose now that $c > 0$. Taking the limit superior on both sides of inequality (5), we obtain

(7) $\limsup_{n\to\infty} \|s_n - \tilde{q}\| \le \limsup_{n\to\infty} \|w_n - \tilde{q}\| = c.$

Since $\lim_{n\to\infty} \|w_{n+1} - \tilde{q}\| = c$, letting $n \to \infty$ in inequality (6) gives

(8) $\lim_{n\to\infty} \|Ts_n - \tilde{q}\| = c.$

Taking the limit superior on both sides of inequality (4), we obtain

(9) $\limsup_{n\to\infty} \|r_n - \tilde{q}\| \le \limsup_{n\to\infty} \|w_n - \tilde{q}\| = c.$

In addition, by the $G$-nonexpansiveness of $T$, we have $\|Tr_n - \tilde{q}\| \le \|r_n - \tilde{q}\|$; taking the limit superior on both sides of this inequality and using (9), we obtain

(10) $\limsup_{n\to\infty} \|Tr_n - \tilde{q}\| \le c.$

Note that $\|w_{n+1} - \tilde{q}\| \le \|s_n - \tilde{q}\| \le \|r_n - \tilde{q}\|$ gives

(11) $\liminf_{n\to\infty} \|r_n - \tilde{q}\| \ge c.$

From (9) and (11), we have

(12) $\lim_{n\to\infty} \|r_n - \tilde{q}\| = c.$

By (4) and (12), we have

(13) $\lim_{n\to\infty} \|(1-\xi_n)(w_n - \tilde{q}) + \xi_n(Tw_n - \tilde{q})\| = c.$

In addition, $\limsup_{n\to\infty} \|Tw_n - \tilde{q}\| \le \limsup_{n\to\infty} \|w_n - \tilde{q}\| = c$; hence, using (13) and Lemma 2.3, we have

$\lim_{n\to\infty} \|Tw_n - w_n\| = 0.$

This completes the proof.□

We now prove the weak convergence of the sequence generated by the M-iteration method (3) for a G -nonexpansive mapping in a uniformly convex Banach space satisfying Opial’s condition.

Theorem 3.3

Let $X$ be a uniformly convex Banach space satisfying Opial's condition and suppose that $C$ has Property G. Suppose that $\{\xi_n\}$ is a real sequence in $[\delta, 1-\delta]$ for some $\delta \in (0,1)$. If $(w_0, \tilde{q}), (\tilde{q}, w_0) \in E(G)$ for arbitrary $w_0 \in C$ and $\tilde{q} \in \mathcal{F}(T)$, then $\{w_n\}$ converges weakly to a fixed point of $T$.

Proof

Let $\tilde{q}$ be such that $(w_0, \tilde{q}), (\tilde{q}, w_0) \in E(G)$. From Lemma 3.2 (i), $\lim_{n\to\infty} \|w_n - \tilde{q}\|$ exists, so $\{w_n\}$ is bounded. It follows from Lemma 3.2 (ii) that $\lim_{n\to\infty} \|w_n - Tw_n\| = 0$. Since $X$ is uniformly convex and $\{w_n\}$ is bounded, we may assume, without loss of generality, that $w_n \rightharpoonup u$ as $n \to \infty$. By Lemma 2.2, we have $u \in \mathcal{F}(T)$. Suppose that subsequences $\{w_{n_k}\}$ and $\{w_{n_j}\}$ of $\{w_n\}$ converge weakly to $u$ and $v$, respectively. By Lemma 3.2 (ii), we obtain $\|w_{n_k} - Tw_{n_k}\| \to 0$ and $\|w_{n_j} - Tw_{n_j}\| \to 0$ as $k, j \to \infty$. Using Lemma 2.2, we have $u, v \in \mathcal{F}(T)$. By Lemma 3.2 (i), $\lim_{n\to\infty} \|w_n - u\|$ and $\lim_{n\to\infty} \|w_n - v\|$ exist. It follows from Lemma 2.4 that $u = v$. Therefore, $\{w_n\}$ converges weakly to a fixed point of $T$.□

It is worth noting that Opial's condition has remained crucial in proving weak convergence theorems: every $\ell^p$ $(1 \le p < \infty)$ satisfies Opial's condition, whereas the spaces $L^p$ do not have this property unless $p = 2$.

Next, we deal with the weak convergence of the sequence $\{w_n\}$ generated by (3) for a $G$-nonexpansive mapping, without assuming Opial's condition, in a uniformly convex Banach space with a directed graph.

Theorem 3.4

Let $X$ be a uniformly convex Banach space. Suppose that $C$ has Property G, $\{\xi_n\}$ is a real sequence in $[\delta, 1-\delta]$ for some $\delta \in (0,1)$, $\mathcal{F}(T)$ is dominated by $w_0$, and $\mathcal{F}(T)$ dominates $w_0$. If $(w_0, \tilde{q}), (\tilde{q}, w_0) \in E(G)$ for arbitrary $w_0 \in C$ and $\tilde{q} \in \mathcal{F}(T)$, then $\{w_n\}$ converges weakly to a fixed point of $T$.

Proof

Let $\tilde{q}$ be such that $(w_0, \tilde{q})$ and $(\tilde{q}, w_0)$ are in $E(G)$. From Lemma 3.2 (i), $\lim_{n\to\infty} \|w_n - \tilde{q}\|$ exists, so $\{w_n\}$ is bounded in $C$. Since $C$ is a nonempty closed convex subset of the uniformly convex Banach space $X$, it is weakly compact, and hence there exists a subsequence $\{w_{n_j}\}$ of $\{w_n\}$ converging weakly to some point $p \in C$. By Lemma 3.2 (ii), we obtain

(14) $\lim_{j\to\infty} \|w_{n_j} - Tw_{n_j}\| = 0.$

In addition, $\|r_n - w_n\| = \|(1-\xi_n)w_n + \xi_n Tw_n - w_n\| = \xi_n\|Tw_n - w_n\| \le \|Tw_n - w_n\|$, so using Lemma 3.2 (ii), we have

(15) $\lim_{n\to\infty} \|r_n - w_n\| = 0.$

Using Lemma 3.2 (ii) and (15), we have

(16) $\|Tr_n - r_n\| \le \|Tr_n - Tw_n\| + \|Tw_n - w_n\| + \|w_n - r_n\| \le \|r_n - w_n\| + \|Tw_n - w_n\| + \|w_n - r_n\| \to 0 \quad (\text{as } n \to \infty).$

Using (3) and (16), we have

(17) $\lim_{n\to\infty} \|s_n - r_n\| = \lim_{n\to\infty} \|Tr_n - r_n\| = 0.$

In addition,

$\|Ts_n - s_n\| \le \|Ts_n - Tr_n\| + \|Tr_n - r_n\| + \|r_n - s_n\| \le \|s_n - r_n\| + \|Tr_n - r_n\| + \|r_n - s_n\|.$

Using (16) and (17), we have

(18) $\lim_{n\to\infty} \|Ts_n - s_n\| = 0.$

Using Lemma 2.5, $I - T$ is $G$-demiclosed at $0$, so that $p \in \mathcal{F}(T)$. To complete the proof, it suffices to show that $\{w_n\}$ converges weakly to $p$. To this end, we show that $\{w_n\}$ satisfies the hypothesis of Lemma 2.6. Let $\{w_{n_j}\}$ be a subsequence of $\{w_n\}$ that converges weakly to some $q \in C$. By the same arguments as above, $q \in \mathcal{F}(T)$. Now, for each $j \ge 1$, using (3), we have

(19) $w_{n_j+1} = Ts_{n_j}.$

It follows from (14) that

(20) $Tw_{n_j} = (Tw_{n_j} - w_{n_j}) + w_{n_j} \rightharpoonup q.$

Now, from (3) and (20),

(21) $r_{n_j} = (1-\xi_{n_j})w_{n_j} + \xi_{n_j}Tw_{n_j} \rightharpoonup q.$

Using (16) and (21), we have

(22) $Tr_{n_j} = (Tr_{n_j} - r_{n_j}) + r_{n_j} \rightharpoonup q.$

Now, from (3) and (22),

(23) $s_{n_j} = Tr_{n_j} \rightharpoonup q.$

Also, from (18) and (23), we have

(24) $Ts_{n_j} = (Ts_{n_j} - s_{n_j}) + s_{n_j} \rightharpoonup q.$

It follows from (19) and (24) that

$w_{n_j+1} \rightharpoonup q.$

Therefore, the sequence $\{w_n\}$ satisfies the hypothesis of Lemma 2.6, which implies that $\{w_n\}$ converges weakly to $q$, so that $p = q$. This completes the proof.□

The strong convergence of the sequence generated by the M-iteration method (3) for a $G$-nonexpansive mapping in a uniformly convex Banach space with a directed graph is discussed in the rest of this section.

Theorem 3.5

Let $X$ be a uniformly convex Banach space. Suppose that $\{\xi_n\}$ is a real sequence in $[\delta, 1-\delta]$ for some $\delta \in (0,1)$, $T$ satisfies Condition (A), $\mathcal{F}(T)$ is dominated by $w_0$, and $\mathcal{F}(T)$ dominates $w_0$. Then $\{w_n\}$ converges strongly to a fixed point of $T$.

Proof

By Lemma 3.2 (i), $\lim_{n\to\infty} \|w_n - q\|$ exists for any $q \in \mathcal{F}(T)$, and so $\lim_{n\to\infty} d(w_n, \mathcal{F}(T))$ exists. Also, by Lemma 3.2 (ii), $\lim_{n\to\infty} \|w_n - Tw_n\| = 0$. It follows from Condition (A) that $\lim_{n\to\infty} f(d(w_n, \mathcal{F}(T))) = 0$. Since $f : [0,\infty) \to [0,\infty)$ is a nondecreasing function with $f(0) = 0$ and $f(r) > 0$ for all $r \in (0,\infty)$, we obtain $\lim_{n\to\infty} d(w_n, \mathcal{F}(T)) = 0$. Next, we show that $\{w_n\}$ is a Cauchy sequence. Since $\lim_{n\to\infty} d(w_n, \mathcal{F}(T)) = 0$, given any $\varepsilon > 0$, there exists a natural number $n_0$ such that $d(w_n, \mathcal{F}(T)) < \varepsilon/4$ for all $n \ge n_0$, so we can find $q \in \mathcal{F}(T)$ such that $\|w_{n_0} - q\| < \varepsilon/2$. For $n \ge n_0$ and $m \ge 1$, we have

$\|w_{n+m} - w_n\| \le \|w_{n+m} - q\| + \|w_n - q\| \le \|w_{n_0} - q\| + \|w_{n_0} - q\| < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon.$

This shows that $\{w_n\}$ is a Cauchy sequence, and hence it converges since $X$ is complete. Let $\lim_{n\to\infty} w_n = \tilde{q}$. Then $d(\tilde{q}, \mathcal{F}(T)) = 0$, and it follows that $\tilde{q} \in \mathcal{F}(T)$. This completes the proof.□

Theorem 3.6

Let $X$ be a uniformly convex Banach space. Suppose that $C$ has Property G, $\{\xi_n\}$ is a real sequence in $[\delta, 1-\delta]$ for some $\delta \in (0,1)$, $\mathcal{F}(T)$ is dominated by $w_0$, and $\mathcal{F}(T)$ dominates $w_0$. If $T$ is semi-compact, then $\{w_n\}$ converges strongly to a fixed point of $T$.

Proof

It follows from Lemma 3.2 that $\{w_n\}$ is bounded and $\lim_{n\to\infty} \|w_n - Tw_n\| = 0$. Since $T$ is semi-compact, there exists a subsequence $\{w_{n_j}\}$ of $\{w_n\}$ such that $w_{n_j} \to q \in C$ as $j \to \infty$. Since $C$ has Property G and the graph $G$ is transitive, we obtain $(w_{n_j}, q) \in E(G)$. Note that $\lim_{j\to\infty} \|w_{n_j} - Tw_{n_j}\| = 0$. Then,

$\|q - Tq\| \le \|q - w_{n_j}\| + \|w_{n_j} - Tw_{n_j}\| + \|Tw_{n_j} - Tq\| \le \|q - w_{n_j}\| + \|w_{n_j} - Tw_{n_j}\| + \|w_{n_j} - q\| \to 0 \quad (\text{as } j \to \infty).$

Hence, $q \in \mathcal{F}(T)$. Thus, $\lim_{n\to\infty} d(w_n, \mathcal{F}(T))$ exists, as in the proof of Theorem 3.5. We note that $d(w_{n_j}, \mathcal{F}(T)) \le d(w_{n_j}, q) \to 0$ as $j \to \infty$; hence, $\lim_{n\to\infty} d(w_n, \mathcal{F}(T)) = 0$. It follows, as in the proof of Theorem 3.5, that $\{w_n\}$ converges strongly to a fixed point of $T$. This completes the proof.□

4 Rate of convergence and numerical examples

In this section, we show that the M-iteration process converges faster than the iterative schemes of Phuengrattana and Suantai (SP) and of Noor for the class of $G$-contraction mappings. Furthermore, we provide a concrete example, including numerical results, and compare the proposed algorithm (3) with the Noor (1) and SP (2) algorithms to demonstrate that our algorithm is more effective. All codes were written in Matlab 2019b.

The following definitions about the rate of convergence are due to Berinde [30].

Definition 4.1

Let $\{\rho_n\}$ and $\{\sigma_n\}$ be two sequences of real numbers converging to $\rho$ and $\sigma$, respectively. If $\lim_{n\to\infty} \frac{|\rho_n - \rho|}{|\sigma_n - \sigma|} = 0$, then $\{\rho_n\}$ is said to converge faster than $\{\sigma_n\}$.

Definition 4.2

Suppose that for two fixed-point iteration processes $\{w_n\}$ and $\{u_n\}$, both converging to the same fixed point $\tilde{q}$, the error estimates

$\|w_n - \tilde{q}\| \le \rho_n \text{ for all } n \ge 1, \qquad \|u_n - \tilde{q}\| \le \sigma_n \text{ for all } n \ge 1,$

are available, where $\{\rho_n\}$ and $\{\sigma_n\}$ are two sequences of positive numbers converging to zero. If $\{\rho_n\}$ converges faster than $\{\sigma_n\}$, then $\{w_n\}$ is said to converge faster than $\{u_n\}$ to $\tilde{q}$.

Definition 4.3

[31] Suppose that $\{\zeta_n\}$ is a sequence that converges to $\zeta$, with $\zeta_n \ne \zeta$ for all $n$. If positive constants $\nu$ and $\psi$ exist with

$\lim_{n\to\infty} \frac{|\zeta_{n+1} - \zeta|}{|\zeta_n - \zeta|^{\psi}} = \nu,$

then $\{\zeta_n\}$ converges to $\zeta$ with order of convergence $\psi$ and asymptotic error constant $\nu$. If $\psi = 1$ (and $\nu < 1$), the sequence $\{\zeta_n\}$ is said to be linearly convergent.
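Definition 4.3 can be checked numerically: given the iterates and the known limit, the successive error ratios should stabilize near $\nu$. A minimal sketch of this diagnostic (ours, with $\psi = 1$ for testing linear convergence):

```python
def error_ratios(iterates, limit, psi=1.0):
    """Ratios |zeta_{n+1} - zeta| / |zeta_n - zeta|**psi from Definition 4.3.
    For a linearly convergent sequence (psi = 1) they approach nu < 1."""
    errs = [abs(z - limit) for z in iterates]
    return [errs[n + 1] / errs[n] ** psi
            for n in range(len(errs) - 1) if errs[n] > 0]

# Example: zeta_n = 2**(-n) converges linearly to 0 with nu = 1/2.
print(error_ratios([2.0 ** (-n) for n in range(10)], 0.0))
```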

In 2011, Phuengrattana and Suantai [14] showed that the Ishikawa iteration converges faster than the Mann iteration for a class of continuous functions on a closed interval of the real line. To study the order of convergence of a real sequence $\{\zeta_n\}$ converging to $\zeta$, we use the standard terminology of numerical analysis (see, e.g., [31]).

Let $T$ be a $G$-nonexpansive mapping from $C$ to $C$ with $\mathcal{F}(T)$ nonempty, where $C$ is a nonempty closed convex subset of a Banach space $X$ endowed with a directed graph $G$ such that $V(G) = C$, $E(G)$ is convex, and $G$ is transitive.

The following propositions will be useful in this context.

Proposition 4.4

For an arbitrary $v_0 \in C$, define the sequence $\{v_n\}$ by the Noor iteration (1). Let $\tilde{q}$ be such that $(v_0, \tilde{q})$ and $(\tilde{q}, v_0)$ are in $E(G)$. Then $(v_n, \tilde{q})$, $(t_n, \tilde{q})$, $(l_n, \tilde{q})$, $(\tilde{q}, v_n)$, $(\tilde{q}, t_n)$, and $(\tilde{q}, l_n)$ are in $E(G)$.

Proof

We proceed by induction. Since $T$ is edge-preserving and $(v_0, \tilde{q}) \in E(G)$, we have $(Tv_0, \tilde{q}) \in E(G)$, and so $(l_0, \tilde{q}) \in E(G)$, since $E(G)$ is convex. Again, since $T$ is edge-preserving and $(l_0, \tilde{q}) \in E(G)$, we have $(Tl_0, \tilde{q}) \in E(G)$. Since $(v_0, \tilde{q}) \in E(G)$, by the convexity of $E(G)$ we have $(t_0, \tilde{q}) \in E(G)$. Then, since $T$ is edge-preserving and $(t_0, \tilde{q}) \in E(G)$, we obtain $(Tt_0, \tilde{q}) \in E(G)$, and hence $(v_1, \tilde{q}) \in E(G)$, since $E(G)$ is convex and $(v_0, \tilde{q}) \in E(G)$. Thus, by the edge-preservation of $T$, $(Tv_1, \tilde{q}) \in E(G)$. Again, by the convexity of $E(G)$ and $(Tv_1, \tilde{q}), (v_1, \tilde{q}) \in E(G)$, we have $(l_1, \tilde{q}) \in E(G)$. Then, since $T$ is edge-preserving, $(Tl_1, \tilde{q}) \in E(G)$; since $E(G)$ is convex and $(v_1, \tilde{q}) \in E(G)$, we obtain $(t_1, \tilde{q}) \in E(G)$, and hence $(Tt_1, \tilde{q}) \in E(G)$.

Next, we assume that $(v_k, \tilde{q}) \in E(G)$. Since $T$ is edge-preserving, we obtain $(Tv_k, \tilde{q}) \in E(G)$, so $(l_k, \tilde{q}) \in E(G)$, since $E(G)$ is convex. Hence, by the edge-preservation of $T$ and $(l_k, \tilde{q}) \in E(G)$, we have $(Tl_k, \tilde{q}) \in E(G)$, and then $(t_k, \tilde{q}) \in E(G)$, since $E(G)$ is convex and $(v_k, \tilde{q}) \in E(G)$. Since $T$ is edge-preserving, we have $(Tt_k, \tilde{q}) \in E(G)$. Thus, $(v_{k+1}, \tilde{q}) \in E(G)$, since $(v_k, \tilde{q}), (Tt_k, \tilde{q}) \in E(G)$ and $E(G)$ is convex. Hence, by the edge-preservation of $T$, we obtain $(Tv_{k+1}, \tilde{q}) \in E(G)$, and so $(l_{k+1}, \tilde{q}) \in E(G)$, since $E(G)$ is convex and $(v_{k+1}, \tilde{q}) \in E(G)$. Again, by the edge-preservation of $T$, we obtain $(Tl_{k+1}, \tilde{q}) \in E(G)$, and so $(t_{k+1}, \tilde{q}) \in E(G)$, since $E(G)$ is convex and $(v_{k+1}, \tilde{q}) \in E(G)$. Therefore, $(v_n, \tilde{q}), (t_n, \tilde{q}), (l_n, \tilde{q}) \in E(G)$ for all $n \ge 1$.

Since $T$ is edge-preserving and $(\tilde{q}, v_0) \in E(G)$, we have $(\tilde{q}, Tv_0) \in E(G)$, and so $(\tilde{q}, l_0) \in E(G)$, since $E(G)$ is convex. Again, since $T$ is edge-preserving and $(\tilde{q}, l_0) \in E(G)$, we have $(\tilde{q}, Tl_0) \in E(G)$, and so $(\tilde{q}, t_0) \in E(G)$, since $E(G)$ is convex and $(\tilde{q}, v_0) \in E(G)$. Using a similar argument, we can show that $(\tilde{q}, v_n), (\tilde{q}, t_n), (\tilde{q}, l_n) \in E(G)$ for all $n \ge 1$. This completes the proof.□

Proposition 4.5

For an arbitrary $u_0 \in C$, define the sequence $\{u_n\}$ by the SP-iteration (2). Let $\tilde{q}$ be such that $(u_0, \tilde{q})$ and $(\tilde{q}, u_0)$ are in $E(G)$. Then $(u_n, \tilde{q})$, $(q_n, \tilde{q})$, $(h_n, \tilde{q})$, $(\tilde{q}, u_n)$, $(\tilde{q}, q_n)$, and $(\tilde{q}, h_n)$ are in $E(G)$.

Proof

We proceed by induction. Since $T$ is edge-preserving and $(u_0, \tilde{q}) \in E(G)$, we have $(Tu_0, \tilde{q}) \in E(G)$, and so $(h_0, \tilde{q}) \in E(G)$, since $E(G)$ is convex. Again, since $T$ is edge-preserving and $(h_0, \tilde{q}) \in E(G)$, we have $(Th_0, \tilde{q}) \in E(G)$, and so $(q_0, \tilde{q}) \in E(G)$, since $E(G)$ is convex. Then, since $T$ is edge-preserving and $(q_0, \tilde{q}) \in E(G)$, we obtain $(Tq_0, \tilde{q}) \in E(G)$, and hence $(u_1, \tilde{q}) \in E(G)$, since $E(G)$ is convex. Thus, by the edge-preservation of $T$, $(Tu_1, \tilde{q}) \in E(G)$. Again, by the convexity of $E(G)$ and $(Tu_1, \tilde{q}), (u_1, \tilde{q}) \in E(G)$, we have $(h_1, \tilde{q}) \in E(G)$. Then, since $T$ is edge-preserving, $(Th_1, \tilde{q}) \in E(G)$; since $E(G)$ is convex, we obtain $(q_1, \tilde{q}) \in E(G)$, and hence $(Tq_1, \tilde{q}) \in E(G)$.

Next, we assume that $(u_k, \tilde{q}) \in E(G)$. Since $T$ is edge-preserving, we obtain $(Tu_k, \tilde{q}) \in E(G)$, so $(h_k, \tilde{q}) \in E(G)$, since $E(G)$ is convex. Hence, by the edge-preservation of $T$ and $(h_k, \tilde{q}) \in E(G)$, we have $(Th_k, \tilde{q}) \in E(G)$, and then $(q_k, \tilde{q}) \in E(G)$, since $E(G)$ is convex. Since $T$ is edge-preserving, we have $(Tq_k, \tilde{q}) \in E(G)$. Again, by the convexity of $E(G)$ and $(q_k, \tilde{q}) \in E(G)$, we have $(u_{k+1}, \tilde{q}) \in E(G)$. Hence, by the edge-preservation of $T$, we obtain $(Tu_{k+1}, \tilde{q}) \in E(G)$, and so $(h_{k+1}, \tilde{q}) \in E(G)$, since $E(G)$ is convex. Again, by the edge-preservation of $T$, we obtain $(Th_{k+1}, \tilde{q}) \in E(G)$, and so $(q_{k+1}, \tilde{q}) \in E(G)$, since $E(G)$ is convex. Therefore, $(u_n, \tilde{q}), (h_n, \tilde{q}), (q_n, \tilde{q}) \in E(G)$ for all $n \ge 1$.

Since $T$ is edge-preserving and $(\tilde{q}, u_0) \in E(G)$, we have $(\tilde{q}, Tu_0) \in E(G)$, and so $(\tilde{q}, h_0) \in E(G)$, since $E(G)$ is convex. Again, since $T$ is edge-preserving and $(\tilde{q}, h_0) \in E(G)$, we have $(\tilde{q}, Th_0) \in E(G)$, and so $(\tilde{q}, q_0) \in E(G)$, since $E(G)$ is convex. Using a similar argument, we can show that $(\tilde{q}, u_n), (\tilde{q}, q_n), (\tilde{q}, h_n) \in E(G)$ for all $n \ge 1$. This completes the proof.□

From the following result, we consider the rate of convergence of the M-iterative method (3) and the well-known iterative methods.

Theorem 4.6

Let $X$ be a uniformly convex Banach space and let $T$ be a $G$-contraction with contraction factor $\psi \in (0,1)$. Suppose that $\{\xi_n\}$, $\{\varrho_n\}$, and $\{\eta_n\}$ are real sequences in $[\delta, 1-\delta]$ for some $\delta \in (0,1)$, and $(w_0, \tilde{q}), (\tilde{q}, w_0), (u_0, \tilde{q}), (\tilde{q}, u_0), (v_0, \tilde{q}), (\tilde{q}, v_0) \in E(G)$ for arbitrary $w_0, u_0, v_0 \in C$ and $\tilde{q} \in \mathcal{F}(T)$. Then the sequence $\{w_n\}$ generated by the M-iteration (3) converges faster than the Noor iteration (1) and the SP-iteration (2).

Proof

First, by the Noor iteration (1),

$$\begin{aligned} \|v_{n+1} - \tilde{q}\| &\le (1-\xi_n)\|v_n - \tilde{q}\| + \xi_n\|Tt_n - \tilde{q}\| \le (1-\xi_n)\|v_n - \tilde{q}\| + \xi_n\psi\|t_n - \tilde{q}\| \\ &\le (1-\xi_n)\|v_n - \tilde{q}\| + \xi_n\psi\big((1-\varrho_n)\|v_n - \tilde{q}\| + \varrho_n\psi\|l_n - \tilde{q}\|\big) \\ &\le \big(1 - \xi_n\big(1 - \psi(1-\varrho_n) - \varrho_n\psi^2(1 - \eta_n(1-\psi))\big)\big)\|v_n - \tilde{q}\| \\ &= \big(1 - \xi_n\big(1 - \psi + \varrho_n\psi\big(1 - \psi(1 - \eta_n(1-\psi))\big)\big)\big)\|v_n - \tilde{q}\| \\ &\le (1 - \xi_n(1-\psi))\|v_n - \tilde{q}\| \le \cdots \le \|v_0 - \tilde{q}\|\prod_{k=0}^{n}(1 - \xi_k(1-\psi)). \end{aligned}$$

Using (3), we have

$$\begin{aligned} \|r_n - \tilde{q}\| &= \|(1-\xi_n)w_n + \xi_n Tw_n - \tilde{q}\| = \|(1-\xi_n)(w_n - \tilde{q}) + \xi_n(Tw_n - \tilde{q})\| \\ &\le (1-\xi_n)\|w_n - \tilde{q}\| + \xi_n\|Tw_n - \tilde{q}\| \le (1-\xi_n)\|w_n - \tilde{q}\| + \xi_n\psi\|w_n - \tilde{q}\| \\ &= (1 - \xi_n + \psi\xi_n)\|w_n - \tilde{q}\| = (1 - (1-\psi)\xi_n)\|w_n - \tilde{q}\|, \end{aligned}$$

and so

$\|s_n - \tilde{q}\| = \|Tr_n - \tilde{q}\| \le \psi\|r_n - \tilde{q}\| \le \psi(1 - (1-\psi)\xi_n)\|w_n - \tilde{q}\|.$

This implies that

$\|w_{n+1} - \tilde{q}\| = \|Ts_n - \tilde{q}\| \le \psi\|s_n - \tilde{q}\| \le \psi^2(1 - (1-\psi)\xi_n)\|w_n - \tilde{q}\|.$

Repetition of the aforementioned process gives the following inequality:

$\|w_{n+1} - \tilde{q}\| \le \|w_0 - \tilde{q}\|\,\psi^{2(n+1)}\prod_{k=0}^{n}(1 - (1-\psi)\xi_k).$

Define

$\rho_n \coloneqq \|w_0 - \tilde{q}\|\,\psi^{2(n+1)}\prod_{k=0}^{n}(1 - (1-\psi)\xi_k), \qquad \sigma_n \coloneqq \|v_0 - \tilde{q}\|\prod_{k=0}^{n}(1 - \xi_k(1-\psi)),$

$\theta_n \coloneqq \frac{\rho_n}{\sigma_n} = \frac{\|w_0 - \tilde{q}\|\,\psi^{2(n+1)}\prod_{k=0}^{n}(1 - (1-\psi)\xi_k)}{\|v_0 - \tilde{q}\|\prod_{k=0}^{n}(1 - \xi_k(1-\psi))}.$

Therefore,

$\lim_{n\to\infty}\frac{\theta_{n+1}}{\theta_n} = \lim_{n\to\infty}\frac{\psi^2(1 - (1-\psi)\xi_{n+1})}{1 - \xi_{n+1}(1-\psi)} = \psi^2 < 1.$

It thus follows from the well-known ratio test that $\sum_{n=0}^{\infty}\theta_n < \infty$. Hence, $\lim_{n\to\infty}\theta_n = 0$, which implies that $\{w_n\}$ converges faster than $\{v_n\}$. Consequently, the M-iteration (3) converges faster than the Noor iteration (1).

Finally, by the SP-iteration (2),

$$\begin{aligned} \|u_{n+1} - \tilde{q}\| &\le (1-\xi_n)\|q_n - \tilde{q}\| + \xi_n\|Tq_n - \tilde{q}\| \le (1-\xi_n)\|q_n - \tilde{q}\| + \xi_n\psi\|q_n - \tilde{q}\| \\ &= (1 - \xi_n(1-\psi))\|q_n - \tilde{q}\| \le (1 - \xi_n(1-\psi))\big((1-\varrho_n)\|h_n - \tilde{q}\| + \varrho_n\psi\|h_n - \tilde{q}\|\big) \\ &= (1 - \xi_n(1-\psi))(1 - \varrho_n(1-\psi))\|h_n - \tilde{q}\| \le (1 - \xi_n(1-\psi))\|h_n - \tilde{q}\| \\ &\le (1 - \xi_n(1-\psi))\big((1-\eta_n)\|u_n - \tilde{q}\| + \eta_n\psi\|u_n - \tilde{q}\|\big) = (1 - \xi_n(1-\psi))(1 - \eta_n(1-\psi))\|u_n - \tilde{q}\| \\ &\le (1 - \xi_n(1-\psi))\|u_n - \tilde{q}\| \le \cdots \le \|u_0 - \tilde{q}\|\prod_{k=0}^{n}(1 - \xi_k(1-\psi)). \end{aligned}$$

Define

$\rho_n \coloneqq \|w_0 - \tilde{q}\|\,\psi^{2(n+1)}\prod_{k=0}^{n}(1 - (1-\psi)\xi_k), \qquad \sigma_n \coloneqq \|u_0 - \tilde{q}\|\prod_{k=0}^{n}(1 - \xi_k(1-\psi)),$

$\theta_n \coloneqq \frac{\rho_n}{\sigma_n} = \frac{\|w_0 - \tilde{q}\|\,\psi^{2(n+1)}\prod_{k=0}^{n}(1 - (1-\psi)\xi_k)}{\|u_0 - \tilde{q}\|\prod_{k=0}^{n}(1 - \xi_k(1-\psi))}.$

By the same argument as before, we can show that the sequence generated by the M-iteration (3) converges faster than the SP-iteration (2). This completes the proof.□

Now, we will discuss a numerical experiment that supports our main results.

Example 4.7

Let $X = \mathbb{R}$, $C = [0, 2]$, and let $G = (V(G), E(G))$ be the directed graph defined by $V(G) = C$ and $(w, s) \in E(G)$ if and only if $0.50 \le w, s \le 1.70$ or $w = s \in C$. In this example, we present numerical results for three possible mappings. Define mappings $T_1, T_2, T_3 : C \to C$ by

$T_1 w = \frac{2}{3}\arcsin(w - 1) + 1, \qquad T_2 w = \frac{1}{3}\tan(w - 1) + 1, \qquad T_3 w = \sqrt{w},$

for any $w \in C$. It is easy to show that $T_1$, $T_2$, and $T_3$ are $G$-nonexpansive, but none of them is nonexpansive, because $|T_1 w - T_1 s| > 0.50 = |w - s|$, $|T_2 u - T_2 v| > 0.07 = |u - v|$, and $|T_3 p - T_3 q| > 0.45 = |p - q|$ when $w = 1.95$, $s = 1.45$, $u = 0.08$, $v = 0.01$, $p = 0.5$, and $q = 0.05$. Let

(25) $\xi_n = \frac{n+1}{5n+3}, \qquad \varrho_n = \frac{n}{4n^3+2}, \qquad \eta_n = \frac{n}{n+1}.$

Let $\{w_n\}$ be the sequence generated by the M-iteration (3), and let $\{v_n\}$ and $\{u_n\}$ be the sequences generated by the three-step Noor iteration (1) and the SP-iteration (2), respectively. Example 4.7 shows the convergence behavior of these three comparative methods with the operators $T_1$, $T_2$, and $T_3$. We choose $v_1 = u_1 = w_1 = 1.65$ and use the relative error $\frac{|\zeta_n - w^*|}{|w^*|} < 1.00 \times 10^{-8}$ as the stopping criterion, where $\{\zeta_n\}$ stands for each of the comparative sequences and $w^* = 1$ is the fixed point.
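As an illustration of this experiment, the following Python sketch (ours; the paper's code is in Matlab, and only the M-iteration applied to $T_1$ is reproduced here) implements the setting of Example 4.7:

```python
import numpy as np

def T1(w):
    # G-nonexpansive mapping of Example 4.7 with fixed point w* = 1
    return (2.0 / 3.0) * np.arcsin(w - 1.0) + 1.0

w, w_star = 1.65, 1.0
for n in range(1, 1000):
    xi = (n + 1) / (5 * n + 3)                 # control sequence from (25)
    r = (1 - xi) * w + xi * T1(w)              # M-iteration (3)
    w = T1(T1(r))
    if abs(w - w_star) / abs(w_star) < 1e-8:   # stopping rule of Example 4.7
        break
print(n, w)   # converges to the fixed point w* = 1
```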

All numerical experiments for the fixed-point solutions of $T_1$, $T_2$, and $T_3$ using the three-step Noor iteration, the SP-iteration, and the M-iteration are shown in Figures 1, 2, and 3.

Figure 1: Numerical solution of sequences generated by the three comparative methods with operators $T_1$, $T_2$, and $T_3$, respectively.

Figure 2: Relative error of sequences generated by the three comparative methods with operators $T_1$, $T_2$, and $T_3$, respectively.

Figure 3: Convergence comparison of sequences generated by the three comparative methods with operators $T_1$, $T_2$, and $T_3$, respectively.

Figures 1 and 2 show the numerical solutions and the relative error behavior of the three comparative methods with operators $T_1$, $T_2$, and $T_3$. It can be seen that all sequences generated by these three methods converge to $w^* = 1$, and the errors of the three methods decrease to zero as the number of iterations increases. Figure 3 shows the tendency of the asymptotic error constant $\nu$ for a sequence $\{\zeta_n\}$, computed from the ratio $\frac{|\zeta_{n+1} - 1|}{|\zeta_n - 1|}$, for the three-step Noor, SP, and M-iterations; in the sense of Definition 4.3, all three methods are linearly convergent.

The asymptotic error constants of the three comparative methods with operators $T_1$, $T_2$, and $T_3$ in Figure 3 show that the M-iteration has the smallest asymptotic error constant in all cases, and a smaller asymptotic error constant means faster convergence of the sequence under consideration.

Figure 4 shows that the M-iteration consumes the least amount of CPU time while producing results consistent with those obtained earlier. Evaluating the M-iteration further, we observe that changing only one parameter can improve the method's convergence rate. The relative errors and asymptotic error constants of the M-iteration under the control parameter $\eta_n$ with operators $T_1$, $T_2$, and $T_3$ are shown in Figures 5 and 6.

Figure 4: CPU times of sequences generated by the three comparative methods with operators $T_1$, $T_2$, and $T_3$, respectively.

Figure 5: Relative error of sequences generated by the M-iteration with operators $T_1$, $T_2$, and $T_3$ and controlled $\eta_n$.

Figure 6: Convergence comparison of sequences generated by the M-iteration with operators $T_1$, $T_2$, and $T_3$ and controlled $\eta_n$.

We note from these two figures that bringing the parameter $\eta_n$ closer to 1 enhances the efficiency of the proposed technique for each operator under consideration.

5 Simulated results for image deblurring and signal recovering problems

Now, we apply the proposed algorithm to solve image deblurring and signal recovering problems. All codes were written in Matlab and run on a laptop with an Intel Core i5 processor, 16.00 GB RAM, and Windows 10 (64-bit).

The minimization problem of the sum of two functions is to find a solution of

(26) $\min_{w \in \mathbb{R}^n} \{F(w) \coloneqq f(w) + h(w)\},$

where $h : \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ is a proper, convex, lower semicontinuous function and $f : \mathbb{R}^n \to \mathbb{R}$ is a convex differentiable function whose gradient $\nabla f$ is $L$-Lipschitz continuous for some $L > 0$. The solutions of (26) can be characterized by Fermat's rule (Theorem 16.3 of Bauschke and Combettes [32]) as follows:

$w$ is a minimizer of $f + h$ if and only if $0 \in \partial h(w) + \nabla f(w)$,

where $\partial h$ is the subdifferential of $h$ and $\nabla f$ is the gradient of $f$. The subdifferential of $h$ at $w$, denoted by $\partial h(w)$, is defined by

$\partial h(w) \coloneqq \{u : h(y) - h(w) \ge \langle u, y - w \rangle \text{ for all } y\}.$

It is also well known that the solutions of (26) are characterized by the following fixed-point problem:

$w$ is a minimizer of $f + h$ if and only if $w = \operatorname{prox}_{ch}(I - c\nabla f)(w)$,

where $c > 0$ and $\operatorname{prox}_h$ is the proximity operator of $h$, defined by $\operatorname{prox}_h(w) \coloneqq \operatorname{argmin}_s \left\{h(s) + \frac{1}{2}\|w - s\|_2^2\right\}$ (see [33] for more details). It is also known that $\operatorname{prox}_{ch}(I - c\nabla f)$ is a nonexpansive mapping when $c \in (0, 2/L)$.
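For the choice $h = \lambda\|\cdot\|_1$ used below, the proximity operator has the familiar closed form of componentwise soft thresholding, so the fixed-point map $\operatorname{prox}_{ch}(I - c\nabla f)$ is cheap to evaluate. A minimal Python sketch (ours):

```python
import numpy as np

def soft_threshold(x, tau):
    """prox of tau * ||.||_1: componentwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward_map(w, grad_f, c, lam):
    """T = prox_{c*lam*||.||_1}(I - c*grad_f); nonexpansive for c in (0, 2/L)."""
    return soft_threshold(w - c * grad_f(w), c * lam)
```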

5.1 Application to image deblurring problems

Let $B$ be the degraded image of the true image $W$, both in matrix form with $\tilde{m}$ rows and $\tilde{n}$ columns ($B, W \in \mathbb{R}^{\tilde{m} \times \tilde{n}}$). The key to obtaining the image restoration model is to rearrange the elements of the images $B$ and $W$ into column vectors by stacking the columns of these images into two long vectors $b = \operatorname{vec}(B)$ and $w = \operatorname{vec}(W)$, both of length $n = \tilde{m}\tilde{n}$. The image restoration problem can then be modeled in one-dimensional vector form by the linear equation system

(27) $b = Mw,$

where $w \in \mathbb{R}^n$ is the original image, $b \in \mathbb{R}^n$ is the observed image, and $M \in \mathbb{R}^{n \times n}$ is the blurring operator. To solve problem (27), we approximate the original image $w$ from the observed image $b$ by solving the following least-squares problem:

(28) $\min_{w} \frac{1}{2}\|b - Mw\|_2^2,$

where $\|\cdot\|_2$ is defined by $\|w\|_2 = \sqrt{\sum_{i=1}^{n} w_i^2}$. Taking $f(w)$ to be the objective function in (28), we apply our main results to the image deblurring problem (27) with the following setting.

Let $M \in \mathbb{R}^{n \times n}$ be a degraded (blurring) matrix and $b \in \mathbb{R}^n$. Applying the M-iteration (3), we obtain the following proposed method for finding the solution of the image deblurring problem:

(29) $r_n = (1-\eta_n)w_n + \eta_n(w_n - \mu M^T(Mw_n - b)), \quad s_n = r_n - \mu M^T(Mr_n - b), \quad w_{n+1} = s_n - \mu M^T(Ms_n - b),$

where $\mu \in \left(0, \frac{2}{\|M\|_2^2}\right)$ and $\{\eta_n\}$ is a sequence in $[\delta, 1-\delta]$ for all $n \in \mathbb{N}$ and for some $\delta \in (0,1)$. The proposed algorithm (29) is used to solve the image deblurring problem (27) with the default parameters (25) and $\mu = \frac{1}{\|M^TM\|_2}$. Then $\{w_n\}$ converges to its solution.
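A compact Python transliteration of scheme (29) (ours; it assumes the blurring matrix $M$ and the observed image $b$ are available as dense arrays, which is only practical for small images):

```python
import numpy as np

def deblur_m_iteration(M, b, n_iters=1000):
    """Scheme (29): M-iteration applied to T(w) = w - mu * M^T (M w - b)."""
    mu = 1.0 / np.linalg.norm(M.T @ M, 2)    # step size used in the text
    T = lambda w: w - mu * M.T @ (M @ w - b)
    w = np.zeros(M.shape[1])
    for n in range(n_iters):
        eta = (n + 1) / (5 * n + 3)          # eta_n from (25)
        r = (1 - eta) * w + eta * T(w)
        s = T(r)
        w = T(s)
    return w
```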

The goal of the image deblurring problem is to recover the original image from the observed image without knowing the blurring matrix; in applying algorithm (29), however, the blurring matrix $M$ must be known. The original RGB color image shown in Figure 7 is used to demonstrate the practicability of the proposed algorithm. The Cauchy and relative errors used in the stopping criteria of the proposed algorithm are defined by $\|w_n - w_{n-1}\| < 10^{-7}$ and $\frac{\|w_n - w\|}{\|w\|} < 10^{-2}$, respectively. The performance of the compared algorithms at $w_n$ in the image deblurring process is measured quantitatively by means of the peak signal-to-noise ratio (PSNR), defined by

$\operatorname{PSNR}(w_n) = 20\log_{10}\left(\frac{255^2}{\operatorname{MSE}}\right),$

where $\operatorname{MSE} = \|w_n - w\|_2^2$ (Figure 8).
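In code, this quality measure reads as follows (a sketch following the formula above; note that the classical PSNR definition divides the squared error by the number of pixels, whereas the formula above uses $\|w_n - w\|_2^2$ directly):

```python
import numpy as np

def psnr(w_n, w_true):
    """PSNR(w_n) = 20 * log10(255**2 / MSE) with MSE = ||w_n - w||_2^2, as above."""
    mse = float(np.sum((w_n - w_true) ** 2))
    return 20.0 * np.log10(255.0 ** 2 / mse)
```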

Figure 7: The original RGB image with matrix size $340 \times 354$.

Figure 8: The original RGB image degraded by the blurring matrices $M_G$, $M_O$, and $M_M$, respectively.

Next, we present the restoration of images that have been corrupted by the following blur types:

  1. Gaussian blur of filter size $9 \times 9$ with standard deviation $\sigma = 4$ (the original image degraded by the blurring matrix $M_G$).

  2. Out-of-focus (disk) blur with radius $\hat{r} = 6$ (the original image degraded by the blurring matrix $M_O$).

  3. Motion blur with motion length of 21 pixels (len = 21) and motion orientation $11^\circ$ ($\theta = 11^\circ$) (the original image degraded by the blurring matrix $M_M$).

The images $W$ and the three kinds of blurred images $B$ are represented by their red-green-blue components (see Figure 8). We denote by $W_r, W_g, W_b$ and $B_r, B_g, B_b$ the gray-scale images that constitute the red-green-blue (RGB) channels of the image $W$ and the blurred image $B$, respectively. Thus, we define the column vectors $w = [\operatorname{vec}(W_r); \operatorname{vec}(W_g); \operatorname{vec}(W_b)]$ and $b = [\operatorname{vec}(B_r); \operatorname{vec}(B_g); \operatorname{vec}(B_b)]$ from the color images $W$ and $B$, both of length $n = 3\tilde{m}\tilde{n}$. After that, we apply the proposed algorithm to obtain the solution of the deblurring problem with these three blurring matrices, as sketched below.
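In NumPy terms, this vectorization step can be sketched as follows (ours; `order="F"` stacks the columns of each channel, matching the $\operatorname{vec}(\cdot)$ operator):

```python
import numpy as np

def vec_rgb(img):
    """img: (m, n, 3) array -> w = [vec(W_r); vec(W_g); vec(W_b)] of length 3*m*n."""
    return np.concatenate([img[:, :, k].flatten(order="F") for k in range(3)])
```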

Figures 9, 10, and 11 show the RGB images reconstructed by the proposed algorithm for the deblurring problem with the three blurring matrices $M_G$, $M_O$, and $M_M$ at the 50th, 1,000th, and 20,000th iterations. It can be seen from these figures that the proposed algorithm yields quality improvements for all three types of degraded images.

Figure 9: The reconstructed images at the 50th, 1,000th, and 20,000th iterations for the RGB image degraded by the blurring matrix $M_G$.

Figure 10: The reconstructed images at the 50th, 1,000th, and 20,000th iterations for the RGB image degraded by the blurring matrix $M_O$.

Figure 11: The reconstructed images at the 50th, 1,000th, and 20,000th iterations for the RGB image degraded by the blurring matrix $M_M$.

Moreover, the behavior of the Cauchy error, the relative error, and the PSNR of the degraded RGB images over 100,000 iterations of the proposed algorithm is demonstrated in Figure 12. It is remarkable that the Cauchy and relative error plots of the proposed method decrease as the number of iterations increases; these plots show the validity of the method and confirm its convergence. The PSNR plots shown in Figure 12 also increase as the number of iterations increases. From these results, it can be concluded that the proposed algorithm produces quality improvements for the three types of original RGB images degraded by the blurring matrices $M_G$, $M_O$, and $M_M$.

Figure 12: The Cauchy norm, relative norm, and PSNR quality plots of the proposed algorithm in all cases of degraded RGB images.

5.2 Application to signal recovering problems

In signal processing, compressed sensing can be modeled as the following underdetermined linear equation system:

$y = Aw + \nu,$

where $w \in \mathbb{R}^n$ is an original signal with $n$ components to be recovered, $\nu \in \mathbb{R}^m$ is noise, $y \in \mathbb{R}^m$ is the observed (noisy) signal with $m$ components, and $A \in \mathbb{R}^{m \times n}$ is a degraded matrix. Finding solutions of this underdetermined linear system can be cast as solving the Lasso problem:

(30) $\min_{w \in \mathbb{R}^n} \frac{1}{2}\|y - Aw\|_2^2 + \lambda\|w\|_1,$

where $\lambda > 0$. Various techniques and iterative schemes have been developed to solve the Lasso problem. We can apply the minimization of the sum of two functions with our method (3) to the Lasso problem (30) by setting $Tw = \operatorname{prox}_{\lambda g}(w - \lambda\nabla f(w))$, where $f(w) = \frac{1}{2}\|y - Aw\|_2^2$, $g(w) = \lambda\|w\|_1$, and $\nabla f(w) = A^T(Aw - y)$. Applying the M-iteration (3), we obtain the following proposed method for finding the solution of the signal recovering problem (30):

(31) $r_n = (1-\eta_n)w_n + \eta_n \operatorname{prox}_{\lambda g}(w_n - \mu A^T(Aw_n - y)), \quad s_n = \operatorname{prox}_{\lambda g}(r_n - \mu A^T(Ar_n - y)), \quad w_{n+1} = \operatorname{prox}_{\lambda g}(s_n - \mu A^T(As_n - y)),$

where $\mu \in \left(0, \frac{2}{\|A\|_2^2}\right)$ and $\{\eta_n\}$ is a sequence in $[\delta, 1-\delta]$ for all $n \in \mathbb{N}$ and for some $\delta \in (0,1)$.
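A Python sketch of scheme (31) (ours; the proximity operator of $\lambda\|\cdot\|_1$ is soft thresholding, and we use the standard forward-backward threshold $\mu\lambda$, an assumption on how the prox step is discretized):

```python
import numpy as np

def recover_signal(A, y, lam=0.01, n_iters=2000):
    """Scheme (31): M-iteration with T(w) = prox(w - mu * A^T (A w - y))."""
    mu = 1.0 / np.linalg.norm(A.T @ A, 2)    # step size mu = 1 / ||A^T A||_2
    soft = lambda x, tau: np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
    T = lambda w: soft(w - mu * A.T @ (A @ w - y), mu * lam)
    w = np.zeros(A.shape[1])
    for n in range(n_iters):
        eta = (n + 1) / (5 * n + 3)          # eta_n from (25)
        r = (1 - eta) * w + eta * T(w)
        w = T(T(r))
    return w
```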

Next, some experiments are provided to illustrate the convergence and effectiveness of the proposed algorithm (31). The original signal $w$ with $n = 1{,}024$ components shown in Figure 13, generated from the uniform distribution on the interval $[-2, 2]$ with 70 nonzero elements, is used to create the observed signals $y_i = A_iw + \nu_i$, $i = 1, 2, 3$, with $m = 512$.

Figure 13: Original signal ($w$) with 70 nonzero elements.

The observed signals $y_i$ are shown in Figure 14.

Figure 14: Degraded signals $y_1$, $y_2$, and $y_3$, respectively.

The degradation matrices $A_i$ are generated by the normal distribution with mean zero and variance one, and the $\nu_i$ are white Gaussian noise, for all $i = 1, 2, 3$ (Figure 15).
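A minimal sketch of this test setup in Python; the random seed, the noise scale, and the regularization weight used afterwards are assumptions, since the article does not report them:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 1024, 512, 70              # signal length, measurements, nonzeros

# Sparse original signal: 70 nonzero entries, uniform on [-2, 2].
w = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
w[support] = rng.uniform(-2.0, 2.0, size=k)

# Degradation matrices from N(0, 1) and white Gaussian noise.
A = [rng.standard_normal((m, n)) for _ in range(3)]
nu = [0.01 * rng.standard_normal(m) for _ in range(3)]  # noise scale assumed
y = [A[i] @ w + nu[i] for i in range(3)]
```

With these data, a call such as `w_rec = m_iteration_lasso(A[0], y[0], lam=1e-3)` (the value of `lam` is again an assumption) would produce the kind of recovered signal compared in Figures 18–20.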

Figure 15
Noise signals $\nu_1$, $\nu_2$, and $\nu_3$, respectively.

The process is started with the initial signal data $w_0$, with $n = 1{,}024$ components picked randomly (Figure 16).

Figure 16
Initial signal $w_0$.

The parameters $\xi_n$, $\varrho_n$, and $\eta_n$ of the implemented algorithm (31) for solving the signal recovering problem are set as in equation (25), and $\mu = \frac{1}{\| A^{T} A \|_2}$. The Cauchy error and the relative signal error are measured by using the max-norm $\| w_n - w_{n-1} \|_\infty$ and $\frac{\| w_n - w \|}{\| w \|}$, respectively. The performance of the proposed method at the $n$th iteration is measured quantitatively by means of the signal-to-noise ratio (SNR), defined by:

$\mathrm{SNR}(w_n) = 20 \log_{10} \frac{\| w_n \|_2}{\| w_n - w \|_2},$

where $w_n$ is the signal recovered at the $n$th iteration by the proposed method. The Cauchy error, signal relative error, and SNR quality of the proposed methods for recovering the degraded signals are shown in Figure 17.
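These three quantities are straightforward to compute; a minimal sketch with illustrative function names:

```python
import numpy as np

def cauchy_error(w_n, w_prev):
    # Max-norm Cauchy error ||w_n - w_{n-1}||_inf between successive iterates.
    return np.max(np.abs(w_n - w_prev))

def relative_error(w_n, w_true):
    # Relative signal error ||w_n - w|| / ||w||.
    return np.linalg.norm(w_n - w_true) / np.linalg.norm(w_true)

def snr(w_n, w_true):
    # SNR(w_n) = 20 log10( ||w_n||_2 / ||w_n - w||_2 ), in dB.
    return 20.0 * np.log10(np.linalg.norm(w_n) / np.linalg.norm(w_n - w_true))
```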

Figure 17
Cauchy error, signal relative error, and SNR quality of the proposed methods in recovering the observed signal over 100,000 iterations.

It is remarkable that the Cauchy error of the proposed method decreases as the number of iterations increases, and the signal relative error decreases until it converges to a constant value. Thus, the Cauchy and relative error plots show the validity and confirm the convergence of the proposed methods. For the SNR quality plot, the SNR value increases until it converges to a constant value. From these results, it can be concluded that solving the signal recovering problem with the proposed algorithm improves the quality of the observed signal. Figures 18, 19, and 20 show the restored signals obtained by the proposed algorithms with the operator-noise pairs $A_i$ and $\nu_i$, $i = 1, 2, 3$.

Figure 18
Recovered signals based on SNR quality for the degraded signal with operator $A_1$ and noise $\nu_1$.

Figure 19
Recovered signals based on SNR quality for the degraded signal with operator $A_2$ and noise $\nu_2$.

Figure 20
Recovered signals based on SNR quality for the degraded signal with operator $A_3$ and noise $\nu_3$.

The improvement in SNR quality of the recovered signals at the 5,000th, 10,000th, and 20,000th iterations is also shown in Figures 18–20. It can be seen from these figures that the proposed algorithm improves the quality of the recovered signal for all three types of degraded signals.

6 Conclusion

In this article, we have proved weak and strong convergence theorems of the M-iteration method for $G$-nonexpansive mappings in a uniformly convex Banach space with a directed graph. Also, we have proved a weak convergence theorem without using Opial's condition (see Theorem 3.4). The conditions for convergence of the method are established by systematic proofs. The M-iteration algorithm was found to be faster than the Noor and SP iterations for the class of $G$-contraction mappings (see Theorem 4.6). A numerical example illustrating the performance of the suggested algorithm was provided. All numerical experiments for a fixed-point solution using the M-iteration, the three-step Noor iteration, and the SP-iteration methods with the three operators are shown in Figures 1–4. The M-iteration technique was shown to be more efficient than the three-step Noor iteration and the SP-iteration approaches. As applications, we applied the M-iteration algorithm to solve image deblurring problems (Figures 7–12) and to solve signal recovery problems in situations where the type of noise is unknown (Figures 13–17). We found that the M-iteration algorithm is flexible and performs well for common types of blur and noise in image deblurring and signal recovery problems.

Acknowledgement

The authors sincerely thank the anonymous reviewers for their valuable comments and suggestions that improved the original version of this article.

  1. Funding information: This project is funded by the National Research Council of Thailand (NRCT) and the University of Phayao, Grant No. N42A660382. Also, the authors acknowledge the partial support provided by the University of Phayao and Thailand Science Research and Innovation under Project FF66-UoE015.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Conflict of interest: The authors state no conflict of interest.

  4. Ethical approval: The conducted research is not related to either human or animal use.

  5. Data availability statement: Data sharing is not applicable to this article as no datasets were generated or analyzed during this study.

References

[1] S. Banach, Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales, Fund. Math. 3 (1922), 133–181. DOI: 10.4064/fm-3-1-133-181.

[2] J. Jachymski, The contraction principle for mappings on a metric space with a graph, Proc. Amer. Math. Soc. 136 (2008), no. 4, 1359–1373. DOI: 10.1090/S0002-9939-07-09110-1.

[3] R. P. Kelisky and T. J. Rivlin, Iterates of Bernstein polynomials, Pacific J. Math. 21 (1967), 511–520. DOI: 10.2140/pjm.1967.21.511.

[4] S. M. A. Aleomraninejad, S. Rezapour, and N. Shahzad, Some fixed-point results on a metric space with a graph, Topol. Appl. 159 (2012), 659–663. DOI: 10.1016/j.topol.2011.10.013.

[5] M. R. Alfuraidan and M. A. Khamsi, Fixed points of monotone nonexpansive mappings on a hyperbolic metric space with a graph, Fixed Point Theory Appl. 2015 (2015), 44. DOI: 10.1186/s13663-015-0294-5.

[6] M. R. Alfuraidan, Fixed points of monotone nonexpansive mappings with a graph, Fixed Point Theory Appl. 2015 (2015), 49. DOI: 10.1186/s13663-015-0299-0.

[7] J. Tiammee, A. Kaewkhao, and S. Suantai, On Browder's convergence theorem and Halpern iteration process for G-nonexpansive mappings in Hilbert spaces endowed with graphs, Fixed Point Theory Appl. 2015 (2015), 187. DOI: 10.1186/s13663-015-0436-9.

[8] O. Tripak, Common fixed-points of G-nonexpansive mappings on Banach spaces with a graph, Fixed Point Theory Appl. 2016 (2016), 87. DOI: 10.1186/s13663-016-0578-4.

[9] S. Khatoon and I. Uddin, Convergence analysis of modified Abbas iteration process for two G-nonexpansive mappings, Rend. Circ. Mat. Palermo II. Ser. 70 (2021), 31–44. DOI: 10.1007/s12215-020-00481-x.

[10] S. Khatoon, I. Uddin, J. Ali, and R. George, Common fixed-points of two G-nonexpansive mappings via a faster iteration procedure, J. Funct. Spaces 2021 (2021), Article ID 9913540, 8 pages. DOI: 10.1155/2021/9913540.

[11] M. A. Noor, New approximation schemes for general variational inequalities, J. Math. Anal. Appl. 251 (2000), no. 1, 217–229. DOI: 10.1006/jmaa.2000.7042.

[12] R. Glowinski and P. L. Tallec, Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, SIAM, Philadelphia, 1989. DOI: 10.1137/1.9781611970838.

[13] S. Haubruge, V. H. Nguyen, and J. J. Strodiot, Convergence analysis and applications of the Glowinski-Le Tallec splitting method for finding a zero of the sum of two maximal monotone operators, J. Optim. Theory Appl. 97 (1998), 645–673. DOI: 10.1023/A:1022646327085.

[14] W. Phuengrattana and S. Suantai, On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval, J. Comput. Appl. Math. 235 (2011), 3006–3014. DOI: 10.1016/j.cam.2010.12.022.

[15] K. Ullah and M. Arshad, Numerical reckoning fixed-points for Suzuki's generalized nonexpansive mappings via new iteration process, Filomat 32 (2018), no. 1, 187–196. DOI: 10.2298/FIL1801187U.

[16] F. Gursoy and V. Karakaya, A Picard-S hybrid type iteration method for solving a differential equation with retarded argument, arXiv (2014), no. 1403.2546v2, 1–16.

[17] R. P. Agarwal, D. O'Regan, and D. R. Sahu, Iterative construction of fixed-points of nearly asymptotically nonexpansive mappings, J. Nonlinear Convex Anal. 8 (2007), 61–79.

[18] C. Garodia and I. Uddin, A new iterative method for solving split feasibility problem, J. Appl. Anal. Comput. 10 (2020), no. 3, 986–1004. DOI: 10.11948/20190179.

[19] C. Garodia and I. Uddin, A new fixed-point algorithm for finding the solution of a delay differential equation, AIMS Mathematics 5 (2020), no. 4, 3182–3200. DOI: 10.3934/math.2020205.

[20] C. Garodia, I. Uddin, and S. H. Khan, Approximating common fixed-points by a new faster iteration process, Filomat 34 (2020), no. 6, 2047–2060. DOI: 10.2298/FIL2006047G.

[21] R. Johnsonbaugh, Discrete Mathematics, 7th edn, Prentice Hall, New Jersey, 1997.

[22] Z. Opial, Weak convergence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc. 73 (1967), 591–597. DOI: 10.1090/S0002-9904-1967-11761-0.

[23] H. F. Senter and W. G. Dotson, Approximating fixed-points of nonexpansive mappings, Proc. Amer. Math. Soc. 44 (1974), 375–380. DOI: 10.1090/S0002-9939-1974-0346608-8.

[24] S. Shahzad and R. Al-Dubiban, Approximating common fixed-points of nonexpansive mappings in Banach spaces, Georgian Math. J. 13 (2006), no. 3, 529–537. DOI: 10.1515/GMJ.2006.529.

[25] P. Sridarat, R. Suparaturatorn, S. Suantai, and Y. J. Cho, Convergence analysis of SP-iteration for G-nonexpansive mappings with directed graphs, Bull. Malays. Math. Sci. Soc. 42 (2019), 2361–2380. DOI: 10.1007/s40840-018-0606-0.

[26] J. Schu, Weak and strong convergence to fixed-points of asymptotically nonexpansive mappings, Bull. Aust. Math. Soc. 43 (1991), no. 1, 153–159. DOI: 10.1017/S0004972700028884.

[27] S. Suantai, Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings, J. Math. Anal. Appl. 311 (2005), 506–517. DOI: 10.1016/j.jmaa.2005.03.002.

[28] D. Yambangwai, S. Aunruean, and T. Thianwan, A new modified three-step iteration method for G-nonexpansive mappings in Banach spaces with a graph, Numer. Algor. 84 (2020), 537–565. DOI: 10.1007/s11075-019-00768-w.

[29] M. G. Sangago, Convergence of iterative schemes for nonexpansive mappings, Asian-European J. Math. 4 (2011), no. 4, 671–682. DOI: 10.1142/S1793557111000551.

[30] V. Berinde, Picard iteration converges faster than Mann iteration for a class of quasicontractive operators, Fixed Point Theory Appl. 2 (2004), 97–105. DOI: 10.1155/S1687182004311058.

[31] R. L. Burden and J. D. Faires, Numerical Analysis, 9th edn, Brooks/Cole Cengage Learning, Boston, 2010.

[32] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn, Springer, Berlin, 2017. DOI: 10.1007/978-3-319-48311-5.

[33] J. J. Moreau, Proximité et dualité dans un espace hilbertien, Bulletin de la Société Mathématique de France 93 (1965), 273–299. DOI: 10.24033/bsmf.1625.

Received: 2021-07-05
Revised: 2023-03-07
Accepted: 2023-04-28
Published Online: 2023-07-05

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
