Article Open Access

Investigating the modified UO-iteration process in Banach spaces by a digraph

  • Esra Yolacan
Published/Copyright: June 17, 2025

Abstract

Graph theory (GT) is a highly captivating area of study within applied mathematics. Over the last 50 years, the number of mathematicians contributing to research in this field has consistently risen. This surge in interest can largely be attributed to its vast potential for practical applications. It has found uses in various disciplines such as engineering, the physical sciences, genetics, computer science, sociology, operations research, and economics. Motivated by these developments, we introduce and analyze convergence theorems for the modified UO-iterative sequences applied to G-nonexpansive mappings in Banach spaces via a graph. In addition, the modified UO-iterative algorithm's applications to image deblurring and signal enhancement are demonstrated.

MSC 2010: 47H09; 47H10

1 Introduction

Fixed point theory is a dynamic and rapidly developing area of research, primarily due to its broad applicability in many fields. It focuses on results showing that, under specific conditions, a self-map on a set has at least one fixed point. The Banach contraction principle (BCP) [1] is arguably the most renowned result in metric fixed point theory, largely due to its straightforwardness and the convenience with which it can be applied in key areas of mathematics. Following that, the BCP has been generalized in a wide variety of ways. Jachymski [2] gave a new concept of G-contraction, demonstrating that it was a meaningful extension of the BCP within a metric space via a digraph. With this idea, he offered a straightforward proof of the Kelisky-Rivlin theorem [3]. Aleomraninejad et al. [4] developed an iterative approach for G-contraction and G-nonexpansive mappings in a metric space with a directed graph and showed that these mappings converge. Alfuraidan and Khamsi [5] proposed the concept of G-monotone nonexpansive multivalued mappings in the context of a hyperbolic metric space via a graph. Tiammee et al. [6] examined the convergence of the Halpern iteration process to the projection of the initial point onto the set of fixed points of a G-nonexpansive mapping in a Hilbert space with a directed graph and demonstrated the Browder convergence theorem for G-nonexpansive mappings. Tripak [7] introduced a two-step iterative method for G-nonexpansive mappings and provided both weak and strong convergence theorems for these methods in a Banach space equipped with a graph. Suparatulatorn et al. [8] presented and analyzed the modified S-iteration for G-nonexpansive mappings in a uniformly convex Banach space with a graph. Yambangwai et al. [9] established a modified three-step iterative technique to determine a common fixed point of three G-nonexpansive mappings and applied it to image deblurring and signal recovery problems with unknown noise types. They also compared its convergence speed with well-known iteration techniques. For further details, one can refer to the extensive literature on this topic [10–12]. Recently, Okeke et al. [13] constructed a new five-step iteration process known as the UO-iteration process and proved its convergence for a contraction mapping. Further, they applied this process to approximate solutions both of an oxygen diffusion model and of boundary value problems via Green's functions, with an example.

In [13], the authors studied the following iterative sequence to approximate a fixed point of $T$ under suitable conditions.

Definition 1.1

Let $L$ be a nonvoid convex subset of a Banach space $\Upsilon$. For any arbitrary $a_0 \in L$ and $n \ge 1$,

$$(1.1)\qquad \begin{aligned} e_n &= Ta_n,\\ d_n &= (1-\sigma_n^{2})e_n + \sigma_n^{2}Te_n,\\ c_n &= Td_n,\\ b_n &= (1-\sigma_n^{4})c_n + \sigma_n^{4}Tc_n,\\ a_{n+1} &= (1-\sigma_n^{5})b_n + \sigma_n^{5}Tb_n, \end{aligned}$$

where $\{\sigma_n^{2}\}, \{\sigma_n^{4}\}, \{\sigma_n^{5}\} \subset (0,1)$ and $T$ is a self-mapping.
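The five steps of (1.1) can be sketched in a few lines of Python; the contraction $T(x) = x/2 + 1$ and the constant parameter values below are illustrative assumptions, not choices made in [13].

```python
def uo_iteration(T, a0, sigma2, sigma4, sigma5, n_steps=50):
    """Run the five-step UO-iteration (1.1) for n_steps iterations."""
    a = a0
    for _ in range(n_steps):
        e = T(a)                                  # e_n = T a_n
        d = (1 - sigma2) * e + sigma2 * T(e)      # convex-combination step
        c = T(d)                                  # c_n = T d_n
        b = (1 - sigma4) * c + sigma4 * T(c)
        a = (1 - sigma5) * b + sigma5 * T(b)      # a_{n+1}
    return a

# Hypothetical contraction with unique fixed point 2.
T = lambda x: 0.5 * x + 1.0
approx = uo_iteration(T, a0=0.0, sigma2=0.5, sigma4=0.5, sigma5=0.5)
print(abs(approx - 2.0) < 1e-9)  # the iterates reach the fixed point
```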

Building on the aforementioned work, we devise a modified UO-iteration process for a G-nonexpansive mapping, generating the sequence $\{x_n\}$ as follows:

Definition 1.2

Let $L$ be a nonvoid convex subset of a Banach space $\Upsilon$. For any arbitrary $x_0 \in L$ and $n \ge 1$,

$$(1.2)\qquad \begin{aligned} r_n &= (1-\sigma_n^{1})x_n + \sigma_n^{1}Tx_n,\\ s_n &= (1-\sigma_n^{2})r_n + \sigma_n^{2}Tr_n,\\ t_n &= (1-\sigma_n^{3})s_n + \sigma_n^{3}Ts_n,\\ u_n &= (1-\sigma_n^{4})t_n + \sigma_n^{4}Tt_n,\\ x_{n+1} &= (1-\sigma_n^{5})u_n + \sigma_n^{5}Tu_n, \end{aligned}$$

where $\{\sigma_n^{j}\} \subset (0,1)$ for $j = 1,\ldots,5$ and $T : L \to L$ is a G-nonexpansive mapping.

If $\sigma_n^{1} = \sigma_n^{3} \equiv 1$ for $n \ge 1$, then (1.2) reduces to the UO-iteration (1.1) defined by Okeke et al. [13].
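Scheme (1.2) differs from (1.1) only in replacing the two bare applications of $T$ with damped (convex-combination) steps. A minimal Python sketch, with a hypothetical contraction standing in for a G-nonexpansive mapping:

```python
def modified_uo(T, x0, s1, s2, s3, s4, s5, n_steps=50):
    """Sketch of the modified UO-iteration (1.2): five convex-combination steps."""
    x = x0
    for _ in range(n_steps):
        r = (1 - s1) * x + s1 * T(x)
        s = (1 - s2) * r + s2 * T(r)
        t = (1 - s3) * s + s3 * T(s)
        u = (1 - s4) * t + s4 * T(t)
        x = (1 - s5) * u + s5 * T(u)   # x_{n+1}
    return x

# Hypothetical contraction with unique fixed point 2; parameters in (0,1).
T = lambda x: 0.5 * x + 1.0
print(abs(modified_uo(T, 0.0, 0.5, 0.5, 0.5, 0.5, 0.5) - 2.0) < 1e-9)
```

Formally setting s1 = s3 = 1 in this sketch turns the first and third steps into plain applications of T, which is exactly the reduction to (1.1) noted above.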

The remainder of this article is structured as follows: Section 2 focuses on preliminary definitions and lemmas. Section 3 presents the convergence analysis of (1.2) for a G-nonexpansive mapping in a Banach space with a graph. Sections 4 and 5 are devoted to the applications of image deblurring and signal enhancement, respectively. Finally, in Section 6, we provide a numerical example to support our main theorem and offer a comparative analysis of the convergence rates of the proposed iteration method and the UO-iteration.

2 Preliminaries

In this section, we bring together some familiar concepts and relevant conclusions that will be frequently used.

Let $L$ be a nonvoid subset of the real Banach space $\Upsilon$. Let the diagonal of the Cartesian product $L \times L$ be denoted by $\Delta = \{(\vartheta,\vartheta) : \vartheta \in L\}$. The set $E$ of edges of $G$ includes all loops, viz., $E \supseteq \Delta$, and the set $V$ of vertices of the digraph $G$ coincides with $L$. Given that $G$ contains no parallel edges, the graph $G$ can be characterized by the pair $(V,E)$. If $\vartheta$ and $\varpi$ are vertices of $G$, a path in $G$ of length $N$ from $\vartheta$ to $\varpi$ is a sequence $\{\vartheta_i\}_{i=0}^{N}$ of $N+1$ vertices such that $\vartheta = \vartheta_0$, $\varpi = \vartheta_N$, and $(\vartheta_{i-1},\vartheta_i) \in E$ for $i = 1,\ldots,N$. A graph $G$ is called connected if there exists a path between every pair of vertices of $G$.
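The notions of path and connectedness above are easy to make concrete. Below is a small Python sketch with a hypothetical three-vertex digraph whose edge set contains all loops ($E \supseteq \Delta$), checking reachability by breadth-first search.

```python
from collections import deque

def has_path(E, start, goal):
    """Return True iff the digraph with edge set E has a directed path
    from start to goal (trivially True when start == goal)."""
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:
            return True
        for (a, b) in E:
            if a == v and b not in seen:
                seen.add(b)
                queue.append(b)
    return False

V = {0, 1, 2}
E = {(v, v) for v in V} | {(0, 1), (1, 2)}   # all loops plus 0 -> 1 -> 2
print(has_path(E, 0, 2), has_path(E, 2, 0))  # a path exists one way only
```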

Definition 2.1

[2] A digraph G is considered transitive if, for every ϑ , ϖ , ϱ V , where ( ϑ , ϖ ) and ( ϖ , ϱ ) are edges in E , it holds that ( ϑ , ϱ ) is also an edge in E .

Definition 2.2

[5] A mapping $T : L \to L$ is said to be G-nonexpansive if $T$ satisfies the following conditions:

  1. $T$ preserves the edges of $G$, viz., $(\vartheta,\varpi) \in E \Rightarrow (T\vartheta, T\varpi) \in E$,

  2. $T$ preserves or reduces the weights of the edges of $G$ in the following manner:

    $(\vartheta,\varpi) \in E \Rightarrow \|T\vartheta - T\varpi\| \le \|\vartheta - \varpi\|$.
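As a quick numerical illustration of this definition, the following Python sketch checks both conditions on a sampled grid, using the hypothetical choices $T\vartheta = \vartheta/2$ on $L = [0,3]$ and the edge relation $(\vartheta,\varpi) \in E$ iff both points lie in $[0,2]$ or $\vartheta = \varpi$ (an edge set of the same flavor as the one used later in Example 6.3).

```python
import itertools

def in_E(x, y):
    """Edge relation on L = [0, 3]: an edge iff both lie in [0, 2] or x == y."""
    return (0 <= x <= 2 and 0 <= y <= 2) or x == y

T = lambda x: x / 2  # hypothetical mapping to test

grid = [i * 0.25 for i in range(13)]  # sample points 0, 0.25, ..., 3
edges = [(x, y) for x, y in itertools.product(grid, grid) if in_E(x, y)]
edge_preserving = all(in_E(T(x), T(y)) for x, y in edges)
shrinks_on_edges = all(abs(T(x) - T(y)) <= abs(x - y) for x, y in edges)
print(edge_preserving and shrinks_on_edges)  # both conditions of the definition hold
```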

Definition 2.3

[8] Let ϑ 0 V and Ω a subset of V . We declare that

  1. $\Omega$ is dominated by $\vartheta_0$ if $(\vartheta_0,\vartheta) \in E$ for every $\vartheta \in \Omega$,

  2. $\Omega$ dominates $\vartheta_0$ if $(\vartheta,\vartheta_0) \in E$ for every $\vartheta \in \Omega$.

From this point on, $\to$ (resp. $\rightharpoonup$) will be used to indicate strong (resp. weak) convergence.

Definition 2.4

[14] A Banach space $\Upsilon$ is said to satisfy Opial's property if, for every sequence $\{\vartheta_n\} \subset \Upsilon$ with $\vartheta_n \rightharpoonup \vartheta \in \Upsilon$, the inequality $\limsup_{n\to\infty}\|\vartheta_n - \vartheta\| < \limsup_{n\to\infty}\|\vartheta_n - \varpi\|$ holds for each $\varpi \ne \vartheta$ in $\Upsilon$.

Definition 2.5

[8] Let $L$ be a nonvoid subset of a Banach space $\Upsilon$ and let $T : L \to \Upsilon$ be a mapping. Then $T$ is said to be G-demiclosed at $\tau \in \Upsilon$ if, for any sequence $\{\vartheta_n\} \subset \Upsilon$ such that $\vartheta_n \rightharpoonup \vartheta \in \Upsilon$, $T\vartheta_n \to \tau$, and $(\vartheta_n,\vartheta_{n+1}) \in E$, it follows that $T\vartheta = \tau$.

Definition 2.6

[8] Let $L$ be a nonvoid subset of a normed space $\Upsilon$ and let $G = (V,E)$ be a digraph with $V = L$. We say that $L$ has property wg (resp. sg) if, for every sequence $\{\vartheta_n\}$ in $\Upsilon$ converging weakly (resp. strongly) to $\vartheta \in L$ with $(\vartheta_n,\vartheta_{n+1}) \in E$, there exists a subsequence $\{\vartheta_{n_l}\}$ of $\{\vartheta_n\}$ such that $(\vartheta_{n_l},\vartheta) \in E$ for every $l \in \mathbb{N}$.

Definition 2.7

[15] Let $L$ be a nonvoid subset of a normed space $\Upsilon$, and let $T : L \to L$ be a mapping. Then $T$ is said to be semi-compact if, for every $\{\vartheta_n\} \subset L$ with $\lim_{n\to\infty}\|\vartheta_n - T\vartheta_n\| = 0$, there exists a subsequence $\{\vartheta_{n_l}\}$ of $\{\vartheta_n\}$ such that $\vartheta_{n_l} \to \vartheta \in L$.

Lemma 2.8

[16] Let $\{\vartheta_n\}$ and $\{\tau_n\}$ be two sequences of nonnegative real numbers satisfying the inequality $\vartheta_{n+1} \le \vartheta_n + \tau_n$ for every $n \ge 1$. If $\sum_{n=1}^{\infty}\tau_n < \infty$, then $\lim_{n\to\infty}\vartheta_n$ exists.

Lemma 2.9

[17] Let $\Upsilon$ be a uniformly convex Banach space, and let $m$ and $n$ be two constants with $0 < m < n < 1$. Assume that $\{a_n\} \subset [m,n]$ is a real sequence and $\{\kappa_n\}, \{\tau_n\} \subset \Upsilon$. Then the conditions

$$\lim_{n\to\infty}\|a_n \kappa_n + (1-a_n)\tau_n\| = e, \qquad \limsup_{n\to\infty}\|\kappa_n\| \le e, \qquad \limsup_{n\to\infty}\|\tau_n\| \le e$$

imply that $\lim_{n\to\infty}\|\kappa_n - \tau_n\| = 0$; here, $e \ge 0$ is a constant.

Lemma 2.10

[18] Let $\Upsilon$ be a Banach space in which Opial's property holds, and let $\{\vartheta_n\} \subset \Upsilon$ be a sequence. Let $\vartheta, \varpi \in \Upsilon$ be such that $\lim_{n\to\infty}\|\vartheta_n - \vartheta\|$ and $\lim_{n\to\infty}\|\vartheta_n - \varpi\|$ exist. If $\{\vartheta_{n_l}\}$ and $\{\vartheta_{n_k}\}$ are subsequences of $\{\vartheta_n\}$ that converge weakly to $\vartheta$ and $\varpi$, respectively, then $\vartheta = \varpi$.

3 Main results

Unless specified otherwise, we assume throughout this section that $L$ is a nonvoid closed convex subset of a Banach space $\Upsilon$ with a digraph $G = (V,E)$ such that $V = L$ and $E$ is convex. We further assume that the graph $G$ is transitive. We denote the set of all fixed points of $T$ by $F$. The map $T : L \to L$ is a G-nonexpansive mapping with $F \ne \emptyset$. For arbitrary $x_0 \in L$, let $\{x_n\}$ be the sequence generated by (1.2).

Proposition 3.1

Let υ 0 F be such that ( x 0 , υ 0 ) and ( υ 0 , x 0 ) are in E . Then ( x n , υ 0 ) , ( υ 0 , x n ) , ( r n , υ 0 ) , ( υ 0 , r n ) , ( s n , υ 0 ) , ( υ 0 , s n ) , ( t n , υ 0 ) , ( υ 0 , t n ) , ( u n , υ 0 ) , ( υ 0 , u n ) , ( x n , r n ) , ( r n , x n ) , ( x n , s n ) , ( s n , x n ) , ( t n , x n ) , ( x n , t n ) , ( u n , x n ) , ( x n , u n ) , and ( x n , x n + 1 ) are in E .

Proof

We proceed by induction. Since $T$ is edge-preserving and $(x_0,\upsilon_0) \in E$, we have $(Tx_0,\upsilon_0) \in E$, and so

$$(r_0,\upsilon_0) = ((1-\sigma_0^{1})x_0 + \sigma_0^{1}Tx_0,\ \upsilon_0) = (1-\sigma_0^{1})(x_0,\upsilon_0) + \sigma_0^{1}(Tx_0,\upsilon_0) \in E \quad (\text{by the convexity of } E).$$

Since $T$ is edge-preserving and $(r_0,\upsilon_0) \in E$, we know $(Tr_0,\upsilon_0) \in E$. Note that

$$(s_0,\upsilon_0) = ((1-\sigma_0^{2})r_0 + \sigma_0^{2}Tr_0,\ \upsilon_0) = (1-\sigma_0^{2})(r_0,\upsilon_0) + \sigma_0^{2}(Tr_0,\upsilon_0) \in E \quad (\text{by the convexity of } E).$$

As $T$ is edge-preserving and $(s_0,\upsilon_0) \in E$, we obtain $(Ts_0,\upsilon_0) \in E$, and thus

$$(t_0,\upsilon_0) = ((1-\sigma_0^{3})s_0 + \sigma_0^{3}Ts_0,\ \upsilon_0) = (1-\sigma_0^{3})(s_0,\upsilon_0) + \sigma_0^{3}(Ts_0,\upsilon_0) \in E \quad (\text{by the convexity of } E).$$

By the edge-preservation of $T$ and $(t_0,\upsilon_0) \in E$, we acquire $(Tt_0,\upsilon_0) \in E$. Keep in mind that

$$(u_0,\upsilon_0) = ((1-\sigma_0^{4})t_0 + \sigma_0^{4}Tt_0,\ \upsilon_0) = (1-\sigma_0^{4})(t_0,\upsilon_0) + \sigma_0^{4}(Tt_0,\upsilon_0) \in E \quad (\text{by the convexity of } E).$$

Since $T$ is edge-preserving and $(u_0,\upsilon_0) \in E$, we obtain $(Tu_0,\upsilon_0) \in E$, and hence

$$(x_1,\upsilon_0) = ((1-\sigma_0^{5})u_0 + \sigma_0^{5}Tu_0,\ \upsilon_0) = (1-\sigma_0^{5})(u_0,\upsilon_0) + \sigma_0^{5}(Tu_0,\upsilon_0) \in E \quad (\text{by the convexity of } E).$$

Continuing in this fashion with $(x_1,\upsilon_0)$ in place of $(x_0,\upsilon_0)$, we obtain $(r_1,\upsilon_0), (s_1,\upsilon_0), (t_1,\upsilon_0), (u_1,\upsilon_0) \in E$. Suppose that $(x_j,\upsilon_0) \in E$ for some $j \ge 1$. Since $T$ is edge-preserving and $(x_j,\upsilon_0) \in E$, we have $(Tx_j,\upsilon_0) \in E$, and so $(r_j,\upsilon_0) \in E$ by the convexity of $E$. Since $T$ is edge-preserving and $(r_j,\upsilon_0) \in E$, we know $(Tr_j,\upsilon_0) \in E$; hence $(s_j,\upsilon_0) \in E$ by the convexity of $E$. Since $T$ is edge-preserving and $(s_j,\upsilon_0) \in E$, we obtain $(Ts_j,\upsilon_0) \in E$; by the convexity of $E$, $(t_j,\upsilon_0) \in E$. From the edge-preservation of $T$ and $(t_j,\upsilon_0) \in E$, we obtain $(Tt_j,\upsilon_0) \in E$; by the convexity of $E$, $(u_j,\upsilon_0) \in E$. Since $T$ is edge-preserving and $(u_j,\upsilon_0) \in E$, we obtain $(Tu_j,\upsilon_0) \in E$, and so $(x_{j+1},\upsilon_0) \in E$ by the convexity of $E$. Repeating the procedure for $(x_{j+1},\upsilon_0) \in E$, we obtain $(r_{j+1},\upsilon_0), (s_{j+1},\upsilon_0), (t_{j+1},\upsilon_0), (u_{j+1},\upsilon_0) \in E$. Thus, $(x_n,\upsilon_0), (r_n,\upsilon_0), (s_n,\upsilon_0), (t_n,\upsilon_0), (u_n,\upsilon_0) \in E$ for all $n \ge 1$. By an analogous argument, we conclude that $(\upsilon_0,x_n), (\upsilon_0,r_n), (\upsilon_0,s_n), (\upsilon_0,t_n), (\upsilon_0,u_n) \in E$ for all $n \ge 1$. Since the graph $G$ is transitive, we infer that $(x_n,r_n), (r_n,x_n), (x_n,s_n), (s_n,x_n), (t_n,x_n), (x_n,t_n), (u_n,x_n), (x_n,u_n)$, and $(x_n,x_{n+1}) \in E$.□

Lemma 3.2

Let $L$ be a nonvoid closed convex subset of a real uniformly convex Banach space $\Upsilon$. Suppose that $\{\sigma_n^{1}\}, \{\sigma_n^{2}\}, \{\sigma_n^{3}\}, \{\sigma_n^{4}\}$, and $\{\sigma_n^{5}\}$ are real sequences in $[\xi, 1-\xi]$ for some $\xi \in (0,1)$, and that $(x_0,\upsilon_0), (\upsilon_0,x_0) \in E$ for arbitrary $x_0 \in L$ and $\upsilon_0 \in F$. Then

  1. $\lim_{n\to\infty}\|x_n - \upsilon_0\|$ exists;

  2. $\|x_n - Tx_n\| \to 0$ as $n \to \infty$.

Proof

(i) By Proposition 3.1, ( x n , υ 0 ) , ( υ 0 , x n ) , ( r n , υ 0 ) , ( υ 0 , r n ) , ( s n , υ 0 ) , ( υ 0 , s n ) , ( t n , υ 0 ) , ( υ 0 , t n ) , ( u n , υ 0 ) , ( υ 0 , u n ) , ( x n , r n ) , ( r n , x n ) , ( x n , s n ) , ( s n , x n ) , ( t n , x n ) , ( x n , t n ) , ( u n , x n ) , ( x n , u n ) , and ( x n , x n + 1 ) are in E . It follows from (1.2) and G -nonexpansiveness of T that

$$(3.1)\qquad \|r_n - \upsilon_0\| \le (1-\sigma_n^{1})\|x_n - \upsilon_0\| + \sigma_n^{1}\|Tx_n - \upsilon_0\| \le \|x_n - \upsilon_0\|,$$

$$(3.2)\qquad \|s_n - \upsilon_0\| \le (1-\sigma_n^{2})\|r_n - \upsilon_0\| + \sigma_n^{2}\|Tr_n - \upsilon_0\| \le \|r_n - \upsilon_0\| \le \|x_n - \upsilon_0\| \quad (\text{by } (3.1)),$$

$$(3.3)\qquad \|t_n - \upsilon_0\| \le (1-\sigma_n^{3})\|s_n - \upsilon_0\| + \sigma_n^{3}\|Ts_n - \upsilon_0\| \le \|s_n - \upsilon_0\| \le \|x_n - \upsilon_0\| \quad (\text{by } (3.2)),$$

$$(3.4)\qquad \|u_n - \upsilon_0\| \le (1-\sigma_n^{4})\|t_n - \upsilon_0\| + \sigma_n^{4}\|Tt_n - \upsilon_0\| \le \|t_n - \upsilon_0\| \le \|x_n - \upsilon_0\| \quad (\text{by } (3.3)),$$

and

$$(3.5)\qquad \|x_{n+1} - \upsilon_0\| \le (1-\sigma_n^{5})\|u_n - \upsilon_0\| + \sigma_n^{5}\|Tu_n - \upsilon_0\| \le \|u_n - \upsilon_0\| \le \|x_n - \upsilon_0\| \quad (\text{by } (3.4)).$$

By Lemma 2.8, $\lim_{n\to\infty}\|x_n - \upsilon_0\|$ exists. In particular, the sequence $\{x_n\}$ is bounded.

(ii) Suppose that $\lim_{n\to\infty}\|x_n - \upsilon_0\| = e$. If $e = 0$, the claim is obvious, so assume $e > 0$. Taking the limit superior on both sides of the inequalities (3.1)–(3.4), we obtain

$$(3.6)\qquad \limsup_{n\to\infty}\|r_n - \upsilon_0\| \le \limsup_{n\to\infty}\|x_n - \upsilon_0\| = e,$$

$$(3.7)\qquad \limsup_{n\to\infty}\|s_n - \upsilon_0\| \le \limsup_{n\to\infty}\|x_n - \upsilon_0\| = e,$$

$$(3.8)\qquad \limsup_{n\to\infty}\|t_n - \upsilon_0\| \le \limsup_{n\to\infty}\|x_n - \upsilon_0\| = e,$$

$$(3.9)\qquad \limsup_{n\to\infty}\|u_n - \upsilon_0\| \le \limsup_{n\to\infty}\|x_n - \upsilon_0\| = e.$$

In addition, by the G-nonexpansiveness of $T$, $\|Tz_n - \upsilon_0\| = \|Tz_n - T\upsilon_0\| \le \|z_n - \upsilon_0\|$ for $z_n \in \{x_n, r_n, s_n, t_n, u_n\}$; taking the limit superior on both sides and using (3.6)–(3.9), we have

$$(3.10)\qquad \limsup_{n\to\infty}\|Tr_n - \upsilon_0\| \le \limsup_{n\to\infty}\|r_n - \upsilon_0\| \le e,$$

$$(3.11)\qquad \limsup_{n\to\infty}\|Tx_n - \upsilon_0\| \le \limsup_{n\to\infty}\|x_n - \upsilon_0\| = e,$$

$$(3.12)\qquad \limsup_{n\to\infty}\|Ts_n - \upsilon_0\| \le \limsup_{n\to\infty}\|s_n - \upsilon_0\| \le e,$$

$$(3.13)\qquad \limsup_{n\to\infty}\|Tt_n - \upsilon_0\| \le \limsup_{n\to\infty}\|t_n - \upsilon_0\| \le e,$$

$$(3.14)\qquad \limsup_{n\to\infty}\|Tu_n - \upsilon_0\| \le \limsup_{n\to\infty}\|u_n - \upsilon_0\| \le e.$$

Since $\lim_{n\to\infty}\|x_{n+1} - \upsilon_0\| = e$, letting $n \to \infty$ in the inequality (3.5), we obtain

$$(3.15)\qquad \lim_{n\to\infty}\|(1-\sigma_n^{5})(u_n - \upsilon_0) + \sigma_n^{5}(Tu_n - \upsilon_0)\| = e.$$

Due to (3.9), (3.14), (3.15), and Lemma 2.9, we have

$$(3.16)\qquad \lim_{n\to\infty}\|u_n - Tu_n\| = 0.$$

Moreover, we see that

$$(3.17)\qquad \|x_{n+1} - \upsilon_0\| \le (1-\sigma_n^{5})\|u_n - \upsilon_0\| + \sigma_n^{5}\|Tu_n - \upsilon_0\| \le (1-\sigma_n^{5})\|u_n - \upsilon_0\| + \sigma_n^{5}(\|Tu_n - u_n\| + \|u_n - \upsilon_0\|) = \|u_n - \upsilon_0\| + \sigma_n^{5}\|Tu_n - u_n\|.$$

By using (3.16) and (3.17), we obtain

$$(3.18)\qquad \liminf_{n\to\infty}\|u_n - \upsilon_0\| \ge e,$$

so that, by (3.9) and (3.18),

$$(3.19)\qquad \lim_{n\to\infty}\|u_n - \upsilon_0\| = e.$$

Note that (3.19) can be rewritten as

$$(3.20)\qquad \lim_{n\to\infty}\|(1-\sigma_n^{4})(t_n - \upsilon_0) + \sigma_n^{4}(Tt_n - \upsilon_0)\| = e.$$

Due to (3.8), (3.13), (3.20), and Lemma 2.9, we have

$$(3.21)\qquad \lim_{n\to\infty}\|t_n - Tt_n\| = 0.$$

Further, we see that

$$(3.22)\qquad \|u_n - \upsilon_0\| \le (1-\sigma_n^{4})\|t_n - \upsilon_0\| + \sigma_n^{4}\|Tt_n - \upsilon_0\| \le (1-\sigma_n^{4})\|t_n - \upsilon_0\| + \sigma_n^{4}(\|Tt_n - t_n\| + \|t_n - \upsilon_0\|) = \|t_n - \upsilon_0\| + \sigma_n^{4}\|Tt_n - t_n\|.$$

Using (3.21) and (3.22),

$$(3.23)\qquad \liminf_{n\to\infty}\|t_n - \upsilon_0\| \ge e,$$

which, combined with (3.8), gives

$$(3.24)\qquad \lim_{n\to\infty}\|t_n - \upsilon_0\| = e.$$

Note that (3.24) can be rewritten as

$$(3.25)\qquad \lim_{n\to\infty}\|(1-\sigma_n^{3})(s_n - \upsilon_0) + \sigma_n^{3}(Ts_n - \upsilon_0)\| = e.$$

Owing to (3.7), (3.12), (3.25), and Lemma 2.9, we have

$$(3.26)\qquad \lim_{n\to\infty}\|s_n - Ts_n\| = 0.$$

Furthermore, we see that

$$(3.27)\qquad \|t_n - \upsilon_0\| \le (1-\sigma_n^{3})\|s_n - \upsilon_0\| + \sigma_n^{3}\|Ts_n - \upsilon_0\| \le (1-\sigma_n^{3})\|s_n - \upsilon_0\| + \sigma_n^{3}(\|s_n - Ts_n\| + \|s_n - \upsilon_0\|) = \|s_n - \upsilon_0\| + \sigma_n^{3}\|s_n - Ts_n\|.$$

Using (3.26) and (3.27),

$$(3.28)\qquad \liminf_{n\to\infty}\|s_n - \upsilon_0\| \ge e,$$

which, combined with (3.7), gives

$$(3.29)\qquad \lim_{n\to\infty}\|s_n - \upsilon_0\| = e.$$

Note that (3.29) can be rewritten as

$$(3.30)\qquad \lim_{n\to\infty}\|(1-\sigma_n^{2})(r_n - \upsilon_0) + \sigma_n^{2}(Tr_n - \upsilon_0)\| = e.$$

Owing to (3.6), (3.10), (3.30), and Lemma 2.9, we have

$$(3.31)\qquad \lim_{n\to\infty}\|r_n - Tr_n\| = 0.$$

Moreover, we see that

$$(3.32)\qquad \|s_n - \upsilon_0\| \le (1-\sigma_n^{2})\|r_n - \upsilon_0\| + \sigma_n^{2}\|Tr_n - \upsilon_0\| \le (1-\sigma_n^{2})\|r_n - \upsilon_0\| + \sigma_n^{2}(\|r_n - Tr_n\| + \|r_n - \upsilon_0\|) = \|r_n - \upsilon_0\| + \sigma_n^{2}\|r_n - Tr_n\|.$$

By using (3.31) and (3.32),

$$(3.33)\qquad \liminf_{n\to\infty}\|r_n - \upsilon_0\| \ge e,$$

which, combined with (3.6), gives

$$(3.34)\qquad \lim_{n\to\infty}\|r_n - \upsilon_0\| = e.$$

Note that (3.34) can be rewritten as

$$(3.35)\qquad \lim_{n\to\infty}\|(1-\sigma_n^{1})(x_n - \upsilon_0) + \sigma_n^{1}(Tx_n - \upsilon_0)\| = e.$$

Due to (3.11), (3.35), and Lemma 2.9, we have

$$(3.36)\qquad \lim_{n\to\infty}\|x_n - Tx_n\| = 0.\qquad\square$$

Proposition 3.3

Assume that $\Upsilon$ is a Banach space satisfying Opial's property, $L$ has property wg, and $T : L \to L$ is a G-nonexpansive mapping. Then $I - T$ is G-demiclosed at 0.

Proof

Suppose that $\{x_n\}$ is a sequence in $L$ that converges weakly to $\vartheta$ with $(x_n,x_{n+1}) \in E$ and $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$. By property wg, there is a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $(x_{n_j},\vartheta) \in E$ for all $j \in \mathbb{N}$. Suppose, for contradiction, that $T\vartheta \ne \vartheta$. By Opial's property,

$$\limsup_{j\to\infty}\|x_{n_j} - \vartheta\| < \limsup_{j\to\infty}\|x_{n_j} - T\vartheta\| \le \limsup_{j\to\infty}(\|x_{n_j} - Tx_{n_j}\| + \|Tx_{n_j} - T\vartheta\|) \le \limsup_{j\to\infty}\|x_{n_j} - \vartheta\|.$$

This is a contradiction. Therefore, $T\vartheta = \vartheta$.□

Theorem 3.4

Assume that $\Upsilon$ is a uniformly convex Banach space satisfying Opial's property, $L$ has property wg, and $\{\sigma_n^{1}\}, \{\sigma_n^{2}\}, \{\sigma_n^{3}\}, \{\sigma_n^{4}\}, \{\sigma_n^{5}\} \subset [\xi, 1-\xi]$ for some $\xi \in (0,1)$. If $(x_0,\upsilon_0), (\upsilon_0,x_0) \in E$ for some $\upsilon_0 \in F$, then $\{x_n\}$ converges weakly to a fixed point of $T$.

Proof

Let $\upsilon_0 \in F$ be such that $(x_0,\upsilon_0), (\upsilon_0,x_0) \in E$. As in Lemma 3.2 (i), it follows that $\{x_n\}$ is bounded. Let $\{x_{n_j}\}$ and $\{x_{n_l}\}$ be subsequences of $\{x_n\}$ with weak limits $\vartheta_1$ and $\vartheta_2$, respectively. By Lemma 3.2 (ii), we also obtain $\lim_{j\to\infty}\|Tx_{n_j} - x_{n_j}\| = 0$ and $\lim_{l\to\infty}\|Tx_{n_l} - x_{n_l}\| = 0$. From Proposition 3.3, we conclude that $T\vartheta_1 = \vartheta_1$ and $T\vartheta_2 = \vartheta_2$, and thus $\vartheta_1, \vartheta_2 \in F$. From Lemma 2.10, we obtain $\vartheta_1 = \vartheta_2$. Therefore, $\{x_n\}$ converges weakly to a fixed point of $T$.□

Theorem 3.5

Assume that $\Upsilon$ is a uniformly convex Banach space, $L$ has property sg, $\{\sigma_n^{1}\}, \{\sigma_n^{2}\}, \{\sigma_n^{3}\}, \{\sigma_n^{4}\}, \{\sigma_n^{5}\} \subset [\xi, 1-\xi]$ for some $\xi \in (0,1)$, $T$ is semi-compact, $F$ is dominated by $x_0$, and $F$ dominates $x_0$. Then $\{x_n\}$ converges strongly to a fixed point of $T$.

Proof

By Lemma 3.2 (ii), we know that $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$. As $T$ is semi-compact, there exist $\vartheta \in L$ and a subsequence $\{x_{n_l}\}$ of $\{x_n\}$ such that $\lim_{l\to\infty}\|x_{n_l} - \vartheta\| = 0$. Since $L$ has property sg and $G$ is transitive, we have $(x_{n_l},\vartheta) \in E$. Then

$$\|\vartheta - T\vartheta\| \le \|\vartheta - x_{n_l}\| + \|x_{n_l} - Tx_{n_l}\| + \|Tx_{n_l} - T\vartheta\| \to 0 \quad \text{as } l \to \infty.$$

Hence $\vartheta \in F$, so that $\lim_{n\to\infty}\|x_n - \vartheta\|$ exists by Lemma 3.2 (i). Thus, $x_n \to \vartheta$ as $n \to \infty$.□

4 Image deblurring

Image deblurring is a well-established challenge in low-level computer vision, which has drawn considerable interest from both the image processing and computer vision fields. The goal of image deblurring is to restore a clear image from a blurred input, with the blur often resulting from factors such as insufficient focus, camera shake, or rapid motion of the subject [19–21]. Image deblurring often relies on iterative algorithms as a standard approach. On the basis of these insights, we devised an iterative technique to deblur (sharpen) a blurred image by combining the powerful scientific computing and graphics capabilities in Matlab R2016a.

We shall now describe the operations carried out in the developed code, systematically presenting the sequence of steps to illustrate the order of execution:

Step 1: Retrieving and Preparing the Image.

Step 2: Modeling Motion.

Step 3: Setting Novel Iteration Parameters.

Step 4: Implementing Iterative Refinements.

Step 5: Displaying the Results.
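The paper's MATLAB code is not reproduced here, but the flavor of Steps 1-5 can be sketched in Python with a simple linear blur model and a Landweber-type refinement $x \leftarrow x + \beta A^{\top}(b - Ax)$; the blur kernel, step size, and iteration count below are illustrative assumptions.

```python
import numpy as np

def blur(img):
    """Horizontal 3-tap blur (periodic boundary); symmetric, hence self-adjoint."""
    return 0.2 * np.roll(img, -1, axis=1) + 0.6 * img + 0.2 * np.roll(img, 1, axis=1)

def deblur(b, n_iter=300, beta=1.0):
    """Landweber-type iterative refinement: x <- x + beta * A^T (b - A x)."""
    x = b.copy()
    for _ in range(n_iter):
        x = x + beta * blur(b - blur(x))
    return x

rng = np.random.default_rng(0)
original = rng.random((8, 8))
blurred = blur(original)
restored = deblur(blurred)
# The restored image is much closer to the original than the blurred input.
print(np.abs(restored - original).max() < np.abs(blurred - original).max())
```

Because the kernel here is symmetric with strictly positive frequency response, the refinement converges to the exact original; real motion blurs can have near-zero frequencies, which is where parameter tuning and regularization become necessary.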

This code is designed to iteratively restore a blurred image. A blurred image is first generated, and then an iterative process using inverse filtering is employed to progressively improve the estimation of the sharp image. The results are visualized at regular intervals (every 10 iterations) to observe the advancement of the deblurring process. The images reconstructed with the recommended algorithm at the 10th, 20th, 30th, 40th, and 50th iterations are shown using the subplot command, which visually presents the results at every 10th iteration. The original image, the blurred image, and the deblurred image (obtained after 50 iterations) are presented together as shown in Figure 1.

Figure 1

The original image, the blurred image, and the deblurred image.

This allows for monitoring the evolution of the deblurring process. Figure 1 also displays a side-by-side comparison of the original balloon image, the blurred image, and the deblurred image after 50 iterations. The visualization highlights the difference between the original and blurred images, as well as the improvements made by the 50th iteration in the deblurring process.

In conclusion, while these algorithms are applicable in various fields, more complex blur types may require the incorporation of additional strategies and careful parameter adjustment to achieve optimal results. By revising the code and fine-tuning the parameters, the accuracy of the output can be improved.

Example 4.1

The value of each pixel in the original image (that is, the balloon) is a number between 0 and 1 (grayscale). A 3 × 3 patch represents a small region that we take from this image. We created a new code in MATLAB to select a 3 × 3 patch (sub_image) from the real image. We obtained the following patch matrix as the output of the code:

$$\text{sub\_image} = \begin{pmatrix} 0.4745 & 0.4745 & 0.4745 \\ 0.4745 & 0.4745 & 0.4745 \\ 0.4745 & 0.4745 & 0.4745 \end{pmatrix}.$$

Notice that each pixel value is 0.4745; that is, all pixels in this grayscale patch have the same intensity. We now explain, step by step, how the proposed algorithm (iterative deblurring) runs on this patch:

Step 1: Getting the Patch Matrix.

Step 2: Motion Blur (Point Spread Function) Definition.

Step 3: Creating Blurred Image

$$\text{blurred\_patch} = \begin{pmatrix} 0.4745 & 0.4745 & 0.4745 \\ 0.4745 & 0.4745 & 0.4745 \\ 0.4745 & 0.4745 & 0.4745 \end{pmatrix}.$$

As a result, the initial patch and the blurred image were the same because all pixels in the patch already had the same value. That is, the blurring effect was not very noticeable.

Step 4: Iterative Deblurring Algorithm.

Step 5: Deblurring Process (Iterations).

Consequently, the patch we first supplied was a 3 × 3 matrix in which each pixel had the value 0.4745. The blurred image is the same patch with a horizontal motion blur applied. Yet, the key point here is that, since all the pixels had identical values, the difference was not perceptible. The iterative deblurring (sharpening) process adhered to the steps of the algorithm, progressively attempting to enhance the sharpness of the image. In this case, however, the enhancement in the outcome was negligible, because the original matrix contained identical values for every pixel.
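The invariance observed in this example is easy to verify numerically. The sketch below applies a normalized horizontal motion-blur PSF to the constant patch (handling the boundary by edge replication is an assumption about the MATLAB code):

```python
import numpy as np

patch = np.full((3, 3), 0.4745)  # the constant grayscale patch from Example 4.1

psf = np.array([1 / 3, 1 / 3, 1 / 3])          # normalized horizontal motion PSF
padded = np.pad(patch, ((0, 0), (1, 1)), mode="edge")
blurred_patch = np.stack([np.convolve(row, psf, mode="valid") for row in padded])
print(np.allclose(blurred_patch, patch))  # True: a constant patch is blur-invariant
```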

5 Signal enhancement

Signal enhancement is a technique used to reduce noise, thereby improving the signal’s clarity and increasing the proportion of the desired signal relative to the noise [22]. This technique is essential in various engineering fields, particularly in areas like image analysis, audio processing, medical imaging, and telecommunications [23]. To address intricate signal enhancement issues, iterative methods are frequently employed [24]. These methods are important for refining procedures and increasing the precision of the results in signal enhancement [25]. Drawing from these ideas, we created a code that generates a noisy signal based on a sine wave, utilizes an iterative algorithm to refine the signal, and visualizes the original, noisy, and enhanced signals in Matlab R2016a. Below is a comprehensive step-by-step explanation of this code:

Step 1: Producing the Original Signal (Clean Sine Wave) and the Noisy Signal (Sine Wave With Gaussian Noise introduced).

Step 2: Developing a New Iterative Signal Improvement Algorithm.

Step 3: Plotting the Results.

In this regard, $x_n$ stands for the initial signal, and $\sigma_n$ (taking $\sigma_n^{1} = \sigma_n^{2} = \sigma_n^{3} = \sigma_n^{4} = \sigma_n^{5} \equiv \sigma_n$) is the learning rate that determines the balance between the influence of the previous iteration and the current signal. This rate governs the interaction between the current and previous signals. If the learning rate is set too low, the denoising process may be delayed or fail to achieve the desired results (Figure 2).
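The learning-rate update just described can be read as an iteration of the form $x \leftarrow (1-\sigma_n)x + \sigma_n T(x)$, where $T$ is a denoising operator; the moving-average choice of $T$ in the Python sketch below is an assumption for illustration, not the paper's exact MATLAB code.

```python
import numpy as np

def smooth(x):
    """Three-point moving average as a simple denoising operator T."""
    return np.convolve(x, np.ones(3) / 3, mode="same")

def enhance(noisy, sigma=0.5, n_iter=50):
    """Blend the current estimate with its smoothed version:
    x <- (1 - sigma) * x + sigma * T(x)."""
    x = noisy.copy()
    for _ in range(n_iter):
        x = (1 - sigma) * x + sigma * smooth(x)
    return x

t = np.arange(0, 2 * np.pi, 0.01)
clean = np.sin(t)
rng = np.random.default_rng(1)
noisy = clean + 0.3 * rng.standard_normal(t.size)
enhanced = enhance(noisy, sigma=0.5)
# Mean squared error drops after enhancement.
print(np.mean((enhanced - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

Raising sigma weights the smoothed signal more heavily in each pass, which mirrors the over-adjustment risk discussed for large learning rates.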

Figure 2

Original/noisy/enhanced signals for $\sigma_n = 0.2$.

Iterative methods adjust the signal in a balanced way, giving equal weight to both previous and current signals, which leads to a smoother and more progressive enhancement (Figure 3).

Figure 3

Original/noisy/enhanced signals for $\sigma_n = 0.5$.

If the learning rate is set too high, the algorithm might overadjust and fail to reach the optimal solution (Figure 4).

Figure 4

Original/noisy/enhanced signals for $\sigma_n = 0.8$.

The gradual refinement of the signal highlights the algorithm’s effective and stable noise reduction process. As the signal improves, it becomes increasingly clearer and eventually stabilizes, closely aligning with the original. This indicates that the algorithm is functioning properly, removing noise without excessively smoothing the signal [26,27].

Let’s explain the working logic of this code more clearly with a numerical example.

Example 5.1

Let us take the learning rate as $\sigma_n = 0.8$. Table 1 presents the time vector $t = 0{:}0.01{:}2\pi$ (ranging from 0 to about 6.28 s in 0.01 s steps), the original signal ($\sin(t)$), the noisy signal (Gaussian noise with a standard deviation scaled by 0.8), and the enhanced signal after 50 iterations using the proposed iterative algorithm. Here, we created a sine wave with a frequency of about 0.159 Hz. The total time is approximately 6.28 s, and the total number of samples is 629.
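The sample count and frequency quoted above can be checked directly (a Python equivalent of MATLAB's t = 0:0.01:2*pi):

```python
import numpy as np

t = np.arange(0, 2 * np.pi, 0.01)  # 0, 0.01, ..., 6.28 -> 629 samples
print(len(t))                      # 629
print(round(1 / (2 * np.pi), 3))   # 0.159 Hz: one sine cycle over ~6.28 s
```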

These ten values are the first ten sample points of the enhanced signal ($x_k$) obtained after 50 iterations. This output shows that the improved signal is closer to the original signal and that the noise is significantly reduced. Increasing the number of iterations results in a flatter signal, but over-smoothing is generally undesirable, so proper filtering must be done. The table also illustrates the difference between the original signal and the noisy signal, demonstrating how the noise distorts the clean signal.

Table 1

Comparison of original, noisy, and enhanced signal values for the first 10 time steps

Step | Time (t) | Original signal | Noisy signal | Enhanced signal (after 50 iterations)
1 | 0.00 | sin(0.00) | 0.2142 | 0.0143
2 | 0.01 | sin(0.01) | 0.0724 | 0.0245
3 | 0.02 | sin(0.02) | 0.1911 | 0.0339
4 | 0.03 | sin(0.03) | 0.0950 | 0.0424
5 | 0.04 | sin(0.04) | 0.2034 | 0.0510
6 | 0.05 | sin(0.05) | 0.2780 | 0.0589
7 | 0.06 | sin(0.06) | 0.3397 | 0.0666
8 | 0.07 | sin(0.07) | 0.3072 | 0.0740
9 | 0.08 | sin(0.08) | 0.4892 | 0.0812
10 | 0.09 | sin(0.09) | 0.3921 | 0.0881

6 Convergence rate

In 1976, Rhoades [28] proposed a framework for evaluating the rate of convergence of two iterative algorithms, as follows:

Definition 6.1

[28] Let $L$ be a nonvoid closed convex subset of a Banach space $\Upsilon$ and $T : L \to L$ be a mapping. Presume $\{x_n\}$ and $\{a_n\}$ are two iterations that converge to a fixed point $\vartheta$ of $T$. Then $\{x_n\}$ is said to converge faster than $\{a_n\}$ if $\|x_n - \vartheta\| \le \|a_n - \vartheta\|$ for every $n \ge 1$.

In 2002, Berinde [29] applied the aforementioned concept to compare the rate of convergence between two iterative methods in the following manner:

Definition 6.2

[29] Let $\{x_n\}$ and $\{a_n\}$ denote two sequences of positive numbers that converge to $x$ and $a$, respectively. Assume that the limit

$$l = \lim_{n\to\infty} \frac{\|x_n - x\|}{\|a_n - a\|}$$

exists.

(a) If $l = 0$, then the sequence $\{x_n\}$ is said to converge to $x$ faster than the sequence $\{a_n\}$ converges to $a$.

(b) If $0 < l < \infty$, then $\{x_n\}$ and $\{a_n\}$ are said to have the same rate of convergence.
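This definition suggests an obvious numerical estimate: compare the tails of the two error sequences. The Python sketch below uses the hypothetical sequences $x_n = 1/n$ and $a_n = 1/2^n$, both converging to 0:

```python
def berinde_rate(x_seq, a_seq, x_star, a_star):
    """Tail estimate of l = lim |x_n - x*| / |a_n - a*| from Berinde's definition."""
    den = abs(a_seq[-1] - a_star)
    return abs(x_seq[-1] - x_star) / den if den != 0 else float("inf")

x_seq = [1 / n for n in range(1, 30)]        # converges to 0 like 1/n
a_seq = [1 / 2 ** n for n in range(1, 30)]   # converges to 0 geometrically
print(berinde_rate(a_seq, x_seq, 0, 0) < 1e-6)  # ratio tends to 0: {a_n} is faster
```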

Following the approach presented in [30], we provide an illustrative example to validate our findings and conduct a comparative analysis of the convergence rates between the proposed iteration scheme and the UO-iteration proposed by Okeke et al. [13].

Example 6.3

Let $\Upsilon = \mathbb{R}$ and $L = [0,3]$. Presume that $(x,y) \in E$ iff $0 \le x, y \le 2$ or $x = y$. Define a mapping $T : L \to L$ by T x = 2 3 x 3 4 for every $x \in L$. Let $\sigma_n^{1} = \sigma_n^{2} = \sigma_n^{3} = \sigma_n^{4} = \sigma_n^{5} = \frac{11}{14}$. It can be confirmed that $T$ is a G-nonexpansive mapping; however, it is not nonexpansive: for $x = 2.8$ and $y = 2$, we have $\|Tx - Ty\| > 0.8 = \|x - y\|$. The first five terms of the sequences $\{x_n\}$ and $\{a_n\}$, corresponding to the initial value $x_1 = a_1 = 1.7923$, are presented in Table 2.

By using (1.1) and (1.2), we obtain numerical results approximating the fixed point 1, as shown in Figure 5 with $n = 10$.

Based on Tables 2 and 3, it can be inferred that $\{x_n\}$ and $\{a_n\}$ converge to $1.00000 \in F$; moreover, since $\|a_n - 1\|/\|x_n - 1\| \to 0$, the sequence $\{a_n\}$ converges to 1 faster than the sequence $\{x_n\}$ does.

Table 2

Numerical experiments for Example 6.3

n | $x_n$ (1.2) | $a_n$ (1.1) | $\|x_n - 1\|$ | $\|a_n - 1\|$ | $\|a_n - 1\|/\|x_n - 1\|$
1 | 1.7923 | 1.7923 | 0.7923 | 0.7923 | 1
2 | 1.1098 | 1.0425 | 0.1098 | 0.0425 | 0.38707
3 | 1.0108 | 1.0025 | 0.0108 | 0.0025 | 0.23148
4 | 1.0010 | 1.0002 | 0.0010 | 0.0002 | 0.2
5 | 1.0001 | 1.0000 | 0.0001 | 0 | 0
Table 3

Numerical errors of the proposed iteration (1.2) and iteration (1.1)

n | $x_n$ | $\|x_n - x_{n+1}\|$ | $a_n$ | $\|a_n - a_{n+1}\|$
1 | 1.7923 | 0.6825 | 1.7923 | 0.7498
2 | 1.1098 | 0.0990 | 1.0425 | 0.0400
3 | 1.0108 | 0.0098 | 1.0025 | 0.0023
4 | 1.0010 | 0.0009 | 1.0002 | 0.0002
5 | 1.0001 | 0.0001 | 1.0000 | 0
Figure 5

The convergence of $\{x_n\}$ and $\{a_n\}$ with the initial value $x_1 = a_1 = 1.7923$ and $n = 10$.

7 Conclusion

Iterative algorithms are a key element in the image deblurring process. Beginning with the blurred image, they progressively refine the solution, lessening blur at each step and recovering a result that closely mirrors the original. Likewise, signal enhancement is a key component of contemporary engineering, where methods such as Bayesian enhancement, adaptive techniques, and filtering are employed to improve signal clarity. These approaches are vital for minimizing noise and distortions, which in turn enhances the accuracy of results. Both signal enhancement and image recovery techniques will contribute to advancements in various areas, including satellite and remote sensing [31–34], autonomous systems [35,36], and medical imaging [37,38].
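To illustrate the iterative-refinement idea described above, the following is a minimal Landweber-type deblurring sketch on a one-dimensional signal. It is a generic illustration only, not the paper's algorithm; the uniform blur kernel and the step size tau are assumptions chosen for the example.

```python
import numpy as np

# A generic Landweber-type iterative deblurring sketch (illustrative only).
signal = np.zeros(64)
signal[20:40] = 1.0                  # original "sharp" signal

kernel = np.ones(5) / 5.0            # assumed uniform blur kernel
blur = lambda u: np.convolve(u, kernel, mode="same")            # A
blur_T = lambda u: np.convolve(u, kernel[::-1], mode="same")    # A^T

b = blur(signal)                     # observed blurred signal
u = b.copy()                         # start the iteration from the data
tau = 0.5                            # assumed step size, tau < 2 / ||A||^2
for _ in range(200):
    u = u + tau * blur_T(b - blur(u))   # u <- u + tau * A^T (b - A u)

# Each iteration reduces the data misfit, so the iterate should end up
# closer to the original signal than the blurred input was.
print(np.linalg.norm(u - signal) < np.linalg.norm(b - signal))
```

The same update, applied pixelwise with a two-dimensional blur operator, is the template that many iterative deblurring schemes refine.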

Acknowledgments

The author wishes to express profound appreciation to the reviewers for their detailed evaluations and recommendations, which greatly enriched the final version of this work.

  1. Funding information: The author states no funding involved.

  2. Author contribution: The author confirms the sole responsibility for the conception of the study, presented results, and manuscript preparation.

  3. Conflict of interest: The author states no conflict of interest.

  4. Data availability statement: No datasets were generated or analyzed during the current study.

References

[1] S. Banach, Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales, Fund. Math. 3 (1922), 133–181, DOI: https://doi.org/10.4064/fm-3-1-133-181.

[2] J. Jachymski, The contraction principle for mappings on a metric space with a graph, Proc. Amer. Math. Soc. 136 (2008), 1359–1373, DOI: https://doi.org/10.1090/S0002-9939-07-09110-1.

[3] R. Kelisky, T. Rivlin, Iterates of Bernstein polynomials, Pacific J. Math. 21 (1967), no. 3, 511–520, DOI: https://doi.org/10.2140/pjm.1967.21.511.

[4] S. M. A. Aleomraninejad, S. Rezapour, and N. Shahzad, Some fixed point results on a metric space with a graph, Topology Appl. 159 (2012), 659–663, DOI: https://doi.org/10.1016/j.topol.2011.10.013.

[5] M. R. Alfuraidan, M. A. Khamsi, Fixed points of monotone nonexpansive mappings on a hyperbolic metric space with a graph, Fixed Point Theory Appl. 2015 (2015), 44, DOI: https://doi.org/10.1186/s13663-015-0294-5.

[6] J. Tiammee, A. Kaewkhao, and S. Suantai, On Browder’s convergence theorem and Halpern iteration process for G-nonexpansive mappings in Hilbert spaces endowed with graphs, Fixed Point Theory Appl. 2015 (2015), 187, DOI: https://doi.org/10.1186/s13663-015-0436-9.

[7] O. Tripak, Common fixed points of G-nonexpansive mappings on Banach spaces with a graph, Fixed Point Theory Appl. 2016 (2016), 87, DOI: https://doi.org/10.1186/s13663-016-0578-4.

[8] R. Suparatulatorn, W. Cholamjiak, and S. Suantai, A modified S-iteration process for G-nonexpansive mappings in Banach spaces with graphs, Numer. Algorithms 77 (2018), no. 2, 479–490, DOI: https://doi.org/10.1007/s11075-017-0324-y.

[9] D. Yambangwai, T. Thianwan, Convergence point of G-nonexpansive mappings in Banach spaces endowed with graphs applicable in image deblurring and signal recovering problems, Ric. Mat. 73 (2024), 633–660, DOI: https://doi.org/10.1007/s11587-021-00631-y.

[10] C. Chairatsiripong, Y. Chonjaroen, D. Yambangwai, and T. Thianwan, Convergence analysis of M-iteration for G-nonexpansive mappings with directed graphs applicable in image deblurring and signal recovering problems, Demonstr. Math. 56 (2023), no. 1, 20220234, DOI: https://doi.org/10.1515/dema-2022-0234.

[11] D. Yambangwai, T. Thianwan, A parallel inertial SP-iteration monotone hybrid algorithm for a finite family of G-nonexpansive mappings and its application in linear system, differential, and signal recovery problems, Carpathian J. Math. 40 (2024), no. 2, 535–557, DOI: https://doi.org/10.37193/CJM.2024.02.19.

[12] S. Suantai, K. Kankam, W. Cholamjiak, and W. Yajai, Parallel hybrid algorithms for a finite family of G-nonexpansive mappings and its application in a novel signal recovery, Mathematics 10 (2022), 2140, DOI: https://doi.org/10.3390/math10122140.

[13] G. A. Okeke, A. V. Udo, N. H. Alharthi, and R. T. Alqahtani, A new robust iterative scheme applied in solving a fractional diffusion model for oxygen delivery via a capillary of tissues, Mathematics 12 (2024), 1339, DOI: https://doi.org/10.3390/math12091339.

[14] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc. 73 (1967), 591–597, DOI: https://doi.org/10.1090/S0002-9904-1967-11761-0.

[15] N. Shahzad, A. Udomene, Approximating common fixed points of nonexpansive mappings in Banach spaces, Fixed Point Theory Appl. 2006 (2006), no. 1, 18909, DOI: https://doi.org/10.1155/FPTA/2006/18909.

[16] K. K. Tan, H. K. Xu, Approximating fixed points of nonexpansive mapping by the Ishikawa iteration process, J. Math. Anal. Appl. 178 (1993), 301–308, DOI: https://doi.org/10.1006/jmaa.1993.1309.

[17] J. Schu, Weak and strong convergence to fixed points of asymptotically nonexpansive mappings, Bull. Aust. Math. Soc. 43 (1991), 153–159, DOI: https://doi.org/10.1017/S0004972700028884.

[18] S. Suantai, Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings, J. Math. Anal. Appl. 331 (2005), 506–517, DOI: https://doi.org/10.1016/j.jmaa.2005.03.002.

[19] K. Zhang, W. Ren, W. Luo, W. S. Lai, B. Stenger, M. H. Yang, et al., Deep image deblurring: a survey, Int. J. Comput. Vis. 130 (2022), 2103–2130, DOI: https://doi.org/10.1007/s11263-022-01633-5.

[20] S. B. Kang, Automatic removal of chromatic aberration from a single image, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2007, DOI: https://doi.org/10.1109/CVPR.2007.383214.

[21] A. Abuolaim, M. S. Brown, Defocus deblurring using dual-pixel data, in Proc. Eur. Conf. Comput. Vis. (ECCV), 2020, DOI: https://doi.org/10.1007/978-3-030-58607-2_7.

[22] B. G. M. Vandeginste, D. L. Massart, L. M. C. Buydens, S. De Jong, P. J. Lewi, et al., Data Handling in Science and Technology, Ch. 40, Elsevier, Amsterdam, Netherlands, 1998, 507–574, DOI: https://doi.org/10.1016/S0922-3487(98)80050-6.

[23] S. V. Vaseghi, Advanced Digital Signal Processing and Noise Reduction, John Wiley & Sons, Chichester, 2008, DOI: https://doi.org/10.1002/9780470740156.

[24] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Problems 20 (2004), no. 1, 103, DOI: https://doi.org/10.1088/0266-5611/20/1/006.

[25] J. A. Cadzow, Signal enhancement – a composite property mapping algorithm, IEEE Trans. Acoust. Speech Signal Process. 36 (1988), no. 1, 49–62, DOI: https://doi.org/10.1109/29.1488.

[26] J. G. Proakis, D. G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, Prentice Hall, Upper Saddle River, 2007.

[27] S. Haykin, Adaptive Filter Theory, Pearson, Upper Saddle River, NJ, USA, 2002.

[28] B. E. Rhoades, Comments on two fixed point iteration methods, J. Math. Anal. Appl. 56 (1976), no. 2, 741–750, DOI: https://doi.org/10.1016/0022-247X(76)90038-X.

[29] V. Berinde, Iterative Approximation of Fixed Points, Lecture Notes in Mathematics, Springer-Verlag Berlin Heidelberg, 2007, DOI: https://doi.org/10.1007/978-3-540-72234-2.

[30] R. Suparatulatorn, S. Suantai, and W. Cholamjiak, Hybrid methods for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with graphs, AKCE Int. J. Graphs Comb. 14 (2017), no. 2, 101–111, DOI: https://doi.org/10.1016/j.akcej.2017.01.001.

[31] B. Rasti, Y. Chang, E. Dalsasso, L. Denis, and P. Ghamisi, Image restoration for remote sensing: overview and toolbox, IEEE Geosci. Remote Sens. Mag. 10 (2022), no. 2, 201–230, DOI: https://doi.org/10.1109/MGRS.2021.3121761.

[32] N. K. Greeshma, M. Baburaj, and N. G. Sudhish, Reconstruction of cloud-contaminated satellite remote sensing images using kernel PCA-based image modelling, Arab. J. Geosci. 9 (2016), 239, DOI: https://doi.org/10.1007/s12517-015-2199-3.

[33] S. Zhong, X. Zhao, D. Liu, H. Su, Z. Xie, and B. Fan, High-resolution, lightweight remote sensing via harmonic diffractive optical imaging systems and deep denoiser prior image restoration, IEEE Trans. Geosci. Remote Sens. 62 (2024), 5621717, DOI: https://doi.org/10.1109/TGRS.2024.3394154.

[34] C. H. Chen (Ed.), Signal and Image Processing for Remote Sensing, CRC Press, Boca Raton, 2012, DOI: https://doi.org/10.1201/b11656.

[35] M. Jamshidi, Autonomous control systems: applications to remote sensing and image processing, in Proc. SPIE 4471, Algorithms and Systems for Optical Information Processing V, SPIE, Bellingham, WA, USA, 2001, DOI: https://doi.org/10.1117/12.449352.

[36] Z. Bairi, O. Ben-Ahmed, A. Amamra, A. Bradai, and K. B. Beghdad, PSCS-Net: Perception optimized image reconstruction network for autonomous driving systems, IEEE Trans. Intell. Transp. Syst. 24 (2023), no. 2, 1564–1579, DOI: https://doi.org/10.1109/TITS.2022.3223167.

[37] S. Tong, A. M. Alessio, and P. E. Kinahan, Image reconstruction for PET/CT scanners: past achievements and future challenges, Imaging Med. 2 (2010), no. 5, 529–545, DOI: https://doi.org/10.2217/iim.10.49.

[38] Y. Liu, M. De Vos, I. Gligorijevic, V. Matic, Y. Li, and S. Van Huffel, Multi-structural signal recovery for biomedical compressive sensing, IEEE Trans. Biomed. Eng. 60 (2013), no. 10, 2794–2805, DOI: https://doi.org/10.1109/TBME.2013.2264772.

Received: 2025-01-27
Revised: 2025-04-08
Accepted: 2025-05-12
Published Online: 2025-06-17

© 2025 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
