Article, Open Access

Generalized split null point of sum of monotone operators in Hilbert spaces

  • Akindele A. Mebawondu, Hammed A. Abass, Olalwale K. Oyewole, Kazeem O. Aremu and Ojen K. Narain
Published/Copyright: 18 November 2021

Abstract

In this paper, we introduce a new generalized split monotone variational inclusion (GSMVI) problem in the framework of real Hilbert spaces. By incorporating an inertial extrapolation method and a Halpern iterative technique, we establish a strong convergence result for approximating a solution of the GSMVI and of fixed point problems of certain nonlinear mappings in real Hilbert spaces. Many existing results are derived as corollaries of our main result. Furthermore, we present a numerical example to support our main result and propose an open problem for interested researchers in this area. The results obtained in this paper improve and generalize many existing results in the literature.

MSC 2010: 47H06; 47H09; 47J05; 47J25

1 Introduction

The split feasibility problem (SFP), introduced by Censor and Elfving [1] in the framework of finite-dimensional Hilbert spaces, is to find

(1) $x \in C$ such that $\mathcal{A}x \in D$,

where $C$ and $D$ are nonempty, closed and convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and $\mathcal{A} : H_1 \to H_2$ is a bounded linear operator. The SFP finds applications in image recovery, signal processing, control theory, data compression, computer tomography and so on (see [2,3,4,5]). Owing to this, many researchers have studied the SFP in various abstract spaces (see [6,7,8,9,10,11]). In 2009, Censor and Segal [12] further extended the notion of the SFP by introducing the split common fixed point problem (SCFPP): find

(2) $x \in F(T)$ such that $\mathcal{A}x \in F(S)$,

where $F(T)$ and $F(S)$ denote the sets of fixed points of two nonlinear operators $T : C \to C$ and $S : D \to D$, respectively, and $\mathcal{A} : H_1 \to H_2$ is a bounded linear operator. For surveys on methods for approximating solutions of the SFP, see [13,14,15] and the references therein.

Variational inclusion problems (VIPs) serve as mathematical models for several optimization problems arising in finance, economics, networks, transportation, science and engineering. For a real Hilbert space $H$, the VIP consists of finding a point $x \in H$ such that

(3) $0 \in Bx$,

where $B : H \to 2^H$ is a multivalued (point-to-set) operator. Whenever $B$ is a maximal monotone operator, such an element $x \in H$ is called a zero of the maximal monotone operator $B$. The VIP for monotone operators was introduced by Martinet [16]. For approximating the zeros of (3), the fixed point equation $x = J_\lambda^B x$ has been considered, where the single-valued mapping $J_\lambda^B = (I + \lambda B)^{-1}$ with $\lambda > 0$ is the resolvent of the operator $B$. Thus, the method for approximating the zeros of $B$ is given by $x_1 \in H$ and

$$x_{n+1} = (I + \lambda_n B)^{-1} x_n.$$
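
The proximal point iteration above can be sketched numerically. The following is a minimal illustration, assuming $H = \mathbb{R}$ and $B = \partial|\cdot|$ (the subdifferential of the absolute value), whose resolvent $(I + \lambda B)^{-1}$ is the well-known soft-thresholding map; the choice of operator and step size is ours, purely for illustration.

```python
# Proximal point sketch: approximate the zero of the maximal monotone
# operator B = ∂|·| on H = R. Its resolvent (I + λB)^{-1} is soft-thresholding.

def soft_threshold(x, lam):
    """Resolvent (I + lam*∂|.|)^{-1}(x) = sign(x) * max(|x| - lam, 0)."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def proximal_point(x, lam=0.5, n_iter=50):
    """Iterate x_{n+1} = (I + lam_n B)^{-1} x_n with constant lam_n = lam."""
    for _ in range(n_iter):
        x = soft_threshold(x, lam)
    return x

print(proximal_point(10.0))  # → 0.0, the unique zero of B
```

Each step shrinks the iterate toward the solution set $B^{-1}(0) = \{0\}$; with a constant step the iterate reaches it after finitely many steps in this piecewise-linear case.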

Byrne et al. [17] combined the VIP and the SFP to introduce the split null point problem (SNPP): find $x \in H_1$ such that

(4) $0 \in B_1(x)$ and $0 \in B_2(\mathcal{A}x)$,

where $B_i : H_i \to 2^{H_i}$, $i = 1, 2$, are maximal monotone operators and $\mathcal{A} : H_1 \to H_2$ is a bounded linear operator. They considered the following iterative algorithm: for $\lambda > 0$ and $x_1 \in H_1$,

(5) $x_{n+1} = J_\lambda^{B_1}\big(x_n - \gamma \mathcal{A}^*(I - J_\lambda^{B_2})\mathcal{A}x_n\big), \quad n \in \mathbb{N},$

where $\gamma \in \left(0, \frac{2}{\|\mathcal{A}\|^2}\right)$, and established that $\{x_n\}$ converges weakly to a point $x^*$ in the solution set of (4). The SNPP has been considered by many authors (see [18,19,20,21] and the references therein).
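
Iteration (5) can be sketched on a toy scalar instance. The operators below ($B_1 x = x$, $B_2 y = y$, $\mathcal{A}x = 2x$, $\lambda = 1$) are our own illustrative assumptions; the unique split null point is $x^* = 0$.

```python
# Illustrative scalar instance of iteration (5), assuming H1 = H2 = R,
# B1(x) = x, B2(y) = y (both maximal monotone), and A x = 2x (bounded
# linear, with adjoint A* = A and ||A||^2 = 4).

lam = 1.0
gamma = 0.4                           # must lie in (0, 2/||A||^2) = (0, 0.5)
J1 = lambda x: x / (1 + lam)          # resolvent of B1
J2 = lambda y: y / (1 + lam)          # resolvent of B2
Aop = lambda x: 2 * x                 # the linear operator
Aadj = lambda y: 2 * y                # its adjoint

x = 5.0
for _ in range(100):
    # x_{n+1} = J1( x_n - gamma * A*(I - J2) A x_n )
    x = J1(x - gamma * Aadj(Aop(x) - J2(Aop(x))))

print(abs(x) < 1e-6)  # True: the iterates approach the split null point 0
```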

A generalization of the VIP (3) is the problem of finding an element $x \in H$ such that

(6) $0 \in (A + B)x,$

where $A : H \to H$ is a single-valued operator and $B : H \to 2^H$ is a multivalued operator. When $A$ and $B$ are monotone operators, the elements of the solution set of (6) are called zeros of the sum of monotone operators. We note that the solutions of (6) are exactly the fixed points of the operator $J_\lambda^B(I - \lambda A)$ for $\lambda > 0$ (see [22]). Several authors have employed various iterative methods for approximating solutions of (6) (see, for example, [23,24] and the references therein).
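
The fixed point characterization of (6) suggests the forward-backward iteration $x_{n+1} = J_\lambda^B(I - \lambda A)x_n$. Here is a minimal scalar sketch under our own assumptions: $A x = x - 3$ (which is $1$-ism, being the gradient of $\frac{1}{2}(x-3)^2$) and $B = \partial|\cdot|$, so that the zero of $A + B$ is $x^* = 2$.

```python
# Forward-backward sketch: x* solves 0 ∈ (A + B)x with smooth part
# A x = x - 3 (1-ism) and B = ∂|·|. x* = 2 is the fixed point of
# J_{λB}(I - λA), computed here via soft-thresholding.

def soft_threshold(x, lam):
    return max(abs(x) - lam, 0.0) * (1 if x >= 0 else -1)

lam = 0.5                    # step size in (0, 2k] with k = 1
x = -10.0
for _ in range(80):
    x = soft_threshold(x - lam * (x - 3), lam)   # J_{λB}((I - λA)x)

print(round(x, 6))  # → 2.0
```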

In 2011, Moudafi [25] introduced the split monotone variational inclusion problem (SMVI), which generalizes the SNPP. Let $H_1$ and $H_2$ be two real Hilbert spaces, $A_i : H_i \to H_i$, $i = 1, 2$, be single-valued operators, $B_i : H_i \to 2^{H_i}$, $i = 1, 2$, be set-valued operators and $\mathcal{A} : H_1 \to H_2$ be a bounded linear operator. The SMVI is to find

$x^* \in H_1$ such that $0 \in (A_1 + B_1)x^*$

and

(7) $y^* = \mathcal{A}x^* \in H_2$ such that $0 \in (A_2 + B_2)y^*.$

For solving the SMVI, Moudafi [25] proposed the following iterative algorithm: for any $x_1 \in H_1$,

(8) $x_{n+1} = T_1\big(x_n - \gamma \mathcal{A}^*(I - T_2)\mathcal{A}x_n\big), \quad n \in \mathbb{N},$

where $\gamma \in \left(0, \frac{1}{\|\mathcal{A}\|^2}\right)$ and $T_i = J_\lambda^{B_i}(I - \lambda A_i)$ for $i = 1, 2$. He established that $\{x_n\}$ converges weakly to a point $x^*$ in the solution set of (7).

Question: Can we further generalize the SMVI and propose a natural modification of an iterative scheme so as to obtain a strong convergence result for this type of generalization?

On the other hand, the inertial extrapolation method has proven to be an effective way of accelerating the convergence of iterative algorithms. The technique was introduced in 1964 and is based on a discrete version of a second-order dissipative dynamical system (see [26,27]). An inertial-type algorithm uses its two previous iterates to obtain the next iterate (see [28,29]). In [30], Moudafi and Oliny proposed the following inertial proximal point algorithm for finding a zero of the sum of two maximal monotone operators:

(9) $y_n = x_n + \theta_n(x_n - x_{n-1}), \qquad x_{n+1} = (I + \lambda_n B)^{-1}(y_n - \lambda_n A y_n),$

where $A : H \to H$ and $B : H \to 2^H$ are single- and multi-valued operators, respectively, and $\lambda_n > 0$. They established a weak convergence result under the assumptions that $\lambda_n < \frac{2}{L}$, with $L$ the Lipschitz constant of $A$, and $\sum_{n=0}^{\infty} \theta_n \|x_n - x_{n-1}\|^2 < \infty$. Also, Lorenz and Pock [24] introduced a modified forward-backward splitting algorithm with an inertial term, defined as follows:

(10) $y_n = x_n + \theta_n(x_n - x_{n-1}), \qquad x_{n+1} = (I + \lambda_n M^{-1} B)^{-1}(I - \lambda_n M^{-1} A)y_n,$

where $\theta_n \in [0, 1)$, $M$ is a linear self-adjoint positive definite map and $\lambda_n > 0$ is a step size parameter. They proved that the sequence $\{x_n\}$ generated by (10) converges weakly to a zero of $A + B$.
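
The inertial mechanism of (9) can be sketched on the same toy problem as before ($Ax = x - 3$, $B = \partial|\cdot|$, zero at $x^* = 2$). The constant extrapolation factor $\theta_n = 0.3$ below is an illustrative choice of ours, not a parameter from the paper.

```python
# Inertial forward-backward sketch in the spirit of (9):
# y_n = x_n + θ(x_n - x_{n-1}), then a forward-backward step at y_n.

def soft_threshold(x, lam):
    return max(abs(x) - lam, 0.0) * (1 if x >= 0 else -1)

lam, theta = 0.5, 0.3
x_prev, x = -10.0, -10.0
for _ in range(120):
    y = x + theta * (x - x_prev)                         # inertial extrapolation
    x_prev, x = x, soft_threshold(y - lam * (y - 3), lam)  # forward-backward at y

print(round(x, 6))  # → 2.0
```

The extrapolated point $y_n$ uses the two previous iterates, which is exactly the "two previous iterates" feature described above.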

Motivated by the results discussed above, we study a generalized split monotone variational inclusion (GSMVI) and then introduce an iterative method for approximating a solution of the GSMVI and the fixed point problem of the composition of two nonlinear mappings in the framework of real Hilbert spaces. We present a numerical example to illustrate the behavior of our method.

For $i = 1, 2, 3, \ldots, N$, let $H_i$ be real Hilbert spaces, $A_i : H_i \to H_i$ be $k_i$-ism operators, $B_i : H_i \to 2^{H_i}$ be maximal monotone operators and, for $i = 1, 2, 3, \ldots, N-1$, let $\mathcal{A}_i : H_i \to H_{i+1}$ be bounded linear operators with adjoints $\mathcal{A}_i^* : H_{i+1} \to H_i$. The GSMVI is to find a point $x^* \in H_1$ such that

(11) $0 \in (A_1 + B_1)x^*, \quad 0 \in (A_2 + B_2)\mathcal{A}_1 x^*, \quad 0 \in (A_3 + B_3)\mathcal{A}_2\mathcal{A}_1 x^*, \quad \ldots, \quad 0 \in (A_N + B_N)\mathcal{A}_{N-1}\cdots\mathcal{A}_2\mathcal{A}_1 x^*.$

The GSMVI (11) is much more general: it includes the SFP, the SNPP and the SMVI as special cases and thus has more real-life applications. In addition, the GSMVI has wide applications in many fields, such as machine learning, statistical regression, image processing and signal recovery (see [5,31,32] and the references therein).

In this paper, we consider the problem of finding a common element of the solution set of the GSMVI and the fixed point set of a composition of two mappings $T_1$ and $T_2$, where $T_1 : C \to C$ is an $\alpha$-strongly quasi-nonexpansive mapping, $T_2 : C \to C$ is a firmly nonexpansive mapping and $C$ is a nonempty, closed and convex subset of $H_1$. That is, find $x^* \in C$ such that

(12) $x^* \in F(T_1 T_2), \quad 0 \in (A_1 + B_1)x^*, \quad 0 \in (A_2 + B_2)\mathcal{A}_1 x^*, \quad 0 \in (A_3 + B_3)\mathcal{A}_2\mathcal{A}_1 x^*, \quad \ldots, \quad 0 \in (A_N + B_N)\mathcal{A}_{N-1}\cdots\mathcal{A}_2\mathcal{A}_1 x^*.$

We denote by $\Gamma$ the solution set of problem (12).

As far as we know, the GSMVI is new and has not yet been considered in the recent or previous research papers in this direction. For obtaining a solution of (12), we propose a modified Halpern iterative algorithm with an inertial term and prove a strong convergence result for approximating an element of $\Gamma$. The results in this paper generalize, unify and extend many related results in the literature.

2 Preliminaries

In this section, we begin by recalling some known and useful results which are needed in the sequel.

Let $H$ be a real Hilbert space. The set of fixed points of $T$ will be denoted by $F(T)$, that is, $F(T) = \{x \in H : Tx = x\}$. We denote strong and weak convergence by "$\to$" and "$\rightharpoonup$," respectively. For any $x, y \in H$ and $\alpha \in [0, 1]$, it is well known that

(13) $\langle x, y \rangle = \frac{1}{2}\big(\|x\|^2 + \|y\|^2 - \|x - y\|^2\big),$

(14) $\|\alpha x + (1 - \alpha)y\|^2 = \alpha\|x\|^2 + (1 - \alpha)\|y\|^2 - \alpha(1 - \alpha)\|x - y\|^2.$

Definition 2.1

Let $T : H \to H$ be an operator. Then the operator $T$ is called

  1. $L$-Lipschitz continuous, if

     $\|Tx - Ty\| \le L\|x - y\|,$

     where $L > 0$ and $x, y \in H$. If $L = 1$, then $T$ is called nonexpansive. Also, if the inequality holds with $L = 1$ for all $x \in H$ and $y \in F(T)$, then $T$ is called quasi-nonexpansive;

  2. monotone, if

     $\langle Tx - Ty, x - y \rangle \ge 0, \quad \forall x, y \in H;$

  3. firmly nonexpansive, if

     $\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle, \quad \forall x, y \in H,$

     or equivalently,

     $\|Tx - Ty\|^2 \le \|x - y\|^2 - \|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in H;$

  4. $k$-inverse strongly monotone ($k$-ism), if there exists $k > 0$ such that

     $\langle Tx - Ty, x - y \rangle \ge k\|Tx - Ty\|^2, \quad \forall x, y \in H;$

  5. $\alpha$-strongly quasi-nonexpansive, with $\alpha > 0$, if

     (15) $\|Tx - z\|^2 \le \|x - z\|^2 - \alpha\|x - Tx\|^2, \quad \forall z \in F(T), \ x \in H.$

It is well known that for any nonexpansive mapping $T$, the set of fixed points is closed and convex. Also, $T$ satisfies the following inequality:

(16) $\langle (x - Tx) - (y - Ty), Ty - Tx \rangle \le \frac{1}{2}\|(Tx - x) - (Ty - y)\|^2, \quad \forall x, y \in H.$

Thus, for all $x \in H$ and $x^* \in F(T)$, we have

(17) $\langle x - Tx, x^* - Tx \rangle \le \frac{1}{2}\|Tx - x\|^2.$

Lemma 2.2

[33] Let $T$ be a $k$-ism operator. Then:

  1. $T$ is a $\frac{1}{k}$-Lipschitz continuous monotone operator.

  2. If $\lambda \in (0, 2k]$, then $(I - \lambda T)$ is a nonexpansive mapping, where $I$ is the identity operator on $H$.

A multi-valued operator $B : H \to 2^H$ is called monotone if, for all $x, y \in H$, $u \in Bx$ and $v \in By$ imply $\langle u - v, x - y \rangle \ge 0$. A monotone operator $B$ is maximal monotone if, for $(x, u) \in H \times H$, $\langle u - v, x - y \rangle \ge 0$ for every $(y, v) \in \mathrm{Graph}(B)$ (the graph of $B$) implies $u \in Bx$. Let $B : H \to 2^H$ be a multi-valued maximal monotone operator. The resolvent operator $J_\lambda^B : H \to H$ associated with $B$ is defined by

$J_\lambda^B(x) \coloneqq (I + \lambda B)^{-1}(x), \quad x \in H,$

where $\lambda > 0$ and $I$ is the identity operator on $H$.

Lemma 2.3

[34] Let $B : H \to 2^H$ be a set-valued maximal monotone mapping and $\lambda > 0$. Then $J_\lambda^B$ is single-valued and firmly nonexpansive.

Lemma 2.4

[35] Let $X$ be a real Banach space, $B : X \to 2^X$ be a maximal monotone mapping and $A : X \to X$ be a $k$-inverse strongly monotone operator on $X$. Define $T_\lambda \coloneqq (I + \lambda B)^{-1}(I - \lambda A)$, $\lambda > 0$. Then the following hold:

  1. for $\lambda > 0$, $F(T_\lambda) = (A + B)^{-1}(0)$;

  2. for $0 < r \le \lambda$ and $x \in X$, $\|x - T_r x\| \le 2\|x - T_\lambda x\|$.

Lemma 2.5

[36] Let $A : H \to H$ be a $k$-ism operator and $B : H \to 2^H$ be a maximal monotone operator. Then, for all $\lambda, \mu > 0$ and $x \in H$, we have

(18) $J_\lambda^B x = J_\mu^B\left(\frac{\mu}{\lambda}x + \left(1 - \frac{\mu}{\lambda}\right)J_\lambda^B x\right), \qquad J_\lambda^B(I - \lambda A)x = J_\mu^B(I - \mu A)\left(\frac{\mu}{\lambda}x + \left(1 - \frac{\mu}{\lambda}\right)J_\lambda^B(I - \lambda A)x\right).$
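
The first identity in (18) can be checked numerically. Below we verify it pointwise for $B = \partial|\cdot|$ on $\mathbb{R}$, whose resolvent $J_\lambda^B$ is soft-thresholding; the operator and sample points are our own illustrative choices.

```python
# Numeric check of the resolvent identity (18) for B = ∂|·| on R:
#   J_λ^B x = J_μ^B( (μ/λ) x + (1 - μ/λ) J_λ^B x ).

def soft_threshold(x, lam):
    return max(abs(x) - lam, 0.0) * (1 if x >= 0 else -1)

lam, mu = 1.0, 0.5
for x in [-3.0, -0.4, 0.0, 0.7, 2.0, 10.0]:
    lhs = soft_threshold(x, lam)
    rhs = soft_threshold((mu / lam) * x + (1 - mu / lam) * lhs, mu)
    assert abs(lhs - rhs) < 1e-12

print("resolvent identity holds at all sample points")
```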

Lemma 2.6

[37] Let $\{a_n\}$ be a sequence of positive real numbers, $\{\alpha_n\}$ be a sequence of real numbers in $(0, 1)$ such that $\sum_{n=1}^{\infty}\alpha_n = \infty$ and $\{d_n\}$ be a sequence of real numbers. Suppose that

$a_{n+1} \le (1 - \alpha_n)a_n + \alpha_n d_n, \quad n \ge 1.$

If $\limsup_{k \to \infty} d_{n_k} \le 0$ for every subsequence $\{a_{n_k}\}$ of $\{a_n\}$ satisfying the condition

$\liminf_{k \to \infty}\{a_{n_k+1} - a_{n_k}\} \ge 0,$

then $\lim_{n \to \infty} a_n = 0$.
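
The mechanism of Lemma 2.6 can be illustrated numerically. The concrete sequences below ($\alpha_n = d_n = \frac{1}{n+1}$, taken with equality in the recursion) are our own illustrative choices satisfying its hypotheses, since $\sum \alpha_n = \infty$ and $d_n \to 0$.

```python
# Numerical illustration of Lemma 2.6: a_{n+1} = (1 - α_n) a_n + α_n d_n
# with α_n = d_n = 1/(n+1) drives a_n to 0, starting from a_1 = 5.

a, n_iter = 5.0, 200000
for n in range(1, n_iter):
    alpha = 1.0 / (n + 1)
    d = 1.0 / (n + 1)
    a = (1 - alpha) * a + alpha * d

print(a < 1e-3)  # True
```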

Let $H$ be a real Hilbert space and $C$ a nonempty, closed and convex subset of $H$. For any $u \in H$, there exists a unique point $P_C u \in C$ such that

$\|u - P_C u\| \le \|u - y\|, \quad \forall y \in C.$

$P_C$ is called the metric projection of $H$ onto $C$. It is well known that $P_C$ is a nonexpansive mapping and that $P_C$ satisfies

$\langle x - y, P_C x - P_C y \rangle \ge \|P_C x - P_C y\|^2$

for all $x, y \in H$. Furthermore, $P_C x$ is characterized by the properties $P_C x \in C$,

$\langle x - P_C x, P_C x - y \rangle \ge 0$

for all $y \in C$, and

$\|x - y\|^2 \ge \|x - P_C x\|^2 + \|y - P_C x\|^2$

for all $x \in H$ and $y \in C$.
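
For a concrete set $C$ the projection has a closed form. The sketch below, with $C$ the closed unit ball in $\mathbb{R}^2$ (our own illustrative choice), computes $P_C x$ and numerically checks the variational characterization $\langle x - P_C x, P_C x - y \rangle \ge 0$ over random points $y \in C$.

```python
# Metric projection onto the closed unit ball C = {y : ||y|| <= 1} in R^2,
# with a numeric check of <x - P_C x, P_C x - y> >= 0 for y in C.
import math, random

def project_ball(x):
    nrm = math.hypot(x[0], x[1])
    if nrm <= 1.0:
        return x
    return (x[0] / nrm, x[1] / nrm)   # radial projection for points outside C

random.seed(0)
x = (3.0, 4.0)
p = project_ball(x)                   # = (0.6, 0.8)
for _ in range(1000):
    # sample a random point y uniformly in C
    t, r = random.uniform(0, 2 * math.pi), math.sqrt(random.random())
    y = (r * math.cos(t), r * math.sin(t))
    inner = (x[0] - p[0]) * (p[0] - y[0]) + (x[1] - p[1]) * (p[1] - y[1])
    assert inner >= -1e-12

print(p)  # (0.6, 0.8)
```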

3 Main result

Lemma 3.1

Let $H$ be a real Hilbert space and $C$ be a nonempty, closed and convex subset of $H$. Let $T_1 : C \to C$ be $\alpha$-strongly quasi-nonexpansive and $T_2 : C \to C$ be firmly nonexpansive, such that $F(T_1) \cap F(T_2) \neq \emptyset$. Then $F(T_1 T_2) = F(T_1) \cap F(T_2)$.

Proof

It is required to show that $F(T_1 T_2) \subseteq F(T_1) \cap F(T_2)$ and $F(T_1) \cap F(T_2) \subseteq F(T_1 T_2)$. It is easy to see that $F(T_1) \cap F(T_2) \subseteq F(T_1 T_2)$. We now establish that $F(T_1 T_2) \subseteq F(T_1) \cap F(T_2)$. Let $y \in F(T_1 T_2)$ and $x \in F(T_1) \cap F(T_2)$; we have

(19) $\|y - x\|^2 = \|T_1(T_2 y) - T_1 x\|^2 \le \|T_2 y - x\|^2 - \alpha\|T_2 y - T_1(T_2 y)\|^2 \le \|T_2 y - x\|^2.$

Also, using (13), the firm nonexpansiveness of $T_2$ and (19), we have

(20) $\|T_2 y - x\|^2 \le \langle T_2 y - x, y - x \rangle = \frac{1}{2}\|T_2 y - x\|^2 + \frac{1}{2}\|y - x\|^2 - \frac{1}{2}\|T_2 y - y\|^2 \le \frac{1}{2}\|T_2 y - x\|^2 + \frac{1}{2}\|T_2 y - x\|^2 - \frac{1}{2}\|T_2 y - y\|^2,$

which implies that $\|T_2 y - y\|^2 \le 0$, hence $\|T_2 y - y\| = 0$ and $T_2 y = y$. Using this fact, we have

(21) $y = (T_1 T_2)y = T_1(T_2 y) = T_1 y \ \Rightarrow\ y \in F(T_1) \cap F(T_2).$

Hence, $F(T_1 T_2) \subseteq F(T_1) \cap F(T_2)$, and so $F(T_1 T_2) = F(T_1) \cap F(T_2)$.□

Lemma 3.2

Let $H$ be a real Hilbert space and $C$ be a nonempty, closed and convex subset of $H$. Let $T_1 : C \to C$ be an $\alpha$-strongly quasi-nonexpansive mapping and $T_2 : C \to C$ be a firmly nonexpansive mapping, with $F(T_1) \cap F(T_2) \neq \emptyset$. Then $T_1 T_2$ is a quasi-nonexpansive mapping.

Proof

Let $x \in C$ and $y \in F(T_1 T_2)$. Using Lemma 3.1, we have $y \in F(T_1) \cap F(T_2)$, which implies that $y = T_1 y$ and $y = T_2 y$. Now, observe that

$$\begin{aligned} \|(T_1 T_2)x - y\|^2 &= \|T_1(T_2 x) - T_1 y\|^2 \le \|T_2 x - y\|^2 - \alpha\|T_2 x - T_1(T_2 x)\|^2 \\ &\le \|T_2 x - T_2 y\|^2 \le \|x - y\|^2 - \|(x - y) - (T_2 x - T_2 y)\|^2 \\ &= \|x - y\|^2 - \|x - T_2 x\|^2 \le \|x - y\|^2.□ \end{aligned}$$

Lemma 3.3

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. Let $A : H \to H$ be a $k$-ism operator and $B : H \to 2^H$ be a maximal monotone operator. Then, for all $\mu \ge \lambda > 0$, $x, y \in H$ and $p \in (A + B)^{-1}(0)$, we have:

  1. $\|x - J_\lambda^B(I - \lambda A)x\| \le 2\|x - J_\mu^B(I - \mu A)x\|$;

  2. $\langle x - y, J_\lambda^B(I - \lambda A)x - J_\lambda^B(I - \lambda A)y \rangle \ge \|J_\lambda^B(I - \lambda A)x - J_\lambda^B(I - \lambda A)y\|^2$;

  3. $\langle (I - J_\lambda^B(I - \lambda A))x - (I - J_\lambda^B(I - \lambda A))y, x - y \rangle \ge \|(I - J_\lambda^B(I - \lambda A))x - (I - J_\lambda^B(I - \lambda A))y\|^2$;

  4. $\|J_\lambda^B(I - \lambda A)x - p\|^2 \le \|x - p\|^2 - \|x - J_\lambda^B(I - \lambda A)x\|^2$.

Proof

For all $\mu \ge \lambda > 0$, $x, y \in H$ and $p \in (A + B)^{-1}(0)$, we establish the stated results as follows.

  1. Using (18) and the nonexpansiveness of $J_\lambda^B(I - \lambda A)$, we have

$$\begin{aligned} \|x - J_\lambda^B(I-\lambda A)x\| &\le \|x - J_\mu^B(I-\mu A)x\| + \|J_\mu^B(I-\mu A)x - J_\lambda^B(I-\lambda A)x\| \\ &= \|x - J_\mu^B(I-\mu A)x\| + \left\|J_\lambda^B(I-\lambda A)\left(\tfrac{\lambda}{\mu}x + \left(1-\tfrac{\lambda}{\mu}\right)J_\mu^B(I-\mu A)x\right) - J_\lambda^B(I-\lambda A)x\right\| \\ &\le \|x - J_\mu^B(I-\mu A)x\| + \left\|\tfrac{\lambda}{\mu}x + \left(1-\tfrac{\lambda}{\mu}\right)J_\mu^B(I-\mu A)x - x\right\| \\ &= \|x - J_\mu^B(I-\mu A)x\| + \left(1-\tfrac{\lambda}{\mu}\right)\|x - J_\mu^B(I-\mu A)x\| \\ &\le 2\|x - J_\mu^B(I-\mu A)x\|. \end{aligned}$$

  2. Using the monotonicity of $B$, we have

$$\frac{1}{\lambda}\left\langle J_\lambda^B(I-\lambda A)x - J_\lambda^B(I-\lambda A)y,\; x - J_\lambda^B(I-\lambda A)x - \big(y - J_\lambda^B(I-\lambda A)y\big)\right\rangle \ge 0,$$

from which we clearly have

$$\langle x - y, J_\lambda^B(I-\lambda A)x - J_\lambda^B(I-\lambda A)y \rangle \ge \|J_\lambda^B(I-\lambda A)x - J_\lambda^B(I-\lambda A)y\|^2.$$

  3. Using (2), we have

$$\begin{aligned} &\big\langle (I - J_\lambda^B(I-\lambda A))x - (I - J_\lambda^B(I-\lambda A))y,\; x - y \big\rangle \\ &\quad = \big\langle (I - J_\lambda^B(I-\lambda A))x - (I - J_\lambda^B(I-\lambda A))y,\; (I - J_\lambda^B(I-\lambda A))x - (I - J_\lambda^B(I-\lambda A))y + \big[J_\lambda^B(I-\lambda A)x - J_\lambda^B(I-\lambda A)y\big] \big\rangle \\ &\quad = \|(I - J_\lambda^B(I-\lambda A))x - (I - J_\lambda^B(I-\lambda A))y\|^2 + \big\langle (I - J_\lambda^B(I-\lambda A))x - (I - J_\lambda^B(I-\lambda A))y,\; J_\lambda^B(I-\lambda A)x - J_\lambda^B(I-\lambda A)y \big\rangle \\ &\quad = \|(I - J_\lambda^B(I-\lambda A))x - (I - J_\lambda^B(I-\lambda A))y\|^2 + \langle x - y, J_\lambda^B(I-\lambda A)x - J_\lambda^B(I-\lambda A)y\rangle - \|J_\lambda^B(I-\lambda A)x - J_\lambda^B(I-\lambda A)y\|^2 \\ &\quad \ge \|(I - J_\lambda^B(I-\lambda A))x - (I - J_\lambda^B(I-\lambda A))y\|^2, \end{aligned}$$

where the last inequality follows from (2).

  4. It is well known that for any $p \in (A + B)^{-1}(0)$, we have $p \in F(J_\lambda^B(I - \lambda A))$. Using (3), we have

$$\begin{aligned} \|J_\lambda^B(I-\lambda A)x - p\|^2 &= \|x - p - (x - J_\lambda^B(I-\lambda A)x)\|^2 \\ &= \|x - p\|^2 + \|x - J_\lambda^B(I-\lambda A)x\|^2 - 2\langle x - J_\lambda^B(I-\lambda A)x, x - p \rangle \\ &= \|x - p\|^2 + \|x - J_\lambda^B(I-\lambda A)x\|^2 - 2\big\langle (I - J_\lambda^B(I-\lambda A))x - (I - J_\lambda^B(I-\lambda A))p, x - p \big\rangle \\ &\le \|x - p\|^2 + \|x - J_\lambda^B(I-\lambda A)x\|^2 - 2\|x - J_\lambda^B(I-\lambda A)x\|^2 \\ &= \|x - p\|^2 - \|x - J_\lambda^B(I-\lambda A)x\|^2.□ \end{aligned}$$

Lemma 3.4

Let $H$ be a real Hilbert space and $C$ be a nonempty, closed and convex subset of $H$. Let $T_1 : C \to C$ be an $\alpha$-strongly quasi-nonexpansive mapping, $T_2 : C \to C$ be a firmly nonexpansive mapping, and let $J_\lambda^B(I - \lambda A)$, $\lambda > 0$, be as in Lemma 3.3, with $F(U) \cap (A + B)^{-1}(0) \neq \emptyset$. Then $F(U J_\lambda^B(I - \lambda A)) = F(U) \cap F(J_\lambda^B(I - \lambda A))$, where $U = T_1 T_2$.

Proof

It is easy to see that $F(U) \cap F(J_\lambda^B(I - \lambda A)) \subseteq F(U J_\lambda^B(I - \lambda A))$. We now establish that $F(U J_\lambda^B(I - \lambda A)) \subseteq F(U) \cap F(J_\lambda^B(I - \lambda A))$. Let $y \in F(U J_\lambda^B(I - \lambda A))$ and $x \in F(U) \cap F(J_\lambda^B(I - \lambda A))$; we have

(22) $\|y - x\|^2 = \|U(J_\lambda^B(I - \lambda A)y) - Ux\|^2 \le \|J_\lambda^B(I - \lambda A)y - x\|^2.$

Using Lemma 3.3(4) and (22), we have

$\|J_\lambda^B(I - \lambda A)y - x\|^2 \le \|y - x\|^2 - \|y - J_\lambda^B(I - \lambda A)y\|^2 \le \|J_\lambda^B(I - \lambda A)y - x\|^2 - \|y - J_\lambda^B(I - \lambda A)y\|^2.$

This implies that $\|y - J_\lambda^B(I - \lambda A)y\| = 0$, so $y = J_\lambda^B(I - \lambda A)y$. Hence, $y = U(J_\lambda^B(I - \lambda A)y) = Uy$. It follows that $y \in F(U) \cap F(J_\lambda^B(I - \lambda A))$.□

Lemma 3.5

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. Let $A : H \to H$ be a $k$-ism operator, $B : H \to 2^H$ be a maximal monotone operator, $T_1 : C \to C$ be an $\alpha$-strongly quasi-nonexpansive mapping and $T_2 : C \to C$ be a firmly nonexpansive mapping. Then, for all $\lambda > 0$, $x \in H$ and $p \in F(T_1 T_2) \cap (A + B)^{-1}(0)$, we have

$\|(U J_\lambda^B(I - \lambda A))x - p\|^2 \le \|x - p\|^2 - \|x - (U J_\lambda^B(I - \lambda A))x\|^2,$

where $U = T_1 T_2$.

Proof

For any $p \in (A + B)^{-1}(0) \cap F(U)$, we have $p \in F(J_\lambda^B(I - \lambda A))$, and using Lemma 3.4, we have $p \in F(U) \cap F(J_\lambda^B(I - \lambda A)) = F(U J_\lambda^B(I - \lambda A))$. Using an approach similar to that in (4) of Lemma 3.3, we obtain the desired result.□

In the sequel, we suppose that $H_i$, $i = 1, 2, 3$, are real Hilbert spaces. Let $A_i : H_i \to H_i$ be $k_i$-ism operators and $B_i : H_i \to 2^{H_i}$ be maximal monotone operators. For $i = 1, 2$, let $\mathcal{A}_i : H_i \to H_{i+1}$ be bounded linear operators with adjoints $\mathcal{A}_i^*$. Let $T_1 : C \to C$ be an $\alpha$-strongly quasi-nonexpansive mapping and $T_2 : C \to C$ be a firmly nonexpansive mapping. We assume $\Gamma \neq \emptyset$, where $\Gamma$ denotes the solution set of (12) with $N = 3$.

Algorithm 3
Initialization: Given $\gamma_1, \gamma_2 > 0$ and $\theta_n, \eta_n, \beta_n, \alpha_n \in (0, 1)$ for all $n \in \mathbb{N}$, such that $\eta_n \le \alpha_n$ and $\eta_n + \beta_n + \alpha_n = 1$. Let $x_0, x_1, u \in C$ be arbitrary.
Iterative step:
Step 1: Given the iterates $x_{n-1}$ and $x_n$ for all $n \in \mathbb{N}$, choose $\theta_n$ such that $0 \le \theta_n \le \bar{\theta}_n$, where

(23) $\bar{\theta}_n = \begin{cases} \min\left\{\theta, \dfrac{\varepsilon_n}{\|x_n - x_{n-1}\|}\right\}, & \text{if } x_n \neq x_{n-1}, \\ \theta, & \text{otherwise}, \end{cases}$

where $\theta > 0$ and $\{\varepsilon_n\}$ is a positive sequence such that $\varepsilon_n = o(\alpha_n)$.
Step 2: Set

$w_n = x_n + \theta_n(x_n - x_{n-1}).$

Then compute

$z_{1,n} = w_n - \gamma_1 \mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n}}^{B_3}(I - \lambda_{3,n}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_n,$
$z_{2,n} = z_{1,n} - \gamma_2 \mathcal{A}_1^*\big(I - J_{\lambda_{2,n}}^{B_2}(I - \lambda_{2,n}A_2)\big)\mathcal{A}_1 z_{1,n},$
$z_{3,n} = U J_{\lambda_{1,n}}^{B_1}(I - \lambda_{1,n}A_1) z_{2,n},$

where

$\gamma_1 \in \left[\varepsilon, \dfrac{\big\|\big(I - J_{\lambda_{3,n}}^{B_3}(I - \lambda_{3,n}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_n\big\|^2}{\big\|\mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n}}^{B_3}(I - \lambda_{3,n}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_n\big\|^2} - \varepsilon\right], \quad \gamma_2 \in \left[\varepsilon, \dfrac{\big\|\big(I - J_{\lambda_{2,n}}^{B_2}(I - \lambda_{2,n}A_2)\big)\mathcal{A}_1 z_{1,n}\big\|^2}{\big\|\mathcal{A}_1^*\big(I - J_{\lambda_{2,n}}^{B_2}(I - \lambda_{2,n}A_2)\big)\mathcal{A}_1 z_{1,n}\big\|^2} - \varepsilon\right],$

and $U = T_1 T_2$.
Step 3: Compute

$x_{n+1} = \alpha_n u + \beta_n x_n + \eta_n z_{3,n}.$

Stopping criterion: If $w_n = z_{1,n} = z_{2,n} = z_{3,n} = x_n$, then stop; otherwise, set $n \leftarrow n + 1$ and go back to Step 1.
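
A minimal numerical sketch of Algorithm 3 follows, under our own illustrative assumptions: $H_i = \mathbb{R}$, $A_i x = 0.5x$ (which is $2$-ism), $B_i x = x$ (maximal monotone), $\mathcal{A}_1 = \mathcal{A}_2 = I$, $U(y) = y/2$, constant $\lambda_{i,n} = 0.5$, $\theta = 0.5$, $\varepsilon_n = \alpha_n/n^2$ and Halpern weights $\alpha_n = \frac{1}{n+1}$, $\beta_n = \eta_n = \frac{1-\alpha_n}{2}$. Here $\Gamma = \{0\}$, so the iterates should approach $p = P_\Gamma u = 0$.

```python
# Illustrative scalar instance of Algorithm 3 (all parameter choices are
# our own assumptions). The solution set is Γ = {0}.

lam = 0.5
def J(x):
    """Resolvent step J_λ^B (I - λA) x with A x = 0.5 x, B x = x."""
    return (1 - lam * 0.5) * x / (1 + lam)

u, x_prev, x = 1.0, 1.5, 1.5
theta_max = 0.5
for n in range(1, 5001):
    alpha = 1.0 / (n + 1)
    beta = eta = (1 - alpha) / 2          # η_n ≤ α_n fails here only formally;
    eps_n = alpha / n**2                  # illustrative weights, ε_n = o(α_n)
    theta = theta_max if x == x_prev else min(theta_max, eps_n / abs(x - x_prev))
    w = x + theta * (x - x_prev)          # Step 2: inertial extrapolation

    r3 = w - J(w)                         # (I - J_{λ3}(I - λ3 A3)) A2 A1 w, A_i = I
    z1 = w - 0.5 * r3                     # γ1 = 0.5 (interval ratio is 1 here)
    r2 = z1 - J(z1)
    z2 = z1 - 0.5 * r2                    # γ2 = 0.5
    z3 = 0.5 * J(z2)                      # U ∘ J_{λ1}(I - λ1 A1), U(y) = y/2

    x_prev, x = x, alpha * u + beta * x + eta * z3   # Step 3: Halpern step

print(abs(x) < 0.05)  # True: iterates approach P_Γ(u) = 0
```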

Remark 3.6

The following are some of the highlights of our method:

  1. The choice of the stepsizes $\gamma_1$ and $\gamma_2$ in Step 2 of Algorithm 3 does not require prior knowledge of the operator norms $\|\mathcal{A}_1\|$ and $\|\mathcal{A}_2\|$. It is known that algorithms whose parameters depend on operator norms are not easy to execute in practice.

  2. It is easy to see from (23) that $\lim_{n\to\infty} \frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$.

     Indeed, since $\{\varepsilon_n\}$ is a positive sequence such that $\varepsilon_n = o(\alpha_n)$, we have $\lim_{n\to\infty} \frac{\varepsilon_n}{\alpha_n} = 0$. Clearly, $\theta_n\|x_n - x_{n-1}\| \le \varepsilon_n$ for all $n \in \mathbb{N}$, and together with $\lim_{n\to\infty}\frac{\varepsilon_n}{\alpha_n} = 0$ it follows that

     $\lim_{n\to\infty} \frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| \le \lim_{n\to\infty} \frac{\varepsilon_n}{\alpha_n} = 0.$

Lemma 3.7

Assume that $\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$. Then the sequence $\{x_n\}$ generated by Algorithm 3 is bounded and, consequently, $\{z_{1,n}\}$, $\{z_{2,n}\}$, $\{z_{3,n}\}$ and $\{w_n\}$ are bounded.

Proof

Let $p \in \Gamma$. Since $\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$, there exists $N_1 > 0$ such that $\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| \le N_1$ for all $n$. Then, from Algorithm 3, we have

(24) $$\begin{aligned} \|w_n - p\|^2 &= \|x_n + \theta_n(x_n - x_{n-1}) - p\|^2 \\ &= \|x_n - p\|^2 + 2\theta_n\langle x_n - p, x_n - x_{n-1}\rangle + \theta_n^2\|x_n - x_{n-1}\|^2 \\ &\le \|x_n - p\|^2 + 2\theta_n\|x_n - p\|\,\|x_n - x_{n-1}\| + \theta_n^2\|x_n - x_{n-1}\|^2 \\ &= \|x_n - p\|^2 + \theta_n\|x_n - x_{n-1}\|\big[2\|x_n - p\| + \theta_n\|x_n - x_{n-1}\|\big] \\ &= \|x_n - p\|^2 + \theta_n\|x_n - x_{n-1}\|\Big[2\|x_n - p\| + \alpha_n\tfrac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\|\Big] \\ &\le \|x_n - p\|^2 + \theta_n\|x_n - x_{n-1}\|\big[2\|x_n - p\| + \alpha_n N_1\big] \\ &\le \|x_n - p\|^2 + \theta_n\|x_n - x_{n-1}\| N_2, \end{aligned}$$

where $N_2 \ge 2\|x_n - p\| + \alpha_n N_1$.

Also, using Algorithm 3, we have

(25) $$\begin{aligned} \|z_{1,n} - p\|^2 &= \big\|w_n - \gamma_1\mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n}}^{B_3}(I - \lambda_{3,n}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_n - p\big\|^2 \\ &= \|w_n - p\|^2 - 2\gamma_1\big\langle w_n - p,\; \mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n}}^{B_3}(I - \lambda_{3,n}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_n\big\rangle + \gamma_1^2\big\|\mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n}}^{B_3}(I - \lambda_{3,n}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_n\big\|^2 \\ &= \|w_n - p\|^2 - 2\gamma_1\big\langle \mathcal{A}_2\mathcal{A}_1(w_n - p),\; \big(I - J_{\lambda_{3,n}}^{B_3}(I - \lambda_{3,n}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_n\big\rangle + \gamma_1^2\big\|\mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n}}^{B_3}(I - \lambda_{3,n}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_n\big\|^2 \\ &\le \|w_n - p\|^2 - 2\gamma_1\big\|\big(I - J_{\lambda_{3,n}}^{B_3}(I - \lambda_{3,n}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_n\big\|^2 + \gamma_1^2\big\|\mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n}}^{B_3}(I - \lambda_{3,n}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_n\big\|^2. \end{aligned}$$

Using the choice of $\gamma_1$, (25) becomes

(26) $\|z_{1,n} - p\|^2 \le \|w_n - p\|^2 - (\gamma_1^2 + 2\gamma_1\varepsilon)\big\|\mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n}}^{B_3}(I - \lambda_{3,n}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_n\big\|^2 \le \|w_n - p\|^2.$

Similarly, using Algorithm 3, we have

(27) $$\begin{aligned} \|z_{2,n} - p\|^2 &= \big\|z_{1,n} - \gamma_2\mathcal{A}_1^*\big(I - J_{\lambda_{2,n}}^{B_2}(I - \lambda_{2,n}A_2)\big)\mathcal{A}_1 z_{1,n} - p\big\|^2 \\ &= \|z_{1,n} - p\|^2 - 2\gamma_2\big\langle z_{1,n} - p,\; \mathcal{A}_1^*\big(I - J_{\lambda_{2,n}}^{B_2}(I - \lambda_{2,n}A_2)\big)\mathcal{A}_1 z_{1,n}\big\rangle + \gamma_2^2\big\|\mathcal{A}_1^*\big(I - J_{\lambda_{2,n}}^{B_2}(I - \lambda_{2,n}A_2)\big)\mathcal{A}_1 z_{1,n}\big\|^2 \\ &\le \|z_{1,n} - p\|^2 - 2\gamma_2\big\|\big(I - J_{\lambda_{2,n}}^{B_2}(I - \lambda_{2,n}A_2)\big)\mathcal{A}_1 z_{1,n}\big\|^2 + \gamma_2^2\big\|\mathcal{A}_1^*\big(I - J_{\lambda_{2,n}}^{B_2}(I - \lambda_{2,n}A_2)\big)\mathcal{A}_1 z_{1,n}\big\|^2. \end{aligned}$$

Using the choice of $\gamma_2$, we have from (27) that

(28) $\|z_{2,n} - p\|^2 \le \|z_{1,n} - p\|^2 - (\gamma_2^2 + 2\gamma_2\varepsilon)\big\|\mathcal{A}_1^*\big(I - J_{\lambda_{2,n}}^{B_2}(I - \lambda_{2,n}A_2)\big)\mathcal{A}_1 z_{1,n}\big\|^2 \le \|z_{1,n} - p\|^2 \le \|w_n - p\|^2.$

Furthermore, using Algorithm 3, Lemma 3.2 and (28), we have

(29) $\|z_{3,n} - p\|^2 = \|U J_{\lambda_{1,n}}^{B_1}(I - \lambda_{1,n}A_1)z_{2,n} - p\|^2 \le \|J_{\lambda_{1,n}}^{B_1}(I - \lambda_{1,n}A_1)z_{2,n} - p\|^2 \le \|z_{2,n} - p\|^2 \le \|z_{1,n} - p\|^2 \le \|w_n - p\|^2.$

Finally, using Algorithm 3 and (24), we have

(30) $$\begin{aligned} \|x_{n+1} - p\|^2 &= \|\alpha_n u + \beta_n x_n + \eta_n z_{3,n} - p\|^2 \le \alpha_n\|u - p\|^2 + \beta_n\|x_n - p\|^2 + \eta_n\|z_{3,n} - p\|^2 \\ &\le \alpha_n\|u - p\|^2 + (1 - \alpha_n - \eta_n)\|x_n - p\|^2 + \eta_n\big(\|x_n - p\|^2 + \theta_n\|x_n - x_{n-1}\|N_2\big) \\ &\le \alpha_n\|u - p\|^2 + (1 - \alpha_n)\|x_n - p\|^2 + \theta_n\|x_n - x_{n-1}\|N_2 \\ &\le \max\{\|x_n - p\|^2, \|u - p\|^2\} + \theta_n\|x_n - x_{n-1}\|N_2 \\ &\le \max\big\{\max\{\|x_{n-1} - p\|^2, \|u - p\|^2\} + \theta_{n-1}\|x_{n-1} - x_{n-2}\|N_2,\; \|u - p\|^2\big\} + \theta_n\|x_n - x_{n-1}\|N_2 \\ &\le \max\{\|x_{n-1} - p\|^2, \|u - p\|^2\} + \alpha_{n-1}\tfrac{\theta_{n-1}}{\alpha_{n-1}}\|x_{n-1} - x_{n-2}\|N_2 + \alpha_n\tfrac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\|N_2. \end{aligned}$$

Using the fact that $\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$, there exists $N_3 > 0$ such that $\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| \le N_3$ for all $n$, so that (30) becomes

(31) $\|x_{n+1} - p\|^2 \le \max\{\|x_{n-1} - p\|^2, \|u - p\|^2\} + N_4,$

where $N_4 = N_3 N_2 + N_2 N_1$. Thus, $\{x_n\}$ generated by Algorithm 3 is bounded and, consequently, $\{z_{1,n}\}$, $\{z_{2,n}\}$, $\{z_{3,n}\}$ and $\{w_n\}$ are bounded.□

Theorem 3.8

Let $\{x_n\}$ be the sequence generated by Algorithm 3. If $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$ and $\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$, then $\{x_n\}$ converges strongly to $p = P_\Gamma u$.

Proof

Let $p \in \Gamma$. We have

(32) $$\begin{aligned} \|x_{n+1} - p\|^2 &= \|\alpha_n u + \beta_n x_n + \eta_n z_{3,n} - p\|^2 = \|\alpha_n(u - p) + \beta_n(x_n - p) + \eta_n(z_{3,n} - p)\|^2 \\ &\le \|\beta_n(x_n - p) + \eta_n(z_{3,n} - p)\|^2 + 2\alpha_n\langle u - p, x_{n+1} - p\rangle \\ &\le \beta_n^2\|x_n - p\|^2 + \eta_n^2\|z_{3,n} - p\|^2 + 2\beta_n\eta_n\|x_n - p\|\,\|z_{3,n} - p\| + 2\alpha_n\langle u - p, x_{n+1} - p\rangle \\ &\le \beta_n^2\|x_n - p\|^2 + \eta_n^2\|z_{3,n} - p\|^2 + \beta_n\eta_n\big[\|x_n - p\|^2 + \|z_{3,n} - p\|^2\big] + 2\alpha_n\langle u - p, x_{n+1} - p\rangle \\ &= \beta_n(\beta_n + \eta_n)\|x_n - p\|^2 + \eta_n(\eta_n + \beta_n)\|z_{3,n} - p\|^2 + 2\alpha_n\langle u - p, x_{n+1} - p\rangle \\ &= \beta_n(1 - \alpha_n)\|x_n - p\|^2 + \eta_n(1 - \alpha_n)\|z_{3,n} - p\|^2 + 2\alpha_n\langle u - p, x_{n+1} - p\rangle \\ &\le \beta_n(1 - \alpha_n)\|x_n - p\|^2 + \eta_n(1 - \alpha_n)\|z_{3,n} - x_n\|^2 + 2\eta_n(1 - \alpha_n)\|z_{3,n} - x_n\|\,\|x_n - p\| + \eta_n(1 - \alpha_n)\|x_n - p\|^2 + 2\alpha_n\langle u - p, x_{n+1} - p\rangle \\ &\le (\beta_n + \eta_n)\|x_n - p\|^2 + \alpha_n(1 - \alpha_n)\|z_{3,n} - x_n\|^2 + 2\alpha_n(1 - \alpha_n)\|z_{3,n} - x_n\|\,\|x_n - p\| + 2\alpha_n\langle u - p, x_{n+1} - p\rangle \quad (\text{since } \eta_n \le \alpha_n) \\ &\le (1 - \alpha_n)\|x_n - p\|^2 + \alpha_n\big[\|z_{3,n} - x_n\|^2 + 2\|z_{3,n} - x_n\|\,\|x_n - p\| + 2\langle u - p, x_{n+1} - p\rangle\big] \\ &= (1 - \alpha_n)\|x_n - p\|^2 + \alpha_n\delta_n, \end{aligned}$$

where $\delta_n \coloneqq \|z_{3,n} - x_n\|^2 + 2\|z_{3,n} - x_n\|\,\|x_n - p\| + 2\langle u - p, x_{n+1} - p\rangle$. According to Lemma 2.6, to conclude the proof it suffices to establish that $\limsup_{k\to\infty}\delta_{n_k} \le 0$ for every subsequence $\{\|x_{n_k} - p\|\}$ of $\{\|x_n - p\|\}$ satisfying the condition:

(33) $\liminf_{k\to\infty}\{\|x_{n_k+1} - p\| - \|x_{n_k} - p\|\} \ge 0.$

To establish that $\limsup_{k\to\infty}\delta_{n_k} \le 0$, suppose $\{\|x_{n_k} - p\|\}$ is a subsequence of $\{\|x_n - p\|\}$ such that (33) holds. Then

(34) $\liminf_{k\to\infty}\{\|x_{n_k+1} - p\|^2 - \|x_{n_k} - p\|^2\} = \liminf_{k\to\infty}\big\{(\|x_{n_k+1} - p\| - \|x_{n_k} - p\|)(\|x_{n_k+1} - p\| + \|x_{n_k} - p\|)\big\} \ge 0.$

Now, using Algorithm 3, we have

(35) $$\begin{aligned} \|x_{n+1} - p\|^2 &= \|\alpha_n u + \beta_n x_n + \eta_n z_{3,n} - p\|^2 = \|\alpha_n(u - p) + \beta_n(x_n - p) + \eta_n(z_{3,n} - p)\|^2 \\ &\le \|\beta_n(x_n - p) + \eta_n(z_{3,n} - p)\|^2 + \alpha_n^2\|u - p\|^2 \\ &\le \beta_n\|x_n - p\|^2 + \eta_n\|z_{3,n} - p\|^2 - \beta_n\eta_n\|z_{3,n} - x_n\|^2 + \alpha_n^2\|u - p\|^2 \\ &\le \eta_n\big[\|x_n - p\|^2 + \theta_n\|x_n - x_{n-1}\|N_2\big] + \beta_n\|x_n - p\|^2 - \beta_n\eta_n\|z_{3,n} - x_n\|^2 + \alpha_n^2\|u - p\|^2 \\ &= (\eta_n + \beta_n)\|x_n - p\|^2 + \eta_n\theta_n\|x_n - x_{n-1}\|N_2 + \alpha_n^2\|u - p\|^2 - \beta_n\eta_n\|z_{3,n} - x_n\|^2 \\ &\le \|x_n - p\|^2 + \alpha_n\theta_n\|x_n - x_{n-1}\|N_2 + \alpha_n^2\|u - p\|^2 - \beta_n\eta_n\|z_{3,n} - x_n\|^2. \end{aligned}$$

this implies from (34)

(36) $$\limsup_{k\to\infty}\big[\beta_{n_k}\eta_{n_k}\|z_{3,n_k} - x_{n_k}\|^2\big] \le \limsup_{k\to\infty}\big[\|x_{n_k} - p\|^2 - \|x_{n_k+1} - p\|^2 + \alpha_{n_k}\theta_{n_k}\|x_{n_k} - x_{n_k-1}\|N_2 + \alpha_{n_k}^2\|u - p\|^2\big] \le -\liminf_{k\to\infty}\big[\|x_{n_k+1} - p\|^2 - \|x_{n_k} - p\|^2\big] \le 0,$$

which gives

(37) $\lim_{k\to\infty}\|z_{3,n_k} - x_{n_k}\| = 0.$

Also, using Algorithm 3 and Lemma 3.5, we have

(38) $$\begin{aligned} \|x_{n+1} - p\|^2 &= \|\alpha_n(u - p) + \beta_n(x_n - p) + \eta_n(z_{3,n} - p)\|^2 \le \alpha_n\|u - p\|^2 + \beta_n\|x_n - p\|^2 + \eta_n\|z_{3,n} - p\|^2 \\ &= \alpha_n\|u - p\|^2 + \beta_n\|x_n - p\|^2 + \eta_n\|U J_{\lambda_{1,n}}^{B_1}(I - \lambda_{1,n}A_1)z_{2,n} - p\|^2 \\ &\le \alpha_n\|u - p\|^2 + \beta_n\|x_n - p\|^2 + \eta_n\big[\|z_{2,n} - p\|^2 - \|z_{2,n} - U J_{\lambda_{1,n}}^{B_1}(I - \lambda_{1,n}A_1)z_{2,n}\|^2\big] \\ &\le \alpha_n\|u - p\|^2 + \beta_n\|x_n - p\|^2 + \eta_n\|w_n - p\|^2 - \eta_n\|z_{2,n} - U J_{\lambda_{1,n}}^{B_1}(I - \lambda_{1,n}A_1)z_{2,n}\|^2 \\ &= \alpha_n\|u - p\|^2 + (\beta_n + \eta_n)\|x_n - p\|^2 + \eta_n\theta_n\|x_n - x_{n-1}\|N_2 - \eta_n\|z_{2,n} - U J_{\lambda_{1,n}}^{B_1}(I - \lambda_{1,n}A_1)z_{2,n}\|^2 \\ &\le \alpha_n\|u - p\|^2 + \|x_n - p\|^2 + \alpha_n\theta_n\|x_n - x_{n-1}\|N_2 - \eta_n\|z_{2,n} - U J_{\lambda_{1,n}}^{B_1}(I - \lambda_{1,n}A_1)z_{2,n}\|^2. \end{aligned}$$

This implies from (34) that

(39) $$\limsup_{k\to\infty}\big[\eta_{n_k}\|z_{2,n_k} - U J_{\lambda_{1,n_k}}^{B_1}(I - \lambda_{1,n_k}A_1)z_{2,n_k}\|^2\big] \le \limsup_{k\to\infty}\big[\|x_{n_k} - p\|^2 - \|x_{n_k+1} - p\|^2 + \alpha_{n_k}\theta_{n_k}\|x_{n_k} - x_{n_k-1}\|N_2 + \alpha_{n_k}\|u - p\|^2\big] \le -\liminf_{k\to\infty}\big[\|x_{n_k+1} - p\|^2 - \|x_{n_k} - p\|^2\big] \le 0,$$

which gives

(40) $\lim_{k\to\infty}\|z_{2,n_k} - U J_{\lambda_{1,n_k}}^{B_1}(I - \lambda_{1,n_k}A_1)z_{2,n_k}\| = 0.$

By Lemma 3.3(1), we have

(41) $\lim_{k\to\infty}\|z_{2,n_k} - U J_{\lambda_1}^{B_1}(I - \lambda_1 A_1)z_{2,n_k}\| = 0.$

More so, using Algorithm 3 and (28), we have

(42) $$\begin{aligned} \|x_{n+1} - p\|^2 &\le \alpha_n\|u - p\|^2 + \beta_n\|x_n - p\|^2 + \eta_n\|z_{3,n} - p\|^2 \le \alpha_n\|u - p\|^2 + \beta_n\|x_n - p\|^2 + \eta_n\|z_{2,n} - p\|^2 \\ &\le \alpha_n\|u - p\|^2 + \beta_n\|x_n - p\|^2 + \eta_n\|z_{1,n} - p\|^2 - \eta_n(\gamma_2^2 + 2\gamma_2\varepsilon)\big\|\mathcal{A}_1^*\big(I - J_{\lambda_{2,n}}^{B_2}(I - \lambda_{2,n}A_2)\big)\mathcal{A}_1 z_{1,n}\big\|^2 \\ &\le \alpha_n\|u - p\|^2 + \beta_n\|x_n - p\|^2 + \eta_n\|w_n - p\|^2 - \eta_n(\gamma_2^2 + 2\gamma_2\varepsilon)\big\|\mathcal{A}_1^*\big(I - J_{\lambda_{2,n}}^{B_2}(I - \lambda_{2,n}A_2)\big)\mathcal{A}_1 z_{1,n}\big\|^2 \\ &\le \alpha_n\|u - p\|^2 + \|x_n - p\|^2 + \alpha_n\theta_n\|x_n - x_{n-1}\|N_2 - \eta_n(\gamma_2^2 + 2\gamma_2\varepsilon)\big\|\mathcal{A}_1^*\big(I - J_{\lambda_{2,n}}^{B_2}(I - \lambda_{2,n}A_2)\big)\mathcal{A}_1 z_{1,n}\big\|^2. \end{aligned}$$

this implies from (34)

(43) $$\limsup_{k\to\infty}\Big[\eta_{n_k}(\gamma_2^2 + 2\gamma_2\varepsilon)\big\|\mathcal{A}_1^*\big(I - J_{\lambda_{2,n_k}}^{B_2}(I - \lambda_{2,n_k}A_2)\big)\mathcal{A}_1 z_{1,n_k}\big\|^2\Big] \le \limsup_{k\to\infty}\big[\|x_{n_k} - p\|^2 - \|x_{n_k+1} - p\|^2 + \alpha_{n_k}\theta_{n_k}\|x_{n_k} - x_{n_k-1}\|N_2 + \alpha_{n_k}\|u - p\|^2\big] \le -\liminf_{k\to\infty}\big[\|x_{n_k+1} - p\|^2 - \|x_{n_k} - p\|^2\big] \le 0,$$

which gives

(44) $\lim_{k\to\infty}\big\|\mathcal{A}_1^*\big(I - J_{\lambda_{2,n_k}}^{B_2}(I - \lambda_{2,n_k}A_2)\big)\mathcal{A}_1 z_{1,n_k}\big\| = 0.$

More so, using Algorithm 3 and (26), we have

(45) $$\begin{aligned} \|x_{n+1} - p\|^2 &\le \alpha_n\|u - p\|^2 + \beta_n\|x_n - p\|^2 + \eta_n\|z_{3,n} - p\|^2 \le \alpha_n\|u - p\|^2 + \beta_n\|x_n - p\|^2 + \eta_n\|z_{1,n} - p\|^2 \\ &\le \alpha_n\|u - p\|^2 + \beta_n\|x_n - p\|^2 + \eta_n\|w_n - p\|^2 - \eta_n(\gamma_1^2 + 2\gamma_1\varepsilon)\big\|\mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n}}^{B_3}(I - \lambda_{3,n}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_n\big\|^2 \\ &\le \alpha_n\|u - p\|^2 + \|x_n - p\|^2 + \alpha_n\theta_n\|x_n - x_{n-1}\|N_2 - \eta_n(\gamma_1^2 + 2\gamma_1\varepsilon)\big\|\mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n}}^{B_3}(I - \lambda_{3,n}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_n\big\|^2. \end{aligned}$$

this implies from (34)

(46) $$\limsup_{k\to\infty}\Big[\eta_{n_k}(\gamma_1^2 + 2\gamma_1\varepsilon)\big\|\mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n_k}}^{B_3}(I - \lambda_{3,n_k}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_{n_k}\big\|^2\Big] \le \limsup_{k\to\infty}\big[\|x_{n_k} - p\|^2 - \|x_{n_k+1} - p\|^2 + \alpha_{n_k}\theta_{n_k}\|x_{n_k} - x_{n_k-1}\|N_2 + \alpha_{n_k}\|u - p\|^2\big] \le -\liminf_{k\to\infty}\big[\|x_{n_k+1} - p\|^2 - \|x_{n_k} - p\|^2\big] \le 0,$$

which gives

(47) $\lim_{k\to\infty}\big\|\mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n_k}}^{B_3}(I - \lambda_{3,n_k}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_{n_k}\big\| = 0.$

Using (47), we have that

(48) $\|z_{1,n_k} - w_{n_k}\| = \gamma_1\big\|\mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n_k}}^{B_3}(I - \lambda_{3,n_k}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_{n_k}\big\| \to 0, \quad \text{as } k \to \infty.$

Using this, (25) and (47), we have

$$\begin{aligned} 2\gamma_1\big\|\big(I - J_{\lambda_{3,n_k}}^{B_3}(I - \lambda_{3,n_k}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_{n_k}\big\|^2 &\le \|w_{n_k} - p\|^2 - \|z_{1,n_k} - p\|^2 + \gamma_1^2\big\|\mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n_k}}^{B_3}(I - \lambda_{3,n_k}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_{n_k}\big\|^2 \\ &\le \|w_{n_k} - z_{1,n_k}\|\big(\|w_{n_k} - p\| + \|z_{1,n_k} - p\|\big) + \gamma_1^2\big\|\mathcal{A}_1^*\mathcal{A}_2^*\big(I - J_{\lambda_{3,n_k}}^{B_3}(I - \lambda_{3,n_k}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_{n_k}\big\|^2 \\ &\to 0, \quad \text{as } k \to \infty. \end{aligned}$$

Hence,

$\lim_{k\to\infty}\big\|\big(I - J_{\lambda_{3,n_k}}^{B_3}(I - \lambda_{3,n_k}A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_{n_k}\big\| = 0,$

and by Lemma 3.3(1), we obtain

(49) $\lim_{k\to\infty}\big\|\big(I - J_{\lambda_3}^{B_3}(I - \lambda_3 A_3)\big)\mathcal{A}_2\mathcal{A}_1 w_{n_k}\big\| = 0.$

Also, from (44), we obtain that

(50) $\|z_{2,n_k} - z_{1,n_k}\| = \gamma_2\big\|\mathcal{A}_1^*\big(I - J_{\lambda_{2,n_k}}^{B_2}(I - \lambda_{2,n_k}A_2)\big)\mathcal{A}_1 z_{1,n_k}\big\| \to 0, \quad \text{as } k \to \infty.$

Now, from (27), (44) and (50), we have

$$\begin{aligned} 2\gamma_2\big\|\big(I - J_{\lambda_{2,n_k}}^{B_2}(I - \lambda_{2,n_k}A_2)\big)\mathcal{A}_1 z_{1,n_k}\big\|^2 &\le \|z_{1,n_k} - p\|^2 - \|z_{2,n_k} - p\|^2 + \gamma_2^2\big\|\mathcal{A}_1^*\big(I - J_{\lambda_{2,n_k}}^{B_2}(I - \lambda_{2,n_k}A_2)\big)\mathcal{A}_1 z_{1,n_k}\big\|^2 \\ &\le \|z_{1,n_k} - z_{2,n_k}\|\big(\|z_{1,n_k} - p\| + \|z_{2,n_k} - p\|\big) + \gamma_2^2\big\|\mathcal{A}_1^*\big(I - J_{\lambda_{2,n_k}}^{B_2}(I - \lambda_{2,n_k}A_2)\big)\mathcal{A}_1 z_{1,n_k}\big\|^2 \\ &\to 0, \quad \text{as } k \to \infty. \end{aligned}$$

Thus,

(51) $\lim_{k\to\infty}\big\|\big(I - J_{\lambda_{2,n_k}}^{B_2}(I - \lambda_{2,n_k}A_2)\big)\mathcal{A}_1 z_{1,n_k}\big\| = 0,$

and again by Lemma 3.3(1), we obtain

(52) $\lim_{k\to\infty}\big\|\big(I - J_{\lambda_2}^{B_2}(I - \lambda_2 A_2)\big)\mathcal{A}_1 z_{1,n_k}\big\| = 0.$

We also have

(53) $\|z_{2,n_k} - w_{n_k}\| \le \|z_{2,n_k} - z_{1,n_k}\| + \|z_{1,n_k} - w_{n_k}\| \to 0, \quad \text{as } k \to \infty.$

It is easy to see that

(54) $\lim_{k\to\infty}\|w_{n_k} - x_{n_k}\| = \lim_{k\to\infty}\theta_{n_k}\|x_{n_k} - x_{n_k-1}\| = \lim_{k\to\infty}\alpha_{n_k}\frac{\theta_{n_k}}{\alpha_{n_k}}\|x_{n_k} - x_{n_k-1}\| = 0.$

In addition, we have by the triangle inequality that, as $k \to \infty$,

(55) $\|z_{2,n_k} - x_{n_k}\| \le \|z_{2,n_k} - w_{n_k}\| + \|w_{n_k} - x_{n_k}\| \to 0, \quad \|z_{3,n_k} - z_{2,n_k}\| \le \|z_{3,n_k} - x_{n_k}\| + \|x_{n_k} - z_{2,n_k}\| \to 0, \quad \|z_{1,n_k} - x_{n_k}\| \le \|z_{2,n_k} - x_{n_k}\| + \|z_{1,n_k} - z_{2,n_k}\| \to 0.$

From the hypothesis and (37), we have

(56) $\|x_{n_k+1} - z_{3,n_k}\| = \|\alpha_{n_k}u + \beta_{n_k}x_{n_k} + \eta_{n_k}z_{3,n_k} - z_{3,n_k}\| \le \alpha_{n_k}\|u - z_{3,n_k}\| + \beta_{n_k}\|x_{n_k} - z_{3,n_k}\| \to 0, \quad \text{as } k \to \infty.$

Using (56) and (37), it is easy to see that

(57) $\|x_{n_k+1} - x_{n_k}\| \le \|x_{n_k+1} - z_{3,n_k}\| + \|z_{3,n_k} - x_{n_k}\| \to 0, \quad \text{as } k \to \infty.$

Since $\{x_{n_k}\}$ is bounded, there exists a subsequence $\{x_{n_{k_j}}\}$ of $\{x_{n_k}\}$ that converges weakly to some $\bar{x}$ and such that

(58) $\limsup_{k\to\infty}\langle u - p, x_{n_k} - p\rangle = \lim_{j\to\infty}\langle u - p, x_{n_{k_j}} - p\rangle = \langle u - p, \bar{x} - p\rangle.$

Thus, by $x_{n_k} \rightharpoonup \bar{x}$, (37) and (54), we obtain that $\{z_{3,n_{k_j}}\}$ and $\{w_{n_{k_j}}\}$ converge weakly to $\bar{x}$. Also, from (55), we obtain that $\{z_{2,n_{k_j}}\}$ and $\{z_{1,n_{k_j}}\}$ converge weakly to $\bar{x}$. It follows from this and the fact that $\mathcal{A}_1$ and $\mathcal{A}_2$ are bounded linear operators that $\mathcal{A}_1 z_{1,n_{k_j}} \rightharpoonup \mathcal{A}_1\bar{x}$ and $\mathcal{A}_2\mathcal{A}_1 w_{n_{k_j}} \rightharpoonup \mathcal{A}_2\mathcal{A}_1\bar{x}$. Thus, we have by (41) and Lemma 3.4 that $\bar{x} \in F(T_1 T_2) \cap (A_1 + B_1)^{-1}(0)$. Also, using Lemma 2.4(1) together with (52) and (49), respectively, we obtain $\mathcal{A}_1\bar{x} \in (A_2 + B_2)^{-1}(0)$ and $\mathcal{A}_2\mathcal{A}_1\bar{x} \in (A_3 + B_3)^{-1}(0)$. Hence, $\bar{x} \in \Gamma$.

Hence, since $p = P_\Gamma u$ and $\bar{x} \in \Gamma$, we obtain from (58) and the characterization of the metric projection that

(59) $\limsup_{k\to\infty}\langle u - p, x_{n_k} - p\rangle = \langle u - p, \bar{x} - p\rangle \le 0,$

which implies by (57) that

(60) $\limsup_{k\to\infty}\langle u - p, x_{n_k+1} - p\rangle \le 0.$

Using (37) and (60), we have that $\limsup_{k\to\infty}\delta_{n_k} = \limsup_{k\to\infty}\big[\|z_{3,n_k} - x_{n_k}\|^2 + 2\|z_{3,n_k} - x_{n_k}\|\,\|x_{n_k} - p\| + 2\langle u - p, x_{n_k+1} - p\rangle\big] \le 0$. Thus, the hypothesis of Lemma 2.6 is satisfied. Hence, $\lim_{n\to\infty}\|x_n - p\| = 0$; that is, $\{x_n\}$ converges strongly to $p = P_\Gamma u$.□

4 Numerical example

Example 4.1

Let $H_1 = H_2 = H_3 = C = \mathbb{R}^2$ with the Euclidean norm and inner product defined by $\langle x, y \rangle = x_1 y_1 + x_2 y_2$ for all $x = (x_1, x_2), y = (y_1, y_2) \in \mathbb{R}^2$. Define the bounded linear operators $\mathcal{A}_1$ and $\mathcal{A}_2$, respectively, by $\mathcal{A}_1(x) = (2x_1 + x_2, x_1 - 2x_2)$ and $\mathcal{A}_2(x) = (x_1 + 2x_2, x_1 + x_2)$. For $i = 1, 2, 3$, let the operators $A_i : H_i \to H_i$ and $B_i : H_i \to 2^{H_i}$ be defined as follows:

$A_1 = \begin{pmatrix} 2 & 0 \\ 1 & 2 \end{pmatrix}, \quad B_1 = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 1 & 1 \\ 2 & 1 \end{pmatrix}, \quad B_2 = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}, \quad A_3 = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \quad B_3 = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}.$

Let $U : C \to C$ be defined by $U(y) = \frac{y}{2}$ for all $y \in \mathbb{R}^2$. Choose $\lambda_{i,n} = \frac{2n}{3in+5}$, $\gamma_1 = 0.36$, $\gamma_2 = 0.10$, $\alpha_n = \frac{2n-2}{7n^3+2n-1}$, $\beta_n = \frac{7n^3}{7n^3+2n-1}$, $\eta_n = \frac{1}{7n^3+2n-1}$ and $\theta_n = \frac{1}{30n+5}$.

For the initial values:

  1. $x_0 = (1.5, 0.1)^T$, $x_1 = (3, 2.5)^T$ and $u_0 = (0.5, 1)^T$;

  2. $x_0 = (5, 5)^T$, $x_1 = (10, 5)^T$ and $u_0 = (0.5, 1)^T$;

  3. $x_0 = (5, 5)^T$, $x_1 = (10, 5)^T$ and $u_0 = (10, 10)^T$, see Figure 1.
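For linear single-valued operators such as the $A_i$, $B_i$ above, the resolvent $J_\lambda^{B} = (I + \lambda B)^{-1}$ reduces to a matrix inverse, so a single forward-backward step of the kind used inside the algorithm can be sketched directly. This is a minimal illustration, not the full iterative scheme; the step size is taken as $\lambda_{1,1}$ from the choice above, and the point $w$ is the initial value $x_1$ from set (1):

```python
import numpy as np

# Operator data from Example 4.1 (entries as printed above).
A1 = np.array([[2.0, 0.0], [1.0, 2.0]])
B1 = np.array([[1.0, 2.0], [2.0, 1.0]])

def resolvent(B, lam):
    """Resolvent J_lam^B = (I + lam*B)^(-1) of a linear operator B."""
    return np.linalg.inv(np.eye(B.shape[0]) + lam * B)

def forward_backward_step(w, A, B, lam):
    """One step z = J_lam^B (w - lam * A w) of the forward-backward map;
    its fixed points are exactly the zeros of A + B."""
    return resolvent(B, lam) @ (w - lam * A @ w)

lam = 2 * 1 / (3 * 1 * 1 + 5)   # lambda_{1,1} = 2n/(3in+5) with i = n = 1
w = np.array([3.0, 2.5])        # x_1 from initial-value set (1)
print(forward_backward_step(w, A1, B1, lam))
```

Since $A_1 + B_1$ is an invertible matrix here, its only zero is the origin, which is indeed a fixed point of the map above.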

Figure 1: Top: (i); bottom left: (ii); bottom right: (iii).

Example 4.2

Let $H_1 = H_2 = H_3 = C = L^2([0,1])$ be the space of square-integrable functions with norm and inner product defined, respectively, by $\|x\| = \left( \int_0^1 |x(t)|^2 \, \mathrm{d}t \right)^{1/2}$ and $\langle x, y \rangle = \int_0^1 x(t) y(t) \, \mathrm{d}t$ for all $x, y \in L^2([0,1])$. Let the operators $\mathcal{A}_1$ and $\mathcal{A}_2$ be defined, respectively, by $\mathcal{A}_1(x) = x$ and $\mathcal{A}_2(x) = \frac{3x}{2}$ for all $x \in L^2([0,1])$. For $i = 1, 2, 3$, we define the operators $A_1$, $A_2$ and $A_3$ by $A_1(x) = 5x - 1$, $A_2(x) = 3x - 1$ and $A_3(x) = x - 1$, respectively, for every $x \in L^2([0,1])$. Also, for $i = 1, 2, 3$, we define the mappings $B_i$ by $B_i(x) = ix$, where $x \in L^2([0,1])$. Let $U : C \to C$ be defined by $U(y) = \frac{2y}{3}$ for all $y \in L^2([0,1])$. For this example, we choose $\lambda_{i,n} = \frac{2n}{3in+5}$, $\gamma_1 = 0.36$, $\gamma_2 = 0.10$, $\alpha_n = \frac{2n-2}{7n^3+2n-1}$, $\beta_n = \frac{7n^3}{7n^3+2n-1}$, $\eta_n = \frac{1}{7n^3+2n-1}$ and $\theta_n = \frac{1}{30n+5}$. It is easy to see that the conditions of Theorem 3 are satisfied. The report of this numerical illustration is given in Figure 2 with varying initial values of $x_0$ and $x_1$. Again, we compare the accelerated and unaccelerated algorithms by choosing a tolerance level $\|x_{n+1} - x_n\| < \varepsilon$ with $\varepsilon = 10^{-4}$.

  1. $u = 0.25$, $x_0 = t^2 + 1$ and $x_1 = 3t$;

  2. $u = 0.55$, $x_0 = \frac{t^2}{3} + 1$ and $x_1 = 3t^2 - \frac{t}{2}$;

  3. $u = 1.0$, $x_0 = \cos(t+1)$ and $x_1 = t^2$.
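The control sequences used in both examples form a convex decomposition of $1$. With the $\alpha_n$ numerator read as $2n - 2$ (the reading under which the three coefficients sum to one exactly; the printed version is ambiguous), this can be verified symbolically with exact rational arithmetic:

```python
from fractions import Fraction

def params(n):
    """Control sequences used in Examples 4.1 and 4.2. The alpha_n
    numerator is read as 2n - 2, which makes the weights sum to 1."""
    d = 7 * n**3 + 2 * n - 1
    alpha = Fraction(2 * n - 2, d)
    beta = Fraction(7 * n**3, d)
    eta = Fraction(1, d)
    theta = Fraction(1, 30 * n + 5)
    return alpha, beta, eta, theta

for n in (2, 10, 1000):
    alpha, beta, eta, theta = params(n)
    assert alpha + beta + eta == 1   # exact convex combination
print(float(params(1000)[0]))        # alpha_n -> 0 as n grows
```

Using `Fraction` rather than floats makes the sum-to-one identity hold exactly, so the check is not sensitive to rounding.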

Figure 2: Top: (Case 1); bottom left: (Case 2); bottom right: (Case 3).

5 Conclusion and open problem

In this work, we introduced and studied a new type of GSMVI and established a strong convergence result for approximating a solution of the GSMVI in the framework of real Hilbert spaces using an inertial extrapolation term and a Halpern iterative technique. We considered this problem only for $N = 3$ in the framework of Hilbert spaces. It is therefore left open for interested researchers in this area to extend the concept to more general spaces and to consider the case $N \geq 4$. In addition, the authors in [38] introduced the notion of finding the zero of the sum of three monotone operators in the framework of Hilbert spaces. It is natural to ask whether the present results can be extended to the sum of three monotone operators.

Abbreviations

SFP

split feasibility problem

SCFPP

split common fixed point problem

VIP

variational inclusion problem

SNPP

split null point problem

SMVI

split monotone variational inclusion problem

GSMVI

generalized split monotone variational inclusion problem



Acknowledgements

The first, second and third authors acknowledge with thanks the bursary and financial support from the Department of Science and Technology and National Research Foundation, Republic of South Africa, Center of Excellence in Mathematical and Statistical Sciences (DST-NRF CoE-MaSS) Post-Doctoral Bursary. Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the CoE-MaSS.

  1. Author contributions: AAM conceptualized the research problem. All authors (AAM, HAA, OKO, KOA and OKN) established and validated the results. OKO drew the graphs for the numerical experiment. All authors (AAM, HAA, OKO, KOA and OKN) proof-read the manuscript.

  2. Conflict of interest: The authors declare that they have no competing interests.

References

[1] Y. Censor and T. Elfving, A multiprojection algorithm using Bregman projections in a product space, Numer. Algorithms 8 (1994), 221–239. doi:10.1007/BF02142692.

[2] C. Byrne, Iterative oblique projection onto convex subsets and the split feasibility problem, Inverse Prob. 18 (2002), 441–453. doi:10.1088/0266-5611/18/2/310.

[3] Y. Censor, X. A. Motova, and A. Segal, Perturbed projections and subgradient projections for the multiple-set split feasibility problem, J. Math. Anal. Appl. 327 (2007), 1224–1256. doi:10.1016/j.jmaa.2006.05.010.

[4] Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, The multiple-sets split feasibility problem and its applications, Inverse Prob. 21 (2005), 2071–2084. doi:10.1088/0266-5611/21/6/017.

[5] P. Cholamjiak and Y. Shehu, Inertial forward-backward splitting method in Banach spaces with application to compressed sensing, Appl. Math. 64 (2019), 409–435. doi:10.21136/AM.2019.0323-18.

[6] H. A. Abass, C. Izuchukwu, F. U. Ogbuisi, and O. T. Mewomo, An iterative algorithm for finite family of split minimization problem and fixed point problem, Novi Sad J. Math. 49 (2019), no. 1, 117–136. doi:10.30755/NSJOM.07925.

[7] S. S. Chang, L. Wang, and L. J. Qin, Split equality fixed point problem for quasi-pseudo-contractive mappings with applications, Fixed Point Theory Appl. 2015 (2015), 208. doi:10.1186/s13663-015-0458-3.

[8] S. Suantai, N. Pholasa, and P. Cholamjiak, The modified inertial relaxed CQ algorithm for solving the split feasibility problems, J. Indust. Manag. Optim. 14 (2018), no. 4, 1595–1615. doi:10.3934/jimo.2018023.

[9] S. Suantai, Y. Shehu, and P. Cholamjiak, Nonlinear iterative methods for solving the split common null point problems in Banach spaces, Optim. Meth. Softw. 34 (2019), 853–874. doi:10.1080/10556788.2018.1472257.

[10] S. Suantai, N. Pholasa, and P. Cholamjiak, Relaxed CQ algorithms involving the inertial technique for multiple-sets split feasibility problems, RACSAM 113 (2019), 1081–1099. doi:10.1007/s13398-018-0535-7.

[11] F. Wang and H. K. Xu, Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem, J. Inequal. Appl. 2010 (2010), 102085. doi:10.1155/2010/102085.

[12] Y. Censor and A. Segal, The split common fixed point problem for directed operators, J. Convex Anal. 16 (2009), 587–600.

[13] M. Abbas, M. Alshahrani, Q. H. Ansari, O. S. Iyiola, and Y. Shehu, Iterative methods for solving proximal split minimization problem, Numer. Algor. 78 (2018), 193–215. doi:10.1007/s11075-017-0372-3.

[14] H. A. Abass, K. O. Aremu, and C. Izuchukwu, A common solution of family of minimization problem and fixed point problem for multivalued type one demicontractive type mapping, Adv. Nonlinear Var. Inequal. 21 (2018), no. 2, 94–108.

[15] S. Y. Cho, X. Qin, and L. Wang, Strong convergence of a splitting algorithm for treating monotone operators, Fixed Point Theory Appl. 2014 (2014), 94. doi:10.1186/1687-1812-2014-94.

[16] B. Martinet, Régularisation d'inéquations variationnelles par approximations successives, Rev. Française Informat. Recherche Opérationnelle 4 (1970), 154–158. doi:10.1051/m2an/197004R301541.

[17] C. Byrne, Y. Censor, and A. Gibali, Weak and strong convergence of algorithms for the split common null point problem, J. Nonlinear Convex Anal. 13 (2012), 759–775.

[18] C. S. Chuang, Algorithms with new parameter conditions for split variational inclusion problems in Hilbert spaces with application to split feasibility problem, Optimization 65 (2016), 859–876. doi:10.1080/02331934.2015.1072715.

[19] V. Dadashi, Shrinking projection algorithms for the split common null point problem, Bull. Aust. Math. Soc. 96 (2017), 299–306. doi:10.1017/S000497271700017X.

[20] S. Takahashi and W. Takahashi, The split common null point problem and the shrinking projection method in Banach spaces, Optimization 65 (2016), 281–287. doi:10.1080/02331934.2015.1020943.

[21] W. Takahashi, The split common null point problem in Banach spaces, Arch. Math. 104 (2015), 357–365. doi:10.1007/s00013-015-0738-5.

[22] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC, Springer, New York, 2011. doi:10.1007/978-1-4419-9467-7.

[23] P. L. Lions and B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal. 16 (1979), 964–979. doi:10.1137/0716071.

[24] D. A. Lorenz and T. Pock, An inertial forward-backward algorithm for monotone inclusions, J. Math. Imaging Vis. 51 (2015), 311–325. doi:10.1007/s10851-014-0523-2.

[25] A. Moudafi, Split monotone variational inclusions, J. Optim. Theory Appl. 150 (2011), 275–283. doi:10.1007/s10957-011-9814-6.

[26] H. Attouch, X. Goudou, and P. Redont, The heavy ball with friction method. I. The continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system, Commun. Contemp. Math. 2 (2000), no. 1, 1–34. doi:10.1142/S0219199700000025.

[27] H. Attouch and M. O. Czarnecki, Asymptotic control and stabilization of nonlinear oscillators with non-isolated equilibria, J. Diff. Equ. 179 (2002), 278–310. doi:10.1006/jdeq.2001.4034.

[28] F. Alvarez and H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal. 9 (2001), 3–11. doi:10.1023/A:1011253113155.

[29] P. E. Maingé, Regularized and inertial algorithms for common fixed points of nonlinear operators, J. Math. Anal. Appl. 344 (2008), 876–887. doi:10.1016/j.jmaa.2008.03.028.

[30] A. Moudafi and M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math. 155 (2003), 447–454. doi:10.1016/S0377-0427(02)00906-8.

[31] P. L. Combettes, M. Defrise, and C. De Mol, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul. 4 (2005), 1168–1200. doi:10.1137/050626090.

[32] I. Daubechies, M. Defrise, and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math. 57 (2004), 1413–1457. doi:10.1002/cpa.20042.

[33] W. Takahashi and M. Toyoda, Weak convergence theorems for nonexpansive mappings and monotone mappings, J. Optim. Theory Appl. 118 (2003), 417–428. doi:10.1023/A:1025407607560.

[34] W. Takahashi, Nonlinear Functional Analysis – Fixed Point Theory and Its Applications, Yokohama Publishers, Yokohama, 2000.

[35] G. López, V. Martín-Márquez, F. Wang, and H. K. Xu, Forward-backward splitting methods for accretive operators in Banach spaces, Abstr. Appl. Anal. 2012 (2012), 109236. doi:10.1155/2012/109236.

[36] V. Barbu and Th. Precupanu, Convexity and Optimization in Banach Spaces, Editura Academiei R. S. R., Bucharest, 1978.

[37] S. Saejung and P. Yotkaew, Approximation of zeros of inverse strongly monotone operators in Banach spaces, Nonlinear Anal. 75 (2012), 742–750. doi:10.1016/j.na.2011.09.005.

[38] D. Van Hieu, L. Van Vy, and P. K. Quy, Three-operators splitting algorithm for a class of variational inclusion problems, Bull. Iran. Math. Soc. 46 (2020), 1055–1071. doi:10.1007/s41980-019-00312-5.

Received: 2021-01-11
Revised: 2021-07-03
Accepted: 2021-07-30
Published Online: 2021-11-18

© 2021 Akindele A. Mebawondu et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
