
A strong convergence theorem for a zero of the sum of a finite family of maximally monotone mappings

  • Getahun B. Wega, Habtu Zegeye and Oganeditse A. Boikanyo
Published/Copyright: July 8, 2020

Abstract

The purpose of this article is to study a method of approximation for zeros of the sum of a finite family of maximally monotone mappings and to prove strong convergence of the proposed approximation method under suitable conditions. The method of proof is of independent interest. In addition, we give some applications to minimization problems and provide a numerical example which supports our main result. Our theorems improve and unify most of the results that have been proved for this important class of nonlinear mappings.

MSC 2010: 47H04; 47H05; 47H10; 47J25

1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Recall that for a mapping $A : H \to 2^H$, the domain of $A$, $\mathrm{Dom}(A)$, is given by $\mathrm{Dom}(A) = \{x \in H : Ax \neq \emptyset\}$, the graph of $A$, $\mathrm{Gph}(A)$, is given by $\mathrm{Gph}(A) = \{(x,y) \in H \times H : y \in Ax\}$, and the range of $A$, $\mathrm{ran}(A)$, is given by $\mathrm{ran}(A) = \bigcup\{Ax : x \in \mathrm{Dom}(A)\}$. The mapping $A$ is called monotone if

(1) $\langle u - v, x - y\rangle \ge 0$ for all $(x,u), (y,v) \in \mathrm{Gph}(A)$,

and it is called maximally monotone if it is monotone and the graph of $A$ is not properly contained in the graph of any other monotone mapping. The resolvent of $A$ with parameter $\lambda > 0$ is $J_\lambda^A = (I + \lambda A)^{-1}$, where $I$ is the identity mapping on $H$, and it enjoys the firmly nonexpansive property, that is, for any $x, y \in \mathrm{ran}(I + \lambda A)$, we have

(2) $\|J_\lambda^A x - J_\lambda^A y\|^2 \le \langle x - y, J_\lambda^A x - J_\lambda^A y\rangle$.

A monotone mapping $A : H \to H$ is called $\alpha$-inverse strongly monotone if there exists a positive real number $\alpha$ such that for any $x, y \in H$ we have

(3) $\langle Ax - Ay, x - y\rangle \ge \alpha\|Ax - Ay\|^2$.
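
To make the resolvent and property (2) concrete, the following is a small Python/NumPy sketch (our own illustration, not part of the paper) for a maximally monotone affine mapping $Ax = Mx + b$ on $\mathbb{R}^2$ with $M$ positive semidefinite; the matrix, vector and step size are hypothetical choices, and the final line checks firm nonexpansiveness numerically.

```python
import numpy as np

# Illustration (not from the paper): resolvent of the maximally monotone
# affine mapping A x = M x + b on R^2, with M positive semidefinite.
M = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
lam = 0.5

def J(u):
    # J_lambda^A(u) = (I + lam*A)^{-1}(u): solve (I + lam*M) x = u - lam*b
    return np.linalg.solve(np.eye(2) + lam * M, u - lam * b)

# Numerical check of the firmly nonexpansive property (2)
rng = np.random.default_rng(0)
x, y = rng.normal(size=2), rng.normal(size=2)
lhs = np.linalg.norm(J(x) - J(y)) ** 2
rhs = np.dot(x - y, J(x) - J(y))
print(lhs <= rhs + 1e-12)   # expected: True
```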

Let $A_k : H \to 2^H$, $k = 1, 2, \ldots, m$, be maximally monotone mappings. Consider the inclusion problem of finding $z \in H$ such that

(4) $0 \in A_1 z + A_2 z + \cdots + A_m z$,

where $m \ge 2$. We denote the solution set of (4) by $\mathrm{zer}(A_1 + A_2 + \cdots + A_m) = (A_1 + A_2 + \cdots + A_m)^{-1}(0)$. This problem, which includes variational inequality problems, equilibrium problems, complementarity problems, minimization problems, nonlinear evolution equations and fixed point problems as special cases, is quite general. In fact, a number of problems arising in applied areas such as image recovery, machine learning and signal processing can be mathematically modeled as (4); see [1,2] and references therein. To be more precise, a stationary solution to the initial value problem of the evolution equation

(5) $0 \in F(t) + \dfrac{\partial x}{\partial t}, \quad x(0) = x_0$,

can be formulated as (4) when the governing maximally monotone $F$ is of the form $F \equiv A_1 + A_2 + \cdots + A_m$ (see, e.g., [3]). Furthermore, optimization problems often require one (see, e.g., [4]) to solve a minimization problem of the form

(6) $\min_{x \in H}\{g_1(x) + g_2(x) + \cdots + g_m(x)\}$,

where $g_i$, $i = 1, 2, \ldots, m$, are proper lower semicontinuous convex functions from $H$ to the extended real line $\overline{\mathbb{R}} = (-\infty, +\infty]$. If in (6) we assume that $A_i \equiv \partial g_i$, for $i = 1, 2, \ldots, m$, where $\partial g_i$ is the subdifferential operator of $g_i$ in the sense of convex analysis, then (6) is equivalent to (4). Consequently, considerable research efforts have been devoted to methods of finding approximate solutions (when they exist) of inclusions of the form (4) for a sum of a finite number of monotone mappings (see, e.g., [3,5]).
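
As a concrete instance of the identification $A_i \equiv \partial g_i$, recall that $(I + \lambda\,\partial g)^{-1}$ is the proximal operator of $\lambda g$; the short Python sketch below (our illustration, not from the paper) checks this for $g(x) = |x|$, whose proximal operator is componentwise soft-thresholding.

```python
import numpy as np

# Illustration: for g(x) = |x| the resolvent (I + lam*dg)^{-1} is the proximal
# operator of lam*g, i.e. componentwise soft-thresholding.
lam = 0.7
prox = lambda u: np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.array([-2.0, 0.3, 1.5])
p = prox(u)
# Resolvent inclusion u - p in lam * d|.|(p): entries of (u - p)/lam lie in
# [-1, 1] and equal sign(p) wherever p != 0.
print(p, (u - p) / lam)
```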

For the case where $m = 2$, the inclusion problem (4) reduces to the problem of finding $z \in H$ such that

(7) $0 \in Az + Bz$,

where $A$ and $B$ are monotone mappings. For solving problem (7), several authors have studied different iterative schemes (see, e.g., [6,7,8,9,10,11,12,13,14,15,16] and references therein). The most attractive methods for solving the inclusion problem (7) are the Peaceman-Rachford and Douglas-Rachford iterative methods.

The nonlinear Peaceman-Rachford and Douglas-Rachford splitting iterative methods, introduced by Lions and Mercier [3], are given by

(8) $x_{n+1} = (2J_\lambda^A - I)(2J_\lambda^B - I)x_n, \quad n \ge 1$,

and

(9) $x_{n+1} = J_\lambda^A(2J_\lambda^B - I)x_n + (I - J_\lambda^B)x_n, \quad n \ge 1$,

respectively, where $\lambda > 0$ is a fixed scalar. The nonlinear Peaceman-Rachford algorithm (8) fails, in general, to converge (even in the weak topology in the infinite-dimensional setting). This is due to the fact that the generating mapping $(2J_\lambda^A - I)(2J_\lambda^B - I)$ is merely nonexpansive. The nonlinear Douglas-Rachford algorithm (9) was initially proposed in [3] for finding a zero of the sum of two maximally monotone mappings and has been studied by many authors (see, e.g., [1,3,11,17,18] and references therein). This method always converges in the weak topology to a solution of (7), since the generating operator $J_\lambda^A(2J_\lambda^B - I) + (I - J_\lambda^B)$ for this algorithm is firmly nonexpansive (see, e.g., [11]).
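
For illustration (our own toy example, with hypothetical data $a$, $b$ and step $\lambda$, not from the paper), the following Python sketch runs the Douglas-Rachford iteration (9) for two affine maximally monotone mappings on $\mathbb{R}^2$; a zero of $A + B$ is recovered as $J_\lambda^B$ of the limit.

```python
import numpy as np

# Toy Douglas-Rachford iteration (9): A x = x - a, B x = x - b on R^2,
# so the unique zero of A + B is (a + b)/2.
a, b = np.array([1.0, 0.0]), np.array([0.0, 2.0])
lam = 1.0
JA = lambda u: (u + lam * a) / (1.0 + lam)   # (I + lam*A)^{-1}
JB = lambda u: (u + lam * b) / (1.0 + lam)   # (I + lam*B)^{-1}

x = np.zeros(2)
for _ in range(200):
    x = JA(2.0 * JB(x) - x) + (x - JB(x))    # iteration (9)
print(JB(x))                                  # approaches (a + b)/2 = (0.5, 1.0)
```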

In 1979, Passty [11] studied the forward-backward splitting method, which is given by

(10) $x_{n+1} = (I + \lambda_n B)^{-1}(I - \lambda_n A)x_n, \quad n \ge 1$,

where $\{\lambda_n\}$ is a sequence of positive scalars and $A$ and $B$ are maximally monotone mappings. He proved that the sequence in (10) converges weakly to a solution of problem (7). Different authors have used algorithm (10) for the inclusion problem (7) when $A$ is a single-valued $\alpha$-inverse strongly monotone (or $\alpha$-strongly monotone) mapping and $B$ is a maximally monotone mapping defined on a real Hilbert space (see, e.g., [18,19]).
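
A matching Python sketch of the forward-backward iteration (10), again with our own toy data and a constant step size (an illustration, not the method of any cited paper):

```python
import numpy as np

# Toy forward-backward iteration (10): A x = x - a (single-valued, 1-inverse
# strongly monotone), B x = x - b; the unique zero of A + B is (a + b)/2.
a, b = np.array([1.0, 0.0]), np.array([0.0, 2.0])
lam = 0.5                                     # constant step size
x = np.zeros(2)
for _ in range(200):
    forward = x - lam * (x - a)               # (I - lam*A) x_n
    x = (forward + lam * b) / (1.0 + lam)     # (I + lam*B)^{-1}(forward)
print(x)                                      # approaches (a + b)/2 = (0.5, 1.0)
```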

We remark that the aforementioned results provide only weak convergence. However, several authors have studied different iterative methods (see, e.g., [21,22,23,24] and references therein) and proved strong convergence results for approximating zeros of the sum of monotone mappings $A$ and $B$, where $A : H \to H$ is an $\alpha$-inverse strongly monotone mapping and $B : H \to 2^H$ is a maximally monotone mapping, under certain conditions (see, e.g., [19,25,26,27]).

In 2012, Takahashi et al. [19] studied the following Halpern-type iteration in a Hilbert space setting: for any $x_0 \in H$,

(11) $x_{n+1} = \beta_n x_n + (1 - \beta_n)\big(\alpha_n u + (1 - \alpha_n)J_{r_n}^B(x_n - r_n A x_n)\big), \quad n \ge 0$,

where $u \in H$ is a fixed vector, $A$ is an $\alpha$-inverse strongly monotone and single-valued mapping on $H$, and $B$ is a maximally monotone mapping on $H$. They proved that the sequence $\{x_n\}$ generated by (11) converges strongly to a point $x^* \in (A + B)^{-1}(0)$ provided that the control sequences $\{\beta_n\}$, $\{\alpha_n\}$ and $\{r_n\}$ satisfy appropriate conditions.

Recently, Bot et al. [25] studied the following classical Douglas-Rachford method:

(12) $y_n = J_{\gamma B}(\beta_n x_n); \quad z_n = J_{\gamma A}(2y_n - \beta_n x_n); \quad x_{n+1} = \beta_n x_n + \lambda_n(z_n - y_n), \quad n \ge 1$,

where $A : H \to 2^H$ and $B : H \to 2^H$ are maximally monotone mappings and $\{\lambda_n\}$ and $\{\beta_n\}$ are real sequences satisfying certain conditions, and showed that $\{x_n\}$ converges strongly to $\bar{x} = P_{F(R_{\gamma A}R_{\gamma B})}(0)$ as $n \to \infty$, where $R_{\gamma A} = 2J_{\gamma A} - I$, while $\{y_n\}$ and $\{z_n\}$ converge strongly to $J_{\gamma B}(\bar{x}) \in \mathrm{zer}(A + B)$ as $n \to \infty$.

More recently, Wega and Zegeye [15] constructed an algorithm that converges strongly to a solution of the sum of two maximally monotone mappings using a different technique.

Question 1

A natural question arises: can we obtain a strong convergence result for approximating zeros of the sum of a finite family of maximally monotone mappings via the extended solution set of the sum of maximally monotone mappings?

In 2009, Svaiter [28] constructed a new approach for splitting algorithms, which starts by reformulating (4) as the problem of locating a point in a certain extended solution set $S_e(A_1, A_2, \ldots, A_m)$, a subset of $H \times H^m$, defined by

$S_e(A_1, \ldots, A_m) = \{(z, w_1, \ldots, w_m) \in V : z \in H,\ w_k \in A_k(z),\ k = 1, 2, \ldots, m\}$,

where

$V = \{(z, w_1, \ldots, w_m) \in H \times H^m : w_1 + \cdots + w_m = 0\}$.

He proved weak convergence results provided that $H$ is finite dimensional or $A_1 + A_2 + \cdots + A_m$ is maximally monotone.

We remark that the extended solution set is associated with the set of common fixed points of a countable family of nonexpansive mappings, and so methods for approximating fixed points can be used to approximate a solution of problem (4).

Motivated and inspired by the above results, our purpose in this article is to construct a viscosity-type algorithm for finding zeros of the sum of a finite family of maximally monotone mappings via the extended solution set $S_e(A_1, A_2, \ldots, A_m)$ and to discuss its strong convergence. The viscosity method introduced by Moudafi [30] involves a contraction mapping $f$ in the procedure and can be regarded as a regularization process for the solution of problem (4), which is supposed to induce convergence in norm of the iterates. Another advantage of this method is that it allows one to select a particular solution point of (4) which satisfies some variational inequality. The assumption that one of the mappings is $\alpha$-inverse strongly monotone is dispensed with. Our results provide an affirmative answer to our question. Our method of proof is of independent interest. Our results improve and generalize several results in the literature.

2 Preliminaries

In this section, we recall some definitions and known results that will be used in the sequel.

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. A mapping $T : C \to C$ is said to be a contraction if there exists $\alpha \in [0,1)$ such that $\|Tx - Ty\| \le \alpha\|x - y\|$ for all $x, y \in C$, and it is said to be nonexpansive if this inequality holds with $\alpha = 1$. The set of fixed points of $T$ is defined by $F(T) \coloneqq \{x \in C : Tx = x\}$.

Lemma 2.1

[29] Let $C$ be a closed and convex subset of a real Hilbert space $H$, and let $T : C \to C$ be a nonexpansive mapping. Let $\{x_n\} \subset C$ and $x \in C$ be such that $x_n \rightharpoonup x$ and $x_n - Tx_n \to 0$ as $n \to \infty$. Then, $x \in F(T)$.

The following lemmas will be used in later sections.

Lemma 2.2

[28] Finding a point in $S_e(A_1, \ldots, A_m)$ is equivalent to solving (4) in the sense that

$0 \in A_1(z) + \cdots + A_m(z) \iff \text{there exist } w_1, \ldots, w_m \in H \text{ such that } (z, w_1, \ldots, w_m) \in S_e(A_1, \ldots, A_m)$.

Lemma 2.3

[28] If the operators $A_1, \ldots, A_m$ are maximally monotone, then the corresponding extended solution set $S_e(A_1, \ldots, A_m)$ is closed and convex in $H^{m+1}$.

Lemma 2.4

[28] Given $(x_k, y_k) \in \mathrm{Gph}(A_k)$, $k = 1, \ldots, m$, define $\varphi : V \to \mathbb{R}$ via

(13) $\varphi(z, w_1, \ldots, w_m) = \sum_{k=1}^{m}\langle z - x_k, y_k - w_k\rangle$.

Then, for any $(z, w_1, \ldots, w_m) \in S_e(A_1, \ldots, A_m)$, one has $\varphi(z, w_1, \ldots, w_m) \le 0$, that is,

$S_e(A_1, \ldots, A_m) \subseteq \{(z, w_1, \ldots, w_m) \in V : \varphi(z, w_1, \ldots, w_m) \le 0\}$.

In addition, $\varphi$ is affine on $V$, with

(14) $\nabla\varphi = \left(\sum_{k=1}^{m} y_k,\ x_1 - \bar{x},\ x_2 - \bar{x},\ \ldots,\ x_m - \bar{x}\right), \quad \text{where } \bar{x} = \frac{1}{m}\sum_{k=1}^{m} x_k$,

and

(15) $\nabla\varphi = 0 \iff (x_1, y_1, \ldots, y_m) \in S_e(A_1, \ldots, A_m) \text{ and } x_1 = x_2 = \cdots = x_m \iff \varphi(z, w_1, \ldots, w_m) = 0 \text{ for all } (z, w_1, \ldots, w_m) \in V$.

The function $\varphi$ in Lemma 2.4 is called a decomposable separator.

Lemma 2.5

[31] Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the following relation:

$a_{n+1} \le (1 - \alpha_n)a_n + \alpha_n\delta_n, \quad n \ge n_0$,
where $\{\alpha_n\} \subset (0,1)$ and $\{\delta_n\}$ satisfy the following conditions: $\sum_{n=1}^{\infty}\alpha_n = \infty$, and $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=1}^{\infty}|\alpha_n\delta_n| < \infty$. Then, $\lim_{n\to\infty} a_n = 0$.

Lemma 2.6

[32] Let $x, y \in H$. If $H$ is a real Hilbert space, then the following inequality holds:

$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle$.

Lemma 2.7

[33] Let $C$ be a closed and convex subset of a real Hilbert space $H$ and let $x \in H$ be given. The metric projection of $H$ onto $C$, $P_C$, is characterized by the following:

  1. $P_C x \in C$ and $\langle x - P_C x, P_C x - z\rangle \ge 0$ for all $z \in C$;

  2. $P_C$ is firmly nonexpansive and hence nonexpansive.

3 Main results

In this section, we introduce an algorithm for finding a point in $S_e(A_1, \ldots, A_m)$, which will lead us to a zero of the sum of a finite family of maximally monotone mappings in a Hilbert space $H$, and discuss its strong convergence.

In what follows, let $H$ be a real Hilbert space and let $A_k : H \to 2^H$, $k = 1, \ldots, m$, where $m \ge 2$, be a family of maximally monotone mappings satisfying $S_e(A_1, \ldots, A_m) \neq \emptyset$. Let $\{\alpha_n\} \subset (0,1)$ be such that $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$ and $\sum_{n=1}^{\infty}|\alpha_n - \alpha_{n-1}| < \infty$, and let $\{\beta_n\}$ be a strictly decreasing sequence in $(0,1]$ with $\beta_0 = 1$.

We now propose the following algorithm, which is based on Algorithm 3 of [28].

Algorithm 3.1

Step 0: Select an initial guess $u_0 = (z_0, w_{1,0}, \ldots, w_{m,0}) \in V$.

Step 1: Given the $n$th iterate ($n \ge 0$), compute $x_{k,n}$ and $y_{k,n}$ satisfying

$x_{k,n} = (I + \lambda_{k,n}A_k)^{-1}(z_n + \lambda_{k,n}w_{k,n}), \quad k = 1, 2, \ldots, m,$
$y_{k,n} = w_{k,n} + \dfrac{z_n - x_{k,n}}{\lambda_{k,n}}, \quad k = 1, 2, \ldots, m,$

where $\lambda_{k,n} > 0$ for each $k = 1, 2, \ldots, m$ and $n \ge 0$.

Step 2: Define $\varphi_i : V \to \mathbb{R}$ by

(16) $\varphi_i(z, w_1, \ldots, w_m) = \sum_{k=1}^{m}\langle z - x_{k,i}, y_{k,i} - w_k\rangle$,

and compute

(17) $T_i u_n = (z_n, w_{1,n}, \ldots, w_{m,n}) - \dfrac{\max\{0, \varphi_i(z_n, w_{1,n}, \ldots, w_{m,n})\}}{\|\nabla\varphi_i\|^2}\nabla\varphi_i$,

where $\nabla\varphi_i = \left(\sum_{k=1}^{m}y_{k,i},\ x_{1,i} - \bar{x}_i,\ x_{2,i} - \bar{x}_i,\ \ldots,\ x_{m,i} - \bar{x}_i\right)$ and $\|\nabla\varphi_i\|^2 = \sum_{k=1}^{m}\|x_{k,i} - \bar{x}_i\|^2 + \left\|\sum_{k=1}^{m}y_{k,i}\right\|^2$ with $\bar{x}_i = \frac{1}{m}\sum_{k=1}^{m}x_{k,i}$; this is the projection of $u_n = (z_n, w_{1,n}, \ldots, w_{m,n})$ onto the half-space

$H_i = \{(z, w_1, \ldots, w_m) \in V : \varphi_i(z, w_1, \ldots, w_m) \le 0\}$.

Step 3: Compute

(18) $v_n = \beta_n u_n + \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)T_i u_n, \quad u_{n+1} = \alpha_n f(u_n) + (1 - \alpha_n)v_n, \quad n \ge 0$,

where $f : V \to V$ is a contraction mapping with constant $\alpha$. Set $n \leftarrow n + 1$ and go to Step 1.
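
To make Steps 1 and 2 concrete, here is a minimal Python sketch (our own illustration under stated assumptions, not the authors' code) of one resolvent step and one separator projection $T_i u_n$; the function names, the toy operators $A_k x = x - a_k$ and the data at the end are all hypothetical, and the block $(z, w_1, \ldots, w_m)$ is stored as a vector $z$ together with an array $w$ whose rows are the $w_k$.

```python
import numpy as np

def step1(z, w, resolvents, lam):
    """Step 1: x_k = (I + lam_k A_k)^{-1}(z + lam_k w_k), y_k = w_k + (z - x_k)/lam_k."""
    x = np.array([resolvents[k](z + lam[k] * w[k]) for k in range(len(resolvents))])
    y = w + (z - x) / lam[:, None]
    return x, y

def project_onto_Hi(z, w, x_i, y_i):
    """Step 2: project u = (z, w_1, ..., w_m) onto H_i = {phi_i <= 0}, as in (17)."""
    m = x_i.shape[0]
    phi = sum(np.dot(z - x_i[k], y_i[k] - w[k]) for k in range(m))   # formula (16)
    grad_z = y_i.sum(axis=0)                  # z-block of grad(phi_i)
    grad_w = x_i - x_i.mean(axis=0)           # w_k-blocks: x_{k,i} - xbar_i
    norm2 = np.dot(grad_z, grad_z) + np.sum(grad_w * grad_w)
    step = max(0.0, phi) / norm2 if norm2 > 0 else 0.0
    return z - step * grad_z, w - step * grad_w

# Hypothetical toy data: two affine operators A_k x = x - a_k on R^2 with lam = 1,
# so each resolvent is u -> (u + a_k)/2; the rows of w sum to zero, i.e. u lies in V.
a = np.array([[1.0, 0.0], [0.0, 2.0]])
resolvents = [lambda u, k=k: (u + a[k]) / 2.0 for k in range(2)]
lam = np.ones(2)
z, w = np.zeros(2), np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = step1(z, w, resolvents, lam)
print(project_onto_Hi(z, w, x, y))
```

Note that the $w$-blocks of $\nabla\varphi_i$ sum to zero, so the projection keeps the iterate inside $V$.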

Remark 3.1

The following points indicate that Algorithm 3.1 is well defined.

  1. By the maximality of $A_k$, the resolvent mapping $(I + \lambda_{k,n}A_k)^{-1}$ is single-valued and everywhere defined for each $k = 1, 2, \ldots, m$ (see, e.g., [34]). By rearranging the equation in Step 1, one has the following equation:

    (19) $x_{k,n} + \lambda_{k,n}y_{k,n} = z_n + \lambda_{k,n}w_{k,n}, \quad n \ge 0$,

    where for each $k = 1, 2, \ldots, m$ and $n \ge 0$, $\lambda_{k,n} > 0$ and $y_{k,n} \in A_k(x_{k,n})$. Hence, for each $k = 1, 2, \ldots, m$, $(x_{k,n}, y_{k,n}) \in \mathrm{Gph}(A_k)$ exists and is unique. Thus, the decomposable separator function $\varphi$ in Lemma 2.4 is well defined.

  2. Since $\varphi_i : V \to \mathbb{R}$ is affine on $V$ (see Lemma 2.4) and the half-space $H_i$ is a closed and convex subset of $V$ for all $i \ge 1$, the projection of $u_n$ onto $H_i$ given by (17) exists and is firmly nonexpansive (see, e.g., [20,28]).

Lemma 3.2

The sequence $\{u_n\}$ generated by Algorithm 3.1 is bounded.

Proof

Let $u^* = (z^*, w_1^*, w_2^*, \ldots, w_m^*) \in S_e(A_1, \ldots, A_m)$; then by Lemma 2.4 we have $u^* \in H_i$ for all $i \ge 1$, which implies that $S_e(A_1, \ldots, A_m) \subseteq \bigcap_{i=1}^{\infty}F(T_i) \neq \emptyset$. Now, the fact that each $T_i$ is nonexpansive (with $T_i u^* = u^*$, since $u^* \in H_i$) and (18) yield

(20) $\|v_n - u^*\| = \left\|\beta_n u_n + \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)T_i u_n - u^*\right\| = \left\|\beta_n u_n + \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)T_i u_n - \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)u^* + \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)u^* - u^*\right\| = \left\|\beta_n(u_n - u^*) + \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)(T_i u_n - u^*)\right\| \le \beta_n\|u_n - u^*\| + \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)\|u_n - u^*\| = \|u_n - u^*\|.$

Thus, it follows that

$\|u_{n+1} - u^*\| = \|\alpha_n f(u_n) + (1 - \alpha_n)v_n - u^*\| \le \alpha_n\|f(u_n) - u^*\| + (1 - \alpha_n)\|v_n - u^*\| \le \alpha_n\|f(u_n) - f(u^*)\| + \alpha_n\|f(u^*) - u^*\| + (1 - \alpha_n)\|v_n - u^*\| \le (1 - \alpha_n(1 - \alpha))\|u_n - u^*\| + \alpha_n\|f(u^*) - u^*\| \le \max\left\{\|u_n - u^*\|,\ \frac{\|f(u^*) - u^*\|}{1 - \alpha}\right\}.$

Then, by induction, we have

$\|u_n - u^*\| \le \max\left\{\|u_0 - u^*\|,\ \frac{\|f(u^*) - u^*\|}{1 - \alpha}\right\}$,

which implies that $\{u_n\}$ is bounded. Hence, we obtain that $\{T_i u_n\}$, $\{v_n\}$ and $\{f(u_n)\}$ are bounded.□

Lemma 3.3

The sequence $\{u_n\}$ generated by Algorithm 3.1 converges strongly to a point $u^* = (z^*, w_1^*, \ldots, w_m^*) \in \bigcap_{i=1}^{\infty}F(T_i)$.

Proof

We proceed with the following steps.

Step 1. First we show that $\lim_{n\to\infty}\|u_{n+1} - u_n\| = 0$.

From (18), we have

(21) $\|v_n - v_{n-1}\| = \left\|\beta_n u_n + \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)T_i u_n - \beta_{n-1}u_{n-1} - \sum_{i=1}^{n-1}(\beta_{i-1} - \beta_i)T_i u_{n-1}\right\| \le \beta_n\|u_n - u_{n-1}\| + |\beta_n - \beta_{n-1}|\|u_{n-1}\| + \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)\|T_i u_n - T_i u_{n-1}\| + |\beta_n - \beta_{n-1}|\|T_n u_{n-1}\| \le \beta_n\|u_n - u_{n-1}\| + |\beta_n - \beta_{n-1}|\|u_{n-1}\| + \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)\|u_n - u_{n-1}\| + |\beta_n - \beta_{n-1}|\|T_n u_{n-1}\| \le \|u_n - u_{n-1}\| + |\beta_n - \beta_{n-1}|M,$

for some $M > 0$, since the sequences $\{u_n\}$ and $\{T_n u_{n-1}\}$ are bounded. Thus, the inequalities in (18) and (21) imply that

(22) $\|u_{n+1} - u_n\| = \|\alpha_n f(u_n) + (1 - \alpha_n)v_n - \alpha_{n-1}f(u_{n-1}) - (1 - \alpha_{n-1})v_{n-1}\| \le \alpha_n\|f(u_n) - f(u_{n-1})\| + |\alpha_n - \alpha_{n-1}|\|f(u_{n-1})\| + (1 - \alpha_n)\|v_n - v_{n-1}\| + |\alpha_n - \alpha_{n-1}|\|v_{n-1}\| \le \alpha\alpha_n\|u_n - u_{n-1}\| + |\alpha_n - \alpha_{n-1}|\big(\|f(u_{n-1})\| + \|v_{n-1}\|\big) + (1 - \alpha_n)\big(\|u_n - u_{n-1}\| + |\beta_n - \beta_{n-1}|M\big) = (1 - \alpha_n(1 - \alpha))\|u_n - u_{n-1}\| + |\alpha_n - \alpha_{n-1}|\big(\|f(u_{n-1})\| + \|v_{n-1}\|\big) + (1 - \alpha_n)(\beta_{n-1} - \beta_n)M.$

Since $\{\beta_n\}$ is strictly decreasing, it follows that $\sum_{n=1}^{\infty}(\beta_{n-1} - \beta_n) < \infty$. Hence, from (22), the conditions on $\{\alpha_n\}$ and Lemma 2.5, we immediately obtain that

(23) $\|u_{n+1} - u_n\| \to 0 \quad \text{as } n \to \infty.$

Step 2. We show that $\|T_i u_n - u_n\| \to 0$ as $n \to \infty$. Take $u^* = P_{\bigcap_{i=1}^{\infty}F(T_i)}(f(u^*))$. Note that

(24) $\|u_n - u^*\|^2 \ge \|T_i u_n - T_i u^*\|^2 = \|T_i u_n - u_n + u_n - T_i u^*\|^2 = \|T_i u_n - u_n\|^2 + \|u_n - u^*\|^2 + 2\langle T_i u_n - u_n, u_n - u^*\rangle,$

which yields

(25) $\frac{1}{2}\|T_i u_n - u_n\|^2 \le \langle u_n - T_i u_n, u_n - u^*\rangle.$

Furthermore, from (18) and (25), we immediately obtain

(26) $\frac{1}{2}\sum_{i=1}^{n}(\beta_{i-1} - \beta_i)\|T_i u_n - u_n\|^2 \le \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)\langle u_n - T_i u_n, u_n - u^*\rangle = \left\langle(1 - \beta_n)u_n - \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)T_i u_n,\ u_n - u^*\right\rangle = \langle(1 - \beta_n)u_n - v_n + \beta_n u_n,\ u_n - u^*\rangle = \langle u_n - v_n, u_n - u^*\rangle = \langle u_n - u_{n+1}, u_n - u^*\rangle + \langle u_{n+1} - v_n, u_n - u^*\rangle,$

and from (18) and (26), we have

(27) $\frac{1}{2}\sum_{i=1}^{n}(\beta_{i-1} - \beta_i)\|T_i u_n - u_n\|^2 \le \langle u_n - u_{n+1}, u_n - u^*\rangle + \langle\alpha_n f(u_n) + (1 - \alpha_n)v_n - v_n,\ u_n - u^*\rangle \le \|u_n - u_{n+1}\|\|u_n - u^*\| + \langle\alpha_n f(u_n) - \alpha_n v_n,\ u_n - u^*\rangle \le \|u_n - u_{n+1}\|\|u_n - u^*\| + \alpha_n\|f(u_n) - v_n\|\|u_n - u^*\|.$

Thus, from Lemma 3.2, (23), (27) and the condition on $\{\alpha_n\}$, we conclude that

(28) $\lim_{n\to\infty}\sum_{i=1}^{n}(\beta_{i-1} - \beta_i)\|T_i u_n - u_n\|^2 = 0.$

Since $\{\beta_n\}$ is strictly decreasing, for every $i$, equality (28) yields

(29) $\lim_{n\to\infty}\|T_i u_n - u_n\| = 0 \quad \text{and} \quad \lim_{n\to\infty}\|T_n u_n - u_n\| = 0.$
Step 3. We show that $\limsup_{n\to\infty}\langle f(u^*) - u^*, u_n - u^*\rangle \le 0$.

Since $\{u_n\}$ is bounded, there exist $\hat{u} \in H \times H^m$ and a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ such that

$\limsup_{n\to\infty}\langle f(u^*) - u^*, u_n - u^*\rangle = \lim_{k\to\infty}\langle f(u^*) - u^*, u_{n_k} - u^*\rangle$

and $u_{n_k} \rightharpoonup \hat{u}$. From (29) and Lemma 2.1, we obtain that $\hat{u} \in F(T_i)$ for each $i \ge 1$ and hence $\hat{u} \in \bigcap_{i=1}^{\infty}F(T_i)$. Since $u^* = P_{\bigcap_{i=1}^{\infty}F(T_i)}(f(u^*))$, by Lemma 2.7, we have

(30) $\limsup_{n\to\infty}\langle f(u^*) - u^*, u_n - u^*\rangle = \lim_{k\to\infty}\langle f(u^*) - u^*, u_{n_k} - u^*\rangle = \langle f(u^*) - u^*, \hat{u} - u^*\rangle \le 0,$

as required.

Step 4. Finally, we show that $u_n \to u^*$ as $n \to \infty$. From Lemma 2.6, (18) and (20), we have

(31) $\|u_{n+1} - u^*\|^2 = \|\alpha_n f(u_n) + (1 - \alpha_n)v_n - u^*\|^2 \le \big(\alpha\alpha_n\|u_n - u^*\| + (1 - \alpha_n)\|u_n - u^*\|\big)^2 + 2\alpha_n\langle f(u^*) - u^*, u_{n+1} - u^*\rangle \le (1 - (1 - \alpha)\alpha_n)\|u_n - u^*\|^2 + 2\alpha_n\langle f(u^*) - u^*, u_{n+1} - u^*\rangle \le (1 - (1 - \alpha)\alpha_n)\|u_n - u^*\|^2 + 2\alpha_n\langle f(u^*) - u^*, u_n - u^*\rangle + 2\alpha_n\|f(u^*) - u^*\|\|u_{n+1} - u_n\|.$

Therefore, from (30), (31) and Lemma 2.5, we conclude that $u_n \to u^*$ as $n \to \infty$.□

Lemma 3.4

If in Algorithm 3.1 there exist $\bar{\lambda} > \underline{\lambda} > 0$ such that the sequence $\{\lambda_{k,n}\} \subset [\underline{\lambda}, \bar{\lambda}]$ for each $k = 1, 2, \ldots, m$ and $n \ge 0$, then there exists $\eta > 0$ such that

(32) $\varphi_n(u_n) \ge \eta\|\nabla\varphi_n(u_n)\|^2 \quad \text{for all } n \ge 0,$
where $\nabla\varphi_n(u_n) = \left(\sum_{k=1}^{m}y_{k,n},\ x_{1,n} - \bar{x}_n,\ x_{2,n} - \bar{x}_n,\ \ldots,\ x_{m,n} - \bar{x}_n\right)$ and $\|\nabla\varphi_n(u_n)\|^2 = \sum_{k=1}^{m}\|x_{k,n} - \bar{x}_n\|^2 + \left\|\sum_{k=1}^{m}y_{k,n}\right\|^2$ with $\bar{x}_n = \frac{1}{m}\sum_{k=1}^{m}x_{k,n}$.

Proof

From Eq. (19), we have

(33) $\dfrac{z_n - x_{k,n}}{\lambda_{k,n}} = y_{k,n} - w_{k,n}.$

From (16), (33) and the condition on $\lambda_{k,n}$, we get

(34) $\varphi_n(u_n) = \sum_{k=1}^{m}\langle z_n - x_{k,n}, y_{k,n} - w_{k,n}\rangle = \sum_{k=1}^{m}\left\langle z_n - x_{k,n}, \frac{z_n - x_{k,n}}{\lambda_{k,n}}\right\rangle = \sum_{k=1}^{m}\frac{1}{\lambda_{k,n}}\|z_n - x_{k,n}\|^2 \ge \frac{1}{\bar{\lambda}}\sum_{k=1}^{m}\|z_n - x_{k,n}\|^2.$

By rearranging equation (33), we can also get

(35) $y_{k,n} = \dfrac{z_n - x_{k,n}}{\lambda_{k,n}} + w_{k,n},$

which implies that

(36) $\sum_{k=1}^{m}y_{k,n} = \sum_{k=1}^{m}\frac{z_n - x_{k,n}}{\lambda_{k,n}} + \sum_{k=1}^{m}w_{k,n} = \sum_{k=1}^{m}\frac{z_n - x_{k,n}}{\lambda_{k,n}},$

and hence

(37) $\left\|\sum_{k=1}^{m}y_{k,n}\right\|^2 = \left\|\sum_{k=1}^{m}\frac{z_n - x_{k,n}}{\lambda_{k,n}}\right\|^2 \le \frac{1}{\underline{\lambda}^2}\left\|\sum_{k=1}^{m}(z_n - x_{k,n})\right\|^2 = \frac{1}{\underline{\lambda}^2}\|m(z_n - \bar{x}_n)\|^2 = \frac{m^2}{\underline{\lambda}^2}\|z_n - \bar{x}_n\|^2,$

which is equivalent to

(38) $\frac{\underline{\lambda}^2}{m}\left\|\sum_{k=1}^{m}y_{k,n}\right\|^2 \le m\|z_n - \bar{x}_n\|^2.$

In addition, from the properties of the inner product, we have

(39) $\sum_{k=1}^{m}\|x_{k,n} - \bar{x}_n\|^2 = \sum_{k=1}^{m}\|x_{k,n} - z_n + z_n - \bar{x}_n\|^2 = \sum_{k=1}^{m}\|(x_{k,n} - z_n) - (\bar{x}_n - z_n)\|^2 = \sum_{k=1}^{m}\|x_{k,n} - z_n\|^2 - 2\sum_{k=1}^{m}\langle x_{k,n} - z_n, \bar{x}_n - z_n\rangle + \sum_{k=1}^{m}\|\bar{x}_n - z_n\|^2 = \sum_{k=1}^{m}\|x_{k,n} - z_n\|^2 - 2m\langle\bar{x}_n - z_n, \bar{x}_n - z_n\rangle + m\|\bar{x}_n - z_n\|^2 = \sum_{k=1}^{m}\|x_{k,n} - z_n\|^2 - 2m\|\bar{x}_n - z_n\|^2 + m\|\bar{x}_n - z_n\|^2 = \sum_{k=1}^{m}\|x_{k,n} - z_n\|^2 - m\|\bar{x}_n - z_n\|^2,$

which implies that

(40) $\sum_{k=1}^{m}\|x_{k,n} - \bar{x}_n\|^2 + m\|\bar{x}_n - z_n\|^2 = \sum_{k=1}^{m}\|x_{k,n} - z_n\|^2.$

From (38) and (40), we get

(41) $\sum_{k=1}^{m}\|x_{k,n} - \bar{x}_n\|^2 + \frac{\underline{\lambda}^2}{m}\left\|\sum_{k=1}^{m}y_{k,n}\right\|^2 \le \sum_{k=1}^{m}\|x_{k,n} - z_n\|^2,$

and hence

(42) $\frac{1}{\bar{\lambda}}\sum_{k=1}^{m}\|x_{k,n} - \bar{x}_n\|^2 + \frac{\underline{\lambda}^2}{m\bar{\lambda}}\left\|\sum_{k=1}^{m}y_{k,n}\right\|^2 \le \frac{1}{\bar{\lambda}}\sum_{k=1}^{m}\|x_{k,n} - z_n\|^2.$

Thus, from (42), (34) and setting $\eta = \min\left\{\frac{1}{\bar{\lambda}}, \frac{\underline{\lambda}^2}{m\bar{\lambda}}\right\}$, we get

(43) $\eta\left(\sum_{k=1}^{m}\|x_{k,n} - \bar{x}_n\|^2 + \left\|\sum_{k=1}^{m}y_{k,n}\right\|^2\right) = \eta\|\nabla\varphi_n(u_n)\|^2 \le \varphi_n(u_n).$□

Next, we state and prove our main theorem.

Theorem 3.5

Let $H$ be a real Hilbert space and let $A_k : H \to 2^H$, for $k \in \{1, 2, \ldots, m\}$ and $m \ge 2$, be maximally monotone mappings satisfying $\Omega \coloneqq \{z \in H : 0 \in A_1 z + A_2 z + \cdots + A_m z\} \neq \emptyset$. Let $f : V \to V$ be a contraction mapping with constant $\alpha$. Let the real sequence $\{\lambda_{k,n}\} \subset [\underline{\lambda}, \bar{\lambda}]$ for some $\bar{\lambda} > \underline{\lambda} > 0$ and for each $k = 1, 2, \ldots, m$ and $n \ge 0$. Then, the sequence $\{u_n\}$ generated by Algorithm 3.1 converges strongly to an element $u^* = (z^*, w_1^*, \ldots, w_m^*) \in S_e(A_1, A_2, \ldots, A_m)$ satisfying the variational inequality

(44) $\langle(f - I)u^*, u^* - x\rangle \ge 0 \quad \text{for all } x \in S_e(A_1, A_2, \ldots, A_m),$
where $z^* \in \Omega$.

Proof

By Lemma 3.4, there exists $\eta > 0$ such that

(45) $\varphi_n(u_n) \ge \eta\|\nabla\varphi_n\|^2 \quad \text{for all } n \ge 0.$

This implies that $\varphi_n(u_n)$ is always nonnegative, and from (17), we obtain

(46) $T_n u_n - u_n = -\frac{\varphi_n(u_n)}{\|\nabla\varphi_n\|^2}\nabla\varphi_n,$

which implies

(47) $\|T_n u_n - u_n\| = \frac{\varphi_n(u_n)}{\|\nabla\varphi_n\|^2}\|\nabla\varphi_n\| = \frac{\varphi_n(u_n)}{\|\nabla\varphi_n\|},$

for all $n$ such that $\nabla\varphi_n \neq 0$. Thus, dividing both sides of inequality (45) by $\|\nabla\varphi_n\|$, we obtain

(48) $\|T_n u_n - u_n\| \ge \eta\|\nabla\varphi_n\|,$

which is also true for $n$ with $\nabla\varphi_n = 0$. From (29), we have $\|T_n u_n - u_n\| \to 0$ as $n \to \infty$, so (48) implies $\|\nabla\varphi_n\| \to 0$ as $n \to \infty$. Thus, it follows from (47) that

(49) $\lim_{n\to\infty}\varphi_n(u_n) = 0.$

From the expression for $\nabla\varphi_n$, we have $\sum_{k=1}^{m}y_{k,n} \to 0$ and $x_{k,n} - \bar{x}_n \to 0$ for $k = 1, 2, \ldots, m$ as $n \to \infty$. Moreover, subtracting $x_{k,n} + \lambda_{k,n}w_{k,n}$ from both sides of Eq. (19), we obtain

(50) $z_n - x_{k,n} = \lambda_{k,n}(y_{k,n} - w_{k,n}).$

This and the definition of $\varphi_n$ imply that

(51) $\varphi_n(u_n) = \sum_{k=1}^{m}\langle z_n - x_{k,n}, y_{k,n} - w_{k,n}\rangle = \sum_{k=1}^{m}\langle\lambda_{k,n}(y_{k,n} - w_{k,n}), y_{k,n} - w_{k,n}\rangle = \sum_{k=1}^{m}\lambda_{k,n}\|y_{k,n} - w_{k,n}\|^2.$

Hence, from (49), (51) and the fact that $\lambda_{k,n} \ge \underline{\lambda} > 0$, we have

(52) $\lim_{n\to\infty}\|y_{k,n} - w_{k,n}\| = 0 \quad \text{for all } k = 1, 2, \ldots, m,$

and from (50) and (52), we obtain

(53) $\lim_{n\to\infty}\|z_n - x_{k,n}\| = 0 \quad \text{for all } k = 1, 2, \ldots, m.$

Moreover, from Lemma 3.3 the sequence $\{u_n\} = \{(z_n, w_{1,n}, \ldots, w_{m,n})\}$ converges strongly to a point $u^* = (z^*, w_1^*, \ldots, w_m^*) \in \bigcap_{i=1}^{\infty}F(T_i)$. In addition, Eqs. (52) and (53) imply that $y_{k,n} \to w_k^*$ and $x_{k,n} \to z^*$, for each $k = 1, 2, \ldots, m$. Since $\mathrm{Gph}(A_k)$ is closed and $(x_{k,n}, y_{k,n}) \in \mathrm{Gph}(A_k)$ for all $k = 1, 2, \ldots, m$ and $n \ge 0$, we get $w_k^* \in A_k(z^*)$. Furthermore, since $\{u_n\} \subset V$ and $V$ is a closed subspace, we also have $u^* \in V$, and hence $u^* = (z^*, w_1^*, \ldots, w_m^*) \in S_e(A_1, A_2, \ldots, A_m)$, and by Lemma 2.2 we obtain $z^* \in (A_1 + A_2 + \cdots + A_m)^{-1}(0)$. Moreover, since $u^* = P_{\bigcap_{i=1}^{\infty}F(T_i)}(f(u^*))$, by Lemma 2.7 we obtain the variational inequality

(54) $\langle(f - I)u^*, u^* - x\rangle \ge 0 \quad \text{for all } x \in S_e(A_1, A_2, \ldots, A_m),$

as $S_e(A_1, A_2, \ldots, A_m) \subseteq \bigcap_{i=1}^{\infty}F(T_i)$. The proof is complete.□

Remark 3.6

We observe that Algorithm 3.1 is equivalent to the following scheme:

(55) $u_0 = (z_0, w_{1,0}, \ldots, w_{m,0}) \in V$ chosen arbitrarily,
$x_{k,n} = (I + \lambda_{k,n}A_k)^{-1}(z_n + \lambda_{k,n}w_{k,n}), \quad k = 1, 2, \ldots, m,$
$y_{k,n} = w_{k,n} + \dfrac{z_n - x_{k,n}}{\lambda_{k,n}}, \quad k = 1, 2, \ldots, m,$
$u_{n+1} = \alpha_n f(z_n, w_{1,n}, \ldots, w_{m,n}) + (1 - \alpha_n)(c_n, d_{1,n}, \ldots, d_{m,n}), \quad n \ge 0,$

where $f : V \to V$ is a contraction mapping with constant $\alpha$, $\lambda_{k,n} > 0$ for $k = 1, 2, \ldots, m$ and $n \ge 0$, and

$c_n = \beta_n z_n + \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)\hat{c}_i,$
$d_{k,n} = \beta_n w_{k,n} + \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)\hat{d}_{k,i},$
$\hat{c}_i = \begin{cases} z_n, & \text{if } \varphi_i(u_n) \le 0, \\ z_n - \delta_i\sum_{k=1}^{m}y_{k,i}, & \text{if } \varphi_i(u_n) > 0, \end{cases}$
$\hat{d}_{k,i} = \begin{cases} w_{k,n}, & \text{if } \varphi_i(u_n) \le 0, \\ w_{k,n} - \delta_i(x_{k,i} - \bar{x}_i), & \text{if } \varphi_i(u_n) > 0, \end{cases}$

with

$\delta_i = \dfrac{\sum_{k=1}^{m}\langle z_n - x_{k,i}, y_{k,i} - w_{k,n}\rangle}{\left\|\sum_{k=1}^{m}y_{k,i}\right\|^2 + \sum_{k=1}^{m}\|x_{k,i} - \bar{x}_i\|^2},$

for $\bar{x}_i = \frac{1}{m}\sum_{k=1}^{m}x_{k,i}$.

Remark 3.7

At this point, we know that $S_e(A_1, A_2, \ldots, A_m) \subseteq \bigcap_{i=1}^{\infty}F(T_i)$, and one can show that $\bigcap_{i=1}^{\infty}F(T_i) \subseteq S_e(A_1, A_2, \ldots, A_m)$, and hence $S_e(A_1, A_2, \ldots, A_m) = \bigcap_{i=1}^{\infty}F(T_i)$.

If in (55) we assume $\lambda_{k,n} = \lambda > 0$ for $k = 1, 2, \ldots, m$ and $n \ge 0$, then we get the following corollary for the sum of a finite family of maximally monotone mappings in Hilbert spaces.

Corollary 3.8

Let $H$ be a real Hilbert space and let $A_k : H \to 2^H$, for $k \in \{1, 2, \ldots, m\}$ and $m \ge 2$, be maximally monotone mappings satisfying $\Omega \coloneqq \{z \in H : 0 \in A_1 z + A_2 z + \cdots + A_m z\} \neq \emptyset$. Let $f : V \to V$ be a contraction mapping with constant $\alpha$. For arbitrary $u_0 = (z_0, w_{1,0}, \ldots, w_{m,0}) \in V$ define an iterative algorithm by

(56) $x_{k,n} = (I + \lambda A_k)^{-1}(z_n + \lambda w_{k,n}), \quad k = 1, 2, \ldots, m,$
$y_{k,n} = w_{k,n} + \dfrac{z_n - x_{k,n}}{\lambda}, \quad k = 1, 2, \ldots, m,$
$u_{n+1} = \alpha_n f(z_n, w_{1,n}, \ldots, w_{m,n}) + (1 - \alpha_n)(c_n, d_{1,n}, \ldots, d_{m,n}), \quad n \ge 0,$
where $\{c_n\}$ and $\{d_{k,n}\}$ are as in (55) and $\lambda > 0$. Then, $\{u_n\}$ converges strongly to an element $u^* = (z^*, w_1^*, \ldots, w_m^*)$ of $S_e(A_1, A_2, \ldots, A_m)$, where $z^* \in \Omega$.

If in Theorem 3.5 we replace the contraction mapping $f$ by a constant mapping $u \in V$, then we get the following corollary for the sum of a finite family of maximally monotone mappings in Hilbert spaces.

Corollary 3.9

Let $H$ be a real Hilbert space and let $A_k : H \to 2^H$, for $k \in \{1, 2, \ldots, m\}$ and $m \ge 2$, be maximally monotone mappings satisfying $\Omega \coloneqq \{z \in H : 0 \in A_1 z + A_2 z + \cdots + A_m z\} \neq \emptyset$. For arbitrary $u_0 = (z_0, w_{1,0}, \ldots, w_{m,0}) \in V$ define an iterative algorithm by

(57) $x_{k,n} = (I + \lambda_{k,n}A_k)^{-1}(z_n + \lambda_{k,n}w_{k,n}), \quad k = 1, 2, \ldots, m,$
$y_{k,n} = w_{k,n} + \dfrac{z_n - x_{k,n}}{\lambda_{k,n}}, \quad k = 1, 2, \ldots, m,$
$u_{n+1} = \alpha_n u + (1 - \alpha_n)(c_n, d_{1,n}, \ldots, d_{m,n}), \quad n \ge 0,$
where $\{c_n\}$ and $\{d_{k,n}\}$ are as in (55), $\{\lambda_{k,n}\} \subset [\underline{\lambda}, \bar{\lambda}]$ for some $\bar{\lambda} > \underline{\lambda} > 0$ and for each $k = 1, 2, \ldots, m$ and $n \ge 0$, and $u = (z, w_1, \ldots, w_m) \in V \subset H^{m+1}$. Then, $\{u_n\}$ converges strongly to an element $u^* = (z^*, w_1^*, \ldots, w_m^*)$ of $S_e(A_1, A_2, \ldots, A_m)$, where $z^* \in \Omega$.

We note that if in Corollary 3.9 we take $u = (0, 0, \ldots, 0) \in V$, then we get the following theorem for approximating the minimum-norm point of the extended solution set of the sum of a finite family of maximally monotone mappings in Hilbert spaces.

Theorem 3.10

Let $H$ be a real Hilbert space and let $A_k : H \to 2^H$, for $k \in \{1, 2, \ldots, m\}$ and $m \ge 2$, be maximally monotone mappings satisfying $\Omega \coloneqq \{z \in H : 0 \in A_1 z + A_2 z + \cdots + A_m z\} \neq \emptyset$. For arbitrary $u_0 = (z_0, w_{1,0}, \ldots, w_{m,0}) \in V$ define an iterative algorithm by

(58) $x_{k,n} = (I + \lambda_{k,n}A_k)^{-1}(z_n + \lambda_{k,n}w_{k,n}), \quad k = 1, 2, \ldots, m,$
$y_{k,n} = w_{k,n} + \dfrac{z_n - x_{k,n}}{\lambda_{k,n}}, \quad k = 1, 2, \ldots, m,$
$u_{n+1} = (1 - \alpha_n)(c_n, d_{1,n}, \ldots, d_{m,n}), \quad n \ge 0,$
where $\{c_n\}$ and $\{d_{k,n}\}$ are as in (55) and $\{\lambda_{k,n}\} \subset [\underline{\lambda}, \bar{\lambda}]$ for some $\bar{\lambda} > \underline{\lambda} > 0$ and for each $k = 1, 2, \ldots, m$ and $n \ge 0$. Then, $\{u_n\}$ converges strongly to the minimum-norm point $u^* = (z^*, w_1^*, \ldots, w_m^*)$ of $S_e(A_1, A_2, \ldots, A_m)$, where $z^* \in \Omega$.

Proof

We note that since $u^* = (z^*, w_1^*, \ldots, w_m^*) = P_{S_e(A_1, A_2, \ldots, A_m)}(0)$, where $u^* \in S_e(A_1, A_2, \ldots, A_m)$, we obtain that $u^*$ is the minimum-norm point of $S_e(A_1, A_2, \ldots, A_m)$.□

4 Application to convex minimization problem

In this section, we apply Theorem 3.5 to study the convex minimization problem.

Let $g_k : H \to \overline{\mathbb{R}}$, $k = 1, 2, \ldots, m$, where $m \ge 2$, be a finite family of proper, convex and lower semicontinuous functions. We consider the problem of finding $z^* \in H$ such that

(59) $g_1(z^*) + g_2(z^*) + \cdots + g_m(z^*) = \min_{z \in H}\{g_1(z) + g_2(z) + \cdots + g_m(z)\}.$

We note that problem (59) is equivalent, by Fermat's rule, to the problem of finding $z^* \in H$ such that

(60) $0 \in \partial g_1(z^*) + \partial g_2(z^*) + \cdots + \partial g_m(z^*),$

where $\partial g$ denotes the subdifferential of $g$, which is maximally monotone (see, e.g., [35]). So, we obtain the following theorem from Theorem 3.5.
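
Before stating it, here is a simple worked instance (our own illustration, not from the paper). For $g_k(x) = \frac{1}{2}\|x - a_k\|^2$ with fixed points $a_k \in H$, we have $\partial g_k(x) = \{x - a_k\}$, so condition (60) becomes

$0 = \sum_{k=1}^{m}(z^* - a_k) \iff z^* = \frac{1}{m}\sum_{k=1}^{m}a_k,$

that is, the unique minimizer of $g_1 + \cdots + g_m$ is the mean of the points $a_k$; moreover, the resolvent used below reduces to $(I + \lambda\,\partial g_k)^{-1}(u) = (u + \lambda a_k)/(1 + \lambda)$.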

Theorem 4.1

Let $H$ be a real Hilbert space. Let $g_k : H \to \overline{\mathbb{R}}$, for each $k = 1, 2, \ldots, m$ and $m \ge 2$, be a proper, convex and lower semicontinuous function such that $\Omega \coloneqq \operatorname{argmin}_{z \in H}\{g_1(z) + g_2(z) + \cdots + g_m(z)\} \neq \emptyset$. Let $f : V \to V$ be a contraction mapping with constant $\alpha$. For arbitrary $u_0 = (z_0, w_{1,0}, \ldots, w_{m,0}) \in V$ define an iterative algorithm by

(61) $x_{k,n} = (I + \lambda_{k,n}\partial g_k)^{-1}(z_n + \lambda_{k,n}w_{k,n}), \quad k = 1, 2, \ldots, m,$
$y_{k,n} = w_{k,n} + \dfrac{z_n - x_{k,n}}{\lambda_{k,n}}, \quad k = 1, 2, \ldots, m,$
$u_{n+1} = \alpha_n f(z_n, w_{1,n}, \ldots, w_{m,n}) + (1 - \alpha_n)(c_n, d_{1,n}, \ldots, d_{m,n}), \quad n \ge 0,$
where for each $k = 1, 2, \ldots, m$ and $n \ge 0$, $\{\lambda_{k,n}\} \subset [\underline{\lambda}, \bar{\lambda}]$ for some $\bar{\lambda} > \underline{\lambda} > 0$,
$c_n = \beta_n z_n + \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)\hat{c}_i,$
$d_{k,n} = \beta_n w_{k,n} + \sum_{i=1}^{n}(\beta_{i-1} - \beta_i)\hat{d}_{k,i},$
$\hat{c}_i = \begin{cases} z_n, & \text{if } \varphi_i(u_n) \le 0, \\ z_n - \delta_i\sum_{k=1}^{m}y_{k,i}, & \text{if } \varphi_i(u_n) > 0, \end{cases}$
$\hat{d}_{k,i} = \begin{cases} w_{k,n}, & \text{if } \varphi_i(u_n) \le 0, \\ w_{k,n} - \delta_i(x_{k,i} - \bar{x}_i), & \text{if } \varphi_i(u_n) > 0, \end{cases}$
with
$\delta_i = \dfrac{\sum_{k=1}^{m}\langle z_n - x_{k,i}, y_{k,i} - w_{k,n}\rangle}{\left\|\sum_{k=1}^{m}y_{k,i}\right\|^2 + \sum_{k=1}^{m}\|x_{k,i} - \bar{x}_i\|^2},$
for $\bar{x}_i = \frac{1}{m}\sum_{k=1}^{m}x_{k,i}$. Then, $\{u_n\}$ converges strongly to an element $u^* = (z^*, w_1^*, \ldots, w_m^*)$ of $S_e(\partial g_1, \partial g_2, \ldots, \partial g_m)$, where $z^* \in \Omega$.

Proof

Set $A_k = \partial g_k$ for $k = 1, 2, \ldots, m$. Then $A_k$, $k = 1, 2, \ldots, m$, is maximally monotone with $\Omega = \{z \in H : 0 \in A_1 z + A_2 z + \cdots + A_m z\}$. Hence, the conclusion follows from Theorem 3.5.□

5 Numerical example

In this section, we present some numerical results to illustrate our main theorem. The following numerical example verifies the conclusion of Corollary 3.8.

Example 5.1

Let $H = \ell_2$, where $\ell_2$ is the space of square-summable real sequences. Let $A_1, A_2, A_3 : \ell_2 \to \ell_2$ be defined by $A_1 x = x + (1, 2, 3, 0, 0, \ldots)$, $A_2 x = 2x + (3, 4, 5, 0, 0, \ldots)$ and $A_3 x = 3x - (2, 2, 2, 0, 0, \ldots)$, where $x = (x_1, x_2, \ldots) \in \ell_2$. We see that $A_1$, $A_2$ and $A_3$ are maximally monotone with $R(I + \lambda A_1) = R(I + \lambda A_2) = R(I + \lambda A_3) = \ell_2$ for each $\lambda > 0$. Now, by direct calculation we get that

$x_{1,n} = \dfrac{z_n + \lambda w_{1,n} - \lambda(1, 2, 3, 0, 0, \ldots)}{1 + \lambda}, \quad y_{1,n} = \dfrac{z_n + \lambda w_{1,n} + (1, 2, 3, 0, 0, \ldots)}{1 + \lambda},$
$x_{2,n} = \dfrac{z_n + \lambda w_{2,n} - \lambda(3, 4, 5, 0, 0, \ldots)}{1 + 2\lambda}, \quad y_{2,n} = \dfrac{2z_n + 2\lambda w_{2,n} + (3, 4, 5, 0, 0, \ldots)}{1 + 2\lambda},$
$x_{3,n} = \dfrac{z_n + \lambda w_{3,n} + \lambda(2, 2, 2, 0, 0, \ldots)}{1 + 3\lambda} \quad \text{and} \quad y_{3,n} = \dfrac{3z_n + 3\lambda w_{3,n} - (2, 2, 2, 0, 0, \ldots)}{1 + 3\lambda},$

for $\lambda > 0$. Thus, if we take $\lambda = 1$, $\alpha_n = \dfrac{1}{100(n + 100)}$, $\beta_n = \dfrac{1}{n}$ for all $n \ge 1$, and $f(x) = \dfrac{x}{1000}$, then Algorithm (56) reduces to the following:

(62) $u_{n+1} = \dfrac{1}{10^5(n + 10^2)}(z_n, w_{1,n}, w_{2,n}, w_{3,n}) + \left(1 - \dfrac{1}{10^2(n + 10^2)}\right)(c_n, d_{1,n}, d_{2,n}, d_{3,n}),$

where $\{c_n\}$ and $\{d_{k,n}\}$, for $k = 1, 2, 3$, are as in (55).

Now, if we take the initial point $u_1 = (z_1, w_{1,1}, w_{2,1}, w_{3,1})$, where $z_1 = (2, 1, 2, 0, 0, \ldots)$, $w_{1,1} = (1, 1, 1, 0, 0, \ldots)$, $w_{2,1} = (0, 0, 0, 0, 0, \ldots)$ and $w_{3,1} = (-1, -1, -1, 0, 0, \ldots)$, then the numerical results obtained using MATLAB show that the first component $\{z_n\}$ of $\{u_n\} = \{(z_n, w_{1,n}, w_{2,n}, w_{3,n})\}$ generated by (62) converges strongly to the solution $z^* = \left(-\frac{1}{3}, -\frac{2}{3}, -1, 0, 0, \ldots\right) \in (A_1 + A_2 + A_3)^{-1}(0)$ (Table 1).

Table 1

Convergence of the first component $\{z_n\}$ of $\{u_n\}$ generated by (62)

N        $z_n$                                        $\|z_{n+1} - z_n\|_{\ell_2}$
1        (2.0000, 1.0000, 2.0000, 0, 0, ...)
2        (2.1662, 1.3413, 2.5015, 0, 0, ...)          0.690
3        (1.3286, 1.4059, 2.3823, 0, 0, ...)          0.8485
4        (0.7003, 1.3819, 2.2615, 0, 0, ...)          0.6403
5        (0.4264, 1.3101, 2.1714, 0, 0, ...)          0.2971
10       (0.0554, 1.0218, 1.7395, 0, 0, ...)          0.0999
100      (-0.3081, -0.5944, -0.9294, 0, 0, ...)       9.4868 × 10^{-4}
200      (-0.2977, -0.6367, -0.9519, 0, 0, ...)       3.6056 × 10^{-4}
300      (-0.3064, -0.6500, -0.9776, 0, 0, ...)       2.4495 × 10^{-4}
400      (-0.3141, -0.6539, -0.9969, 0, 0, ...)       1.4142 × 10^{-4}
500      (-0.3196, -0.6549, -0.9969, 0, 0, ...)       1.4142 × 10^{-4}
1,000    (-0.3300, -0.6554, -1.0006, 0, 0, ...)       0
2,000    (-0.3324, -0.6597, -0.9987, 0, 0, ...)       0
3,000    (-0.3330, -0.6635, -1.0001, 0, 0, ...)       0
$z^*$    (-1/3, -2/3, -1, 0, 0, ...)                  0
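
For readers who wish to experiment, the following Python/NumPy sketch (our own re-implementation, not the authors' MATLAB code) runs scheme (62); since all data are supported on the first three coordinates of $\ell_2$, the computation is carried out in $\mathbb{R}^3$. The handling of the indices (e.g., of $\beta_0$ and $\beta_1$) reflects our reading of Algorithm 3.1, so individual iterates need not match Table 1 exactly, but the iterate $z_n$ should still approach the zero of $A_1 + A_2 + A_3$.

```python
import numpy as np

# Example 5.1, truncated to R^3: A_k x = scale[k]*x + shift[k].
scale = np.array([1.0, 2.0, 3.0])
shift = np.array([[1.0, 2.0, 3.0],
                  [3.0, 4.0, 5.0],
                  [-2.0, -2.0, -2.0]])
m, lam = 3, 1.0

def resolvent(k, u):
    # (I + lam*A_k)^{-1}(u) = (u - lam*shift[k]) / (1 + lam*scale[k])
    return (u - lam * shift[k]) / (1.0 + lam * scale[k])

alpha = lambda n: 1.0 / (100.0 * (n + 100.0))
beta = lambda i: 1.0 if i == 0 else 1.0 / i

z = np.array([2.0, 1.0, 2.0])                       # z_1
w = np.array([[1.0, 1.0, 1.0],                      # w_{1,1}
              [0.0, 0.0, 0.0],                      # w_{2,1}
              [-1.0, -1.0, -1.0]])                  # w_{3,1}, so that u_1 lies in V
history = []                                        # stored pairs (x_{k,i}, y_{k,i})

for n in range(1, 1001):
    x = np.array([resolvent(k, z + lam * w[k]) for k in range(m)])
    y = w + (z - x) / lam
    history.append((x, y))

    # v_n = beta_n u_n + sum_{i=1}^{n} (beta_{i-1} - beta_i) T_i u_n
    vz, vw = beta(n) * z, beta(n) * w
    for i, (xi, yi) in enumerate(history, start=1):
        phi = sum(np.dot(z - xi[k], yi[k] - w[k]) for k in range(m))
        gz = yi.sum(axis=0)                         # z-block of grad(phi_i)
        gw = xi - xi.mean(axis=0)                   # w_k-blocks of grad(phi_i)
        norm2 = np.dot(gz, gz) + np.sum(gw * gw)
        delta = max(0.0, phi) / norm2 if norm2 > 0 else 0.0
        vz += (beta(i - 1) - beta(i)) * (z - delta * gz)
        vw += (beta(i - 1) - beta(i)) * (w - delta * gw)

    # u_{n+1} = alpha_n f(u_n) + (1 - alpha_n) v_n, with f(u) = u/1000
    z = alpha(n) * (z / 1000.0) + (1 - alpha(n)) * vz
    w = alpha(n) * (w / 1000.0) + (1 - alpha(n)) * vw

print(z)   # should approach the zero (-1/3, -2/3, -1) of A_1 + A_2 + A_3
```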

6 Conclusion

In this article, we constructed and studied an algorithm which starts by reformulating (4) as the problem of locating a point in a certain extended solution set $S_e(A_1, A_2, \ldots, A_m) \subset H \times H^m$, and which converges strongly to a zero of the sum of a finite family of maximally monotone mappings in Hilbert spaces. The assumption that one of the mappings is single-valued, $\alpha$-inverse strongly monotone or $\alpha$-strongly monotone is dispensed with. In addition, we applied our main results to study the convex minimization problem. Finally, we provided a numerical example to support our results. Our results extend those of [28] in the sense that our theorems provide strong convergence in arbitrary Hilbert spaces. In particular, Theorem 3.5 extends Proposition 7 of Svaiter [28] from weak to strong convergence. Moreover, our theorems improve and unify most of the results that have been proved for this important class of nonlinear mappings.

Acknowledgements

The authors express their deep gratitude to the referees and the editor for their valuable comments and suggestions.

  1. Funding: Getahun B. Wega and Habtu Zegeye gratefully acknowledge the funding received from Simons Foundation based at Botswana International University of Science and Technology (BIUST).

References

[1] S. Kamimura and W. Takahashi, Approximating solutions of maximal monotone operators in Hilbert spaces, J. Approx. Theory 106 (2000), no. 2, 226–240, doi:10.1006/jath.2000.3493.

[2] Y. Yao, Y. J. Cho, and Y.-C. Liou, Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems, Eur. J. Oper. Res. 212 (2011), no. 2, 242–250, doi:10.1016/j.ejor.2011.01.042.

[3] P. L. Lions and B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal. 16 (1979), no. 6, 964–979, doi:10.1137/0716071.

[4] P. L. Combettes and V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul. 4 (2005), no. 4, 1168–1200, doi:10.1137/050626090.

[5] H. Brezis and P.-L. Lions, Produits infinis de résolvantes, Israel J. Math. 29 (1978), no. 4, 329–345, doi:10.1007/BF02761171.

[6] P. Cholamjiak and Y. Shehu, Inertial forward-backward splitting method in Banach spaces with application to compressed sensing, Appl. Math. 64 (2019), 409–435, doi:10.21136/AM.2019.0323-18.

[7] V. Dadashi and H. Khatibzadeh, On the weak and strong convergence of the proximal point algorithm in reflexive Banach spaces, Optimization 66 (2017), no. 9, 1487–1494, doi:10.1080/02331934.2017.1337764.

[8] V. Dadashi and M. Postolache, Hybrid proximal point algorithm and applications to equilibrium problems and convex programming, J. Optim. Theory Appl. 174 (2017), no. 2, 518–529, doi:10.1007/s10957-017-1117-0.

[9] K. Kunrada, N. Pholasa, and P. Cholamjiak, On convergence and complexity of the modified forward-backward method involving new line searches for convex minimization, Math. Meth. Appl. Sci. 42 (2019), 1352–1362, doi:10.1002/mma.5420.

[10] A. Moudafi and M. Théra, Finding a zero of the sum of two maximal monotone operators, J. Optim. Theory Appl. 94 (1997), no. 2, 425–448, doi:10.1023/A:1022643914538.

[11] G. B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space, J. Math. Anal. Appl. 72 (1979), no. 2, 383–390, doi:10.1016/0022-247X(79)90234-8.

[12] X. Qin, S. Y. Cho, and L. Wang, A regularization method for treating zero points of the sum of two monotone operators, Fixed Point Theory Appl. 2014 (2014), 75, doi:10.1186/1687-1812-2014-75.

[13] D. V. Thong and P. Cholamjiak, Strong convergence of a forward-backward splitting method with a new step size for solving monotone inclusions, Comp. Appl. Math. 38 (2019), no. 94, doi:10.1007/s40314-019-0855-z.

[14] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim. 38 (2000), 431–446, doi:10.1137/S0363012998338806.

[15] G. B. Wega and H. Zegeye, A method of approximation for a zero of the sum of maximally monotone mappings in Hilbert spaces, Arab J. Math. Sci. (2019), doi:10.1016/j.ajmsc.2019.05.004.

[16] G. B. Wega, H. Zegeye, and O. A. Boikanyo, Approximating solutions of the sum of a finite family of maximally monotone mappings in Hilbert spaces, Adv. Oper. Theory 5 (2020), no. 2, 359–370, doi:10.1007/s43036-019-00026-9.

[17] J. B. Baillon and G. Haddad, Quelques propriétés des opérateurs angle-bornés et cycliquement monotones, Israel J. Math. 26 (1977), no. 2, 137–150, doi:10.1007/BF03007664.

[18] P. L. Combettes, Iterative construction of the resolvent of a sum of maximal monotone operators, J. Convex Anal. 16 (2009), no. 4, 727–748.

[19] W. Takahashi, N.-C. Wong, and J.-C. Yao, Two generalized strong convergence theorems of Halpern's type in Hilbert spaces and applications, Taiwanese J. Math. 16 (2012), no. 3, 1151–1172, doi:10.11650/twjm/1500406684.

[20] H. H. Bauschke and J. M. Borwein, On projection algorithms for solving convex feasibility problems, SIAM Rev. 38 (1996), no. 3, 367–426, doi:10.1137/S0036144593251710.

[21] H. H. Bauschke and P. L. Combettes, A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces, Math. Oper. Res. 26 (2001), 248–264, doi:10.1287/moor.26.2.248.10558.

[22] N. Pholasa, P. Cholamjiak, and Y.-J. Cho, Modified forward-backward splitting methods for accretive operators in Banach spaces, J. Nonlinear Sci. Appl. 9 (2016), 2766–2778, doi:10.22436/jnsa.009.05.72.

[23] S. Reich, Constructive techniques for accretive and monotone operators, in: Applied Nonlinear Analysis, Academic Press, New York, 1979.

[24] S. Suantai, P. Cholamjiak, and P. Sunthrayuth, Iterative methods with perturbations for the sum of two accretive operators in q-uniformly smooth Banach spaces, RACSAM 113 (2019), no. 2, 203–223, doi:10.1007/s13398-017-0465-9.

[25] R. I. Bot, E. R. Csetnek, and D. Meier, Inducing strong convergence into the asymptotic behaviour of proximal splitting algorithms in Hilbert spaces, Optim. Methods Softw. 34 (2019), no. 3, 489–514, doi:10.1080/10556788.2018.1457151.

[26] W. Cholamjiak, P. Cholamjiak, and S. Suantai, An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces, J. Fixed Point Theory Appl. 20 (2018), no. 42, doi:10.1007/s11784-018-0526-5.

[27] H. Wu, C. Cheng, and D. Qu, Strong convergence theorems for maximal monotone operators, fixed-point problems, and equilibrium problems, ISRN Appl. Math. 2013 (2013), 708548, doi:10.1155/2013/708548.

[28] B. F. Svaiter, General projective splitting methods for sums of maximal monotone operators, SIAM J. Control Optim. 48 (2009), no. 2, 787–811, doi:10.1137/070698816.

[29] K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, Cambridge University Press, Cambridge, 1990, doi:10.1017/CBO9780511526152.

[30] A. Moudafi, Viscosity approximation methods for fixed-point problems, J. Math. Anal. Appl. 241 (2000), 46–55, doi:10.1006/jmaa.1999.6615.

[31] H.-K. Xu, Viscosity approximation methods for nonexpansive mappings, J. Math. Anal. Appl. 298 (2004), no. 1, 279–291, doi:10.1016/j.jmaa.2004.04.059.

[32] S.-S. Chang, On Chidume's open questions and approximate solutions of multivalued strongly accretive mapping equations in Banach spaces, J. Math. Anal. Appl. 216 (1997), no. 1, 94–111, doi:10.1006/jmaa.1997.5661.

[33] G. Marino and H.-K. Xu, Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces, J. Math. Anal. Appl. 329 (2007), no. 1, 336–346, doi:10.1016/j.jmaa.2006.06.055.

[34] G. J. Minty, Monotone (nonlinear) operators in Hilbert space, Duke Math. J. 29 (1962), no. 3, 341–346, doi:10.1215/S0012-7094-62-02933-2.

[35] R. T. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pacific J. Math. 33 (1970), 209–216, doi:10.2140/pjm.1970.33.209.

Received: 2019-07-22
Revised: 2020-05-02
Accepted: 2020-05-10
Published Online: 2020-07-08

© 2020 Getahun B. Wega et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
