Solving Composite Fixed Point Problems with Block Updates

Article · Open Access

  • Patrick L. Combettes and Lilian E. Glaudin

Published/Copyright: March 25, 2021

Abstract

Various strategies are available to construct iteratively a common fixed point of nonexpansive operators by activating only a block of operators at each iteration. In the more challenging class of composite fixed point problems involving operators that do not share common fixed points, current methods require the activation of all the operators at each iteration, and the question of maintaining convergence while updating only blocks of operators is open. We propose a method that achieves this goal and analyze its asymptotic behavior. Weak, strong, and linear convergence results are established by exploiting a connection with the theory of concentrating arrays. Applications to several nonlinear and nonsmooth analysis problems are presented, ranging from monotone inclusions and inconsistent feasibility problems, to variational inequalities and minimization problems arising in data science.

MSC 2010: 47J26; 47N10; 90C25; 47H05

1 Introduction

Throughout, 𝓗 is a real Hilbert space with power set 2𝓗, identity operator Id, scalar product 〈⋅ | ⋅〉, and associated norm ∥⋅∥. Recall that an operator T : 𝓗 → 𝓗 is nonexpansive if it is 1-Lipschitzian, and α-averaged for some α ∈ ]0, 1[ if Id + α–1(T − Id) is nonexpansive [4]. We consider the broad class of nonlinear analysis problems which can be cast in the following format.

Problem 1.1

Let m be a strictly positive integer and let (ω_i)_{1⩽i⩽m} ∈ ]0, 1]^m be such that ∑_{i=1}^{m} ω_i = 1. For every i ∈ {0, …, m}, let T_i : 𝓗 → 𝓗 be α_i-averaged for some α_i ∈ ]0, 1[. The task is to find a fixed point of T_0 ∘ ∑_{i=1}^{m} ω_i T_i.

A classical instantiation of Problem 1.1 is found in the area of best approximation [8, 38]: given two nonempty closed convex subsets C and D of 𝓗, with projection operators projC and projD, find a fixed point of the composition projC ∘ projD. Geometrically, such points are those in C at minimum distance from D, and they can be constructed via the method of alternating projections [8, 26]

(∀ n ∈ ℕ)  x_{n+1} = proj_C(proj_D x_n).   (1.1)
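As an illustration not taken from the paper, the alternating-projections iteration (1.1) can be sketched numerically. The two disks below are an assumed toy example, and `proj_ball` is a hypothetical helper implementing the Euclidean projector onto a closed ball; the limit is the point of C nearest to D.

```python
import numpy as np

def proj_ball(center, radius):
    """Projector onto the closed Euclidean ball B(center, radius)."""
    center = np.asarray(center, dtype=float)
    def proj(x):
        d = x - center
        n = np.linalg.norm(d)
        return x.copy() if n <= radius else center + radius * d / n
    return proj

# Two disjoint disks: C centered at (0,0), D centered at (3,0), radius 1 each.
proj_C = proj_ball((0.0, 0.0), 1.0)
proj_D = proj_ball((3.0, 0.0), 1.0)

x = np.array([0.5, 2.0])
for _ in range(200):                 # x_{n+1} = proj_C(proj_D x_n)
    x = proj_C(proj_D(x))

# The fixed point of proj_C ∘ proj_D is the point of C closest to D, here (1, 0).
```

Here each cycle shrinks the transversal component geometrically, so a few hundred iterations suffice for machine accuracy.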

This problem was extended in [1] to that of finding a fixed point of the composition prox_f ∘ prox_g of the proximity operators of proper lower semicontinuous convex functions f : 𝓗 → ]−∞, +∞] and g : 𝓗 → ]−∞, +∞]. Recall that, given x ∈ 𝓗, prox_f x is the unique minimizer of the function y ↦ f(y) + ∥x − y∥²/2 or, equivalently, prox_f = (Id + ∂f)^{−1}, where ∂f is the subdifferential of f, which is maximally monotone [6]. A further generalization of this formalism was proposed in [7] where, given two maximally monotone operators A : 𝓗 → 2^𝓗 and B : 𝓗 → 2^𝓗, with associated resolvents J_A = (Id + A)^{−1} and J_B = (Id + B)^{−1}, the asymptotic behavior of the iterations

(∀ n ∈ ℕ)  x_{n+1} = J_A(J_B x_n)   (1.2)

for constructing a fixed point of J_A ∘ J_B was investigated. We recall that J_A and J_B are 1/2-averaged operators [6]. Now let A_0 and (B_i)_{1⩽i⩽m} be maximally monotone operators from 𝓗 to 2^𝓗 and, for every i ∈ {1, …, m}, let {}^{ρ_i}B_i = (Id − J_{ρ_iB_i})/ρ_i be the Yosida approximation of B_i of index ρ_i ∈ ]0, +∞[. Set β = 1/( ∑_{i=1}^{m} 1/ρ_i ) and (∀ i ∈ {1, …, m}) ω_i = β/ρ_i. In connection with the inclusion problem

find x ∈ 𝓗 such that 0 ∈ A_0 x + ∑_{i=1}^{m} ({}^{ρ_i}B_i) x,   (1.3)

the iterative process

(∀ n ∈ ℕ)  x_{n+1} = J_{βγ_nA_0}( x_n + γ_n( ∑_{i=1}^{m} ω_i J_{ρ_iB_i} x_n − x_n ) ),  where 0 < γ_n < 2,   (1.4)

was studied in [11]. This algorithm captures (1.2) as well as methods such as those proposed in [31, 32]; see also [45] for related problems. To make its structure more apparent, let us set

(∀ n ∈ ℕ)  T_{0,n} = J_{βγ_nA_0}  and  (∀ i ∈ {1, …, m})  T_{i,n} = (1 − γ_n)Id + γ_n J_{ρ_iB_i}.   (1.5)

Then we observe that, for every n ∈ ℕ, the following hold:

  • Problem (1.3) is the special case of Problem 1.1 in which T_0 = J_{βA_0}, T_1 = J_{ρ_1B_1}, …, and T_m = J_{ρ_mB_m}. Its set of solutions is

    Fix( J_{βA_0} ∘ ∑_{i=1}^{m} ω_i J_{ρ_iB_i} ) = Fix( T_{0,n} ∘ ∑_{i=1}^{m} ω_i T_{i,n} ).   (1.6)

  • For every i ∈ {0, …, m}, T_{i,n} is an averaged nonexpansive operator.

  • The updating rule in (1.4) can be written as

    x_{n+1} = T_{0,n}( ∑_{i=1}^{m} ω_i t_{i,n} ),  where (∀ i ∈ {1, …, m})  t_{i,n} = T_{i,n} x_n.   (1.7)

The implementation of (1.7) requires the activation of T_{0,n} and the m operators (T_{i,n})_{1⩽i⩽m}. If the operators (T_i)_{0⩽i⩽m} have common fixed points, then Problem 1.1 amounts to finding such a point, and this can be achieved via block-iterative methods that require activating only subgroups of operators over the iterations; see, for instance, [2, 5, 10, 24]. In the absence of common fixed points, whether Problem 1.1 can be solved by updating only subgroups of operators is an open question. In the present paper, we address it by showing that it is possible to lighten the computational burden of iteration n of (1.7) by activating only a subgroup (T_{i,n})_{i∈I_n}, where I_n ⊂ {1, …, m}, of the operators and by recycling older evaluations of the remaining operators. This leads to the iteration template

for every i ∈ I_n:  t_{i,n} = T_{i,n} x_n
for every i ∈ {1, …, m} ∖ I_n:  t_{i,n} = t_{i,n−1}
x_{n+1} = T_{0,n}( ∑_{i=1}^{m} ω_i t_{i,n} ).   (1.8)

The proposed framework will feature a flexible deterministic rule for selecting the blocks of indices (I_n)_{n∈ℕ}, as well as tolerances in the evaluation of the operators in (1.8). Somewhat unexpectedly, our analysis will rely on the theory of concentrating arrays, which appears predominantly in the area of mean iteration methods [13, 15, 29, 33, 34, 40, 41]. In Section 2, we propose a new type of concentrating array that will be employed in Section 3 to investigate the asymptotic behavior of the method. Finally, various applications to nonlinear analysis problems are presented in Section 4.
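To make the template (1.8) concrete, here is a small numerical sketch under assumed toy data (not from the paper): T_0 = Id and T_i x = (x + c_i)/2, which is firmly nonexpansive, with cyclic singleton blocks I_n. The fixed point of T_0 ∘ ∑_{i} ω_i T_i is then the mean of the c_i.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
c = rng.normal(size=m)                     # assumed toy data
w = np.full(m, 1.0 / m)                    # weights with sum 1

# T_i x = (x + c_i)/2 is firmly nonexpansive; T_0 = Id.
T = [lambda x, ci=ci: 0.5 * (x + ci) for ci in c]

x = 0.0
t = np.zeros(m)                            # recycled evaluations t_{i,-1}
for n in range(400):
    i = n % m                              # cyclic blocks I_n = {i}; every window of m blocks covers all indices
    t[i] = T[i](x)                         # activate only one operator per iteration
    x = float(w @ t)                       # x_{n+1} = T_0(sum_i w_i t_{i,n})

# The iterate approaches the fixed point mean(c), despite the stale entries of t.
```

Even though most entries of t lag behind, each update still contracts the error toward the fixed point, which is the point of the block-update analysis developed below.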

Notation. Let M : 𝓗 → 2^𝓗. Then gra M = {(x, u) ∈ 𝓗 × 𝓗 | u ∈ Mx} is the graph of M, zer M = {x ∈ 𝓗 | 0 ∈ Mx} the set of zeros of M, dom M = {x ∈ 𝓗 | Mx ≠ ∅} the domain of M, ran M = {u ∈ 𝓗 | (∃ x ∈ 𝓗) u ∈ Mx} the range of M, M^{−1} the inverse of M, which has graph {(u, x) ∈ 𝓗 × 𝓗 | u ∈ Mx}, and J_M = (Id + M)^{−1} the resolvent of M. The parallel sum of M and A : 𝓗 → 2^𝓗 is M □ A = (M^{−1} + A^{−1})^{−1}. Further, M is monotone if

(∀ (x, u) ∈ gra M)(∀ (y, v) ∈ gra M)  ⟨x − y | u − v⟩ ⩾ 0,   (1.9)

and maximally monotone if, in addition, there exists no monotone operator A : 𝓗 → 2^𝓗 such that gra M ⊂ gra A ≠ gra M. If M − ρId is monotone for some ρ ∈ ]0, +∞[, then M is strongly monotone. We denote by Γ_0(𝓗) the class of lower semicontinuous convex functions f : 𝓗 → ]−∞, +∞] such that dom f = {x ∈ 𝓗 | f(x) < +∞} ≠ ∅. Let f ∈ Γ_0(𝓗). The subdifferential of f is the maximally monotone operator ∂f : 𝓗 → 2^𝓗 : x ↦ {u ∈ 𝓗 | (∀ y ∈ 𝓗) ⟨y − x | u⟩ + f(x) ⩽ f(y)}. For every x ∈ 𝓗, the unique minimizer of the function f + (1/2)∥· − x∥² is denoted by prox_f x. We have prox_f = J_{∂f}. Let C be a nonempty closed convex subset of 𝓗. Then proj_C is the projector onto C, d_C the distance function to C, and ι_C is the indicator function of C, which takes the value 0 on C and +∞ on its complement.

2 Concentrating arrays

Mann’s mean value iteration method seeks a fixed point of an operator T : 𝓗 → 𝓗 via the iterative process x_{n+1} = T x̄_n, where x̄_n is a convex combination of the points (x_j)_{0⩽j⩽n} [33, 34]. The notion of a concentrating array was introduced in [15] to study the asymptotic behavior of such methods. Interestingly, it will turn out to be also quite useful in our investigation of the asymptotic behavior of (1.8).

Definition 2.1

[15, Definition 2.1] A triangular array (μ_{n,j})_{n∈ℕ, 0⩽j⩽n} in [0, +∞[ is concentrating if the following hold:

  1. (∀ n ∈ ℕ) ∑_{j=0}^{n} μ_{n,j} = 1.

  2. (∀ j ∈ ℕ) lim_{n→+∞} μ_{n,j} = 0.

  3. Every sequence (ξ_n)_{n∈ℕ} in [0, +∞[ that satisfies

    (∀ n ∈ ℕ)  ξ_{n+1} ⩽ ∑_{j=0}^{n} μ_{n,j} ξ_j + ε_n,   (2.1)

    for some summable sequence (ε_n)_{n∈ℕ} in [0, +∞[, converges.

We shall require the following convergence principle, which extends that of quasi-Fejér monotonicity [10].

Lemma 2.2

Let C be a nonempty subset of 𝓗, let ϕ : [0, +∞[ → [0, +∞[ be strictly increasing and such that lim_{t→+∞} ϕ(t) = +∞, let (x_n)_{n∈ℕ} be a sequence in 𝓗, let (μ_{n,j})_{n∈ℕ, 0⩽j⩽n} be a concentrating array in [0, +∞[, let (β_n)_{n∈ℕ} be a sequence in [0, +∞[, and let (ε_n)_{n∈ℕ} be a summable sequence in [0, +∞[ such that

(∀ x ∈ C)(∀ n ∈ ℕ)  ϕ(∥x_{n+1} − x∥) ⩽ ∑_{j=0}^{n} μ_{n,j} ϕ(∥x_j − x∥) − β_n + ε_n.   (2.2)

Then the following hold:

  1. (xn)n∈ℕ is bounded.

  2. βn → 0.

  3. Suppose that every weak sequential cluster point of (xn)n∈ℕ belongs to C. Then (xn)n∈ℕ converges weakly to a point in C.

  4. Suppose that (xn)n∈ℕ has a strong sequential cluster point in C. Then (xn)n∈ℕ converges strongly to a point in C.

Proof

Let xC. Let us first show that

(∥x_n − x∥)_{n∈ℕ} converges.   (2.3)

It follows from (2.2) and Definition 2.1 that (ϕ(∥x_n − x∥))_{n∈ℕ} converges, say ϕ(∥x_n − x∥) → λ. Since lim_{t→+∞} ϕ(t) = +∞, (∥x_n − x∥)_{n∈ℕ} is bounded and, to establish (2.3), it suffices to show that it does not have two distinct cluster points. Suppose to the contrary that there exist subsequences (∥x_{k_n} − x∥)_{n∈ℕ} and (∥x_{l_n} − x∥)_{n∈ℕ} such that ∥x_{k_n} − x∥ → η and ∥x_{l_n} − x∥ → ζ > η, and fix ε ∈ ]0, (ζ − η)/2[. Then, for n sufficiently large, ∥x_{k_n} − x∥ ⩽ η + ε < ζ − ε ⩽ ∥x_{l_n} − x∥ and, since ϕ is strictly increasing, ϕ(∥x_{k_n} − x∥) ⩽ ϕ(η + ε) < ϕ(ζ − ε) ⩽ ϕ(∥x_{l_n} − x∥). Taking the limit as n → +∞ yields λ ⩽ ϕ(η + ε) < ϕ(ζ − ε) ⩽ λ, which is impossible.

(i) and (iv): Clear in view of (2.3).

(ii): As shown above, there exists λ ∈ [0, +∞[ such that ϕ(∥x_n − x∥) → λ. In turn, [28, Theorem 3.5.4] implies that ∑_{j=0}^{n} μ_{n,j} ϕ(∥x_j − x∥) → λ. We thus derive from (2.2) that 0 ⩽ β_n ⩽ ∑_{j=0}^{n} μ_{n,j} ϕ(∥x_j − x∥) − ϕ(∥x_{n+1} − x∥) + ε_n → 0.

(iii): This follows from (2.3) and [6, Lemma 2.47]. □

Several examples of concentrating arrays are provided in [15]. Here is a novel construction which is not only of interest to mean iteration processes in fixed point theory [13, 15, 29, 33, 34, 41] but will also play a pivotal role in establishing our main result, Theorem 3.2.

Proposition 2.3

Let K be a strictly positive integer and let (μ_{n,j})_{n∈ℕ, 0⩽j⩽n} be a triangular array in [0, +∞[ such that the following hold:

  1. (∀ n ∈ ℕ) ∑_{j=0}^{n} μ_{n,j} = 1.

  2. (∀ n ∈ ℕ)(∀ j ∈ {0, …, n}) n − j ⩾ K ⇒ μ_{n,j} = 0.

  3. inf_{n∈ℕ} μ_{n,n} > 0.

Then (μ_{n,j})_{n∈ℕ, 0⩽j⩽n} is a concentrating array.

Proof

Properties (i) and (ii) in Definition 2.1 clearly hold. To verify (iii), let (ξ_n)_{n∈ℕ} be a sequence in [0, +∞[ and let (ε_n)_{n∈ℕ} be a summable sequence in [0, +∞[ such that

(∀ n ∈ ℕ)  ξ_{n+1} ⩽ ∑_{j=0}^{n} μ_{n,j} ξ_j + ε_n.   (2.4)

Then, in view of (ii), for every integer n ⩾ K − 1,

ξ_{n+1} ⩽ ∑_{k=0}^{K−1} μ_{n,n−k} ξ_{n−k} + ε_n.   (2.5)

Set μ = inf_{n∈ℕ} μ_{n,n}. If μ = 1, then (i) and (2.5) imply that, for every integer n ⩾ K − 1,

0 ⩽ ξ_{n+1} ⩽ ξ_n + ε_n,   (2.6)

and the convergence of (ξ_n)_{n∈ℕ} therefore follows from [6, Lemma 5.31]. We henceforth assume that μ < 1 and, without loss of generality, that K > 1. For every integer n ⩾ K − 1, define ξ̂_n = max_{0⩽k⩽K−1} ξ_{n−k}, and observe that (i) and (2.5) yield ξ_{n+1} ⩽ ξ̂_n + ε_n. Hence,

(∀ n ∈ {K−1, K, …})  0 ⩽ ξ̂_{n+1} ⩽ ξ̂_n + ε_n   (2.7)

and we deduce from [6, Lemma 5.31] that (ξ̂_n)_{n∈ℕ} converges to some number η ∈ [0, +∞[. Therefore, if (ξ_n)_{n∈ℕ} converges, then its limit is η as well. Let us argue by contradiction by assuming that ξ_n ↛ η. Then there exists ν ∈ ]0, +∞[ such that

(∀ N ∈ ℕ)(∃ n_0 ∈ {N, N+1, …})  |ξ_{n_0} − η| > ν.   (2.8)

Set

δ = min{ μ^{K−1}/(1 − μ^{K−1}), 1 }  and  ν̄ = δν/4.   (2.9)

Since ξ̂_n → η and ∑_{n∈ℕ} ε_n < +∞, let us fix an integer N ⩾ K − 1 such that

(∀ n ∈ {N, N+1, …})  η − μ^{K−1}ν/4 ⩽ ξ̂_n ⩽ η + ν̄  and  ∑_{j⩾n} ε_j ⩽ (1 − μ^{K−1})ν̄.   (2.10)

Then

(∀ k ∈ {1, 2, …})(∀ n ∈ {N, N+1, …})  ∑_{j=1}^{k} μ^{j−1} ε_{n+k−j} ⩽ ∑_{j⩾n} ε_j ⩽ (1 − μ^{K−1})ν̄,   (2.11)

while (2.5) and (i) imply that

(∀ n ∈ {N, N+1, …})  ξ_{n+1} ⩽ μ_{n,n} ξ_n + ∑_{k=1}^{K−1} μ_{n,n−k} ξ_{n−k} + ε_n
⩽ μ_{n,n} ξ_n + (1 − μ_{n,n}) ξ̂_n + ε_n
= μ ξ_n + (1 − μ) ξ̂_n + (μ_{n,n} − μ)(ξ_n − ξ̂_n) + ε_n
⩽ μ ξ_n + (1 − μ) ξ̂_n + ε_n
⩽ μ ξ_n + (1 − μ)(η + ν̄) + ε_n.   (2.12)

It follows from (2.8) that there exists an integer n_0 ⩾ N such that |ξ_{n_0} − η| > ν, i.e.,

ξ_{n_0} > η + ν  or  0 ⩽ ξ_{n_0} < η − ν.   (2.13)

Suppose that ξ_{n_0} > η + ν. Then (2.9) and (2.10) imply that ν < ξ_{n_0} − η ⩽ ξ̂_{n_0} − η ⩽ ν̄ ⩽ ν/4, which is impossible. Therefore, 0 ⩽ ξ_{n_0} < η − ν and it follows from (2.12) that

ξ_{n_0+1} ⩽ μ(η − ν) + (1 − μ)(η + ν̄) + ε_{n_0} = η + (1 − μ)ν̄ − μν + ε_{n_0}.   (2.14)

Let us show by induction that, for every integer k ⩾ 1,

ξ_{n_0+k} ⩽ η + (1 − μ^{k})ν̄ − μ^{k}ν + ∑_{j=1}^{k} μ^{j−1} ε_{n_0+k−j}.   (2.15)

In view of (2.14), this inequality holds for k = 1. Now suppose that it holds for some integer k ⩾ 1. Then we deduce from (2.12) and (2.15) that

ξ_{n_0+k+1} ⩽ μ ξ_{n_0+k} + ε_{n_0+k} + (1 − μ)(η + ν̄)
⩽ μη + μ(1 − μ^{k})ν̄ − μ^{k+1}ν + ∑_{j=0}^{k} μ^{j} ε_{n_0+k−j} + (1 − μ)(η + ν̄)
= η + (1 − μ^{k+1})ν̄ − μ^{k+1}ν + ∑_{j=1}^{k+1} μ^{j−1} ε_{n_0+k+1−j},   (2.16)

which completes the induction argument. Since μ ∈ ]0, 1[, we derive from (2.15), (2.11), and (2.9) that

(∀ k ∈ {1, …, K−1})  ξ_{n_0+k} ⩽ η + (1 − μ^{k})ν̄ − μ^{k}ν + (1 − μ^{K−1})ν̄
⩽ η + 2(1 − μ^{K−1})ν̄ − μ^{K−1}ν
= η + (1 − μ^{K−1})δν/2 − μ^{K−1}ν
⩽ η − μ^{K−1}ν/2.   (2.17)

Therefore, by (2.10),

η − μ^{K−1}ν/4 ⩽ ξ̂_{n_0+K−1} ⩽ η − μ^{K−1}ν/2.   (2.18)

We thus reach a contradiction and conclude that (ξn)n∈ℕ converges. □

We derive from Proposition 2.3 a new instance of a concentrating array on which the main result of Section 3 will hinge.

Example 2.4

Let I be a nonempty finite set, let (ω_i)_{i∈I} be a family in ]0, 1] such that ∑_{i∈I} ω_i = 1, let (I_n)_{n∈ℕ} be a sequence of nonempty subsets of I, and let K be a strictly positive integer such that (∀ n ∈ ℕ) ⋃_{0⩽k⩽K−1} I_{n+k} = I. Set

(∀ n ∈ ℕ)(∀ j ∈ {0, …, n})  μ_{n,j} = 1, if n = j < K;  μ_{n,j} = ∑_{i ∈ I_j ∖ ⋃_{k=j+1}^{n} I_k} ω_i, if 0 ⩽ n − K < j;  μ_{n,j} = 0, otherwise.   (2.19)

Then the following hold:

  1. (μ_{n,j})_{n∈ℕ, 0⩽j⩽n} is a concentrating array.

  2. Let ℕ ∋ n ⩾ K − 1, let (ξ_j)_{0⩽j⩽n} be in [0, +∞[, and, for every i ∈ I, define i(n) = max{ k ∈ {n−K+1, …, n} | i ∈ I_k }. Then

    ∑_{j=0}^{n} μ_{n,j} ξ_j = ∑_{i∈I} ω_i ξ_{i(n)}.   (2.20)

Proof

Let n ∈ ℕ. If n ⩾ K − 1, we have ⋃_{0⩽k⩽K−1} I_{n−k} = I and therefore I is the union of the disjoint sets

( I_n, I_{n−1} ∖ I_n, I_{n−2} ∖ (I_n ∪ I_{n−1}), …, I_{n−K+2} ∖ ⋃_{k=n−K+3}^{n} I_k, I_{n−K+1} ∖ ⋃_{k=n−K+2}^{n} I_k ).   (2.21)

  1. It is clear from (2.19) that, for every integer j ∈ [0, n − K], μ_{n,j} = 0. In turn, we derive from (2.19) and (2.21) that

    ∑_{j=0}^{n} μ_{n,j} = μ_{n,n} = 1, if n < K;  ∑_{j=0}^{n} μ_{n,j} = ∑_{j=n−K+1}^{n} μ_{n,j} = ∑_{j=n−K+1}^{n} ∑_{i ∈ I_j ∖ ⋃_{k=j+1}^{n} I_k} ω_i = ∑_{i∈I} ω_i = 1, if n ⩾ K.   (2.22)

    Finally, inf_{n∈ℕ} μ_{n,n} = inf_{n∈ℕ} ∑_{i∈I_n} ω_i ⩾ min_{i∈I} ω_i > 0. All the properties of Proposition 2.3 are therefore satisfied.

  2. We have

    (∀ j ∈ {n−K+1, …, n})(∀ i ∈ I_j ∖ ⋃_{k=j+1}^{n} I_k)  i(n) = j.   (2.23)

    Hence, in view of (2.19),

    (∀ j ∈ {n−K+1, …, n})  ∑_{i ∈ I_j ∖ ⋃_{k=j+1}^{n} I_k} ω_i ξ_{i(n)} = ∑_{i ∈ I_j ∖ ⋃_{k=j+1}^{n} I_k} ω_i ξ_j = μ_{n,j} ξ_j.   (2.24)

    Consequently, (2.21) yields

    ∑_{j=0}^{n} μ_{n,j} ξ_j = ∑_{j=n−K+1}^{n} ∑_{i ∈ I_j ∖ ⋃_{k=j+1}^{n} I_k} ω_i ξ_{i(n)} = ∑_{i∈I} ω_i ξ_{i(n)},   (2.25)

    which concludes the proof. □
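The construction (2.19) and the identity (2.20) can be checked numerically on a small assumed block sequence (an illustration, not from the paper); `mu` and `i_of_n` below are hypothetical helpers encoding (2.19) and the definition of i(n) in Example 2.4(ii).

```python
import numpy as np

m, K = 4, 3
I = list(range(1, m + 1))
w = {i: 1.0 / m for i in I}
# An assumed block sequence: every K consecutive blocks cover I.
blocks = [{1, 2}, {3}, {4, 1}, {2, 3}, {4}, {1, 2}, {3}, {4, 1}, {2, 3}]
assert all(set().union(*blocks[n:n + K]) == set(I)
           for n in range(len(blocks) - K + 1))

def mu(n, j):
    """Entry mu_{n,j} of the triangular array (2.19)."""
    if n == j and n < K:
        return 1.0
    if 0 <= n - K < j <= n:
        later = set().union(set(), *blocks[j + 1:n + 1])
        return sum(w[i] for i in blocks[j] - later)
    return 0.0

def i_of_n(i, n):
    """i(n) = max{k in {n-K+1, ..., n} : i in I_k}."""
    return max(k for k in range(n - K + 1, n + 1) if i in blocks[k])

n = 7
row_sum = sum(mu(n, j) for j in range(n + 1))       # equals 1 by (2.22)
xi = np.arange(n + 1, dtype=float) ** 2             # arbitrary nonnegative data
lhs = sum(mu(n, j) * xi[j] for j in range(n + 1))   # left-hand side of (2.20)
rhs = sum(w[i] * xi[i_of_n(i, n)] for i in I)       # right-hand side of (2.20)
```

Only the K most recent blocks contribute to row n, which is exactly the concentration property exploited in the sequel.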

3 Solving Problem 1.1 with block updates

We formalize the ideas underlying (1.8) by proposing a method in which variable subgroups of operators are updated over the course of the iterations, and establish its convergence properties. At iteration n, the block of operators to be updated is (Ti,n)iIn. For added flexibility, an error ei,n is tolerated in the application of the operator Ti,n. We operate under the following assumption, where m is as in Problem 1.1.

Assumption 3.1

K is a strictly positive integer and (I_n)_{n∈ℕ} is a sequence of nonempty subsets of {1, …, m} such that

(∀ n ∈ ℕ)  ⋃_{k=0}^{K−1} I_{n+k} = {1, …, m}.   (3.1)

For every integer n ⩾ K − 1, define

(∀ i ∈ {1, …, m})  i(n) = max{ k ∈ {n−K+1, …, n} | i ∈ I_k }.   (3.2)

The sequences (e_{0,n})_{n∈ℕ}, (e_{1,n})_{n∈ℕ}, …, (e_{m,n})_{n∈ℕ} are in 𝓗 and satisfy

∑_{n⩾K−1} ∥e_{0,n}∥ < +∞  and  (∀ i ∈ {1, …, m})  ∑_{n⩾K−1} ∥e_{i,i(n)}∥ < +∞.   (3.3)

Theorem 3.2

Consider the setting of Problem 1.1 together with Assumption 3.1. Let ε ∈ ]0, 1[ and, for every n ∈ ℕ and every i ∈ {0} ∪ I_n, let α_{i,n} ∈ ]0, 1/(1 + ε)[ and let T_{i,n} : 𝓗 → 𝓗 be α_{i,n}-averaged. Suppose that, for every integer n ⩾ K − 1,

Fix( T_0 ∘ ∑_{i=1}^{m} ω_i T_i ) ⊂ Fix( T_{0,n} ∘ ∑_{i=1}^{m} ω_i T_{i,i(n)} ).   (3.4)

Let x_0 ∈ 𝓗, let (t_{i,−1})_{1⩽i⩽m} ∈ 𝓗^m, and iterate

for n = 0, 1, …
  for every i ∈ I_n:  t_{i,n} = T_{i,n} x_n + e_{i,n}
  for every i ∈ {1, …, m} ∖ I_n:  t_{i,n} = t_{i,n−1}
  x_{n+1} = T_{0,n}( ∑_{i=1}^{m} ω_i t_{i,n} ) + e_{0,n}.   (3.5)

Let x be a solution to Problem 1.1. Then the following hold:

  1. (x_n)_{n∈ℕ} is bounded.

  2. Let i ∈ {1, …, m}. Then x_{i(n)} − T_{i,i(n)}x_{i(n)} + T_{i,i(n)}x − x → 0.

  3. Let i ∈ {1, …, m} and j ∈ {1, …, m}. Then T_{i,i(n)}x_{i(n)} − T_{j,j(n)}x_{j(n)} − T_{i,i(n)}x + T_{j,j(n)}x → 0.

  4. Let i ∈ {1, …, m}. Then x_{i(n)} − x_n → 0.

  5. x_n − T_{0,n}( ∑_{i=1}^{m} ω_i T_{i,i(n)}x_n ) → 0.

  6. Suppose that every weak sequential cluster point of (x_n)_{n∈ℕ} solves Problem 1.1. Then the following hold:

    1. (x_n)_{n∈ℕ} converges weakly to a solution to Problem 1.1.

    2. Suppose that (x_n)_{n∈ℕ} has a strong sequential cluster point. Then (x_n)_{n∈ℕ} converges strongly to a solution to Problem 1.1.

  7. For every integer n ⩾ K − 1 and every i ∈ {0} ∪ I_n, let ρ_i ∈ ]0, 1] be a Lipschitz constant of T_{i,n}. Suppose that (3.5) is implemented without errors and that, for some i ∈ {0, …, m}, ρ_i < 1. Then (x_n)_{n∈ℕ} converges linearly to the unique solution to Problem 1.1.

Proof

Let us fix temporarily an integer n ⩾ K − 1. We first observe that, by nonexpansiveness of the operators T_{0,n} and (T_{i,i(n)})_{1⩽i⩽m},

(∀ (y, e_0, …, e_m) ∈ 𝓗^{m+2})
∥ T_{0,n}( ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_i) ) + e_0 − T_{0,n}( ∑_{i=1}^{m} ω_i T_{i,i(n)}y ) ∥
⩽ ∥ ∑_{i=1}^{m} ω_i T_{i,i(n)}x_{i(n)} − ∑_{i=1}^{m} ω_i T_{i,i(n)}y ∥ + ∑_{i=1}^{m} ω_i∥e_i∥ + ∥e_0∥
⩽ ∑_{i=1}^{m} ω_i ∥T_{i,i(n)}x_{i(n)} − T_{i,i(n)}y∥ + ∥e_0∥ + ∑_{i=1}^{m} ω_i∥e_i∥
⩽ ∑_{i=1}^{m} ω_i ∥x_{i(n)} − y∥ + ∥e_0∥ + ∑_{i=1}^{m} ∥e_i∥.   (3.6)

We also note that (3.2) and (3.5) yield

(∀ i ∈ {1, …, m})  t_{i,n} = T_{i,i(n)}x_{i(n)} + e_{i,i(n)}.   (3.7)

It follows from (3.5), (3.7), (3.4), and (3.6) that

∥x_{n+1} − x∥ = ∥ T_{0,n}( ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) ) + e_{0,n} − T_{0,n}( ∑_{i=1}^{m} ω_i T_{i,i(n)}x ) ∥
⩽ ∑_{i=1}^{m} ω_i ∥x_{i(n)} − x∥ + ∥e_{0,n}∥ + ∑_{i=1}^{m} ∥e_{i,i(n)}∥.   (3.8)

Now define (μ_{k,j})_{k∈ℕ, 0⩽j⩽k} as in (2.19), with I = {1, …, m}, and set ε_n = ∥e_{0,n}∥ + ∑_{i=1}^{m} ∥e_{i,i(n)}∥. Then we derive from Example 2.4(ii) that

∑_{i=1}^{m} ω_i ∥x_{i(n)} − x∥ = ∑_{j=0}^{n} μ_{n,j} ∥x_j − x∥,   (3.9)

and it follows from (3.8) and (3.3) that

∥x_{n+1} − x∥ ⩽ ∑_{j=0}^{n} μ_{n,j} ∥x_j − x∥ + ε_n,  where ∑_{k⩾K−1} ε_k < +∞.   (3.10)

Hence, Lemma 2.2(i) guarantees that

(x_k)_{k∈ℕ} is bounded.   (3.11)

Consequently, using (3.3) and (3.6), we obtain

ν_0 = sup_{k⩾K−1} ( 2∥ T_{0,k}( ∑_{i=1}^{m} ω_i(T_{i,i(k)}x_{i(k)} + e_{i,i(k)}) ) − T_{0,k}( ∑_{i=1}^{m} ω_i T_{i,i(k)}x ) ∥ + ∥e_{0,k}∥ ) < +∞   (3.12)

and

ν = sup_{k⩾K−1} ( ∥ ∑_{i=1}^{m} ω_i e_{i,i(k)} ∥ + 2∥ ∑_{i=1}^{m} ω_i(T_{i,i(k)}x_{i(k)} − T_{i,i(k)}x) ∥ ) < +∞.   (3.13)

In addition, for every y ∈ 𝓗 and every z ∈ 𝓗, it follows from [6, Proposition 4.35] that

∥T_{0,n}y − T_{0,n}z∥² ⩽ ∥y − z∥² − ((1 − α_{0,n})/α_{0,n}) ∥(Id − T_{0,n})y − (Id − T_{0,n})z∥²
⩽ ∥y − z∥² − ε ∥(Id − T_{0,n})y − (Id − T_{0,n})z∥²   (3.14)

and, likewise, that

(∀ i ∈ {1, …, m})  ∥T_{i,i(n)}y − T_{i,i(n)}z∥² ⩽ ∥y − z∥² − ε ∥(Id − T_{i,i(n)})y − (Id − T_{i,i(n)})z∥².   (3.15)

Hence, we deduce from (3.5), (3.7), (3.4), and [6, Lemma 2.14(ii)] that

∥x_{n+1} − x∥² = ∥ T_{0,n}( ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) ) − T_{0,n}( ∑_{i=1}^{m} ω_i T_{i,i(n)}x ) + e_{0,n} ∥²
⩽ ∥ T_{0,n}( ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) ) − T_{0,n}( ∑_{i=1}^{m} ω_i T_{i,i(n)}x ) ∥² + ν_0∥e_{0,n}∥
⩽ ∥ ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} − T_{i,i(n)}x) ∥² − ε ∥ (Id − T_{0,n})( ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) ) − (Id − T_{0,n})( ∑_{i=1}^{m} ω_i T_{i,i(n)}x ) ∥² + ν_0∥e_{0,n}∥ + ν∥ ∑_{i=1}^{m} ω_i e_{i,i(n)} ∥
⩽ ∑_{i=1}^{m} ω_i ∥T_{i,i(n)}x_{i(n)} − T_{i,i(n)}x∥² − (1/2) ∑_{i=1}^{m} ∑_{j=1}^{m} ω_iω_j ∥ T_{i,i(n)}x_{i(n)} − T_{i,i(n)}x − T_{j,j(n)}x_{j(n)} + T_{j,j(n)}x ∥² − ε ∥ (Id − T_{0,n})( ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) ) + x − ∑_{i=1}^{m} ω_i T_{i,i(n)}x ∥² + ν_0∥e_{0,n}∥ + ν∥ ∑_{i=1}^{m} ω_i e_{i,i(n)} ∥
⩽ ∑_{i=1}^{m} ω_i ∥x_{i(n)} − x∥² − ε ∑_{i=1}^{m} ω_i ∥(Id − T_{i,i(n)})x_{i(n)} − (Id − T_{i,i(n)})x∥² − (1/2) ∑_{i=1}^{m} ∑_{j=1}^{m} ω_iω_j ∥ T_{i,i(n)}x_{i(n)} − T_{i,i(n)}x − T_{j,j(n)}x_{j(n)} + T_{j,j(n)}x ∥² − ε ∥ (Id − T_{0,n})( ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) ) + x − ∑_{i=1}^{m} ω_i T_{i,i(n)}x ∥² + ν_0∥e_{0,n}∥ + ν∥ ∑_{i=1}^{m} ω_i e_{i,i(n)} ∥.   (3.16)

It therefore follows from (3.9) that

∥x_{n+1} − x∥² ⩽ ∑_{j=0}^{n} μ_{n,j} ∥x_j − x∥² − ε ∑_{i=1}^{m} ω_i ∥ x_{i(n)} − T_{i,i(n)}x_{i(n)} + T_{i,i(n)}x − x ∥² − (1/2) ∑_{i=1}^{m} ∑_{j=1}^{m} ω_iω_j ∥ T_{i,i(n)}x_{i(n)} − T_{i,i(n)}x − T_{j,j(n)}x_{j(n)} + T_{j,j(n)}x ∥² − ε ∥ ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) − T_{0,n}( ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) ) + x − ∑_{i=1}^{m} ω_i T_{i,i(n)}x ∥² + ν_0∥e_{0,n}∥ + ν∥ ∑_{i=1}^{m} ω_i e_{i,i(n)} ∥.   (3.17)

Hence, Example 2.4(i), (3.3), and Lemma 2.2(ii) imply that

max_{1⩽i⩽m} ∥ x_{i(n)} − T_{i,i(n)}x_{i(n)} + T_{i,i(n)}x − x ∥ → 0  and  max_{1⩽i⩽m, 1⩽j⩽m} ∥ T_{i,i(n)}x_{i(n)} − T_{j,j(n)}x_{j(n)} − T_{i,i(n)}x + T_{j,j(n)}x ∥ → 0,   (3.18)

and that

∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) − T_{0,n}( ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) ) + x − ∑_{i=1}^{m} ω_i T_{i,i(n)}x → 0.   (3.19)

(i): See (3.11).

(ii)–(iii): See (3.18).

(iv)–(v): It follows from (ii) that

∑_{i=1}^{m} ω_i x_{i(n)} − ∑_{i=1}^{m} ω_i T_{i,i(n)}x_{i(n)} + ∑_{i=1}^{m} ω_i T_{i,i(n)}x − x → 0.   (3.20)

We also derive from (ii) that, for every i and every j in {1, …, m},

x_{i(n)} − T_{i,i(n)}x_{i(n)} − x_{j(n)} + T_{j,j(n)}x_{j(n)} + T_{i,i(n)}x − T_{j,j(n)}x → 0.   (3.21)

Combining (iii) and (3.21), we obtain

(∀ i ∈ {1, …, m})(∀ j ∈ {1, …, m})  x_{i(n)} − x_{j(n)} → 0.   (3.22)

Now, let ı̄ ∈ {1, …, m} and δ ∈ ]0, +∞[. Then (3.22) implies that, for every j ∈ {1, …, m}, there exists an integer N̄_{δ,j} ⩾ K − 1 such that

(∀ n ∈ {N̄_{δ,j}, N̄_{δ,j}+1, …})  ∥x_{ı̄(n)} − x_{j(n)}∥ ⩽ δ.   (3.23)

Set N̄_δ = max_{1⩽j⩽m} N̄_{δ,j}. Then

(∀ j ∈ {1, …, m})(∀ n ∈ {N̄_δ, N̄_δ+1, …})  ∥x_{ı̄(n)} − x_{j(n)}∥ ⩽ δ.   (3.24)

Thus, in view of (3.2), for every integer n ⩾ N̄_δ, taking j_n ∈ I_n yields j_n(n) = n and hence ∥x_{ı̄(n)} − x_n∥ ⩽ δ. This shows that

(∀ i ∈ {1, …, m})  x_{i(n)} − x_n → 0.   (3.25)

Consequently, it follows from (3.6) that

∥ T_{0,n}( ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) ) − T_{0,n}( ∑_{i=1}^{m} ω_i T_{i,i(n)}x_n ) ∥ ⩽ ∑_{i=1}^{m} ω_i ∥x_{i(n)} − x_n∥ + ∑_{i=1}^{m} ∥e_{i,i(n)}∥ → 0.   (3.26)

In turn, we derive from (3.19), (3.20), (3.25), and (3.3) that

x_n − T_{0,n}( ∑_{i=1}^{m} ω_i T_{i,i(n)}x_n )
= T_{0,n}( ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) ) − T_{0,n}( ∑_{i=1}^{m} ω_i T_{i,i(n)}x_n )
+ ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) − T_{0,n}( ∑_{i=1}^{m} ω_i(T_{i,i(n)}x_{i(n)} + e_{i,i(n)}) ) + x − ∑_{i=1}^{m} ω_i T_{i,i(n)}x
+ ∑_{i=1}^{m} ω_i x_{i(n)} − ∑_{i=1}^{m} ω_i T_{i,i(n)}x_{i(n)} + ∑_{i=1}^{m} ω_i T_{i,i(n)}x − x
+ ∑_{i=1}^{m} ω_i(x_n − x_{i(n)}) − ∑_{i=1}^{m} ω_i e_{i,i(n)}
→ 0.   (3.27)

(vi)(a): This follows from (3.10) and Lemma 2.2(iii).

  (vi)(b): By (vi)(a), there exists a solution z to Problem 1.1 such that x_n ⇀ z. Therefore, z must be the strong cluster point in question, say x_{k_n} → z. In view of (3.10) and Lemma 2.2(iv), we conclude that x_n → z.

(vii): Set ρ = ρ_0 ∑_{i=1}^{m} ω_i ρ_i and note that ρ ∈ ]0, 1[. For every integer n ⩾ K − 1 and every (y_i)_{1⩽i⩽m} ∈ 𝓗^m, (3.4) yields

∥ T_{0,n}( ∑_{i=1}^{m} ω_i T_{i,i(n)}y_i ) − x ∥ = ∥ T_{0,n}( ∑_{i=1}^{m} ω_i T_{i,i(n)}y_i ) − T_{0,n}( ∑_{i=1}^{m} ω_i T_{i,i(n)}x ) ∥
⩽ ρ_0 ∥ ∑_{i=1}^{m} ω_i T_{i,i(n)}y_i − ∑_{i=1}^{m} ω_i T_{i,i(n)}x ∥
⩽ ρ_0 ∑_{i=1}^{m} ω_i ∥T_{i,i(n)}y_i − T_{i,i(n)}x∥
⩽ ρ_0 ∑_{i=1}^{m} ω_i ρ_i ∥y_i − x∥.   (3.28)

Now let y ∈ Fix( T_0 ∘ ∑_{i=1}^{m} ω_i T_i ). Since (3.28) implies that

∥y − x∥ = ∥ T_{0,K−1}( ∑_{i=1}^{m} ω_i T_{i,i(K−1)}y ) − x ∥ ⩽ ρ ∥y − x∥,   (3.29)

we infer that y = x, which shows uniqueness. For every integer n ⩾ K − 1, (3.28) also yields

∥x_{n+1} − x∥ = ∥ T_{0,n}( ∑_{i=1}^{m} ω_i T_{i,i(n)}x_{i(n)} ) − x ∥ ⩽ ρ_0 ∑_{i=1}^{m} ω_i ρ_i ∥x_{i(n)} − x∥.   (3.30)

Now set

(∀ n ∈ ℕ)  ξ_n = ∥x_n − x∥.   (3.31)

It follows from (3.30) that

(∀ n ∈ {K−1, K, …})  ξ_{n+1} ⩽ ρ_0 ∑_{i=1}^{m} ω_i ρ_i ξ_{i(n)} ⩽ ρ ξ̂_n,  where ξ̂_n = max_{1⩽i⩽m} ξ_{i(n)}.   (3.32)

Let us show that

(∀ n ∈ ℕ)  ξ_n ⩽ ρ^{(n−K+1)/K} ξ̂_{K−1}.   (3.33)

We proceed by strong induction. We have

(∀ k ∈ {0, …, K−1})  ξ_k ⩽ ξ̂_{K−1} ⩽ ρ^{(k−K+1)/K} ξ̂_{K−1}.   (3.34)

Next, let ℕ ∋ n ⩾ K − 1 and suppose that

(∀ k ∈ {0, …, n})  ξ_k ⩽ ρ^{(k−K+1)/K} ξ̂_{K−1}.   (3.35)

Since {i(n)}_{1⩽i⩽m} ⊂ {n−K+1, …, n}, there exists k_n ∈ {n−K+1, …, n} such that ξ̂_n = ξ_{k_n}. Therefore, we derive from (3.32) and (3.35) that

ξ_{n+1} ⩽ ρ ξ̂_n = ρ ξ_{k_n} ⩽ ρ ρ^{(k_n−K+1)/K} ξ̂_{K−1} = ρ^{(k_n+1)/K} ξ̂_{K−1} ⩽ ρ^{(n−K+2)/K} ξ̂_{K−1}.   (3.36)

We have thus shown that

(∀ n ∈ ℕ)  ∥x_n − x∥ ⩽ ρ^{(1−K)/K} ξ̂_{K−1} (ρ^{1/K})^{n},   (3.37)

which establishes the linear convergence of (xn)n∈ℕ to x. □

Remark 3.3

In applications, the cardinality of I_n may be small compared to m. In such scenarios, it is advantageous to set z_{−1} = ∑_{i=1}^{m} ω_i t_{i,−1} and write (3.5) as

for n = 0, 1, …
  y_n = z_{n−1} − ∑_{i∈I_n} ω_i t_{i,n−1}
  for every i ∈ I_n:  t_{i,n} = T_{i,n} x_n + e_{i,n}
  for every i ∈ {1, …, m} ∖ I_n:  t_{i,n} = t_{i,n−1}
  z_n = y_n + ∑_{i∈I_n} ω_i t_{i,n}
  x_{n+1} = T_{0,n} z_n + e_{0,n},   (3.38)

which provides a more economical update equation.
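A minimal numerical sketch of the economical form (3.38), under assumed toy operators not from the paper: T_0 = (1/2)Id and T_i x = (x + c_i)/2, all averaged with Lipschitz constants 1/2, so Theorem 3.2(vii) predicts linear convergence to the unique fixed point, here mean(c)/3. Indices are 0-based and each block contains two operators.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6
c = rng.normal(size=m)                       # assumed toy data
w = np.full(m, 1.0 / m)

T0 = lambda y: 0.5 * y                       # 1/2-averaged, Lipschitz constant 1/2

t = np.zeros(m)                              # t_{i,-1}
z = float(w @ t)                             # z_{-1} = sum_i w_i t_{i,-1}
x = 0.0
for n in range(300):
    In = {n % m, (n + 1) % m}                # blocks of two; windows of K = m - 1 iterations cover all indices
    y = z - sum(w[i] * t[i] for i in In)     # remove stale contributions of the active block
    for i in In:
        t[i] = 0.5 * (x + c[i])              # t_{i,n} = T_i x_n (error-free)
    z = y + sum(w[i] * t[i] for i in In)     # economical update of the running sum
    x = T0(z)                                # x_{n+1} = T_0 z_n

# Unique fixed point of x -> T0(sum_i w_i (x + c_i)/2) is mean(c)/3.
```

Maintaining the running sum z costs O(card I_n) per iteration instead of O(m), which is the point of Remark 3.3.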

Next, we specialize our results to the autonomous case, wherein the operators (T_i)_{0⩽i⩽m} of Problem 1.1 are used directly.

Corollary 3.4

Consider the setting of Problem 1.1 under Assumption 3.1 and the assumption that it has a solution. Let x_0 ∈ 𝓗, let (t_{i,−1})_{1⩽i⩽m} ∈ 𝓗^m, and iterate

for n = 0, 1, …
  for every i ∈ I_n:  t_{i,n} = T_i x_n + e_{i,n}
  for every i ∈ {1, …, m} ∖ I_n:  t_{i,n} = t_{i,n−1}
  x_{n+1} = T_0( ∑_{i=1}^{m} ω_i t_{i,n} ) + e_{0,n}.   (3.39)

Then the following hold:

  1. Let x be a solution to Problem 1.1 and let i ∈ {1, …, m}. Then x_n − T_i x_n → x − T_i x.

  2. (x_n)_{n∈ℕ} converges weakly to a solution to Problem 1.1.

  3. Suppose that, for some i ∈ {0, …, m}, T_i is demicompact [39], i.e., every bounded sequence (y_n)_{n∈ℕ} such that (y_n − T_i y_n)_{n∈ℕ} converges has a strong sequential cluster point. Then (x_n)_{n∈ℕ} converges strongly to a solution to Problem 1.1.

  4. Suppose that (3.39) is implemented without errors and that, for some i ∈ {0, …, m}, T_i is a Banach contraction. Then (x_n)_{n∈ℕ} converges linearly to the unique solution to Problem 1.1.

Proof

We operate in the special case of Theorem 3.2 for which (∀ n ∈ ℕ)(∀ i ∈ {0} ∪ I_n) T_{i,n} = T_i. Set T = T_0 ∘ ( ∑_{i=1}^{m} ω_i T_i ). Then the set of solutions to Problem 1.1 is Fix T and T is nonexpansive since the operators (T_i)_{0⩽i⩽m} are likewise. In addition, we derive from Theorem 3.2(v) that

x_n − T x_n → 0.   (3.40)

Altogether, [6, Corollary 4.28] asserts that, if z ∈ 𝓗 is a weak sequential cluster point of (x_n)_{n∈ℕ}, then z ∈ Fix T. Thus,

every weak sequential cluster point of (x_n)_{n∈ℕ} solves Problem 1.1.   (3.41)

Recall from Theorem 3.2(ii) that

(∀ x ∈ Fix T)(∀ i ∈ {1, …, m})  x_{i(n)} − T_i x_{i(n)} → x − T_i x   (3.42)

and from Theorem 3.2(iv) that

(∀ i ∈ {1, …, m})  x_{i(n)} − x_n → 0.   (3.43)

  1. We derive from the nonexpansiveness of T_i, (3.42), and (3.43) that

    ∥(Id − T_i)x_n − (Id − T_i)x∥ ⩽ ∥(Id − T_i)x_n − (Id − T_i)x_{i(n)}∥ + ∥(Id − T_i)x_{i(n)} − (Id − T_i)x∥ ⩽ 2∥x_n − x_{i(n)}∥ + ∥(Id − T_i)x_{i(n)} − (Id − T_i)x∥ → 0.   (3.44)

  2. This is a consequence of (3.41) and Theorem 3.2(vi)(a).

  3. In view of (3.41) and Theorem 3.2(vi)(b), it is enough to show that (x_n)_{n∈ℕ} has a strong sequential cluster point. It follows from (ii) and [6, Lemma 2.46] that (x_n)_{n∈ℕ} is bounded. Hence, if 1 ⩽ i ⩽ m, we infer from (i) and the demicompactness of T_i that (x_n)_{n∈ℕ} has a strong sequential cluster point. Now suppose that i = 0 and let x ∈ Fix T. Arguing as in (3.19), we obtain

    (Id − T_0)( ∑_{i=1}^{m} ω_i T_i x_{i(n)} ) = ∑_{i=1}^{m} ω_i T_i x_{i(n)} − T_0( ∑_{i=1}^{m} ω_i T_i x_{i(n)} ) → ∑_{i=1}^{m} ω_i T_i x − x.   (3.45)

    However, we derive from the nonexpansiveness of the operators (T_i)_{0⩽i⩽m} and (3.43) that

    ∥ (Id − T_0)( ∑_{i=1}^{m} ω_i T_i x_n ) − (Id − T_0)( ∑_{i=1}^{m} ω_i T_i x_{i(n)} ) ∥ ⩽ 2∥ ∑_{i=1}^{m} ω_i T_i x_n − ∑_{i=1}^{m} ω_i T_i x_{i(n)} ∥ ⩽ 2 ∑_{i=1}^{m} ω_i ∥T_i x_n − T_i x_{i(n)}∥ ⩽ 2 ∑_{i=1}^{m} ω_i ∥x_n − x_{i(n)}∥ → 0.   (3.46)

    Combining (3.45) and (3.46) yields

    (Id − T_0)( ∑_{i=1}^{m} ω_i T_i x_n ) → ∑_{i=1}^{m} ω_i T_i x − x.   (3.47)

    Therefore, by demicompactness of T_0, the bounded sequence ( ∑_{i=1}^{m} ω_i T_i x_n )_{n∈ℕ} has a strong sequential cluster point and so does (T x_n)_{n∈ℕ} = ( T_0( ∑_{i=1}^{m} ω_i T_i x_n ) )_{n∈ℕ} since T_0 is nonexpansive. Consequently, (3.40) entails that (x_n)_{n∈ℕ} has a strong sequential cluster point.

  4. This is a consequence of Theorem 3.2(vii). □

In connection with Corollary 3.4(iii), here are examples of demicompact operators.

Example 3.5

Let T : 𝓗 → 𝓗 be a nonexpansive operator. Then T is demicompact if one of the following holds:

  1. ran T is boundedly relatively compact (the intersection of its closure with every closed ball in 𝓗 is compact).

  2. ran T lies in a finite-dimensional subspace.

  3. T = J_A, where A : 𝓗 → 2^𝓗 is maximally monotone and one of the following is satisfied:

    1. A is demiregular [3], i.e., for every sequence (x_n, u_n)_{n∈ℕ} in gra A and for every (x, u) ∈ gra A, [x_n ⇀ x and u_n → u] ⇒ x_n → x.

    2. A is uniformly monotone, i.e., there exists an increasing function ϕ : [0, +∞[ → [0, +∞[ vanishing only at 0 such that (∀ (x, u) ∈ gra A)(∀ (y, v) ∈ gra A) ⟨x − y | u − v⟩ ⩾ ϕ(∥x − y∥).

    3. A = ∂f, where f ∈ Γ_0(𝓗) is uniformly convex, i.e., there exists an increasing function ϕ : [0, +∞[ → [0, +∞[ vanishing only at 0 such that

      (∀ α ∈ ]0, 1[)(∀ x ∈ dom f)(∀ y ∈ dom f)  f(αx + (1−α)y) + α(1−α)ϕ(∥x − y∥) ⩽ αf(x) + (1−α)f(y).   (3.48)

    4. A = ∂f, where f ∈ Γ_0(𝓗) and the lower level sets of f are boundedly compact.

    5. dom A is boundedly relatively compact.

    6. A : 𝓗 → 𝓗 is single-valued with a single-valued continuous inverse.

Proof

Let (y_n)_{n∈ℕ} be a bounded sequence in 𝓗 such that y_n − T y_n → u, for some u ∈ 𝓗. Set (∀ n ∈ ℕ) x_n = T y_n.

(i): By construction, (x_n)_{n∈ℕ} lies in ran T and it is bounded since (∀ n ∈ ℕ) ∥x_n∥ ⩽ ∥T y_n − T y_0∥ + ∥T y_0∥ ⩽ ∥y_n − y_0∥ + ∥T y_0∥. Thus, (x_n)_{n∈ℕ} lies in a compact set and it therefore possesses a strongly convergent subsequence, say x_{k_n} → x ∈ 𝓗. In turn, y_{k_n} = y_{k_n} − T y_{k_n} + x_{k_n} → u + x.

(ii) ⇒ (i): Clear.

(iii)(a): Set (∀ n ∈ ℕ) u_n = y_n − x_n. Then u_n → u. In addition, (∀ n ∈ ℕ) (x_n, u_n) ∈ gra A. On the other hand, since (y_n)_{n∈ℕ} is bounded, we can extract from it a weakly convergent subsequence, say y_{k_n} ⇀ y. Then x_{k_n} = y_{k_n} − u_{k_n} ⇀ y − u and u_{k_n} → u. By demiregularity, we get x_{k_n} → y − u and therefore y_{k_n} = x_{k_n} + u_{k_n} → y.

(iii)(b)–(iii)(f): These are special cases of (iii)(a) [6, Proposition 2.4]. □

4 Applications

We present several applications of Theorem 3.2 to classical nonlinear analysis problems which will be seen to reduce to instantiations of Problem 1.1. These range from common fixed point and inconsistent feasibility problems to composite monotone inclusion and minimization problems. In each scenario, the main benefit of the proposed framework will lie in its ability to achieve convergence while updating only subgroups of the pool of operators involved.

4.1 Finding a common fixed point of firmly nonexpansive operators

Firmly nonexpansive operators are operators which are 1/2-averaged [6, 25]. This application concerns the following ubiquitous fixed point problem [5, 9, 23, 24, 43].

Problem 4.1

Let m be a strictly positive integer and, for every i ∈ {1, …, m}, let T_i : 𝓗 → 𝓗 be firmly nonexpansive. The task is to find a point in ⋂_{i=1}^{m} Fix T_i.

Corollary 4.2

Consider the setting of Problem 4.1 under Assumption 3.1 and the assumption that ⋂_{i=1}^{m} Fix T_i ≠ ∅. Let (ω_i)_{1⩽i⩽m} ∈ ]0, 1]^m be such that ∑_{i=1}^{m} ω_i = 1. For every n ∈ ℕ and every i ∈ I_n, let T_{i,n} : 𝓗 → 𝓗 be a firmly nonexpansive operator such that Fix T_i ⊂ Fix T_{i,n}. Let x_0 ∈ 𝓗, let (t_{i,−1})_{1⩽i⩽m} ∈ 𝓗^m, and iterate

for n = 0, 1, …
  for every i ∈ I_n:  t_{i,n} = T_{i,n} x_n + e_{i,n}
  for every i ∈ {1, …, m} ∖ I_n:  t_{i,n} = t_{i,n−1}
  x_{n+1} = ∑_{i=1}^{m} ω_i t_{i,n}.   (4.1)

Then the following hold:

  1. Let i ∈ {1, …, m}. Then (T_{i,i(n)}x_{i(n)})_{n∈ℕ} is bounded.

  2. Suppose that, for every z ∈ 𝓗, every i ∈ {1, …, m}, and every strictly increasing sequence (k_n)_{n∈ℕ} of integers greater than K,

    [ x_{i(k_n)} ⇀ z and x_{i(k_n)} − T_{i,i(k_n)}x_{i(k_n)} → 0 ]  ⇒  z ∈ Fix T_i.   (4.2)

    Then (x_n)_{n∈ℕ} converges weakly to a solution to Problem 4.1.

  3. Suppose that, for some i ∈ {1, …, m}, (T_{i,i(n)}x_{i(n)})_{n∈ℕ} has a strong sequential cluster point. Then (x_n)_{n∈ℕ} converges strongly to a solution to Problem 4.1.

Proof

Set T_0 = Id and (∀ i ∈ {1, …, m}) α_i = 1/2. In addition, set (∀ n ∈ ℕ) T_{0,n} = Id. By assumption, for every i ∈ {1, …, m} and every integer n ⩾ K − 1, Fix T_i ⊂ Fix T_{i,i(n)}. Therefore, it follows from [6, Proposition 4.47] that, for every integer n ⩾ K − 1,

Fix( T_0 ∘ ∑_{i=1}^{m} ω_i T_i ) = ⋂_{i=1}^{m} Fix T_i ⊂ ⋂_{i=1}^{m} Fix T_{i,i(n)} = Fix( T_{0,n} ∘ ∑_{i=1}^{m} ω_i T_{i,i(n)} ).   (4.3)

This shows that (3.4) holds, that Problem 4.1 is a special case of Problem 1.1, and that (4.1) is a special case of (3.5). Let us derive the claims from Theorem 3.2. First, let x ∈ ⋂_{i=1}^{m} Fix T_i. Then, for every i ∈ {1, …, m} and every integer n ⩾ K − 1, x ∈ Fix T_i ⊂ Fix T_{i,i(n)}. This allows us to deduce from Theorem 3.2(ii) that

(∀ i ∈ {1, …, m})  x_{i(n)} − T_{i,i(n)}x_{i(n)} = x_{i(n)} − T_{i,i(n)}x_{i(n)} + T_{i,i(n)}x − x → 0.   (4.4)

We also recall from Theorem 3.2(iv) that

(∀ i ∈ {1, …, m})  x_{i(n)} − x_n → 0.   (4.5)

  1. This follows from Theorem 3.2(i), (4.4), and (4.5).

  2. Let i ∈ {1, …, m} and let z ∈ 𝓗 be a weak sequential cluster point of (x_n)_{n∈ℕ}, say x_{k_n} ⇀ z. In view of Theorem 3.2(vi)(a), it is enough to show that z ∈ Fix T_i. We derive from (4.4) that x_{i(k_n)} − T_{i,i(k_n)}x_{i(k_n)} → 0. On the other hand, (4.5) yields x_{i(k_n)} = (x_{i(k_n)} − x_{k_n}) + x_{k_n} ⇀ z. Using (4.2), we obtain z ∈ Fix T_i.

  3. Let z ∈ 𝓗 be a strong sequential cluster point of (T_{i,i(n)}x_{i(n)})_{n∈ℕ}, say T_{i,i(k_n)}x_{i(k_n)} → z. Then (4.4) yields x_{i(k_n)} → z. In turn, (4.5) implies that x_{k_n} → z and the conclusion follows from Theorem 3.2(vi)(b). □
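As a sketch of (4.1) in a setting Corollary 4.2 covers — assumed halfspace constraints in ℝ², not from the paper, whose projectors are firmly nonexpansive — a cyclic block rule drives the iterates toward a common fixed point, i.e., a point satisfying all constraints:

```python
import numpy as np

# Assumed toy feasibility problem: three halfspaces {x : <a_i, x> <= b_i}
# with nonempty intersection; their projectors are firmly nonexpansive.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.5])

def proj_halfspace(a, beta, x):
    """Projector onto {x : <a, x> <= beta}."""
    s = a @ x - beta
    return x.copy() if s <= 0 else x - s * a / (a @ a)

m = 3
w = np.full(m, 1.0 / m)
t = np.zeros((m, 2))                  # recycled evaluations t_{i,-1}
x = np.array([4.0, 3.0])
for n in range(3000):
    i = n % m                         # cyclic blocks I_n = {i}, K = m
    t[i] = proj_halfspace(A[i], b[i], x)
    x = w @ t                         # x_{n+1} = sum_i w_i t_{i,n}

residual = float(np.max(A @ x - b))   # nonpositive at a common fixed point
```

Only one projector is evaluated per iteration, yet the constraint residual vanishes in the limit, as guaranteed by Corollary 4.2(ii).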

Example 4.3

We revisit a problem investigated in [16]. Let m be a strictly positive integer, let (ω_i)_{1⩽i⩽m} ∈ ]0, 1]^m be such that ∑_{i=1}^{m} ω_i = 1, and, for every i ∈ {1, …, m}, let ρ_i ∈ [0, +∞[ and let A_i : 𝓗 → 2^𝓗 be maximally ρ_i-cohypomonotone in the sense that A_i^{−1} + ρ_i Id is maximally monotone. The task is to

find x ∈ 𝓗 such that (∀ i ∈ {1, …, m})  0 ∈ A_i x,   (4.6)

under the assumption that such a point exists. Suppose that Assumption 3.1 is satisfied, let ε ∈ ]0, 1[, let x_0 ∈ 𝓗, let (t_{i,−1})_{1⩽i⩽m} ∈ 𝓗^m, and let (∀ n ∈ ℕ)(∀ i ∈ I_n) γ_{i,n} ∈ [ρ_i + ε, +∞[. Iterate

for n = 0, 1, …
  for every i ∈ I_n:  t_{i,n} = x_n + (1 − ρ_i/γ_{i,n})( J_{γ_{i,n}A_i}x_n + e_{i,n} − x_n )
  for every i ∈ {1, …, m} ∖ I_n:  t_{i,n} = t_{i,n−1}
  x_{n+1} = ∑_{i=1}^{m} ω_i t_{i,n}.   (4.7)

Then the following hold:

  1. (xn)n∈ℕ converges weakly to a solution to (4.6).

  2. Suppose that, for some i ∈ {1, …, m}, dom Ai is boundedly relatively compact. Then (xn)n∈ℕ converges strongly to a solution to (4.6).

Proof

Set

(∀ i ∈ {1, …, m})  T_i = Id + (1 − ρ_i/γ_i)( J_{γ_iA_i} − Id ),  where γ_i ∈ ]ρ_i, +∞[,  and  M_i = (A_i^{−1} + ρ_i Id)^{−1}.   (4.8)

Then it follows from [6, Proposition 20.22] that the operators (M_i)_{1⩽i⩽m} are maximally monotone and therefore from [16, Lemma 2.4] and [6, Corollary 23.9] that

(∀ i ∈ {1, …, m})  T_i = J_{(γ_i−ρ_i)M_i} is firmly nonexpansive and Fix T_i = zer M_i = zer A_i,   (4.9)

which makes (4.6) an instantiation of Problem 4.1. Now set

(∀ n ∈ ℕ)(∀ i ∈ I_n)  T_{i,n} = Id + (1 − ρ_i/γ_{i,n})( J_{γ_{i,n}A_i} − Id )  and  e′_{i,n} = (1 − ρ_i/γ_{i,n}) e_{i,n}.   (4.10)

Then (∀ i ∈ {1, …, m}) ∑_{n⩾K−1} ∥e′_{i,i(n)}∥ ⩽ ∑_{n⩾K−1} ∥e_{i,i(n)}∥ < +∞. In addition, (∀ n ∈ ℕ)(∀ i ∈ I_n) t_{i,n} = T_{i,n}x_n + e′_{i,n}. This places (4.7) in the same operating conditions as (4.1). We also derive from [16, Lemma 2.4] that

(∀ n ∈ ℕ)(∀ i ∈ I_n)  T_{i,n} = J_{(γ_{i,n}−ρ_i)M_i} is firmly nonexpansive and Fix T_{i,n} = zer M_i = zer A_i.   (4.11)

(i): In view of Corollary 4.2(ii), it suffices to check that condition (4.2) holds. Let us take z ∈ 𝓗, i ∈ {1, …, m}, and a strictly increasing sequence (k_n)_{n∈ℕ} of integers greater than K such that

x_{i(k_n)} ⇀ z  and  x_{i(k_n)} − T_{i,i(k_n)}x_{i(k_n)} → 0.   (4.12)

Then we must show that 0 ∈ A_i z. Note that

T_{i,i(k_n)}x_{i(k_n)} ⇀ z.   (4.13)

Now set

(∀ n ∈ ℕ)  u_{i(k_n)} = (γ_{i,i(k_n)} − ρ_i)^{−1}( x_{i(k_n)} − T_{i,i(k_n)}x_{i(k_n)} ).   (4.14)

Then

∥u_{i(k_n)}∥ = ∥x_{i(k_n)} − T_{i,i(k_n)}x_{i(k_n)}∥ / (γ_{i,i(k_n)} − ρ_i) ⩽ ∥x_{i(k_n)} − T_{i,i(k_n)}x_{i(k_n)}∥ / ε → 0.   (4.15)

On the other hand, we derive from (4.11) that (∀ n ∈ ℕ) T_{i,i(k_n)} = J_{(γ_{i,i(k_n)}−ρ_i)M_i}. Therefore, (4.14) yields

(∀ n ∈ ℕ)  ( T_{i,i(k_n)}x_{i(k_n)}, u_{i(k_n)} ) ∈ gra M_i.   (4.16)

However, since M_i is maximally monotone, gra M_i is sequentially closed in 𝓗^{weak} × 𝓗^{strong} [6, Proposition 20.38(ii)]. Hence, (4.13), (4.15), and (4.16) imply that z ∈ zer M_i = zer A_i.

(ii): By (4.11), for every n ⩾ K − 1, T_{i,i(n)}x_{i(n)} ∈ ran T_{i,i(n)} = dom( Id + (γ_{i,i(n)} − ρ_i)M_i ) = dom M_i. However, Corollary 4.2(i) asserts that (T_{i,i(n)}x_{i(n)})_{n∈ℕ} lies in a closed ball. Altogether, it possesses a strong sequential cluster point and the conclusion follows from Corollary 4.2(iii). □

Remark 4.4

Suppose that, in Example 4.3, the operators (Ai)1⩽im are maximally monotone, i.e., (∀i ∈ {1, …, m})ρi = 0. Suppose that, in addition, all the operators are used at each iteration, i.e., (∀ n ∈ ℕ) In = {1, …, m}. Then the implementation of (4.7) with no errors reduces to the barycentric proximal method of [31].

Example 4.5

As shown in [19], many problems in data science and harmonic analysis can be cast as follows. Let m be a strictly positive integer and, for every i ∈ {1, …, m}, let Ri : 𝓗 → 𝓗 be firmly nonexpansive and let ri ∈ 𝓗. The task is to

find x ∈ 𝓗 such that (∀i ∈ {1, …, m}) ri = Rix, (4.17)

under the assumption that such a point exists. Let (ωi)1⩽i⩽m ∈ ]0, 1]m be such that ∑i=1m ωi = 1, suppose that Assumption 3.1 is satisfied, let x0 ∈ 𝓗, and let (ti,–1)1⩽i⩽m ∈ 𝓗m. Iterate

for n = 0, 1, …
  for every i ∈ In
    ti,n = ri + xn − Rixn + ei,n
  for every i ∈ {1, …, m} ∖ In
    ti,n = ti,n−1
  xn+1 = ∑i=1m ωiti,n. (4.18)

Then the following hold:

  1. (xn)n∈ℕ converges weakly to a solution to (4.17).

  2. Suppose that, for some i ∈ {1, …, m}, Id – Ri is demicompact. Then (xn)n∈ℕ converges strongly to a solution to (4.17).

Proof

Following [19], (4.17) can be formulated as an instance of Problem 4.1, by choosing (∀i ∈ {1, …, m}) Ti = ri + Id – Ri. A straightforward implementation of (4.1) consists of setting (∀n ∈ ℕ)(∀iIn) Ti,n = Ti, which reduces (4.1) to (4.18).

  1. Since the operators (Ti)1⩽i⩽m are nonexpansive, [6, Theorem 4.27] asserts that the operators (Id – Ti)1⩽i⩽m are demiclosed, which implies that condition (4.2) holds. Thus, the claim follows from Corollary 4.2(ii).

  2. We deduce from (4.4) that xi(n) − Tixi(n) → 0, and from (4.5) and (i) that (xi(n))n∈ℕ is bounded. Hence, since Ti is demicompact, (xi(n))n∈ℕ has a strong sequential cluster point and so does (Tixi(n))n∈ℕ. We conclude with Corollary 4.2(iii).□

Remark 4.6

If (4.17) has no solution, (4.18) will produce a fixed point of the operator ∑i=1m ωiTi = Id + ∑i=1m ωi(ri − Ri), provided one exists. As discussed in [19], this is a valid relaxation of (4.17).
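The scheme (4.18) is easy to prototype. In the sketch below, the affine firmly nonexpansive operators Ri(x) = 0.5x + bi, the cyclic single-block schedule, and the error-free setting are assumptions chosen so that the system ri = Rix has the exact solution x* = (2, 2).

```python
import numpy as np

# Minimal sketch of iteration (4.18) for problem (4.17). The operators
# R_i(x) = 0.5*x + b_i are firmly nonexpansive (they are 1/2-averaged), and
# the data r_i = R_i(x*) make the system consistent with solution x* = (2, 2).
# The block schedule and the data are assumptions for this demonstration.

R = [lambda x: 0.5 * x,
     lambda x: 0.5 * x + np.array([0.0, 1.0])]
x_star = np.array([2.0, 2.0])
r = [Ri(x_star) for Ri in R]            # consistent data: r_i = R_i(x*)
w = [0.5, 0.5]
m = len(R)

x = np.zeros(2)
t = [x.copy() for _ in range(m)]        # t_{i,-1}

for n in range(300):
    i = n % m                           # activate one block per iteration
    t[i] = r[i] + x - R[i](x)           # t_{i,n} = r_i + x_n - R_i x_n (e_{i,n} = 0)
    x = sum(wi * ti for wi, ti in zip(w, t))

print(x)  # ~[2. 2.]
```

The stored values ti,n play the role of the memory variables of (4.18): only one of them is refreshed per iteration, yet the weighted average still drives xn to a solution.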

4.2 Forward-backward operator splitting

We consider the following monotone inclusion problem.

Problem 4.7

Let m be a strictly positive integer and let (ωi)1⩽i⩽m ∈ ]0, 1]m be such that ∑i=1m ωi = 1. Let A0 : 𝓗 → 2𝓗 be maximally monotone and, for every i ∈ {1, …, m}, let βi ∈ ]0, + ∞[ and let Ai : 𝓗 → 𝓗 be βi-cocoercive, i.e.,

(∀x ∈ 𝓗)(∀y ∈ 𝓗) 〈x − y | Aix − Aiy〉 ⩾ βi∥Aix − Aiy∥2. (4.19)

The task is to find x ∈ 𝓗 such that 0 ∈ A0x + ∑i=1m ωiAix.

Remark 4.8

In Problem 4.7, suppose that A0 is the normal cone operator of a nonempty closed convex set C, i.e., A0 = ∂ιC. Then the problem is to solve the variational inequality

find x ∈ C such that (∀y ∈ C) 〈x − y | ∑i=1m ωiAix〉 ⩽ 0. (4.20)

If m = 1, a standard method for solving Problem 4.7 is the forward-backward splitting algorithm [11, 42, 44]. We propose below a multi-operator version of it with block-updates.

Proposition 4.9

Consider the setting of Problem 4.7 under Assumption 3.1 and the assumption that it has a solution. Let γ ∈ ]0, 2 min1⩽i⩽m βi[, let x0 ∈ 𝓗, let (ti,–1)1⩽i⩽m ∈ 𝓗m, and iterate

for n = 0, 1, …
  for every i ∈ In
    ti,n = xn − γ(Aixn + ei,n)
  for every i ∈ {1, …, m} ∖ In
    ti,n = ti,n−1
  xn+1 = JγA0(∑i=1m ωiti,n) + e0,n. (4.21)

Then the following hold:

  1. Let x be a solution to Problem 4.7 and let i ∈ {1, …, m}. Then Aixn → Aix.

  2. (xn)n∈ℕ converges weakly to a solution to Problem 4.7.

  3. Suppose that, for some i ∈ {0, …, m}, Ai is demiregular. Then (xn)n∈ℕ converges strongly to a solution to Problem 4.7.

  4. Suppose that, for some i ∈ {0, …, m}, Ai is strongly monotone. Then (xn)n∈ℕ converges linearly to the unique solution to Problem 4.7.

Proof

We apply Corollary 3.4 with T0 = JγA0 and (∀i ∈ {1, …, m}) Ti = Id − γAi. It follows from [6, Proposition 4.39 and Corollary 23.9] that the operators (Ti)0⩽i⩽m are averaged, and hence from [6, Proposition 26.1(iv)(a)] that Problem 4.7 coincides with Problem 1.1. In addition, (4.21) is an instance of (3.39).

  1. See Corollary 3.4(ii).

  2.–3. These follow from Corollary 3.4(iii). Indeed, if i = 0, the demicompactness of Ti follows from Example 3.5(iii)(a). On the other hand, if i ≠ 0, take a bounded sequence (yn)n∈ℕ in 𝓗 such that (yn − Tiyn)n∈ℕ converges, say yn − Tiyn → u. Then Aiyn → u/γ. On the other hand, (yn)n∈ℕ has a weak sequential cluster point, say ykn ⇀ y. So, by demiregularity of Ai, ykn → y, which shows that Ti is demicompact.

  4. If i = 0, we derive from [6, Proposition 23.13] that T0 = JγA0 is a Banach contraction. If i ≠ 0, as in the proof of [6, Proposition 26.16], we obtain that Ti = Id − γAi is a Banach contraction. The conclusion follows from Corollary 3.4(iv).□
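As a sanity check on (4.21), here is a small runnable sketch. The choices A0 = normal cone of [0, 1] (so JγA0 is the projection onto [0, 1]), Ai = Id − ci with ci ∈ {0.2, 0.8, 2}, uniform weights, γ = 0.5, and a cyclic schedule are toy assumptions; the unique solution of the resulting variational inequality (4.20) is x = 1.

```python
# Minimal sketch of the block-update forward-backward iteration (4.21).
# Assumptions: A0 = normal cone of C = [0, 1] (resolvent = projection onto C),
# A_i = Id - c_i (1-cocoercive), uniform weights, cyclic blocks, no errors.
# The unique solution is x = 1.

def proj_C(x):
    """J_{gamma A0} when A0 is the normal cone of [0, 1]."""
    return min(max(x, 0.0), 1.0)

c = [0.2, 0.8, 2.0]
m = len(c)
w = [1.0 / m] * m
gamma = 0.5                             # gamma in ]0, 2 min beta_i[ = ]0, 2[

x = 5.0
t = [x] * m                             # t_{i,-1}

for n in range(200):
    i = n % m                           # one forward step per iteration
    t[i] = x - gamma * (x - c[i])       # t_{i,n} = x_n - gamma*(A_i x_n + e_{i,n}), e = 0
    x = proj_C(sum(wi * ti for wi, ti in zip(w, t)))

print(x)  # ~1.0
```

Only one cocoercive operator is evaluated per iteration; the backward (resolvent) step on A0 is applied to the weighted average of the stored forward steps, exactly as in (4.21).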

Example 4.10

Consider maximally monotone operators A0 : 𝓗 → 2𝓗 and, for every i ∈ {1, …, m}, Bi : 𝓗 → 2𝓗. The associated common zero problem is [10, 31, 46]

find x ∈ 𝓗 such that 0 ∈ A0x ∩ ⋂i=1m Bix. (4.22)

As shown in [12], when (4.22) has no solution, a suitable relaxation is

find x ∈ 𝓗 such that 0 ∈ A0x + ∑i=1m ωi(Bi □ Ci)x, (4.23)

where, for every i ∈ {1, …, m}, Ci : 𝓗 → 2𝓗 is such that Ci−1 is at most single-valued and strictly monotone, with Ci−1 0 = {0}. In this setting, if (4.22) happens to have solutions, they coincide with those of (4.23) [12]. Let us consider the particular instance in which, for every i ∈ {1, …, m}, Ci is cocoercive, and set Ai = Bi □ Ci. Then the operators (Ci−1)1⩽i⩽m are strongly monotone and, therefore, the operators (Ai)1⩽i⩽m are cocoercive. In addition, (4.23) is a special case of Problem 4.7, which can be solved via Proposition 4.9. Let us observe that if we further specialize by setting, for every i ∈ {1, …, m}, Ci = ρi−1 Id for some ρi ∈ ]0, + ∞[, then (4.23) reduces to (1.3).
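For the reader's convenience, the parallel sum underlying (4.23) is the standard construction (see, e.g., [6]); in particular, the scalar choice Ci = ρi−1 Id turns Bi □ Ci into the Yosida approximation of Bi of index ρi:

```latex
% Parallel sum of set-valued operators A and B on \mathcal{H}:
A \,\square\, B \;=\; \bigl(A^{-1} + B^{-1}\bigr)^{-1},
% so that
B_i \,\square\, \bigl(\rho_i^{-1}\mathrm{Id}\bigr)
  \;=\; \bigl(B_i^{-1} + \rho_i\,\mathrm{Id}\bigr)^{-1}
  \;=\; {}^{\rho_i}B_i,
% the Yosida approximation of B_i of index \rho_i.
```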

We now focus on minimization problems.

Problem 4.11

Let m be a strictly positive integer and let (ωi)1⩽i⩽m ∈ ]0, 1]m be such that ∑i=1m ωi = 1. Let f0 ∈ Γ0(𝓗) and, for every i ∈ {1, …, m}, let βi ∈ ]0, + ∞[ and let fi : 𝓗 → ℝ be a differentiable convex function with a 1/βi-Lipschitzian gradient. The task is to

minimize_{x∈𝓗} f0(x) + ∑i=1m ωifi(x). (4.24)

Proposition 4.12

Consider the setting of Problem 4.11 under Assumption 3.1 and assume that

lim_{∥x∥→+∞} (f0(x) + ∑i=1m ωifi(x)) = + ∞. (4.25)

Let γ ∈ ]0, 2 min1⩽i⩽m βi[, let x0 ∈ 𝓗, let (ti,–1)1⩽i⩽m ∈ 𝓗m, and iterate

for n = 0, 1, …
  for every i ∈ In
    ti,n = xn − γ(∇fi(xn) + ei,n)
  for every i ∈ {1, …, m} ∖ In
    ti,n = ti,n−1
  xn+1 = proxγf0(∑i=1m ωiti,n) + e0,n. (4.26)

Then the following hold:

  1. Let x be a solution to Problem 4.11 and let i ∈ {1, …, m}. Then ∇fi(xn) → ∇fi(x).

  2. (xn)n∈ℕ converges weakly to a solution to Problem 4.11.

  3. Suppose that, for some i ∈ {0, …, m}, one of the following holds:

    1. fi is uniformly convex.

    2. The lower level sets of fi are boundedly compact.

    Then (xn)n∈ℕ converges strongly to a solution to Problem 4.11.

  4. Suppose that, for some i ∈ {0, …, m}, fi is strongly convex, i.e., it satisfies (3.48) with ϕ = |⋅|2/2. Then (xn)n∈ℕ converges linearly to the unique solution to Problem 4.11.

Proof

We derive from [6, Theorem 20.25] that A0 = ∂f0 is maximally monotone and from [6, Corollary 18.17] that, for every i ∈ {1, …, m}, Ai = ∇fi is βi-cocoercive. In this setting, it follows from [6, Corollary 27.3(i)] that Problem 4.7 reduces to Problem 4.11. On the other hand, since the assumptions imply that f0 + ∑i=1m ωifi is proper, lower semicontinuous, convex, and coercive, it follows from [6, Corollary 11.16(ii)] that Problem 4.11 has a solution. The claims therefore follow from Proposition 4.9, Example 3.5(iii)(c) & (iii)(d), and [6, Example 22.4(iv)].□

An algorithm related to (4.26) has recently been proposed in [35] in a finite-dimensional setting; see also [36] for a special case.

We illustrate an application of Proposition 4.12 in the context of a variational model that captures various formulations found in data analysis.

Example 4.13

Suppose that 𝓗 is separable, let (ek)k∈𝕂⊂ℕ be an orthonormal basis of 𝓗, and, for every k ∈ 𝕂, let ψk ∈ Γ0(ℝ) be such that ψk ⩾ 0 = ψk(0). For every i ∈ {1, …, m}, let 0 ≠ ai ∈ 𝓗, let μi ∈ ]0, + ∞[, and let ϕi : ℝ → [0, + ∞[ be a differentiable convex function such that ϕi′ is μi-Lipschitzian. The task is to

minimize_{x∈𝓗} ∑k∈𝕂 ψk(〈x | ek〉) + (1/m)∑i=1m ϕi(〈x | ai〉). (4.27)

Let us note that (4.27) is an instantiation of (4.24) with f0 = ∑k∈𝕂 ψk ∘ 〈⋅ | ek〉 and, for every i ∈ {1, …, m}, fi = ϕi ∘ 〈⋅ | ai〉 and ωi = 1/m. The fact that f0 ∈ Γ0(𝓗) is established in [18], where it is also shown that, given γ ∈ ]0, + ∞[,

proxγf0 : x ↦ ∑k∈𝕂 (proxγψk〈x | ek〉)ek. (4.28)

On the other hand, for every i ∈ {1, …, m}, fi is a differentiable convex function and its gradient

∇fi : x ↦ ϕi′(〈x | ai〉)ai (4.29)

has Lipschitz constant μi∥ai∥2. Let γ ∈ ]0, 2/(max1⩽i⩽m μi∥ai∥2)[ and let (In)n∈ℕ be as in Assumption 3.1. In view of (4.26), (4.28), and (4.29), we can solve (4.27) via the algorithm

for n = 0, 1, …
  for every i ∈ In
    ti,n = xn − γϕi′(〈xn | ai〉)ai
  for every i ∈ {1, …, m} ∖ In
    ti,n = ti,n−1
  yn = ∑i=1m ωiti,n
  xn+1 = ∑k∈𝕂 (proxγψk〈yn | ek〉)ek. (4.30)

Infinite-dimensional instances of (4.27) are discussed in [17, 18, 20, 21]. A popular finite-dimensional setting is obtained by choosing 𝓗 = ℝN, 𝕂 = {1, …, N}, (ek)1⩽k⩽N as the canonical basis, α ∈ ]0, + ∞[, and, for every k ∈ 𝕂, ψk = α|⋅|. This reduces (4.27) to

minimize_{x∈ℝN} α∥x∥1 + (1/m)∑i=1m ϕi(〈x | ai〉). (4.31)

Thus, choosing for every i ∈ {1, …, m} ϕi : t ↦ |t − ηi|2, where ηi ∈ ℝ models an observation, yields the Lasso formulation, whereas choosing ϕi : t ↦ ln(1 + exp(t)) − ηit, where ηi ∈ {0, 1} models a label, yields the penalized logistic regression framework [27].
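The Lasso instance (4.31) gives a concrete, runnable form of (4.30): proxγψk is soft-thresholding and ϕi′(t) = 2(t − ηi), so μi = 2. The data (ai, ηi, α) below are arbitrary assumptions for illustration, and the final line checks the forward-backward fixed-point residual.

```python
import numpy as np

# Sketch of iteration (4.30) on the Lasso problem (4.31):
# minimize alpha*||x||_1 + (1/m) * sum_i |<x|a_i> - eta_i|^2.
# prox_{gamma f0} is componentwise soft-thresholding in the canonical basis
# and phi_i'(t) = 2*(t - eta_i), hence mu_i = 2. Data are toy assumptions.

def soft(v, lam):
    """prox of lam*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

a = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
eta = [1.0, -0.5, 0.3]
alpha = 0.1
m = len(a)
w = [1.0 / m] * m
gamma = 0.9 / (2 * max(ai @ ai for ai in a))    # < 2/(max mu_i*||a_i||^2)

x = np.zeros(2)
t = [x.copy() for _ in range(m)]

for n in range(5000):
    i = n % m                                   # one gradient block per iteration
    t[i] = x - gamma * 2 * (x @ a[i] - eta[i]) * a[i]
    y = sum(wi * ti for wi, ti in zip(w, t))
    x = soft(y, gamma * alpha)                  # prox step

# Fixed-point (optimality) residual of the full forward-backward map:
g = sum(wi * 2 * (x @ ai - ei) * ai for wi, ai, ei in zip(w, a, eta))
print(np.linalg.norm(x - soft(x - gamma * g, gamma * alpha)))  # ~0
```

Only one datum (ai, ηi) is touched per iteration, while the averaged memory yn aggregates the latest gradient information from all blocks before the prox step.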

4.3 Hard constrained inconsistent convex feasibility problems

The next application revisits a model proposed in [14] to relax inconsistent feasibility problems.

Problem 4.14

Let m be a strictly positive integer and let (ωi)1⩽i⩽m ∈ ]0, 1]m be such that ∑i=1m ωi = 1. Let C0 be a nonempty closed convex subset of 𝓗 and, for every i ∈ {1, …, m}, let 𝓖i be a real Hilbert space, let Li : 𝓗 → 𝓖i be a nonzero bounded linear operator, let Di be a nonempty closed convex subset of 𝓖i, let μi ∈ ]0, + ∞[, and let ϕi : ℝ → [0, + ∞[ be an even differentiable convex function that vanishes only at 0 and such that ϕi′ is μi-Lipschitzian. The task is to

minimize_{x∈C0} ∑i=1m ωiϕi(dDi(Lix)). (4.32)

The variational formulation (4.32) is a relaxation of the convex feasibility problem

find x ∈ C0 such that (∀i ∈ {1, …, m}) Lix ∈ Di (4.33)

in the sense that, if (4.33) is consistent, then its solution set is precisely that of (4.32); see [14, Section 4.4] for details on this formulation and background on inconsistent convex feasibility problems. Here C0 models a hard constraint. An early instance of (4.32) as a relaxation of (4.33) is Legendre's method of least squares for dealing with an inconsistent system of m linear equations in 𝓗 = ℝN [30]. There, C0 = ℝN and, for every i ∈ {1, …, m}, 𝓖i = ℝ, Di = {βi}, Li = 〈⋅ | ai〉 for some ai ∈ ℝN such that ∥ai∥ = 1, ωi = 1/m, and ϕi = |⋅|2. The formulation (4.32) can also be regarded as a smooth version of the set-theoretic Fermat–Weber problem [37] arising in location theory, namely,

minimize_{x∈𝓗} (1/m)∑i=1m dCi(x). (4.34)

The following version of the Closed Range Theorem will be required.

Lemma 4.15

[22, Theorem 8.18] Let 𝓖 be a real Hilbert space and let L : 𝓗 → 𝓖 be a nonzero bounded linear operator. Then ran L is closed ⇔ ran L*L is closed ⇔ (∃ ρ ∈ ]0, + ∞[)(∀x ∈ (ker L)⊥) ∥Lx∥ ⩾ ρ∥x∥.

Corollary 4.16

Consider the setting of Problem 4.14 under one of the following assumptions:

  [a] There exists j ∈ {1, …, m} such that lim∥x∥→+∞ (ιC0(x) + ϕj(dDj(Ljx))) = + ∞.

  [b] There exists j ∈ {1, …, m} such that ran Lj is closed, C0 ⊂ (ker Lj)⊥, and Dj is bounded.

  [c] There exists j ∈ {1, …, m} such that 𝓖j = 𝓗, Lj = Id, and Dj is bounded.

  [d] C0 is bounded.

Set β = 1/(max1⩽i⩽m μi∥Li∥2), let γ ∈ ]0, 2β[, let (In)n∈ℕ be as in Assumption 3.1, let x0 ∈ C0, let (ti,–1)1⩽i⩽m ∈ 𝓗m, and iterate

for n = 0, 1, …
  for every i ∈ In
    if Lixn ∉ Di
      ti,n = xn − γ (ϕi′(dDi(Lixn))/dDi(Lixn)) Li*(Lixn − projDi(Lixn))
    else
      ti,n = xn
  for every i ∈ {1, …, m} ∖ In
    ti,n = ti,n−1
  xn+1 = projC0(∑i=1m ωiti,n). (4.35)

Then the following hold:

  1. (xn)n∈ℕ converges weakly to a solution to Problem 4.14.

  2. Suppose that one of the following holds:

    [e] Condition [b] is satisfied with the additional assumptions that ϕj = μj|⋅|2/2 and Dj is compact.

    [f] C0 is boundedly compact.

Then (xn)n∈ℕ converges strongly to a solution to Problem 4.14.

Proof

We first note that (4.32) is an instance of (4.24) with f0 = ιC0 and (∀i ∈ {1, …, m}) fi = ϕi ∘ dDi ∘ Li. Next, we derive from [6, Example 2.7] that, for every i ∈ {1, …, m}, fi is convex and differentiable, and that its gradient

∇fi : 𝓗 → 𝓗 : x ↦ (ϕi′(dDi(Lix))/dDi(Lix)) Li*(Lix − projDi(Lix)) if Lix ∉ Di;  0 if Lix ∈ Di (4.36)

has Lipschitz constant μi∥Li∥2. Hence (4.35) is an instance of (4.26). Now, in order to apply Proposition 4.12, let us check that (4.25) is satisfied under one of assumptions [a]–[d].

[a]: We have f0(x) + ∑i=1m ωifi(x) ⩾ ωj(ιC0(x) + fj(x)) → + ∞ as ∥x∥ → + ∞.

[b] ⇒ [a]: In view of [d], we assume that C0 is unbounded. It follows from Lemma 4.15 that there exists ρ ∈ ]0, + ∞[ such that (∀x ∈ (ker Lj)⊥) ∥Ljx∥ ⩾ ρ∥x∥. Hence,

(∀x ∈ C0) ∥Ljx∥ ⩾ ρ∥x∥. (4.37)

Now let z ∈ 𝓖j. Then, since Dj is bounded, δ = diam Dj + ∥projDjz∥ < + ∞ and

(∀y ∈ 𝓖j) ∥y∥ ⩽ ∥y − projDjy∥ + ∥projDjy − projDjz∥ + ∥projDjz∥ ⩽ dDj(y) + δ. (4.38)

Consequently, dDj(y) → + ∞ as ∥y∥ → + ∞ with y ∈ 𝓖j. Thus, since ϕj is coercive by [6, Proposition 16.23], we obtain

ϕj(dDj(y)) → + ∞ as ∥y∥ → + ∞ with y ∈ 𝓖j. (4.39)

We deduce from (4.37) and (4.39) that

fj(x) → + ∞ as ∥x∥ → + ∞ with x ∈ C0. (4.40)

[c] ⇒ [b] and [d] ⇒ [a]: Clear.

We are now ready to use Proposition 4.12 to prove the assertions.

  1. Apply Proposition 4.12(ii).

  2. [e]: Let x be the weak limit in (i) and set uj = ∇fj(x)/μj. Then Proposition 4.12(i) asserts that

    Lj*(Ljxn − projDj(Ljxn)) → uj. (4.41)

We also observe that, since Lj* ∘ Lj is weakly continuous [6, Lemma 2.41], we have Lj*(Ljxn) ⇀ Lj*(Ljx). Therefore, (4.41) yields

Lj*(projDj(Ljxn)) ⇀ Lj*(Ljx) − uj. (4.42)

However, the set Lj*(Dj) is compact by [6, Lemma 1.20] and it contains (Lj*(projDj(Ljxn)))n∈ℕ. This sequence therefore has Lj*(Ljx) − uj as its unique strong sequential cluster point. Thus, Lj*(projDj(Ljxn)) → Lj*(Ljx) − uj and we deduce from (4.41) that

Lj*(Ljxn) → Lj*(Ljx). (4.43)

On the other hand, for every n ∈ ℕ, since x and xn lie in C0 ⊂ (ker Lj)⊥, we have xn − x ∈ (ker Lj)⊥ = (ker Lj*Lj)⊥. Hence, we deduce from (4.43) and Lemma 4.15 that there exists θ ∈ ]0, + ∞[ such that

θ∥xn − x∥ ⩽ ∥(Lj*Lj)(xn − x)∥ → 0. (4.44)

We conclude that xnx.

[f]: This follows from Proposition 4.12(iii)(b) since the lower level sets of f0 are ∅ and C0, which are boundedly compact.□
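Corollary 4.16 can be illustrated numerically. In the sketch below the feasibility problem (4.33) is inconsistent by construction: C0 = [−1, 1]², D1 = {(2, 0)}, and D2 = {x : x2 ⩾ 2} have empty intersection. With Li = Id and ϕi = |⋅|²/2 (so ϕi′(d)/d = 1 and μi = 1), the relaxed problem (4.32) has the solution (1, 1). All of these choices are assumptions made for the demonstration.

```python
import numpy as np

# Sketch of iteration (4.35) on an inconsistent feasibility problem.
# Hard constraint C0 = [-1,1]^2; D1 = {(2,0)}, D2 = {x : x2 >= 2}; L_i = Id;
# phi_i = |.|^2/2, so phi_i'(d)/d = 1 and mu_i = 1. Relaxed solution: (1, 1).

def proj_C0(x):
    return np.clip(x, -1.0, 1.0)

projs = [lambda x: np.array([2.0, 0.0]),                 # projection onto D1
         lambda x: np.array([x[0], max(x[1], 2.0)])]     # projection onto D2
w = [0.5, 0.5]
gamma = 1.0                    # gamma in ]0, 2*beta[ with beta = 1
m = len(projs)

x = np.zeros(2)
t = [x.copy() for _ in range(m)]

for n in range(50):
    i = n % m
    p = projs[i](x)
    if not np.allclose(p, x):              # L_i x_n outside D_i
        t[i] = x - gamma * (x - p)         # phi_i'(d)/d = 1 for phi_i = |.|^2/2
    else:
        t[i] = x
    x = proj_C0(sum(wi * ti for wi, ti in zip(w, t)))

print(x)  # ~[1. 1.]
```

With γ = 1 each activated block simply stores the projection of xn onto its set, and the hard constraint C0 is enforced exactly at every iteration through projC0.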

We conclude by revisiting (1.1) and recovering a classical result on the method of alternating projections.

Example 4.17

[8, Theorem 4(a)] Let C and D be nonempty closed convex subsets of 𝓗 such that D is compact. Let x0 ∈ 𝓗 and set (∀n ∈ ℕ) xn+1 = projC(projDxn). Then (xn)n∈ℕ converges strongly to a point x ∈ C such that x = projC(projDx).

Proof

Apply Corollary 4.16(ii)[e] with m = 1, C0 = C, 𝓖1 = 𝓗, L1 = Id, D1 = D, γ = 1, and μ1 = 1.□
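Example 4.17 is immediate to test numerically; the two sets below, a half-plane C and the compact unit disk D, are assumptions chosen with C ∩ D = ∅, and the iterates converge strongly to the point (2, 0) of C closest to D.

```python
import numpy as np

# Alternating projections of Example 4.17: x_{n+1} = proj_C(proj_D(x_n)) with
# C = {x : x1 >= 2} (closed convex) and D = the closed unit disk (compact).
# The limit x = (2, 0) satisfies x = proj_C(proj_D(x)).

def proj_C(x):
    return np.array([max(x[0], 2.0), x[1]])

def proj_D(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

x = np.array([3.0, 4.0])
for _ in range(200):
    x = proj_C(proj_D(x))

print(x)  # ~[2. 0.]
```

The second coordinate shrinks by at least a factor 1/2 per iteration, which is consistent with the strong (here even linear) convergence guaranteed by the compactness of D.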



Acknowledgement

The work of P. L. Combettes was supported by the National Science Foundation under grant DMS-1818946 and that of L. E. Glaudin by ANR-3IA Artificial and Natural Intelligence Toulouse Institute.

References

[1] F. Acker and M. A. Prestel, Convergence d'un schéma de minimisation alternée, Ann. Fac. Sci. Toulouse V. Sér. Math. 2 (1980), 1–9. doi:10.5802/afst.541

[2] A. Aleyner and S. Reich, Block-iterative algorithms for solving convex feasibility problems in Hilbert and in Banach spaces, J. Math. Anal. Appl. 343 (2008), 427–435. doi:10.1016/j.jmaa.2008.01.087

[3] H. Attouch, L. M. Briceño-Arias, and P. L. Combettes, A parallel splitting method for coupled monotone inclusions, SIAM J. Control Optim. 48 (2010), 3246–3270. doi:10.1137/090754297

[4] J.-B. Baillon, R. E. Bruck, and S. Reich, On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces, Houston J. Math. 4 (1978), 1–9.

[5] H. H. Bauschke and J. M. Borwein, On projection algorithms for solving convex feasibility problems, SIAM Rev. 38 (1996), 367–426. doi:10.1137/S0036144593251710

[6] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed. Springer, New York, 2017. doi:10.1007/978-3-319-48311-5

[7] H. H. Bauschke, P. L. Combettes, and S. Reich, The asymptotic behavior of the composition of two resolvents, Nonlinear Anal. 60 (2005), 283–301. doi:10.1016/j.na.2004.07.054

[8] W. Cheney and A. A. Goldstein, Proximity maps for convex sets, Proc. Amer. Math. Soc. 10 (1959), 448–450. doi:10.1090/S0002-9939-1959-0105008-8

[9] P. L. Combettes, Construction d'un point fixe commun à une famille de contractions fermes, C. R. Acad. Sci. Paris I 320 (1995), 1385–1390.

[10] P. L. Combettes, Quasi-Fejérian analysis of some optimization algorithms, in Inherently Parallel Algorithms for Feasibility and Optimization (D. Butnariu, Y. Censor, and S. Reich, Eds.), pp. 115–152. Elsevier, New York, 2001. doi:10.1016/S1570-579X(01)80010-0

[11] P. L. Combettes, Solving monotone inclusions via compositions of nonexpansive averaged operators, Optimization 53 (2004), 475–504. doi:10.1080/02331930412331327157

[12] P. L. Combettes, Systems of structured monotone inclusions: Duality, algorithms, and applications, SIAM J. Optim. 23 (2013), 2420–2447. doi:10.1137/130904160

[13] P. L. Combettes and L. E. Glaudin, Quasinonexpansive iterations on the affine hull of orbits: From Mann's mean value algorithm to inertial methods, SIAM J. Optim. 27 (2017), 2356–2380. doi:10.1137/17M112806X

[14] P. L. Combettes and L. E. Glaudin, Proximal activation of smooth functions in splitting algorithms for convex image recovery, SIAM J. Imaging Sci. 12 (2019), 1905–1935. doi:10.1137/18M1224763

[15] P. L. Combettes and T. Pennanen, Generalized Mann iterates for constructing fixed points in Hilbert spaces, J. Math. Anal. Appl. 275 (2002), 521–536. doi:10.1016/S0022-247X(02)00221-4

[16] P. L. Combettes and T. Pennanen, Proximal methods for cohypomonotone operators, SIAM J. Control Optim. 43 (2004), 731–742. doi:10.1137/S0363012903427336

[17] P. L. Combettes, S. Salzo, and S. Villa, Consistent learning by composite proximal thresholding, Math. Program. B167 (2018), 99–127. doi:10.1007/s10107-017-1133-8

[18] P. L. Combettes and V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul. 4 (2005), 1168–1200. doi:10.1137/050626090

[19] P. L. Combettes and Z. C. Woodstock, A fixed point framework for recovering signals from nonlinear transformations, Proc. Europ. Signal Process. Conf., pp. 2120–2124. Amsterdam, The Netherlands, January 18–22, 2021.

[20] I. Daubechies, M. Defrise, and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math. 57 (2004), 1413–1457. doi:10.1002/cpa.20042

[21] C. De Mol, E. De Vito, and L. Rosasco, Elastic-net regularization in learning theory, J. Complexity 25 (2009), 201–230. doi:10.1016/j.jco.2009.01.002

[22] F. Deutsch, Best Approximation in Inner Product Spaces. Springer-Verlag, New York, 2001. doi:10.1007/978-1-4684-9298-9

[23] J. M. Dye and S. Reich, Unrestricted iterations of nonexpansive mappings in Hilbert space, Nonlinear Anal. 18 (1992), 199–207. doi:10.1016/0362-546X(92)90094-U

[24] S. D. Flåm, Successive averages of firmly nonexpansive mappings, Math. Oper. Res. 20 (1995), 497–512. doi:10.1287/moor.20.2.497

[25] K. Goebel and S. Reich, Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York, 1984.

[26] L. G. Gubin, B. T. Polyak, and E. V. Raik, The method of projections for finding the common point of convex sets, USSR Comput. Math. Math. Phys. 7 (1967), 1–24. doi:10.1016/0041-5553(67)90113-9

[27] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, 2nd ed. Springer, New York, 2009. doi:10.1007/978-0-387-84858-7

[28] K. Knopp, Infinite Sequences and Series. Dover, New York, 1956.

[29] P. Kügler and A. Leitão, Mean value iterations for nonlinear elliptic Cauchy problems, Numer. Math. 96 (2003), 269–293. doi:10.1007/s00211-003-0477-6

[30] A. M. Legendre, Nouvelles Méthodes pour la Détermination des Orbites des Comètes. Firmin Didot, Paris, 1805.

[31] N. Lehdili and B. Lemaire, The barycentric proximal method, Comm. Appl. Nonlinear Anal. 6 (1999), 29–47.

[32] D. Leventhal, Metric subregularity and the proximal point method, J. Math. Anal. Appl. 360 (2009), 681–688. doi:10.1016/j.jmaa.2009.07.012

[33] W. R. Mann, Mean value methods in iteration, Proc. Amer. Math. Soc. 4 (1953), 506–510. doi:10.1090/S0002-9939-1953-0054846-3

[34] W. R. Mann, Averaging to improve convergence of iterative processes, Lecture Notes in Math. 701 (1979), 169–179. doi:10.1007/BFb0062080

[35] K. Mishchenko, F. Iutzeler, and J. Malick, A distributed flexible delay-tolerant proximal gradient algorithm, SIAM J. Optim. 30 (2020), 933–959. doi:10.1137/18M1194699

[36] A. Mokhtari, M. Gürbüzbalaban, and A. Ribeiro, Surpassing gradient descent provably: A cyclic incremental method with linear convergence rate, SIAM J. Optim. 28 (2018), 1420–1447. doi:10.1137/16M1101702

[37] B. S. Mordukhovich, N. M. Nam, and J. Salinas, Solving a generalized Heron problem by means of convex analysis, Amer. Math. Monthly 119 (2012), 87–99. doi:10.4169/amer.math.monthly.119.02.087

[38] J. von Neumann, On rings of operators. Reduction theory, Ann. of Math. 50 (1949), 401–485. doi:10.2307/1969463

[39] W. V. Petryshyn, Construction of fixed points of demicompact mappings in Hilbert space, J. Math. Anal. Appl. 14 (1966), 276–284. doi:10.1016/0022-247X(66)90027-8

[40] S. Reich, Fixed point iterations of nonexpansive mappings, Pacific J. Math. 60 (1975), 195–198. doi:10.2140/pjm.1975.60.195

[41] A. M. Saddeek, Coincidence points by generalized Mann iterates with applications in Hilbert spaces, Nonlinear Anal. 72 (2010), 2262–2270. doi:10.1016/j.na.2009.10.027

[42] P. Tseng, Applications of a splitting algorithm to decomposition in convex programming and variational inequalities, SIAM J. Control Optim. 29 (1991), 119–138. doi:10.1137/0329006

[43] P. Tseng, On the convergence of products of firmly nonexpansive mappings, SIAM J. Optim. 2 (1992), 425–434. doi:10.1137/0802021

[44] B. C. Vũ, A splitting algorithm for dual monotone inclusions involving cocoercive operators, Adv. Comput. Math. 38 (2013), 667–681. doi:10.1007/s10444-011-9254-8

[45] X. Wang and H. H. Bauschke, Compositions and averages of two resolvents: Relative geometry of fixed points sets and a partial answer to a question by C. Byrne, Nonlinear Anal. 74 (2011), 4550–4572. doi:10.1016/j.na.2011.04.024

[46] A. J. Zaslavski, A proximal point algorithm for finding a common zero of a finite family of maximal monotone operators in the presence of computational errors, Nonlinear Anal. 75 (2012), 6071–6087. doi:10.1016/j.na.2012.06.015

Received: 2020-11-02
Accepted: 2021-02-06
Published Online: 2021-03-25

© 2021 P. L. Combettes and Lilian E. Glaudin, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
