
Kyungpook Mathematical Journal 2022; 62(4): 673-687

Published online December 31, 2022

Copyright © Kyungpook Mathematical Journal.

Accelerated Tseng's Technique to Solve Cayley Inclusion Problem in Hilbert Spaces

Shamshad Husain, Uqba Rafat∗

Department of Applied Mathematics, Aligarh Muslim University, Aligarh 202002, India
e-mail : s_husain68@yahoo.com

Department of Applied Mathematics, Aligarh Muslim University, Aligarh 202002, India
e-mail : uqba.1709@gmail.com

Received: December 2, 2021; Revised: April 9, 2022; Accepted: April 8, 2022

In this study, we solve the Cayley inclusion problem and the fixed point problem in a real Hilbert space using Tseng's technique with inertial extrapolation in order to obtain more efficient results. We provide a strong convergence theorem for approximating a common solution of the Cayley inclusion problem and the fixed point problem under appropriate assumptions. Finally, we present a numerical example that illustrates the problem and demonstrates the computational performance of the suggested technique.

Keywords: Tseng’s technique, Cayley inclusion problem, Cayley operator, fixed point problem

Let H be a real Hilbert space with inner product ⟨·,·⟩ and induced norm ‖·‖. A "Zero-Point Problem" (ZPP) for monotone operators is defined as follows: find x* ∈ H such that

0 ∈ T(x*),    (1.1)

where T: H → H is a monotone operator. Several authors have focused on the convergence of iterative methods for locating a zero point of a monotone operator in Hilbert spaces. Martinet [9] constructed the "Proximal Point Algorithm" (PPA) to solve problem (1.1). The PPA is depicted as

x_{n+1} = (I + λ_n T)^{-1} x_n,   n ≥ 1,    (1.2)

where I is the identity mapping and {λ_n} is a sequence of positive real numbers. After Martinet [9], several algorithms were developed to solve the ZPP; see [10]-[14] and the references cited in this article for more information.
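On H = ℝ the PPA is a one-line iteration. The following Python sketch is purely illustrative (the operator T(x) = x and the constant step size are our own toy choices, not from the paper); for this T the resolvent is (I + λT)^{-1}(x) = x/(1 + λ), and the iterates contract toward the unique zero x* = 0:

```python
# Proximal point algorithm (PPA) sketch for the toy monotone operator
# T(x) = x on H = R, whose resolvent is (I + lam*T)^(-1)(x) = x / (1 + lam).
def ppa(x1, lam=1.0, n_iters=50):
    """Iterate x_{n+1} = (I + lam_n T)^{-1} x_n with constant lam_n = lam."""
    x = x1
    for _ in range(n_iters):
        x = x / (1.0 + lam)   # resolvent step for T(x) = x
    return x

# Starting from x1 = 4.0, the iterates approach the zero x* = 0 of T.
x_final = ppa(4.0)
```

Each pass multiplies the iterate by 1/(1 + λ) < 1, which is exactly why the PPA converges for this toy operator.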

The "Monotone Inclusion Problem" (MIP) is to obtain x ∈ H such that

0 ∈ (D + M)(x),    (1.3)

where D: H → H is a single-valued mapping and M: H → 2^H is a multi-valued mapping. The MIP (1.3) can be written as the ZPP (1.1) by setting T := D + M.

According to Lions and Mercier [6], the most common way to solve problem (1.3) is the forward-backward splitting method, defined as follows:

x_{n+1} = (I + λM)^{-1}(I − λD) x_n,   n ≥ 1,    (1.4)

where x₁ ∈ H is arbitrarily chosen and λ > 0. In algorithm (1.4), the operator D is called the forward operator and M the backward operator. For more details about the forward-backward splitting method for solving the MIP (1.3), the reader is directed to [5, 7, 14].
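The forward (explicit) and backward (resolvent) steps of (1.4) can be sketched on H = ℝ with illustrative operators of our own choosing (not from the paper): D(x) = x − 2 and M(x) = x, whose inclusion 0 ∈ (D + M)(x) has the unique solution x = 1:

```python
def forward_backward(x1, lam=0.5, n_iters=100):
    """x_{n+1} = (I + lam*M)^{-1}(I - lam*D) x_n for the toy problem
    D(x) = x - 2 (forward operator) and M(x) = x (backward operator);
    0 in (D + M)(x) holds exactly at x = 1."""
    x = x1
    for _ in range(n_iters):
        y = x - lam * (x - 2.0)   # forward (explicit) step on D
        x = y / (1.0 + lam)       # backward (resolvent) step on M
    return x
```

For these operators the combined map is an affine contraction with factor (1 − λ)/(1 + λ), so the iterates converge to 1 for any starting point.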

Polyak [11] introduced the inertial extrapolation method to speed up the rate of convergence of the iteration. This is sometimes called the heavy ball method. Many scholars have exploited this notion to combine algorithms with inertial terms to speed up the rate of convergence.

In 2015, Lorenz and Pock [7] studied the MIP (1.3) and proposed the inertial forward-backward algorithm for monotone operators, which combines the heavy ball method and the forward-backward method. The algorithm is defined as:

w_n = x_n + θ_n(x_n − x_{n−1}),
x_{n+1} = (I + λM)^{-1}(I − λD) w_n,   n ≥ 1,    (1.5)

where x₀, x₁ ∈ H are arbitrarily chosen, and D: H → H and M: H → 2^H are single-valued and multi-valued mappings, respectively. The sequence {x_n} generated by algorithm (1.5) converges weakly to a solution of the MIP (1.3) under suitable assumptions.

There are many ways to solve the MIP other than algorithms built on the heavy ball idea. Tseng [15] introduced the modified forward-backward splitting method, a powerful iterative method for solving the monotone inclusion problem (1.3); in short, it is known as Tseng's splitting algorithm. Let C be a closed and convex subset of a real Hilbert space H. Tseng's splitting algorithm is defined as:

y_n = (I + λ_n M)^{-1}(I − λ_n D) x_n,
x_{n+1} = P_C(y_n − λ_n(D y_n − D x_n)),   n ≥ 1,    (1.6)

where x₁ ∈ H is arbitrarily chosen, λ_n is chosen as the largest λ ∈ {δ, δl, δl², …} satisfying λ‖D y_n − D x_n‖ ≤ μ‖x_n − y_n‖ with δ > 0, l ∈ (0,1), μ ∈ (0,1), and P_C is the metric projection onto the closed convex subset C of H.
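Tseng's scheme, including the backtracking choice of λ, can be sketched as follows. The operators below are illustrative assumptions (D(x) = x − 2, M(x) = x on H = ℝ, and C = H so that P_C is the identity), not the paper's setting:

```python
def tseng_splitting(x1, delta=1.0, l=0.5, mu=0.9, n_iters=200):
    """Tseng's splitting for 0 in (D + M)(x) with toy operators
    D(x) = x - 2 and M(x) = x, whose solution is x = 1.
    lam is the largest of delta, delta*l, delta*l^2, ... satisfying
    lam*|D(y) - D(x)| <= mu*|x - y|  (the backtracking line search)."""
    D = lambda x: x - 2.0
    resolvent_M = lambda u, lam: u / (1.0 + lam)   # (I + lam*M)^{-1}
    x = x1
    for _ in range(n_iters):
        lam = delta
        while True:
            y = resolvent_M(x - lam * D(x), lam)   # forward-backward trial
            if lam * abs(D(y) - D(x)) <= mu * abs(x - y):
                break                              # step-size condition met
            lam *= l                               # backtrack
        x = y - lam * (D(y) - D(x))                # Tseng's corrector step
    return x
```

The corrector step `y - lam*(D(y) - D(x))` is what distinguishes Tseng's method from plain forward-backward splitting: it reuses D at both x and y without a second resolvent evaluation.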

Kitkuan and Kumam [5] combined the forward-backward splitting method with the viscosity approximation method for solving the MIP (1.3). The resulting inertial viscosity forward-backward splitting algorithm is defined as:

w_n = x_n + θ_n(x_n − x_{n−1}),
x_{n+1} = α_n δf(x_n) + (1 − α_n)(I + λ_n M)^{-1}(I − λ_n D) w_n,   n ≥ 1,    (1.7)

where x₀, x₁ ∈ H are arbitrarily chosen, f: H → ℝ is a differentiable function whose gradient δf is a contraction with constant ρ ∈ (0,1), and D: H → H and M: H → 2^H are an inverse strongly monotone and a maximal monotone operator, respectively. The sequence generated by algorithm (1.7) converges strongly to a solution of the MIP (1.3) under suitable conditions.

Now we define a new type of monotone inclusion problem known as the "Cayley Inclusion Problem" (CIP): find x* ∈ H such that

0 ∈ (C_λ^M + M)(x*),

where M: H → 2^H is a multi-valued maximal monotone mapping, C_λ^M := 2J_λ^M − I is the single-valued mapping known as the "Cayley operator", and J_λ^M := (I + λM)^{-1} is the "resolvent operator" associated with the maximal monotone mapping M, with λ > 0. The solution set of the CIP is denoted by

Ω := {x ∈ H : 0 ∈ C_λ^M(x) + M(x)}.

Let S: H → H be a nonexpansive mapping. A fixed point of S is a point x ∈ H such that

S(x) = x.

The set of all fixed points of S is denoted by Fix(S) = {x ∈ H : S(x) = x}.

The objective of this article is to propose an algorithm consisting of Tseng's technique with inertial extrapolation, together with the viscosity iterative method, in order to obtain a common solution of the CIP and the fixed point problem in the framework of a real Hilbert space. Moreover, we show that the sequences generated by the algorithm converge strongly to a point in the solution set Σ := Ω ∩ Fix(S). In particular, we give numerical illustrations of the algorithm.

Let H be a real Hilbert space and C a nonempty closed convex subset of H. The weak convergence of {x_n} to x is denoted by x_n ⇀ x as n → ∞, while the strong convergence of {x_n} to x is written as x_n → x as n → ∞. For all x, y, z ∈ H the following relations are valid in inner product spaces:

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩,    (2.1)
‖αx + (1 − α)y‖² = α‖x‖² + (1 − α)‖y‖² − α(1 − α)‖x − y‖²,    (2.2)
‖αx + βy + γz‖² = α‖x‖² + β‖y‖² + γ‖z‖² − αβ‖x − y‖² − αγ‖x − z‖² − βγ‖y − z‖²    (2.3)

for any α, β, γ ∈ [0,1] such that α + β + γ = 1.
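The three-term identity can be spot-checked numerically on H = ℝ, where the norm is the absolute value. The weights and points below are arbitrary illustrative values (the only constraint is α + β + γ = 1):

```python
# Numeric spot-check of the three-term identity on H = R.
alpha, beta, gamma = 0.2, 0.3, 0.5          # alpha + beta + gamma = 1
x, y, z = 1.7, -0.4, 2.9                    # arbitrary points

lhs = (alpha * x + beta * y + gamma * z) ** 2
rhs = (alpha * x**2 + beta * y**2 + gamma * z**2
       - alpha * beta * (x - y)**2
       - alpha * gamma * (x - z)**2
       - beta * gamma * (y - z)**2)
assert abs(lhs - rhs) < 1e-12               # identity holds exactly
```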

The following definitions are required to obtain the desired results.

Definition 2.1. Let D: H → H be a single-valued mapping. Then D is said to be

(i) nonexpansive if, for all x, y ∈ H,

‖D(x) − D(y)‖ ≤ ‖x − y‖;

(ii) firmly nonexpansive if, for all x, y ∈ H,

‖D(x) − D(y)‖² ≤ ⟨D(x) − D(y), x − y⟩;

(iii) L-Lipschitz continuous if there exists L > 0 such that, for all x, y ∈ H,

‖D(x) − D(y)‖ ≤ L‖x − y‖;

(iv) α-strongly monotone if there exists a constant α > 0 such that, for all x, y ∈ H,

⟨D(x) − D(y), x − y⟩ ≥ α‖x − y‖²;

(v) μ-inverse strongly monotone if there exists a constant μ > 0 such that, for all x, y ∈ H,

⟨D(x) − D(y), x − y⟩ ≥ μ‖D(x) − D(y)‖².

Definition 2.2. Let M: H → 2^H be a multi-valued mapping. Then M is said to be

(i) monotone if, for all x, y ∈ H, u ∈ M(x), v ∈ M(y),

⟨x − y, u − v⟩ ≥ 0;

(ii) strongly monotone if there exists θ > 0 such that, for all x, y ∈ H, u ∈ M(x), v ∈ M(y),

⟨x − y, u − v⟩ ≥ θ‖x − y‖²;

(iii) maximal monotone if M is monotone and (I + λM)(H) = H for all λ > 0, where I is the identity mapping on H.

Definition 2.3. Let M: H → 2^H be a multi-valued mapping. The resolvent operator associated with M is defined as:

J_λ^M(x) = [I + λM]^{-1}(x),   ∀x ∈ H,

where λ > 0 and I is the identity mapping.

Remark 2.3. The resolvent operator J_λ^M has the following properties:

(i) it is single-valued and nonexpansive, i.e.,

‖J_λ^M(x) − J_λ^M(y)‖ ≤ ‖x − y‖,   ∀x, y ∈ H;

(ii) it is 1-inverse strongly monotone, i.e.,

‖J_λ^M(x) − J_λ^M(y)‖² ≤ ⟨x − y, J_λ^M(x) − J_λ^M(y)⟩,   ∀x, y ∈ H.

Definition 2.4. Let M: H → 2^H be a multi-valued mapping and J_λ^M the resolvent operator associated with M. The Cayley operator C_λ^M is defined as:

C_λ^M(x) = [2J_λ^M − I](x),   ∀x ∈ H.

Remark 2.5. Using Remark 2.3(i), it is easily seen that the Cayley operator C_λ^M is 3-Lipschitz continuous. For convenience, we shall denote C_λ^M by C throughout the paper.
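For a concrete instance, take the data of Example 4.1 below: M(x) = x/5 on H = ℝ and λ = 1, so J_λ^M(x) = 5x/6 and C(x) = 2·(5x/6) − x = 2x/3. The sketch below computes both operators and checks the 3-Lipschitz bound of Remark 2.5 numerically (the test points are arbitrary):

```python
# Resolvent and Cayley operator for M(x) = x/5 on H = R with lam = 1
# (the concrete data of Example 4.1).
lam = 1.0
J = lambda x: x / (1.0 + lam / 5.0)   # J(x) = (I + lam*M)^{-1}(x) = 5x/6
C = lambda x: 2.0 * J(x) - x          # Cayley operator C(x) = 2J(x) - x = 2x/3

# Remark 2.5 asserts C is (at most) 3-Lipschitz; here its modulus is only 2/3.
x, y = 4.0, -1.5
assert abs(C(x) - C(y)) <= 3.0 * abs(x - y)
```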

Lemma 2.6. Let M: H → 2^H be a maximal monotone mapping and B: H → H a Lipschitz continuous monotone mapping. Then B + M: H → 2^H is a maximal monotone mapping.

Lemma 2.7. Let M: H → 2^H be a set-valued maximal monotone mapping and λ > 0. Then the following statements hold:

  • 1. J_λ^M is a single-valued and firmly nonexpansive mapping;

  • 2. Fix(J_λ^M) = M^{-1}(0);

  • 3. ‖x − J_λ^M(x)‖ ≤ 2‖x − J_γ^M(x)‖ for 0 < λ ≤ γ and x ∈ H;

  • 4. (I − J_λ^M) is a firmly nonexpansive mapping;

  • 5. Suppose that M^{-1}(0) ≠ ∅. Then

    ‖J_λ^M(x) − z‖² ≤ ‖x − z‖² − ‖J_λ^M(x) − x‖²  for all x ∈ H and z ∈ M^{-1}(0)

    and

    ⟨x − J_λ^M(x), J_λ^M(x) − z⟩ ≥ 0  for all x ∈ H and z ∈ M^{-1}(0).

Lemma 2.8. Let {a_n} and {c_n} be nonnegative sequences of real numbers such that Σ_{n=1}^∞ c_n < ∞, and let {b_n} be a sequence of real numbers such that lim sup_{n→∞} b_n ≤ 0. If there exists n₀ ∈ ℕ such that, for any n ≥ n₀,

a_{n+1} ≤ (1 − α_n)a_n + α_n b_n + c_n,

where {α_n} is a sequence in (0,1) such that Σ_{n=0}^∞ α_n = ∞, then lim_{n→∞} a_n = 0.

Lemma 2.9. ([4]) Let C be a closed convex subset of H and S: C → C a nonexpansive mapping with Fix(S) ≠ ∅. If there exists {x_n} in C satisfying x_n ⇀ z and x_n − Sx_n → 0, then z = S(z).

Lemma 2.10. ([8]) Let {γ_n} be a sequence of real numbers. Suppose that there is a subsequence {γ_{n_j}}_{j≥0} of {γ_n} satisfying γ_{n_j} < γ_{n_j+1} for each j ≥ 0. Let {ϕ(n)}_{n≥n*} be the sequence of integers defined by

ϕ(n) := max{k ≤ n : γ_k < γ_{k+1}}.

Then {ϕ(n)}_{n≥n*} is nondecreasing with lim_{n→∞} ϕ(n) = ∞. Moreover, for each n ≥ n*, we have γ_{ϕ(n)} ≤ γ_{ϕ(n)+1} and γ_n ≤ γ_{ϕ(n)+1}.

In this section, we present an inertial Tseng type algorithm for solving our problem. For the convergence analysis of the proposed method, we consider the following assumptions in order to accomplish our goal.

Assumption 1. H is a real Hilbert space, C: H → H is L-Lipschitz continuous and monotone, and M: H → 2^H is a maximal monotone operator.

Assumption 2. Σ := Ω ∩ Fix(S) is nonempty.

Assumption 3. {θ_n} ⊂ [0, θ) for some θ > 0, {β_n} ⊂ (β*, β′) ⊂ (0, 1 − α_n) for some β* > 0 and β′ > 0, and {α_n} ⊂ (0,1) satisfies lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞.

Assumption 4. f: H → H is a ρ-contractive mapping.

Next the algorithm is presented.

Lemma 3.1. Assume that Assumptions 1-4 hold. Then the sequence {λ_n} in the Algorithm is nonincreasing and converges to some λ with min{λ₁, μ/L} ≤ λ.

Proof. See [17, Lemma 3.1].

Algorithm 3.2. Inertial-viscosity Tseng type algorithm

Initialization: Given λ₁ > 0 and μ ∈ (0,1), select arbitrary elements x₀, x₁ ∈ H and set n := 1.

Iterative Steps: Construct {x_n} by using the following steps:

Step 1. Set

w_n = x_n + θ_n(x_n − x_{n−1})

and compute

y_n = J_{λ_n}^M(I − λ_n C)w_n.

If w_n = y_n, then stop: w_n ∈ Σ. Otherwise, go to Step 2.

Step 2. Compute

z_n = y_n − λ_n(C y_n − C w_n).

Step 3. Compute

x_{n+1} = α_n f(x_n) + (1 − α_n − β_n)x_n + β_n S(z_n)

and update

λ_{n+1} = min{ μ‖w_n − y_n‖/‖C w_n − C y_n‖, λ_n }  if C w_n ≠ C y_n;  λ_{n+1} = λ_n  otherwise.

Replace n by n+1 and then repeat Step 1.

Lemma 3.3. Let q ∈ Σ. As given in the Algorithm, together with all four Assumptions, the following inequalities hold:

‖z_n − q‖² ≤ ‖w_n − q‖² − (1 − μ²λ_n²/λ_{n+1}²)‖w_n − y_n‖²    (3.1)

and

‖z_n − y_n‖ ≤ (μλ_n/λ_{n+1})‖w_n − y_n‖.    (3.2)

Proof. In the same manner as [3, Lemma 6], we obtain that inequalities (3.1) and (3.2) hold.

Lemma 3.4. Suppose that lim_{n→∞} ‖w_n − y_n‖ = 0. If there exists a weakly convergent subsequence {w_{n_j}} of {w_n}, then under Assumptions 1-4 the weak limit of {w_{n_j}} belongs to Σ.

Proof. The proof is similar to the proof of [3, Lemma 7].

With the above results we are now ready for the main convergence theorem.

Theorem 3.5. Suppose that lim_{n→∞} (θ_n/α_n)‖x_n − x_{n−1}‖ = 0. Then, under Assumptions 1-4, x_n → q as n → ∞, where q = P_Σ f(q).

Proof. For the sake of simplicity, we divide the proof into four claims.

Claim 1. {x_n} is a bounded sequence.

We observe that lim_{n→∞}(1 − μ²λ_n²/λ_{n+1}²) = 1 − μ² > 0, so the coefficient of ‖w_n − y_n‖² in (3.1) is eventually nonnegative. Let q ∈ Σ. Then, thanks to Lemma 3.3, we get

‖z_n − q‖ ≤ ‖w_n − q‖.    (3.3)

Also, from the definition of y_n and the nonexpansiveness of J_{λ_n}^M, we have

‖y_n − q‖ = ‖J_{λ_n}^M(I − λ_n C)w_n − q‖ ≤ ‖w_n − q‖.

Since the sequence {(θ_n/α_n)‖x_n − x_{n−1}‖} converges to 0, there exists a constant M₁ > 0 such that, for all n,

(θ_n/α_n)‖x_n − x_{n−1}‖ ≤ M₁.    (3.4)

From the definition of w_n and (3.3), we obtain

‖z_n − q‖ ≤ ‖w_n − q‖ = ‖x_n + θ_n(x_n − x_{n−1}) − q‖ ≤ ‖x_n − q‖ + θ_n‖x_n − x_{n−1}‖ = ‖x_n − q‖ + α_n(θ_n/α_n)‖x_n − x_{n−1}‖ ≤ ‖x_n − q‖ + α_n M₁.    (3.5)

By Assumption 4, the nonexpansiveness of S, and (3.5), the following relation is obtained:

‖x_{n+1} − q‖ = ‖α_n f(x_n) + (1 − α_n − β_n)x_n + β_n S(z_n) − q‖
= ‖α_n(f(x_n) − q) + (1 − α_n − β_n)(x_n − q) + β_n(S z_n − q)‖
≤ α_n‖f(x_n) − q‖ + (1 − α_n − β_n)‖x_n − q‖ + β_n‖S z_n − q‖
≤ α_n‖f(x_n) − f(q)‖ + α_n‖f(q) − q‖ + (1 − α_n − β_n)‖x_n − q‖ + β_n‖z_n − q‖
≤ α_n ρ‖x_n − q‖ + α_n‖f(q) − q‖ + (1 − α_n)‖x_n − q‖ + α_n β_n M₁
≤ [1 − α_n(1 − ρ)]‖x_n − q‖ + α_n(1 − ρ)·(‖f(q) − q‖ + M₁)/(1 − ρ)
≤ max{‖x_n − q‖, (‖f(q) − q‖ + M₁)/(1 − ρ)}
≤ ⋯ ≤ max{‖x₁ − q‖, (‖f(q) − q‖ + M₁)/(1 − ρ)}.

Consequently, the sequence {x_n} is bounded, and so is {f(x_n)}. Since Σ is a closed and convex set, P_Σ f is a ρ-contractive mapping, so by the Banach fixed point theorem there is a unique q ∈ Σ with q = P_Σ f(q). Moreover, by the characterization of the metric projection, for any p ∈ Σ,

⟨f(q) − q, p − q⟩ ≤ 0.

Claim 2. Set γ_n := ‖x_n − q‖². There is M₀ > 0 such that

β_n(1 − α_n − β_n)‖x_n − S z_n‖² ≤ γ_n − γ_{n+1} + α_n(‖f(x_n) − q‖² + M₀).

Applying (3.5), we have

‖z_n − q‖² ≤ (‖x_n − q‖ + α_n M₁)² = γ_n + α_n(2M₁‖x_n − q‖ + α_n M₁²) ≤ γ_n + α_n M₀    (3.6)

for some M₀ > 0. It follows from the assumption on f, (2.3), and (3.6) that

γ_{n+1} = ‖α_n(f(x_n) − q) + (1 − α_n − β_n)(x_n − q) + β_n(S z_n − q)‖²
= α_n‖f(x_n) − q‖² + (1 − α_n − β_n)γ_n + β_n‖S z_n − q‖² − α_n(1 − α_n − β_n)‖f(x_n) − x_n‖² − β_n(1 − α_n − β_n)‖x_n − S z_n‖² − α_n β_n‖f(x_n) − S z_n‖²
≤ α_n‖f(x_n) − q‖² + (1 − α_n − β_n)γ_n + β_n‖z_n − q‖² − β_n(1 − α_n − β_n)‖x_n − S z_n‖²
≤ α_n‖f(x_n) − q‖² + (1 − α_n)γ_n + α_n β_n M₀ − β_n(1 − α_n − β_n)‖x_n − S z_n‖²
≤ γ_n + α_n(‖f(x_n) − q‖² + M₀) − β_n(1 − α_n − β_n)‖x_n − S z_n‖².

Therefore, Claim 2 is obtained.

Claim 3. There is M > 0 such that

γ_{n+1} ≤ [1 − α_n(1 − ρ)]γ_n + α_n(1 − ρ)[ (3M/(1 − ρ))(θ_n/α_n)‖x_n − x_{n−1}‖ + (2M/(1 − ρ))‖x_n − S z_n‖ + (2/(1 − ρ))⟨f(q) − q, x_{n+1} − q⟩ ].

Indeed, set t_n := (1 − β_n)x_n + β_n S z_n. From inequality (3.3), the nonexpansiveness of S, and the definition of w_n, we get

‖t_n − q‖ ≤ (1 − β_n)‖x_n − q‖ + β_n‖S z_n − q‖ ≤ (1 − β_n)‖x_n − q‖ + β_n‖z_n − q‖ ≤ (1 − β_n)‖x_n − q‖ + β_n‖w_n − q‖ ≤ ‖x_n − q‖ + β_n θ_n‖x_n − x_{n−1}‖    (3.8)

and

‖x_n − t_n‖ = β_n‖x_n − S z_n‖.    (3.9)

Hence, noting that x_{n+1} = t_n + α_n(f(x_n) − x_n), from the assumption on f and from (2.1), (3.8), and (3.9), we obtain

γ_{n+1} = ‖(1 − α_n)(t_n − q) + α_n(f(x_n) − f(q)) − α_n(x_n − t_n) − α_n(q − f(q))‖²
≤ ‖(1 − α_n)(t_n − q) + α_n(f(x_n) − f(q))‖² + 2α_n⟨t_n − x_n, x_{n+1} − q⟩ + 2α_n⟨f(q) − q, x_{n+1} − q⟩
≤ (1 − α_n)‖t_n − q‖² + α_n‖f(x_n) − f(q)‖² + 2α_n‖t_n − x_n‖‖x_{n+1} − q‖ + 2α_n⟨f(q) − q, x_{n+1} − q⟩
≤ (1 − α_n)(‖x_n − q‖ + β_n θ_n‖x_n − x_{n−1}‖)² + α_n ρ²‖x_n − q‖² + 2α_n β_n‖x_n − S z_n‖‖x_{n+1} − q‖ + 2α_n⟨f(q) − q, x_{n+1} − q⟩
≤ (1 − α_n)γ_n + 2θ_n‖x_n − q‖‖x_n − x_{n−1}‖ + θ_n²‖x_n − x_{n−1}‖² + α_n ρ γ_n + 2α_n β_n‖x_n − S z_n‖‖x_{n+1} − q‖ + 2α_n⟨f(q) − q, x_{n+1} − q⟩
≤ [1 − α_n(1 − ρ)]γ_n + θ_n‖x_n − x_{n−1}‖(2‖x_n − q‖ + θ_n‖x_n − x_{n−1}‖) + 2α_n β_n‖x_n − S z_n‖‖x_{n+1} − q‖ + 2α_n⟨f(q) − q, x_{n+1} − q⟩
≤ [1 − α_n(1 − ρ)]γ_n + 3Mθ_n‖x_n − x_{n−1}‖ + 2Mα_n‖x_n − S z_n‖ + 2α_n⟨f(q) − q, x_{n+1} − q⟩
= [1 − α_n(1 − ρ)]γ_n + α_n(1 − ρ)[ (3M/(1 − ρ))(θ_n/α_n)‖x_n − x_{n−1}‖ + (2M/(1 − ρ))‖x_n − S z_n‖ + (2/(1 − ρ))⟨f(q) − q, x_{n+1} − q⟩ ]

for M := sup_n{‖x_n − q‖, θ‖x_n − x_{n−1}‖, ‖x_{n+1} − q‖} > 0.

Recall that our task is to show that x_n → q, which is equivalent to showing that γ_n → 0 as n → ∞.

Claim 4. γ_n → 0 as n → ∞.

Consider the following two cases:

Case (i). We can find N ∈ ℕ satisfying γ_{n+1} ≤ γ_n for each n ≥ N.

Since each γ_n is nonnegative and the sequence is eventually nonincreasing, {γ_n} is convergent. Since lim_{n→∞} α_n = 0 and {β_n} ⊂ (β*, β′) ⊂ (0,1), Claim 2 yields

lim_{n→∞} ‖x_n − S z_n‖ = 0.    (3.10)

Moreover, we immediately get

lim_{n→∞} ‖x_n − w_n‖ = lim_{n→∞} α_n (θ_n/α_n)‖x_n − x_{n−1}‖ = 0.    (3.11)

In addition, from the definition of z_n and the triangle inequality, we obtain the following inequalities:

‖z_n − w_n‖ ≤ ‖z_n − x_n‖ + ‖x_n − w_n‖

and

‖w_n − y_n‖ ≤ ‖w_n − z_n‖ + ‖z_n − y_n‖.

It follows from inequality (3.2) that

(1 − μλ_n/λ_{n+1})‖w_n − y_n‖ ≤ ‖z_n − x_n‖ + ‖x_n − w_n‖.

Since lim_{n→∞}(1 − μλ_n/λ_{n+1}) = 1 − μ > 0, together with (3.10) and (3.11) this gives

lim_{n→∞} ‖w_n − y_n‖ = 0.    (3.12)

Note that, for each n,

‖x_{n+1} − x_n‖ ≤ ‖x_{n+1} − z_n‖ + ‖z_n − x_n‖ ≤ α_n‖f(x_n) − x_n‖ + (2 − β_n)‖x_n − z_n‖.    (3.13)

Consequently, since lim_{n→∞} α_n = 0, (3.13) gives lim_{n→∞}‖x_{n+1} − x_n‖ = 0. Next observe that, since {x_n} is bounded, there is z ∈ H such that x_{n_k} ⇀ z for some subsequence {x_{n_k}} of {x_n}; by (3.11), w_{n_k} ⇀ z as well. Then Lemma 3.4 together with (3.12) implies that z ∈ Σ. As a result, by the definition of q, it is straightforward to show that

lim sup_{n→∞} ⟨f(q) − q, x_n − q⟩ = lim_{k→∞} ⟨f(q) − q, x_{n_k} − q⟩ = ⟨f(q) − q, z − q⟩ ≤ 0.

Since lim_{n→∞}‖x_{n+1} − x_n‖ = 0, we obtain

lim sup_{n→∞} ⟨f(q) − q, x_{n+1} − q⟩ ≤ lim sup_{n→∞} ⟨f(q) − q, x_{n+1} − x_n⟩ + lim sup_{n→∞} ⟨f(q) − q, x_n − q⟩ ≤ 0.

Applying Lemma 2.8 to the inequality from Claim 3, we conclude that lim_{n→∞} γ_n = 0.

Case (ii). We can find a subsequence {γ_{n_j}} satisfying γ_{n_j} < γ_{n_j+1} for all j ≥ 0.

According to Lemma 2.10, we have γ_{ϕ(n)} ≤ γ_{ϕ(n)+1}, where ϕ: ℕ → ℕ is the map defined in Lemma 2.10. This implies, by Claim 2, that

β_{ϕ(n)}(1 − α_{ϕ(n)} − β_{ϕ(n)})‖x_{ϕ(n)} − S z_{ϕ(n)}‖² ≤ γ_{ϕ(n)} − γ_{ϕ(n)+1} + α_{ϕ(n)}(‖f(x_{ϕ(n)}) − q‖² + M₀).

Similarly to Case (i), since α_n → 0 as n → ∞, we obtain

lim_{n→∞} ‖x_{ϕ(n)} − S z_{ϕ(n)}‖ = 0.

Furthermore, an argument similar to that in Case (i) shows that

lim sup_{n→∞} ⟨f(q) − q, x_{ϕ(n)+1} − q⟩ ≤ 0.

Finally, from the inequality γ_{ϕ(n)} ≤ γ_{ϕ(n)+1} and Claim 3, we obtain

γ_{ϕ(n)+1} ≤ [1 − α_{ϕ(n)}(1 − ρ)]γ_{ϕ(n)} + α_{ϕ(n)}(1 − ρ)[ (3M/(1 − ρ))(θ_{ϕ(n)}/α_{ϕ(n)})‖x_{ϕ(n)} − x_{ϕ(n)−1}‖ + (2M/(1 − ρ))‖x_{ϕ(n)} − S z_{ϕ(n)}‖ + (2/(1 − ρ))⟨f(q) − q, x_{ϕ(n)+1} − q⟩ ].

Some simple calculations yield

γ_{ϕ(n)+1} ≤ (3M/(1 − ρ))(θ_{ϕ(n)}/α_{ϕ(n)})‖x_{ϕ(n)} − x_{ϕ(n)−1}‖ + (2M/(1 − ρ))‖x_{ϕ(n)} − S z_{ϕ(n)}‖ + (2/(1 − ρ))⟨f(q) − q, x_{ϕ(n)+1} − q⟩.

From this it follows that lim sup_{n→∞} γ_{ϕ(n)+1} ≤ 0; thus lim_{n→∞} γ_{ϕ(n)+1} = 0. In addition, by Lemma 2.10,

lim_{n→∞} γ_n ≤ lim_{n→∞} γ_{ϕ(n)+1} = 0.

Hence, x_n converges strongly to q. This proves our theorem.

Example 4.1. Let H = ℝ, the set of real numbers, let f: ℝ → ℝ be the contraction mapping f(x) = x/10, and let M: ℝ → 2^ℝ be the set-valued map M(x) = {x/5}. Then, for λ = 1, the resolvent operator J_λ^M and the Cayley operator C_λ^M are

J_λ^M(x) = [I + λM]^{-1}(x) = 5x/6,   C_λ^M(x) = 2J_λ^M(x) − x = 2x/3.

Let S: ℝ → ℝ be defined as S(x) = x, and take α_n = 1/n, β_n = 1/(2n), λ_n = 1/(n+3), and θ_n = 1/(n+1)².

All the assumptions of Theorem 3.5 are satisfied, and Algorithm 3.2 reduces to

w_n = x_n + θ_n(x_n − x_{n−1}),
y_n = J_{λ_n}^M(I − λ_n C)w_n,
z_n = y_n − λ_n(C y_n − C w_n),
x_{n+1} = (1/n)f(x_n) + ((2n − 3)/(2n))x_n + (1/(2n))S(z_n).

The iterative sequence {x_n} generated by the above scheme converges strongly to q = 0.

All codes were written in MATLAB R2021a. We tested the initial points x₀ = x₁ ∈ {1, 3.5, 5.0} and found that the sequence {x_n} converges to the solution of the problem in each case. The convergence behavior is depicted in Figure 1.
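For readers without MATLAB, the reduced scheme of Example 4.1 can be sketched in Python. This is a minimal reimplementation under the example's stated data, not the authors' original code, and the iteration count is an arbitrary choice:

```python
def example_4_1(x0=1.0, x1=1.0, n_iters=2000):
    """Algorithm 3.2 specialized to Example 4.1: H = R, M(x) = x/5,
    f(x) = x/10, S(x) = x, Cayley operator C(x) = 2x/3 (lam = 1), and
    alpha_n = 1/n, beta_n = 1/(2n), lam_n = 1/(n+3), theta_n = 1/(n+1)^2."""
    C = lambda x: 2.0 * x / 3.0                    # Cayley operator
    f = lambda x: x / 10.0                         # contraction mapping
    S = lambda x: x                                # nonexpansive mapping
    x_prev, x = x0, x1
    for n in range(1, n_iters + 1):
        theta = 1.0 / (n + 1) ** 2
        alpha, beta, lam = 1.0 / n, 1.0 / (2 * n), 1.0 / (n + 3)
        w = x + theta * (x - x_prev)               # inertial extrapolation
        y = (w - lam * C(w)) / (1.0 + lam / 5.0)   # y = J_{lam}^M (I - lam*C) w
        z = y - lam * (C(y) - C(w))                # Tseng corrector step
        x_prev, x = x, alpha * f(x) + (1.0 - alpha - beta) * x + beta * S(z)
    return x
```

Here 0 ∈ C(x) + M(x) forces 2x/3 + x/5 = 0, i.e. x = 0, and Fix(S) = ℝ, so Σ = {0}; the iterates indeed drift toward 0 for each tested starting point.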

We conclude that, by combining Tseng's technique with inertial extrapolation, the algorithm obtained for the Cayley inclusion problem and the fixed point problem is feasible and converges to a point in the solution set Σ of our problem in the framework of a real Hilbert space. The numerical illustration indicates that the convergence of the suggested algorithm is faster than that of the previously known algorithm in [2].

  1. P. Cholamjiak and S. Suantai, Viscosity approximation methods for a nonexpansive semigroup in Banach spaces with gauge functions, J. Glob. Optim., 54(1)(2012), 185-197.
  2. A. H. Dar, M. K. Ahmad, J. Iqbal and W. A. Mir, Algorithm of common solutions to the Cayley inclusion and fixed point problems, Kyungpook Math. J., 61(2)(2021), 257-267.
  3. A. Gibali and D. V. Thong, Tseng type methods for solving inclusion problems and its applications, Calcolo, 55(2018), 1-22.
  4. K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, Cambridge University Press, Cambridge, 1990.
  5. D. Kitkuan, P. Kumam, J. Martínez-Moreno and K. Sitthithakerngkiet, Inertial viscosity forward-backward splitting algorithm for monotone inclusions and its application to image restoration problems, Int. J. Comput. Math., 97(1-2)(2020), 482-497.
  6. P. L. Lions and B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16(1979), 964-979.
  7. D. A. Lorenz and T. Pock, An inertial forward-backward algorithm for monotone inclusions, J. Math. Imaging Vis., 51(2015), 311-325.
  8. P. E. Maingé, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16(2008), 899-912.
  9. B. Martinet, Régularisation d'inéquations variationnelles par approximations successives, Rev. Française Informat. Recherche Opérationnelle, 4(1970), 154-158.
  10. O. Nevanlinna and S. Reich, Strong convergence of contraction semigroups and of iterative methods for accretive operators in Banach spaces, Israel J. Math., 32(1979), 44-58.
  11. B. T. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Comput. Math. Math. Phys., 4(1964), 1-17.
  12. S. Reich, Extension problems for accretive sets in Banach spaces, J. Functional Analysis, 26(4)(1977), 378-395.
  13. R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14(5)(1976), 877-898.
  14. D. V. Thong and P. Cholamjiak, Strong convergence of a forward-backward splitting method with a new step size for solving monotone inclusions, Comput. Appl. Math., 38(2)(2019), 1-16.
  15. P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38(2)(2000), 431-446.
  16. H. K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc. (2), 66(1)(2002), 240-256.
  17. J. Yang and H. Liu, Strong convergence result for solving monotone variational inequalities in Hilbert space, Numer. Algorithms, 80(3)(2019), 741-752.