eISSN 0454-8124
pISSN 1225-6951


Kyungpook Mathematical Journal 2022; 62(1): 89-106

Published online March 31, 2022

An Iterative Method for Equilibrium and Constrained Convex Minimization Problems

Maryam Yazdi, Mohammad Mehdi Shabani, Saeed Hashemi Sababe*

Young Researchers and Elite Club, Malard Branch, Islamic Azad University, Malard, Iran
e-mail : msh_yazdi@yahoo.com

Faculty of sciences, Imam Ali University, Tehran, Iran

Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Canada and Young Researchers and Elite Club, Malard Branch, Islamic Azad University, Malard, Iran
e-mail : s.hashemi@ualberta.ca, hashemi_1365@yahoo.com

Received: February 1, 2021; Accepted: November 8, 2021

We are concerned with finding a common solution to an equilibrium problem associated with a bifunction, and a constrained convex minimization problem. We propose an iterative fixed point algorithm and prove that the algorithm generates a sequence strongly convergent to a common solution. The common solution is identified as the unique solution of a certain variational inequality.

Keywords: Nonexpansive mapping, equilibrium problem, fixed point, convex minimization, averaged mapping, iterative method, variational inequality

Consider two problems in a Hilbert space: a constrained convex optimization problem (CCO) and an equilibrium problem (EP) associated with a bifunction satisfying certain properties. It is known that a CCO can be solved by the gradient-projection algorithm (GPA), and that an EP is equivalent to a fixed point problem. Therefore, both problems can be solved by fixed point algorithms.

Let $H$ be a real Hilbert space and $C$ a nonempty closed convex subset of $H$. A self-mapping $T$ of $C$ is said to be nonexpansive if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in C$. A self-mapping $f$ of $C$ is said to be a $\kappa$-contraction if $\|f(x)-f(y)\|\le\kappa\|x-y\|$ for all $x,y\in C$, where $\kappa\in[0,1)$ is a constant. We use $\mathrm{Fix}(T)$ to denote the set of fixed points of $T$; i.e., $\mathrm{Fix}(T)=\{x\in C: Tx=x\}$.

Recall that the equilibrium problem (EP) associated to a bifunction $\phi:C\times C\to\mathbb{R}$ is to find a point $u\in C$ with the property

$$\phi(u,v)\ge 0,\quad \forall v\in C. \tag{1.1}$$

The set of solutions of EP (1.1) is denoted by EP(ϕ).

If $\phi$ is of the form $\phi(u,v)=\langle Au, v-u\rangle$ for all $u,v\in C$, where $A:C\to H$ is a mapping, then $u\in EP(\phi)$ if and only if $u\in C$ is a solution to the variational inequality (VI):

$$\langle Au, v-u\rangle\ge 0,\quad \forall v\in C.$$

Consequently, EP (1.1) includes, as special cases, numerous problems from various areas such as physics, optimization and economics, and has received a lot of attention; see, e.g., [7, 8, 5, 9, 12, 13, 14, 16, 17, 18, 19, 22, 28] and the references therein.

In 2010, Tian [20] considered the following iterative method:

$$x_{n+1}=\alpha_n\gamma f(x_n)+(I-\mu\lambda_nF)Tx_n,\quad n\ge 0, \tag{1.2}$$

where F is Lipschitz and strongly monotone and γ,μ>0 are some constants. He proved that if the parameters {αn} and {λn} satisfy certain appropriate conditions, the sequence {xn} generated by (1.2) converges strongly to the unique solution x*∈ Fix(T) of the variational inequality

$$\langle(\gamma f-\mu F)x^*,\ x-x^*\rangle\le 0,\quad \text{for all } x\in\mathrm{Fix}(T).$$

Consider the constrained convex minimization problem:

$$\text{Minimize } \{g(x): x\in C\}, \tag{1.3}$$

where $g:C\to\mathbb{R}$ is a real-valued convex function. We denote by $U$ the set of solutions of (1.3). It is well known that the gradient-projection algorithm (GPA) can be used to solve (1.3). If $g$ is continuously differentiable, then the GPA generates a sequence $\{x_n\}$ via the recursive formula:

$$x_{n+1}=P_C(x_n-\lambda_n\nabla g(x_n)),\quad n\ge 0, \tag{1.4}$$

where the initial guess $x_0\in C$ is chosen arbitrarily, and the parameters $\{\lambda_n\}$ are positive real numbers satisfying certain conditions. The convergence of the GPA (1.4) depends on the behavior of the gradient $\nabla g$. As a matter of fact, it is known that if $\nabla g$ is $\alpha$-strongly monotone and $L$-Lipschitzian with constants $\alpha>0$ and $L>0$, then the operator

$$W_\lambda:=P_C(I-\lambda\nabla g) \tag{1.5}$$

is a contraction for $0<\lambda<2\alpha/L^2$. Consequently, the sequence $\{x_n\}$ generated by the GPA (1.4) converges in norm to the unique minimizer of (1.3) provided $(\lambda_n)$ is contained in a compact subset of $(0,2\alpha/L^2)$; in particular, if $\lambda_n\to\lambda\in(0,2\alpha/L^2)$.

However, if the gradient $\nabla g$ is not strongly monotone, the operator $W_\lambda$ is no longer contractive (in general). As a result, the GPA (1.4) can fail to converge strongly in an infinite-dimensional Hilbert space (see a counterexample in [23]). Nevertheless, if $\nabla g$ is Lipschitzian, the GPA (1.4) still converges in the weak topology.
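In the strongly monotone case, the contraction property of $W_\lambda$ and the convergence of the GPA (1.4) can be illustrated numerically. The following sketch is our own toy example, not taken from the paper: it runs the GPA for $g(x)=(x-3)^2$ over $C=[0,2]$, where $\nabla g(x)=2(x-3)$ is $2$-strongly monotone and $2$-Lipschitz, so any fixed step $\lambda\in(0,2\alpha/L^2)=(0,1)$ works.

```python
# Illustrative sketch (not from the paper): GPA (1.4) for g(x) = (x - 3)^2
# on C = [0, 2]. Here grad g(x) = 2(x - 3) has alpha = L = 2, so
# P_C(I - lam*grad g) is a contraction for lam in (0, 1).

def project(x, lo=0.0, hi=2.0):
    """Metric projection onto the interval C = [lo, hi]."""
    return min(max(x, lo), hi)

def gpa(x0, lam=0.5, iters=100):
    x = x0
    for _ in range(iters):
        # x_{n+1} = P_C(x_n - lam * grad g(x_n))
        x = project(x - lam * 2.0 * (x - 3.0))
    return x

# The constrained minimizer of (x - 3)^2 over [0, 2] is x* = 2.
print(gpa(x0=0.0))  # prints 2.0
```

With this step size the iteration reaches the constrained minimizer $x^*=2$ after a single projection.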

In 2012, Tian and Liu [21] studied a composite iterative scheme by the viscosity approximation method [15, 26] for finding a common solution of an equilibrium problem and a constrained convex minimization problem:

$$\begin{cases}\phi(u_n,y)+\dfrac{1}{r_n}\langle y-u_n,\ u_n-x_n\rangle\ge 0,\quad \forall y\in C,\\[2mm] x_{n+1}=\alpha_nf(x_n)+(1-\alpha_n)T_nu_n,\quad n\ge 1,\end{cases} \tag{1.6}$$

where $\phi:C\times C\to\mathbb{R}$ is a bifunction, $\nabla g$ is $L$-Lipschitzian with $L\ge 0$ such that $U\cap EP(\phi)\ne\emptyset$, $f$ is a contraction, $x_1\in C$, $\{\alpha_n\}\subset(0,1)$, $\{r_n\}\subset(0,\infty)$, $u_n=Q_{r_n}x_n$, $T_n$ is determined by the relation $P_C(I-\lambda_n\nabla g)=s_nI+(1-s_n)T_n$, $s_n=\frac{2-\lambda_nL}{4}$ and $\{\lambda_n\}\subset(0,\frac{2}{L})$. They proved that the sequence $\{x_n\}$, generated by (1.6), converges strongly to a point in $U\cap EP(\phi)$ under certain conditions.

In this paper, motivated by the above results, we will propose an iterative fixed point algorithm to find a point which is a common solution to an equilibrium problem associated with a bifunction and a constrained convex minimization problem. We will prove strong convergence of the algorithm under certain conditions and identify the limit of the sequence generated by the algorithm as the unique solution of some variational inequality.

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Weak and strong convergence are denoted by the symbols $\rightharpoonup$ and $\to$, respectively. In a real Hilbert space $H$, we have the identity

$$\|\lambda x+(1-\lambda)y\|^2=\lambda\|x\|^2+(1-\lambda)\|y\|^2-\lambda(1-\lambda)\|x-y\|^2$$

for all $x,y\in H$ and $\lambda\in\mathbb{R}$. Let $C$ be a nonempty closed convex subset of $H$. Then the (nearest point or metric) projection onto $C$ is defined by

$$P_Cx:=\arg\min_{y\in C}\|x-y\|^2,\quad x\in H.$$

Note that $P_C$ is nonexpansive and characterized by the inequality (for $z\in C$)

$$z=P_Cx \iff \langle x-z,\ z-y\rangle\ge 0,\quad \forall y\in C.$$
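For intuition, this characterization can be checked numerically in the simplest case $C=[0,2]\subset\mathbb{R}$. The snippet below is an illustrative sketch (our own example, not part of the paper) verifying $(x-z)(z-y)\ge 0$ for sampled $y\in C$:

```python
# Verify the variational characterization of the metric projection on C = [0, 2]:
# z = P_C(x) if and only if (x - z)(z - y) >= 0 for all y in C.

def project(x, lo=0.0, hi=2.0):
    return min(max(x, lo), hi)

x = 5.0
z = project(x)  # z = 2.0, the nearest point of [0, 2] to 5.0
ys = [0.0, 0.5, 1.0, 1.5, 2.0]  # sample points of C
assert all((x - z) * (z - y) >= 0 for y in ys)
```

The inequality says the vector $x-z$ makes a nonobtuse angle with $z-y$ for every $y\in C$, which in one dimension means $z$ is the endpoint of the interval closest to $x$ whenever $x\notin C$.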

The following inequality is convenient in applications. Recall that for all x,y in a Hilbert space H we have

$$\|x+y\|^2\le\|x\|^2+2\langle y,\ x+y\rangle.$$

Next, given a Hilbert space $H$ and a function $\varphi:H\to\mathbb{R}$, we recall that $\varphi$ is weakly lower semicontinuous (l.s.c.) if it is l.s.c. with respect to the weak topology of $H$; that is, $\varphi(\hat u)\le\liminf_n\varphi(u_n)$ whenever $u_n$ converges weakly to $\hat u$.

Lemma 2.1. ([10]) Let $H$ be a real Hilbert space, $C$ a closed convex subset of $H$ and $T:C\to C$ a nonexpansive mapping with $\mathrm{Fix}(T)\ne\emptyset$. If $\{x_n\}$ is a sequence in $C$ such that $x_n\rightharpoonup x$ and $(I-T)x_n\to y$, then $(I-T)x=y$. [This is known as the demiclosedness principle for nonexpansive mappings.]

Let $\phi:C\times C\to\mathbb{R}$ be a bifunction. Throughout the paper we always assume that $\phi$ satisfies the following (standard) conditions [2]:

(A1) $\phi(x,x)=0$ for all $x\in C$;

(A2) $\phi$ is monotone, i.e., $\phi(x,y)+\phi(y,x)\le 0$ for all $x,y\in C$;

(A3) for each $x,y,z\in C$, $\lim_{t\downarrow 0}\phi(tz+(1-t)x,y)\le\phi(x,y)$;

(A4) for each $x\in C$, the mapping $y\mapsto\phi(x,y)$ is convex and weakly lower semicontinuous (l.s.c.).

Under these conditions, it is easy to see that the solution set EP(ϕ) of the equilibrium problem (1.1) is closed and convex.

Following Combettes and Hirstoaga [6], we can define, for each fixed $r>0$, a mapping $Q_r:H\to C$ by the equation

$$Q_rx=z \tag{2.1}$$

for $x\in H$, where $z\in C$ is the unique solution of the inequality

$$\phi(z,y)+\frac{1}{r}\langle y-z,\ z-x\rangle\ge 0,\quad \forall y\in C.$$

Lemma 2.2. ([6]) The mapping Qr possesses the properties:

(i) Qr is firmly nonexpansive, namely,

$$\|Q_rx-Q_ry\|^2\le\langle Q_rx-Q_ry,\ x-y\rangle,\quad \forall x,y\in H;$$

(ii) Fix(Qr)=EP(ϕ).

Property (ii) shows an equivalence of EP (1.1) to the fixed point problem $Q_rx=x$, and thus EP (1.1) can be solved by fixed point methods.

Definition 2.3. ([21]) A mapping $T:H\to H$ is said to be an averaged mapping (av, for short) if $T=(1-\alpha)I+\alpha S$, where $\alpha\in(0,1)$, $I$ is the identity map and $S:H\to H$ is nonexpansive. In this case, we say that $T$ is $\alpha$-averaged ($\alpha$-av, for short).

Note that firmly nonexpansive mappings (e.g., projections) are $\frac12$-av.

Proposition 2.4. ([4, 23]) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^N$ is averaged, then so is the composite $T_1\cdots T_N$. In particular, if $T_1$ is $\alpha_1$-av and $T_2$ is $\alpha_2$-av, where $\alpha_1,\alpha_2\in(0,1)$, then the composite $T_1T_2$ is $\alpha$-av, where $\alpha=\alpha_1+\alpha_2-\alpha_1\alpha_2$.

Definition 2.5. A nonlinear operator G with domain D(G) and range R(G) both in H is said to be:

(i) $\beta$-strongly monotone if there exists $\beta>0$ such that

$$\langle x-y,\ Gx-Gy\rangle\ge\beta\|x-y\|^2,\quad \forall x,y\in D(G);$$

(ii) $\nu$-inverse strongly monotone (for short, $\nu$-ism) if there exists $\nu>0$ such that

$$\langle x-y,\ Gx-Gy\rangle\ge\nu\|Gx-Gy\|^2,\quad \forall x,y\in D(G).$$

It can be easily seen that any projection PC is a 1-ism.

Inverse strongly monotone (also referred to as co-coercive) operators have extensively been used in practical problems from various areas, for instance, traffic assignment problems; see, for example, [3, 11] and references therein.

Proposition 2.6. ([4, 23]) Let T be an operator of H into itself.

• (i) $T$ is nonexpansive if and only if the complement $I-T$ is $\frac12$-ism.

• (ii) If $T$ is $\nu$-ism, then for $\mu>0$, $\mu T$ is $\frac{\nu}{\mu}$-ism.

• (iii) $T$ is averaged if and only if the complement $I-T$ is $\nu$-ism for some $\nu>\frac12$.

Indeed, for $\alpha\in(0,1)$, $T$ is $\alpha$-av if and only if $I-T$ is $\frac{1}{2\alpha}$-ism.

Lemma 2.7. ([1, 25]) Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that

$$a_{n+1}\le(1-\gamma_n)a_n+\gamma_n\nu_n+\mu_n,$$

where $\{\gamma_n\}$ is a sequence in $[0,1]$, $\{\mu_n\}$ is a sequence of nonnegative real numbers and $\{\nu_n\}$ is a sequence in $\mathbb{R}$ such that

$$\sum_{n=1}^\infty\gamma_n=\infty,\qquad \limsup_{n\to\infty}\nu_n\le 0,\qquad \sum_{n=1}^\infty\mu_n<\infty.$$

Then $\lim_{n\to\infty}a_n=0$.
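A quick numerical illustration of Lemma 2.7, with our own assumed choices $\gamma_n=1/n$, $\nu_n=1/n$, $\mu_n=1/n^2$ and taking equality in the recursion (the hypotheses hold: $\sum\gamma_n=\infty$, $\limsup\nu_n\le 0$, $\sum\mu_n<\infty$):

```python
# Simulate a_{n+1} = (1 - gamma_n) a_n + gamma_n nu_n + mu_n with
# gamma_n = nu_n = 1/n and mu_n = 1/n^2 (our own illustrative choices).
# The hypotheses of Lemma 2.7 are satisfied, so a_n -> 0.

a = 1.0
N = 100_000
for n in range(1, N + 1):
    gamma, nu, mu = 1.0 / n, 1.0 / n, 1.0 / n**2
    a = (1 - gamma) * a + gamma * nu + mu

print(a)  # a_N is small and tends to 0 as N grows
```

Note that $a_n$ may increase at first (here $a_2=2$); the lemma only asserts eventual convergence to $0$.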

In this section we introduce an iterative fixed point algorithm for finding a point in U ∩ EP(ϕ), that is, a common solution to the equilibrium and optimization problems (1.1) and (1.3).

In the rest of this paper, we always assume that $C$ is a nonempty closed convex subset of a Hilbert space $H$ and $g:C\to\mathbb{R}$ is a real-valued continuously differentiable, convex function such that the gradient $\nabla g$ is $L$-Lipschitz for some $L\ge 0$. Recall that $U$ denotes the set of solutions of the minimization problem (1.3). Then a point $x^*\in U$ if and only if $x^*\in\mathrm{Fix}(W_\lambda)$, where $W_\lambda$ is defined in (1.5); that is,

$$x^*=P_C(I-\lambda\nabla g)x^*$$

for each $\lambda>0$. Further helpful properties of the mapping $P_C(I-\lambda\nabla g)$ are outlined in [23] and summarized below.

Lemma 3.1. Suppose $g$ is a continuously differentiable, convex function with $L$-Lipschitz continuous gradient $\nabla g$, and let $\lambda\in(0,2/L)$ be given.

• (i) The mapping $P_C(I-\lambda\nabla g)$ is $\alpha$-av, where $\alpha=\frac{2+\lambda L}{4}\in(0,1)$; namely, one can write $P_C(I-\lambda\nabla g)=sI+(1-s)T$, where $s=\frac{2-\lambda L}{4}$ and $T=\frac{1}{2+\lambda L}\big[4P_C(I-\lambda\nabla g)-(2-\lambda L)I\big]$ is nonexpansive.

• (ii) The function $h(\lambda)x:=P_C(I-\lambda\nabla g)x$ is uniformly Lipschitz continuous in $\lambda\in[0,\infty)$ over bounded $x\in C$.

Proof. (i) has been proved in [23]. To prove (ii), take $\lambda,\lambda'\in[0,\infty)$ and let $r>0$ be given. Setting $M_r:=\sup\{\|\nabla g(x)\|:\|x\|\le r,\ x\in C\}$ and noting that $P_C$ is nonexpansive, we get, for $x\in C$ with $\|x\|\le r$,

$$\|h(\lambda)x-h(\lambda')x\|=\|P_C(I-\lambda\nabla g)x-P_C(I-\lambda'\nabla g)x\|\le|\lambda-\lambda'|\,\|\nabla g(x)\|\le M_r|\lambda-\lambda'|.$$

Lemma 3.2. ([27, Lemma 3.1]) Suppose $F:C\to H$ is $\alpha$-Lipschitz and $\eta$-strongly monotone and let $0<\mu<\frac{2\eta}{\alpha^2}$. Then, for $\nu\in(0,1)$, the mapping $U^\nu$ defined by

$$U^\nu x:=x-\mu\nu Fx,\quad x\in C,$$

is a $(1-\xi\nu)$-contraction from $C$ into $H$, with $\xi:=1-\sqrt{1-\mu(2\eta-\mu\alpha^2)}\in(0,1]$.
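Lemma 3.2 can be sanity-checked numerically. In the sketch below (our own one-dimensional example, not from the paper) $F(x)=x/4$ is $\alpha$-Lipschitz and $\eta$-strongly monotone with $\alpha=\eta=1/4$; taking $\mu=2<2\eta/\alpha^2=8$ gives $\xi=1-\sqrt{1-\mu(2\eta-\mu\alpha^2)}=\frac12$, and $U^\nu x=(1-\nu/2)x$ is exactly a $(1-\xi\nu)$-contraction:

```python
import math

# F(x) = x/4 on the real line: alpha-Lipschitz and eta-strongly monotone
# with alpha = eta = 1/4 (illustrative example, not from the paper).
alpha = eta = 0.25
mu = 2.0
xi = 1 - math.sqrt(1 - mu * (2 * eta - mu * alpha**2))  # xi = 0.5

def U(nu, x):
    return x - mu * nu * (x / 4.0)  # U^nu x = x - mu*nu*F(x) = (1 - nu/2)x

for nu in [0.1, 0.5, 0.9]:
    x, y = 3.0, -7.0
    # contraction estimate |U^nu x - U^nu y| <= (1 - xi*nu)|x - y|
    assert abs(U(nu, x) - U(nu, y)) <= (1 - xi * nu) * abs(x - y) + 1e-12
```

For a linear $F$ the contraction constant $(1-\xi\nu)$ is attained with equality, which makes the bound of the lemma sharp in this example.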

The following is the main result of this paper.

Theorem 3.3. Assume the bifunction $\phi:C\times C\to\mathbb{R}$ satisfies the standard conditions (A1)-(A4), and the objective function $g:C\to\mathbb{R}$ is continuously differentiable and convex such that the gradient $\nabla g$ is $L$-Lipschitz with $L\ge 0$. Assume $U\cap EP(\phi)\ne\emptyset$. Let $Q_r$ be defined by (2.1) for $r>0$. Let $f$ be a $\kappa$-contraction of $C$ with $\kappa\in[0,1)$ and $F:C\to H$ an $\alpha$-Lipschitz and $\eta$-strongly monotone operator on $C$ with $\alpha>0$ and $\eta>0$. Assume $0<\mu<\frac{2\eta}{\alpha^2}$ and $0<\gamma<\frac{\xi}{\kappa}$ with $\xi=1-\sqrt{1-\mu(2\eta-\mu\alpha^2)}$. Suppose $\{\alpha_n\}$, $\{\beta_n\}$ and $\{r_n\}$ are real sequences satisfying the following conditions:

(B1) $\{\alpha_n\}\subset[0,1]$, $\lim_{n\to\infty}\alpha_n=0$ and $\sum_{n=1}^\infty\alpha_n=\infty$;

(B2) $\{\beta_n\}\subset[0,1]$, $\lim_{n\to\infty}\beta_n=0$ and $\sum_{n=1}^\infty|\beta_{n+1}-\beta_n|<\infty$;

(B3) $\{r_n\}\subset(0,\infty)$, $\liminf_{n\to\infty}r_n>0$ and $\sum_{n=1}^\infty|r_{n+1}-r_n|<\infty$;

(B4) $\sum_{n=1}^\infty|\alpha_{n+1}-\alpha_n|<\infty$ or $\lim_{n\to\infty}\dfrac{\alpha_n}{\alpha_{n+1}}=1$.

Let {xn} be a sequence generated by the iteration process:

$$\begin{cases}u_n=Q_{r_n}x_n,\\ y_n=P_C\big(\alpha_n\gamma f(x_n)+(I-\alpha_n\mu F)T_nu_n\big),\\ x_{n+1}=(1-\beta_n)y_n+\beta_nT_ny_n,\quad n\ge 1,\end{cases} \tag{3.2}$$

where the initial point $x_1\in C$ is selected arbitrarily, $\{\lambda_n\}\subset(0,\frac{2}{L})$, and $T_n$ is determined by the relation $P_C(I-\lambda_n\nabla g)=s_nI+(1-s_n)T_n$ with $s_n=\frac{2-\lambda_nL}{4}$ (cf. Lemma 3.1(i)). Namely,

$$T_n=\frac{1}{1-s_n}\big(P_C(I-\lambda_n\nabla g)-s_nI\big). \tag{3.3}$$

Suppose {λn} satisfies the condition

(B5) $0<\underline{\lambda}\le\lambda_n<\frac{2}{L}$ for all $n$, and $\sum_{n=1}^\infty|\lambda_{n+1}-\lambda_n|<\infty$.

Then the sequences $\{x_n\}$ and $\{u_n\}$ defined by (3.2) converge strongly to a point $q\in U\cap EP(\phi)$, where $q=P_{U\cap EP(\phi)}(I-\mu F+\gamma f)(q)$ is the unique solution to the following variational inequality:

$$\langle(\mu F-\gamma f)q,\ q-x\rangle\le 0,\quad \forall x\in U\cap EP(\phi).$$

To prove Theorem 3.3 we first establish some lemmas.

Lemma 3.4. Let $T_n$ be defined by (3.3). Let $r>0$ and set

$$M(r):=\sup\{4\|\nabla g(y)\|+L\|P_Cy-y\|:\|y\|\le r,\ y\in C\}<\infty.$$

Then, for $y\in C$ such that $\|y\|\le r$, we have

$$\|T_{n+1}y-T_ny\|\le M(r)|\lambda_{n+1}-\lambda_n|.$$

Proof. By the definition of $T_n$, we have, for $y\in C$,

$$\begin{aligned}T_{n+1}y-T_ny&=\frac{1}{1-s_{n+1}}P_C(y-\lambda_{n+1}\nabla g(y))-\frac{s_{n+1}}{1-s_{n+1}}y-\frac{1}{1-s_n}P_C(y-\lambda_n\nabla g(y))+\frac{s_n}{1-s_n}y\\ &=\frac{1}{1-s_{n+1}}\big[P_C(y-\lambda_{n+1}\nabla g(y))-P_C(y-\lambda_n\nabla g(y))\big]\\ &\quad+\Big(\frac{1}{1-s_{n+1}}-\frac{1}{1-s_n}\Big)P_C(y-\lambda_n\nabla g(y))+\Big(\frac{s_n}{1-s_n}-\frac{s_{n+1}}{1-s_{n+1}}\Big)y\\ &=\frac{1}{1-s_{n+1}}\big[P_C(y-\lambda_{n+1}\nabla g(y))-P_C(y-\lambda_n\nabla g(y))\big]\\ &\quad+\frac{s_{n+1}-s_n}{(1-s_n)(1-s_{n+1})}\big(P_C(y-\lambda_n\nabla g(y))-y\big).\end{aligned}$$

Since $P_C$ is nonexpansive and $s_n=\frac{2-\lambda_nL}{4}$ (thus $1-s_n=\frac{2+\lambda_nL}{4}\ge\frac12$ and $|s_{n+1}-s_n|=\frac{L}{4}|\lambda_{n+1}-\lambda_n|$), it turns out that (noting $\lambda_nL<2$)

$$\begin{aligned}\|T_{n+1}y-T_ny\|&\le 2\|\nabla g(y)\|\,|\lambda_{n+1}-\lambda_n|+L|\lambda_{n+1}-\lambda_n|\big(\|P_C(y-\lambda_n\nabla g(y))-P_Cy\|+\|P_Cy-y\|\big)\\ &\le 2\|\nabla g(y)\|\,|\lambda_{n+1}-\lambda_n|+L|\lambda_{n+1}-\lambda_n|\big(\lambda_n\|\nabla g(y)\|+\|P_Cy-y\|\big)\\ &\le\big(4\|\nabla g(y)\|+L\|P_Cy-y\|\big)|\lambda_{n+1}-\lambda_n|.\end{aligned}$$

This finishes the proof.

Lemma 3.5. For $x,x'\in C$ and $r,r'>0$, we have

$$\|Q_rx-Q_{r'}x\|\le\Big|1-\frac{r}{r'}\Big|\,\|Q_{r'}x-x\| \tag{3.4}$$

and

$$\|Q_rx-Q_{r'}x'\|\le\|x-x'\|+\Big|1-\frac{r}{r'}\Big|\,\|Q_{r'}x-x\|. \tag{3.5}$$

Proof. Set $u=Q_rx$ and $u'=Q_{r'}x$. By definition, we have

$$\phi(u,y)+\frac{1}{r}\langle y-u,\ u-x\rangle\ge 0,\quad \text{for all } y\in C.$$

In particular,

$$\phi(u,u')+\frac{1}{r}\langle u'-u,\ u-x\rangle\ge 0.$$

Similarly, we have

$$\phi(u',u)+\frac{1}{r'}\langle u-u',\ u'-x\rangle\ge 0.$$

Adding up the last two relations and using the monotonicity $\phi(u,u')+\phi(u',u)\le 0$, we obtain

$$\Big\langle u'-u,\ \frac{1}{r}(u-x)-\frac{1}{r'}(u'-x)\Big\rangle\ge 0.$$

This can be rewritten as

$$\frac{1}{r}\|u-u'\|^2\le\Big(\frac{1}{r}-\frac{1}{r'}\Big)\langle u'-u,\ u'-x\rangle\le\Big|\frac{1}{r}-\frac{1}{r'}\Big|\,\|u-u'\|\,\|u'-x\|,$$

and (3.4) follows immediately.

Next, using the nonexpansivity of $Q_{r'}$, we get

$$\|Q_rx-Q_{r'}x'\|\le\|Q_{r'}x-Q_{r'}x'\|+\|Q_rx-Q_{r'}x\|\le\|x-x'\|+\Big|1-\frac{r}{r'}\Big|\,\|Q_{r'}x-x\|.$$

The proof of the lemma is complete.

Proof of Theorem 3.3. First we make a convention: for the sake of convenience, we will use $M>0$ to stand for an appropriate constant in several estimates at various places throughout the proof, which is divided into five steps.

Step 1. The sequences $\{x_n\}$ and $\{u_n\}$ are bounded. To see this, we take a point $p\in U\cap EP(\phi)$ to derive from (3.2) that (noting $Q_rp=p$ and $T_np=p$ for all $r>0$ and $n\ge 1$, and that $P_C$ is nonexpansive)

$$\begin{aligned}\|x_{n+1}-p\|&\le(1-\beta_n)\|y_n-p\|+\beta_n\|T_ny_n-p\|\le\|y_n-p\|\\ &\le\|\alpha_n\gamma f(x_n)+(I-\alpha_n\mu F)T_nu_n-p\|\\ &=\|\alpha_n\gamma[f(x_n)-f(p)]+\alpha_n[\gamma f(p)-\mu F(p)]+(I-\alpha_n\mu F)T_nu_n-(I-\alpha_n\mu F)p\|.\end{aligned}$$

Now, since $f$ is a $\kappa$-contraction, $(I-\alpha_n\mu F)$ is a $(1-\alpha_n\xi)$-contraction (by Lemma 3.2), $T_n$ is nonexpansive, and $\|u_n-p\|=\|Q_{r_n}x_n-p\|\le\|x_n-p\|$, it follows from the last relation that

$$\|x_{n+1}-p\|\le\big(1-\alpha_n(\xi-\gamma\kappa)\big)\|x_n-p\|+\alpha_n\|\gamma f(p)-\mu F(p)\|\le\max\Big\{\|x_n-p\|,\ \frac{\|\gamma f(p)-\mu F(p)\|}{\xi-\gamma\kappa}\Big\}.$$

As a result, we get, by induction,

$$\|x_n-p\|\le\max\Big\{\|x_1-p\|,\ \frac{\|\gamma f(p)-\mu F(p)\|}{\xi-\gamma\kappa}\Big\}$$

for all $n\ge 1$. Hence, $\{x_n\}$ is bounded, which implies that $\{u_n\}$, $\{y_n\}$, $\{f(x_n)\}$, $\{\mu F(T_nu_n)\}$ and $\{T_ny_n\}$ are all bounded.

Step 2. Asymptotic regularity of $\{x_n\}$: $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$. To see this, we use the definition of the algorithm (3.2) to derive that, after some manipulations,

$$\begin{aligned}x_{n+2}-x_{n+1}&=(1-\beta_{n+1})y_{n+1}+\beta_{n+1}T_{n+1}y_{n+1}-(1-\beta_n)y_n-\beta_nT_ny_n\\ &=(1-\beta_{n+1})(y_{n+1}-y_n)+\beta_{n+1}(T_{n+1}y_{n+1}-T_{n+1}y_n)\\ &\quad+(\beta_n-\beta_{n+1})(y_n-T_{n+1}y_n)+\beta_n(T_{n+1}y_n-T_ny_n).\end{aligned} \tag{3.6}$$

Since $T_{n+1}$ is nonexpansive and $(y_n)$ is bounded, we obtain from (3.6) and Lemma 3.4,

$$\|x_{n+2}-x_{n+1}\|\le\|y_{n+1}-y_n\|+\big(|\beta_{n+1}-\beta_n|+\beta_n|\lambda_{n+1}-\lambda_n|\big)M, \tag{3.7}$$

where M>0 is a big enough constant.

To estimate $\|y_{n+1}-y_n\|$, we again use (3.2) and the nonexpansivity of $P_C$ to get

$$\begin{aligned}\|y_{n+1}-y_n\|&\le\|\alpha_{n+1}\gamma f(x_{n+1})+(I-\alpha_{n+1}\mu F)T_{n+1}u_{n+1}-\alpha_n\gamma f(x_n)-(I-\alpha_n\mu F)T_nu_n\|\\ &=\|(\alpha_{n+1}-\alpha_n)[\gamma f(x_{n+1})-\mu F(T_{n+1}u_{n+1})]+\alpha_n\gamma[f(x_{n+1})-f(x_n)]\\ &\quad+(I-\alpha_n\mu F)T_{n+1}u_{n+1}-(I-\alpha_n\mu F)T_nu_n\|.\end{aligned} \tag{3.8}$$

Observe, by Lemmas 3.4 and 3.5,

$$\begin{aligned}\|T_{n+1}u_{n+1}-T_nu_n\|&\le\|T_{n+1}u_{n+1}-T_{n+1}u_n\|+\|T_{n+1}u_n-T_nu_n\|\\ &\le\|u_{n+1}-u_n\|+\|T_{n+1}u_n-T_nu_n\|\\ &=\|Q_{r_{n+1}}x_{n+1}-Q_{r_n}x_n\|+\|T_{n+1}u_n-T_nu_n\|\\ &\le\|x_{n+1}-x_n\|+\Big|1-\frac{r_n}{r_{n+1}}\Big|\,\|Q_{r_{n+1}}x_n-x_n\|+M|\lambda_{n+1}-\lambda_n|\\ &\le\|x_{n+1}-x_n\|+\Big(\Big|1-\frac{r_n}{r_{n+1}}\Big|+|\lambda_{n+1}-\lambda_n|\Big)M.\end{aligned} \tag{3.9}$$

Now, since $f$ is a $\kappa$-contraction and $I-\alpha_n\mu F$ is a $(1-\xi\alpha_n)$-contraction, also using (3.9), we get from (3.8)

$$\begin{aligned}\|y_{n+1}-y_n\|&\le M|\alpha_{n+1}-\alpha_n|+\kappa\gamma\alpha_n\|x_{n+1}-x_n\|\\ &\quad+(1-\xi\alpha_n)\Big[\|x_{n+1}-x_n\|+\Big(\Big|1-\frac{r_n}{r_{n+1}}\Big|+|\lambda_{n+1}-\lambda_n|\Big)M\Big]\\ &\le[1-(\xi-\kappa\gamma)\alpha_n]\|x_{n+1}-x_n\|+\Big(|\alpha_{n+1}-\alpha_n|+\Big|1-\frac{r_n}{r_{n+1}}\Big|+|\lambda_{n+1}-\lambda_n|\Big)M.\end{aligned} \tag{3.10}$$

Substituting (3.10) into (3.7) yields

$$\|x_{n+2}-x_{n+1}\|\le(1-\hat\alpha_n)\|x_{n+1}-x_n\|+\nu_n+\delta_n, \tag{3.11}$$

where $\hat\alpha_n=(\xi-\kappa\gamma)\alpha_n$, $\nu_n=M|\alpha_{n+1}-\alpha_n|$ and

$$\delta_n=\Big(|\beta_{n+1}-\beta_n|+\Big|1-\frac{r_n}{r_{n+1}}\Big|+(1+\beta_n)|\lambda_{n+1}-\lambda_n|\Big)M.$$

By condition (B1), we have $\sum_n\hat\alpha_n=\infty$. By conditions (B2)-(B5), we always have $\sum_n\delta_n<\infty$; moreover, $\sum_n\nu_n<\infty$ if $\sum_n|\alpha_{n+1}-\alpha_n|<\infty$, or $\lim_n(\nu_n/\hat\alpha_n)=0$ if $\lim_n(\alpha_n/\alpha_{n+1})=1$. So Lemma 2.7 is applicable to (3.11), which yields $\|x_{n+1}-x_n\|\to 0$, as claimed.

Step 3. We claim that

(3a) $\|x_n-y_n\|\to 0$;

(3b) $\|x_n-u_n\|\to 0$;

(3c) $\|P_C(I-\lambda_n\nabla g)x_n-x_n\|\to 0$.

Indeed,

$$\begin{aligned}\|x_n-y_n\|&\le\|x_n-x_{n+1}\|+\|x_{n+1}-y_n\|\\ &=\|x_n-x_{n+1}\|+\beta_n\|y_n-T_ny_n\|\\ &\le\|x_n-x_{n+1}\|+M\beta_n\to 0\quad(\text{as }\beta_n\to 0)\end{aligned}$$

and (3a) follows. To show (3b), we first observe that the firm nonexpansivity of $Q_{r_n}$ implies (noting $u_n=Q_{r_n}x_n$ and $\mathrm{Fix}(Q_{r_n})=EP(\phi)$)

$$\|u_n-p\|^2\le\|x_n-p\|^2-\|u_n-x_n\|^2,\quad p\in EP(\phi).$$

For the sake of brevity, we shall use the $O(\alpha_n)$ notation in the argument below. For $p\in U\cap EP(\phi)$, we have (noting again that $P_C$ is nonexpansive)

$$\begin{aligned}\|x_{n+1}-p\|^2&=\|(1-\beta_n)(y_n-p)+\beta_n(T_ny_n-p)\|^2\le\|y_n-p\|^2\\ &\le\|\alpha_n\gamma f(x_n)+(I-\alpha_n\mu F)T_nu_n-p\|^2\\ &=\|\alpha_n(\gamma f(x_n)-\mu F(p))+(I-\alpha_n\mu F)T_nu_n-(I-\alpha_n\mu F)p\|^2\\ &=\alpha_n^2\|\gamma f(x_n)-\mu F(p)\|^2+\|(I-\alpha_n\mu F)T_nu_n-(I-\alpha_n\mu F)p\|^2\\ &\quad+2\alpha_n\langle\gamma f(x_n)-\mu F(p),\ (I-\alpha_n\mu F)T_nu_n-(I-\alpha_n\mu F)p\rangle\\ &\le\alpha_n^2M+(1-\xi\alpha_n)^2\|u_n-p\|^2+2\alpha_nM(1-\xi\alpha_n)\|u_n-p\|\\ &\le\|u_n-p\|^2+O(\alpha_n)\\ &\le\|x_n-p\|^2-\|u_n-x_n\|^2+O(\alpha_n).\end{aligned}$$

It turns out that

$$\|x_n-u_n\|^2\le\|x_n-p\|^2-\|x_{n+1}-p\|^2+O(\alpha_n)=O(\|x_{n+1}-x_n\|)+O(\alpha_n)\to 0.$$

This proves (3b). To verify (3c), it suffices to verify that $\|P_C(I-\lambda_n\nabla g)u_n-u_n\|\to 0$. Since $P_C(I-\lambda_n\nabla g)=s_nI+(1-s_n)T_n$, it follows that

$$\begin{aligned}\|P_C(I-\lambda_n\nabla g)u_n-u_n\|&=(1-s_n)\|T_nu_n-u_n\|\\ &\le\|T_nu_n-y_n\|+\|y_n-u_n\|\\ &\le\alpha_n\|\gamma f(x_n)-\mu F(T_nu_n)\|+\|y_n-u_n\|\\ &=O(\alpha_n)+\|y_n-u_n\|\to 0.\end{aligned}$$

Step 4. We have the following asymptotic variational inequality:

$$\limsup_{n\to\infty}\langle\gamma f(q)-\mu F(q),\ y_n-q\rangle\le 0, \tag{3.12}$$

where $q$ is the unique fixed point of the contraction $P_{U\cap EP(\phi)}(I-\mu F+\gamma f)$; namely, $q=P_{U\cap EP(\phi)}(I-\mu F+\gamma f)q$. Alternatively, $q$ is the unique solution of the variational inequality

$$\langle\gamma f(q)-\mu F(q),\ y-q\rangle\le 0,\quad \forall y\in U\cap EP(\phi). \tag{3.13}$$

To prove (3.12), take a subsequence $(y_{n_i})$ of $(y_n)$ weakly convergent to a point $\hat y\in C$ and such that

$$\limsup_{n\to\infty}\langle\gamma f(q)-\mu F(q),\ y_n-q\rangle=\lim_{i\to\infty}\langle\gamma f(q)-\mu F(q),\ y_{n_i}-q\rangle=\langle\gamma f(q)-\mu F(q),\ \hat y-q\rangle.$$

By virtue of VI (3.13), it suffices to show $\hat y\in U\cap EP(\phi)$. To see $\hat y\in U$, we use (3c) to get (noticing $\|x_n-y_n\|\to 0$)

$$\|P_C(I-\lambda_{n_i}\nabla g)y_{n_i}-y_{n_i}\|\to 0. \tag{3.14}$$

We may assume $\lambda_{n_i}\to\hat\lambda$; note that $\hat\lambda\in(0,2/L]$ due to condition (B5). It is then not hard (see Lemma 3.1(ii)) to deduce from (3.14) that

$$\|P_C(I-\hat\lambda\nabla g)y_{n_i}-y_{n_i}\|\to 0. \tag{3.15}$$

The demiclosedness principle for nonexpansive mappings then ensures that $\hat y\in\mathrm{Fix}(P_C(I-\hat\lambda\nabla g))=U$. It remains to show $\hat y\in EP(\phi)$. Since $u_n=Q_{r_n}x_n$, it follows from the definition of $Q_r$ and the monotonicity of $\phi$ that

$$\frac{1}{r_n}\langle y-u_n,\ u_n-x_n\rangle\ge-\phi(u_n,y)\ge\phi(y,u_n),\quad \forall y\in C.$$

This results in $\limsup_{n\to\infty}\phi(y,u_n)\le 0$ for each $y\in C$, due to the facts $\|u_n-x_n\|\to 0$ and $\inf_nr_n>0$. As $\phi(y,\cdot)$ is weakly l.s.c. and $u_{n_i}\rightharpoonup\hat y$, it turns out that $\phi(y,\hat y)\le 0$ for each $y\in C$. Now set $y_t=ty+(1-t)\hat y\in C$ with $t\in(0,1)$. We then have, by the standard conditions (A1)-(A4),

$$0=\phi(y_t,y_t)=\phi(y_t,ty+(1-t)\hat y)\le t\phi(y_t,y)+(1-t)\phi(y_t,\hat y)\le t\phi(y_t,y).$$

Hence, $\phi(y_t,y)\ge 0$. Letting $t\downarrow 0$ and using (A3) yields $\phi(\hat y,y)\ge 0$ for each $y\in C$. This asserts $\hat y\in EP(\phi)$, and the proof of Step 4 is complete.

Step 5. Strong convergence of $\{x_n\}$: $x_n\to q$ in norm, where $q$ satisfies (3.13). Setting $z_n:=\alpha_n\gamma f(x_n)+(I-\alpha_n\mu F)T_nu_n$ (thus, $y_n=P_Cz_n$), we derive that

$$\begin{aligned}\|x_{n+1}-q\|^2&=\|(1-\beta_n)(y_n-q)+\beta_n(T_ny_n-q)\|^2\le\|y_n-q\|^2\\ &\le\|\alpha_n\gamma f(x_n)+(I-\alpha_n\mu F)T_nu_n-q\|^2\\ &=\|[(I-\alpha_n\mu F)T_nu_n-(I-\alpha_n\mu F)q]+\alpha_n(\gamma f(x_n)-\mu F(q))\|^2\\ &\le\|(I-\alpha_n\mu F)T_nu_n-(I-\alpha_n\mu F)q\|^2+2\alpha_n\langle\gamma f(x_n)-\mu F(q),\ z_n-q\rangle\\ &\le(1-\xi\alpha_n)^2\|u_n-q\|^2+2\gamma\alpha_n\langle f(x_n)-f(q),\ x_n-q\rangle\\ &\quad+2\gamma\alpha_n\langle f(x_n)-f(q),\ z_n-x_n\rangle+2\alpha_n\langle\gamma f(q)-\mu F(q),\ z_n-q\rangle\\ &\le(1-\xi\alpha_n)^2\|x_n-q\|^2+2\gamma\kappa\alpha_n\|x_n-q\|^2\\ &\quad+2\gamma\kappa\alpha_n\|x_n-q\|\,\|z_n-x_n\|+2\alpha_n\langle\gamma f(q)-\mu F(q),\ z_n-q\rangle\\ &\le\big(1-2(\xi-\gamma\kappa)\alpha_n\big)\|x_n-q\|^2+\xi^2\alpha_n^2M^2\\ &\quad+2\gamma\kappa\alpha_nM\|z_n-x_n\|+2\alpha_n\langle\gamma f(q)-\mu F(q),\ z_n-q\rangle.\end{aligned}$$

Here $M\ge\|x_n-q\|$ for all $n$. We can rewrite the last relation as

$$\|x_{n+1}-q\|^2\le(1-\tilde\alpha_n)\|x_n-q\|^2+\tilde\alpha_n\tilde\delta_n, \tag{3.16}$$

where $\tilde\alpha_n=2(\xi-\gamma\kappa)\alpha_n$ and

$$\tilde\delta_n=\frac{\xi^2M^2\alpha_n+2\gamma\kappa M\|z_n-x_n\|+2\langle\gamma f(q)-\mu F(q),\ z_n-q\rangle}{2(\xi-\gamma\kappa)}.$$

Since $y_n=P_Cz_n$ and $T_nu_n\in C$, we obtain

$$\|y_n-z_n\|=\|P_Cz_n-z_n\|\le\|P_Cz_n-T_nu_n\|+\|T_nu_n-z_n\|\le 2\|z_n-T_nu_n\|=2\alpha_n\|\gamma f(x_n)-\mu F(T_nu_n)\|\to 0.$$

It turns out from (3a) of Step 3 that $\|z_n-x_n\|\to 0$ and, by (3.12),

$$\limsup_{n\to\infty}\langle\gamma f(q)-\mu F(q),\ z_n-q\rangle\le 0.$$

It is now immediately clear that $\sum_{n=1}^\infty\tilde\alpha_n=\infty$ and $\limsup_{n\to\infty}\tilde\delta_n\le 0$. This enables us to apply Lemma 2.7 to the relation (3.16) to arrive at $\|x_n-q\|\to 0$; that is, $x_n\to q$ in norm.

Remark 3.6. The choices of the parameters $(\alpha_n)$, $(\beta_n)$, $(r_n)$ are easy. For instance, any decreasing null sequence $(\beta_n)$ satisfies (B2). More precisely, the choices

$$\alpha_n=\frac{1}{n^a},\qquad \beta_n=\frac{1}{n^b},\qquad r_n=r+\frac{1}{n^c},\qquad n\ge 1,$$

where $0<a\le 1$ and $b,c,r>0$, satisfy (B1)-(B4). Also, the choice $\lambda_n=\underline\lambda+\frac{1}{n^d}$, where $\underline\lambda>0$ and $d>0$, satisfies (B5) for $n$ big enough.

Remark 3.7. Theorem 3.3 is a generalization of [21, Theorem 3.2]. In addition, we used only the condition $\liminf_{n\to\infty}\lambda_n>0$ for the stepsizes $\{\lambda_n\}$. However, [21, Theorem 3.2] required that $\lambda_n\to\frac{2}{L}$, which needs the exact value $L$ of the Lipschitz constant of the gradient $\nabla g$ of the objective function $g$. In practical problems, the exact value of $L$ is often unavailable; consequently, verification of the condition $\lambda_n\to\frac{2}{L}$ turns out to be hard.

In this section, we give a numerical example to illustrate the scheme (3.2) given in Theorem 3.3.

Example 4.1. Let $C=[-20,20]$ and define $\phi(x,y)=-4x^2+3xy+y^2$ for $x,y\in C$. It can easily be verified that $\phi$ satisfies the conditions (A1)-(A4). Let us deduce a formula for $Q_rx$, where $r>0$ and $x\in\mathbb{R}$. For $y\in[-20,20]$, we have

$$\phi(z,y)+\frac{1}{r}\langle y-z,\ z-x\rangle\ge 0\iff ry^2+\big((3r+1)z-x\big)y+xz-(4r+1)z^2\ge 0.$$

Set

$$G(y):=ry^2+\big((3r+1)z-x\big)y+xz-(4r+1)z^2.$$

Then $G$ is a quadratic function of $y$ with coefficients

$$a:=r,\qquad b:=(3r+1)z-x,\qquad c:=xz-(4r+1)z^2.$$

So its discriminant is $\Delta:=b^2-4ac=[(5r+1)z-x]^2$. Since $G(y)\ge 0$ for all $y\in C$, it turns out that $\Delta\le 0$. That is, $[(5r+1)z-x]^2\le 0$. Therefore, $z=\frac{x}{5r+1}$, which yields $Q_r(x)=\frac{x}{5r+1}$.
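The closed form $Q_r(x)=\frac{x}{5r+1}$ can be sanity-checked numerically (an illustrative sketch, not part of the paper): with $z=x/(5r+1)$, the discriminant vanishes, so $G$ is nonnegative everywhere on $C$:

```python
# Check that z = x/(5r+1) makes G(y) >= 0 on C = [-20, 20], so z solves the
# inequality defining Q_r x in Example 4.1.

def G(y, x, z, r):
    return r * y**2 + ((3 * r + 1) * z - x) * y + x * z - (4 * r + 1) * z**2

for r in [0.5, 1.0, 3.0]:
    for x in [-20.0, -7.0, 0.0, 4.0, 20.0]:
        z = x / (5 * r + 1)  # candidate value of Q_r(x)
        # sample y over C = [-20, 20] on a grid of step 0.5
        assert all(G(-20 + 0.5 * k, x, z, r) >= -1e-9 for k in range(81))
```

Since $\Delta=0$ at $z=x/(5r+1)$, the quadratic factors as $G(y)=r(y-y_0)^2$ for some $y_0$, which explains why every sampled value is nonnegative.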

Thus, by Lemma 2.2, we get $EP(\phi)=\{0\}$. Let $\alpha_n=\frac{1}{n}$, $\beta_n=\frac{1}{10n}$, $\lambda_n=\frac14$, $r_n=1$ for all $n$, $F(x)=\frac14x$ (hence, $\alpha=\frac14$ and $\eta=\frac14$), $f(x)=\frac12x$, $g(x)=x^2$ (so $\nabla g(x)=2x$ and $L=2$), $\mu=2$ and $\gamma=\frac12$. Hence $U\cap EP(\phi)=\{0\}$ and $s_n=\frac{2-\lambda_nL}{4}=\frac38$. Also, $T_nx=\frac15x$ for all $x\in[-20,20]$. Indeed,

$$P_C(I-\lambda_n\nabla g)x=P_{[-20,20]}\Big(x-\frac{x}{2}\Big)=\frac{x}{2}=\frac38x+\frac58T_nx$$

for $x\in[-20,20]$. Then, by Theorem 3.3, the sequences $\{x_n\}$ and $\{u_n\}$, generated by the algorithm

$$\begin{cases}u_n=Q_{r_n}x_n=\dfrac16x_n,\\[2mm] y_n=\dfrac{1}{4n}x_n+\Big(1-\dfrac{1}{2n}\Big)T_n\Big(\dfrac16x_n\Big)=\dfrac{2n+14}{60n}x_n,\\[2mm] x_{n+1}=\dfrac{100n^2+692n-56}{3000n^2}x_n,\end{cases} \tag{4.1}$$

converge to $0\in U\cap EP(\phi)$, and it is also evident that $0=P_{U\cap EP(\phi)}\big(\frac34I\big)(0)$.
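The iteration (4.1) is easy to reproduce. The sketch below is illustrative, with the parameters of Example 4.1 hard-coded; the projection $P_C$ is omitted because all iterates remain inside $C=[-20,20]$:

```python
# One step of scheme (3.2) with the data of Example 4.1:
# alpha_n = 1/n, beta_n = 1/(10n), r_n = 1, mu = 2, gamma = 1/2,
# f(x) = x/2, F(x) = x/4, Q_1 x = x/6, T_n x = x/5 on C = [-20, 20].

def step(x, n):
    u = x / 6.0                       # u_n = Q_{r_n} x_n
    Tu = u / 5.0                      # T_n u_n
    # y_n = alpha_n*gamma*f(x_n) + (1 - alpha_n*mu/4) * T_n u_n
    y = (1.0 / n) * 0.5 * (x / 2.0) + (1 - 1.0 / (2 * n)) * Tu
    Ty = y / 5.0                      # T_n y_n
    return (1 - 1.0 / (10 * n)) * y + (1.0 / (10 * n)) * Ty

x = 12.0
for n in range(1, 30):
    x = step(x, n)
print(x)  # decays rapidly toward 0 (cf. Table 1)
```

Starting from $x_1=12$, the first iterates $x_2=2.944$ and $x_3=0.423936$ agree with the first rows of Table 1.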

Table 1 indicates the values of the sequences {xn} and {un} generated by the algorithm (4.1) with different initial values of x1 =12 and x1 = -18, respectively, and n = 30.

The values of the sequences {xn} and {un}.

Numerical results for x1=12 and x1 =-18

 n      x_n (x_1=12)    u_n (x_1=12)     x_n (x_1=-18)    u_n (x_1=-18)
 1      12              2                -18              -3
 2      2.944           0.49067         -4.416           -0.736
 3      0.42394         0.070653        -0.6359          -0.10598
 15     2.2373e-15      3.7289e-16      -3.356e-15       -5.593e-16
 16     1.088e-16       1.8133e-17      -1.632e-16       -2.72e-17
 17     5.1872e-18      8.6453e-19      -7.7808e-18      -1.2968e-18
 28     6.1677e-33      1.028e-33       -9.251e-33       -1.5419e-33
 29     2.5625e-34      4.2709e-35      -3.8438e-34      -6.4064e-35
 30     1.0574e-35      0               -1.5862e-35      0

Figure 1 presents the behavior of the sequences {xn} and {un} that corresponds to Table 1 and shows that both sequences converge to 0UEP(ϕ).

Figure 1. The convergence of {xn} and {un} with different initial values x1

A part of this research was carried out while the third author was visiting the University of Alberta. The authors are grateful to Professor Hong-Kun Xu for his comments.

1. K. Aoyama, Y. Kimura, W. Takahashi and M. Toyoda, Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space, Nonlinear Anal., 67(2007), 2350-2360.
2. E. Blum and W. Oettli, From optimization and variational inequalities to equilibrium problems, Math. Student, 63(1994), 123-145.
3. D. P. Bertsekas and E. M. Gafni, Projection methods for variational inequalities with applications to the traffic assignment problem, Math. Programming Stud., 17(1982), 139-159.
4. C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Problems, 20(2004), 103-120.
5. V. Colao, G. Marino and H. K. Xu, An iterative method for finding common solutions of equilibrium and fixed point problems, J. Math. Anal. Appl., 344(2008), 340-352.
6. P. L. Combettes and S. A. Hirstoaga, Equilibrium programming in Hilbert spaces, J. Nonlinear Convex Anal., 6(2005), 117-136.
7. L. C. Ceng, A. Petrusel, J. C. Yao and Y. Yao, Hybrid viscosity extragradient method for systems of variational inequalities and fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces, Fixed Point Theory, 19(2018), 487-502.
8. L. C. Ceng, A. Petrusel, J. C. Yao and Y. Yao, Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions, Fixed Point Theory, 20(2019), 113-133.
9. V. Colao, G. Marino and H. K. Xu, An explicit scheme of equilibrium for a finite family of nonexpansive mappings, J. Math. Anal. Appl., 344(2008), 340-352.
10. K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, Cambridge Studies in Advanced Mathematics, Cambridge Univ. Press, 1999.
11. D. Han and H. K. Lo, Solving non additive traffic assignment problems: a descent method for cocoercive variational inequalities, European J. Oper. Res., 159(2004), 529-544.
12. S. Husain and N. Singh, A hybrid iterative algorithm for a split mixed equilibrium problem and a hierarchical fixed point problem, Appl. Set-Valued Anal. Optim., 1(2019), 149-169.
13. J. S. Jung, A general composite iterative method for equilibrium problems and fixed point problems, J. Comput. Anal. Appl., 12(2010), 124-140.
14. G. Marino and H. K. Xu, A general iterative method for nonexpansive mappings in Hilbert spaces, J. Math. Anal. Appl., 318(2006), 43-52.
15. A. Moudafi, Viscosity approximation methods for fixed point problems, J. Math. Anal. Appl., 241(2000), 46-55.
16. J. W. Peng and J. C. Yao, A viscosity approximation scheme for system of equilibrium problems, nonexpansive mappings and monotone mappings, Nonlinear Anal., 12(2009), 6001-6010.
17. S. Plubtieng and R. Punpaeng, A general iterative method for equilibrium problems and fixed point problems in Hilbert spaces, J. Math. Anal. Appl., 336(2007), 455-469.
18. A. Razani and M. Yazdi, Viscosity approximation method for equilibrium and fixed point problems, Fixed Point Theory, 14(2)(2013), 455-472.
19. A. Razani and M. Yazdi, A new iterative method for generalized equilibrium and fixed point problems of nonexpansive mappings, Bull. Malays. Math. Sci. Soc., 35(4)(2021), 1049-1061.
20. M. Tian, A general iterative algorithm for nonexpansive mappings in Hilbert spaces, Nonlinear Anal., 73(2010), 689-694.
21. M. Tian and L. Liu, Iterative algorithms based on the viscosity approximation method for equilibrium and constrained convex minimization problem, Fixed Point Theory Appl., 2012:201(2012), 1-17.
22. S. Wang, C. Hu and G. Chia, Strong convergence of a new composite iterative method for equilibrium problems and fixed point problems, Appl. Math. Comput., 215(2010), 3891-3898.
23. H. K. Xu, Averaged mappings and the gradient-projection algorithm, J. Optim. Theory Appl., 150(2011), 360-378.
24. H. K. Xu, An iterative approach to quadratic optimization, J. Optim. Theory Appl., 116(2003), 659-678.
25. H. K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc., 66(2002), 240-256.
26. H. K. Xu, Viscosity approximation methods for nonexpansive mappings, J. Math. Anal. Appl., 298(2004), 279-291.
27. I. Yamada, The hybrid steepest descent method for variational inequality problems over the intersection of fixed point sets of nonexpansive mappings, Stud. Comput. Math., North-Holland, Amsterdam, 2001.
28. M. Yazdi and S. Hashemi Sababe, A new extragradient method for equilibrium, split feasibility and fixed point problems, J. Nonlinear Convex Anal., 22(4)(2021), 759-773.