eISSN 0454-8124
pISSN 1225-6951


Kyungpook Mathematical Journal 2022; 62(1): 89-106

Published online March 31, 2022

### An Iterative Method for Equilibrium and Constrained Convex Minimization Problems

Maryam Yazdi, Mohammad Mehdi Shabani, Saeed Hashemi Sababe*

Young Researchers and Elite Club, Malard Branch, Islamic Azad University, Malard, Iran
e-mail : msh_yazdi@yahoo.com

Faculty of Sciences, Imam Ali University, Tehran, Iran

Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Canada and Young Researchers and Elite Club, Malard Branch, Islamic Azad University, Malard, Iran
e-mail : s.hashemi@ualberta.ca, hashemi_1365@yahoo.com

Received: February 1, 2021; Accepted: November 8, 2021

We are concerned with finding a common solution to an equilibrium problem associated with a bifunction, and a constrained convex minimization problem. We propose an iterative fixed point algorithm and prove that the algorithm generates a sequence strongly convergent to a common solution. The common solution is identified as the unique solution of a certain variational inequality.

Keywords: Nonexpansive mapping, equilibrium problem, fixed point, convex minimization, averaged mapping, iterative method, variational inequality

### 1. Introduction

Consider two problems in a Hilbert space: A constrained convex optimization (CCO) and an equilibrium problem (EP) associated with a bifunction satisfying certain properties. It is known that CCO can be solved by the gradient-projection algorithm (GPA). It is also known that EP is equivalent to a fixed point problem. Therefore, both problems can be solved by fixed point algorithms.

Let H be a real Hilbert space and C a nonempty closed convex subset of H. A self-mapping T of C is said to be nonexpansive if $‖Tx−Ty‖≤‖x−y‖$ for all x,y∈ C. A self-mapping f of C is said to be a κ-contraction if $‖f(x)−f(y)‖≤κ‖x−y‖$ for all x,y ∈ C, where $κ∈[0,1)$ is a constant. We use Fix(T) to denote the set of fixed points of T; i.e., $Fix(T)={x∈C:Tx=x}$.

Recall that the equilibrium problem (EP) associated to a bifunction $ϕ:C×C→ℝ$ is to find a point u∈ C with the property

$$\phi(u,v)\ge 0,\quad \forall v\in C.\tag{1.1}$$

The set of solutions of EP (1.1) is denoted by EP(ϕ).

If ϕ is of the form $ϕ(u,v)=⟨Au,v−u⟩$ for all u,v∈ C, where A: C→ H is a mapping, then $u∈EP(ϕ)$ if and only if u ∈ C is a solution to the variational inequality (VI):

$$\langle Au, v-u\rangle\ge 0,\quad \forall v\in C.$$

Consequently, EP (1.1) includes, as special cases, numerous problems from various areas such as physics, optimization and economics, and has received a great deal of attention; see, e.g., [7, 8, 5, 9, 12, 13, 14, 16, 17, 18, 19, 22, 28] and the references therein.

In 2010, Tian [20] considered the following iterative method:

$$x_{n+1}=\alpha_n\gamma f(x_n)+(I-\mu\lambda_n F)Tx_n,\quad n\ge 0,\tag{1.2}$$

where F is Lipschitz and strongly monotone and $\gamma,\mu>0$ are constants. He proved that if the parameters $\{\alpha_n\}$ and $\{\lambda_n\}$ satisfy certain appropriate conditions, then the sequence $\{x_n\}$ generated by (1.2) converges strongly to the unique solution $x^*\in \mathrm{Fix}(T)$ of the variational inequality

$$\langle(\mu F-\gamma f)x^*, x-x^*\rangle\ge 0,\quad \forall x\in \mathrm{Fix}(T).$$

Consider the constrained convex minimization problem:

$$\min_{x\in C} g(x),\tag{1.3}$$

where $g:C\to\mathbb{R}$ is a real-valued convex function. We denote by U the set of solutions of (1.3). It is well known that the gradient-projection algorithm (GPA) can be used to solve (1.3). If g is continuously differentiable, then the GPA generates a sequence $\{x_n\}$ via the recursive formula:

$$x_{n+1}=P_C(x_n-\lambda_n\nabla g(x_n)),\quad n\ge 0,\tag{1.4}$$

where the initial guess $x_0\in C$ is chosen arbitrarily, and the parameters $\{\lambda_n\}$ are positive real numbers satisfying certain conditions. The convergence of the GPA (1.4) depends on the behavior of the gradient $\nabla g$. Indeed, it is known that if $\nabla g$ is $\alpha$-strongly monotone and L-Lipschitzian with constants $\alpha>0$ and $L>0$, then the operator

$$W_\lambda:=P_C(I-\lambda\nabla g)\tag{1.5}$$

is a contraction for $0<\lambda<2\alpha/L^2$. Consequently, the sequence $\{x_n\}$ generated by the GPA (1.4) converges in norm to the unique minimizer of (1.3) provided $\{\lambda_n\}$ is contained in a compact subset of $(0,2\alpha/L^2)$; in particular, one may take $\lambda_n\equiv\lambda\in(0,2\alpha/L^2)$.
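For illustration, the GPA recursion (1.4) is easy to implement. The sketch below uses a quadratic objective over a box in $\mathbb{R}^2$; the objective, constraint set, initial point and step size are illustrative choices made here, not taken from the paper.

```python
import numpy as np

# A minimal sketch of the gradient-projection algorithm (GPA) on an
# illustrative instance: g(x) = ||x - b||^2 over the box C = [0, 1]^2.
# Here grad g(x) = 2(x - b) is 2-strongly monotone and 2-Lipschitz
# (alpha = L = 2), so any constant step in (0, 2*alpha/L^2) = (0, 1) works.

def project_box(x, lo=0.0, hi=1.0):
    """Metric projection P_C onto the box [lo, hi]^d."""
    return np.clip(x, lo, hi)

def gpa(grad, x0, lam=0.4, n_iter=200):
    """Iterate x_{n+1} = P_C(x_n - lam * grad g(x_n))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = project_box(x - lam * grad(x))
    return x

b = np.array([2.0, -0.5])            # unconstrained minimizer (outside C)
grad_g = lambda x: 2.0 * (x - b)
x_star = gpa(grad_g, x0=np.array([0.5, 0.5]))
print(x_star)                        # approaches P_C(b) = [1, 0]
```

Since the gradient here is strongly monotone, the iteration contracts to the constrained minimizer $P_C(b)$, in line with the discussion above.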

However, if the gradient $\nabla g$ is not strongly monotone, the operator $W_\lambda$ is (in general) no longer a contraction. As a result, the GPA (1.4) may fail to converge strongly in an infinite-dimensional Hilbert space (see a counterexample in [23]). Nevertheless, if $\nabla g$ is Lipschitzian, the GPA (1.4) still converges in the weak topology.

In 2012, Tian and Liu [21] studied a composite iterative scheme by the viscosity approximation method [15, 26] for finding a common solution of an equilibrium problem and a constrained convex minimization problem:

$$\begin{cases}\phi(u_n,y)+\dfrac{1}{r_n}\langle y-u_n,u_n-x_n\rangle\ge 0,\quad \forall y\in C,\\[1mm] x_{n+1}=\alpha_n f(x_n)+(1-\alpha_n)T_nu_n,\quad n\ge 1,\end{cases}\tag{1.6}$$

where $\phi:C\times C\to\mathbb{R}$ is a bifunction, $\nabla g$ is L-Lipschitzian with $L\ge 0$ such that $U\cap EP(\phi)\ne\emptyset$, f is a contraction, $x_1\in C$, $\{\alpha_n\}\subset(0,1)$, $\{r_n\}\subset(0,\infty)$, $u_n=Q_{r_n}x_n$, $T_n$ is determined by the relation $P_C(I-\lambda_n\nabla g)=s_nI+(1-s_n)T_n$, $s_n=\frac{2-\lambda_nL}{4}$ and $\{\lambda_n\}\subset(0,\frac{2}{L})$. They proved that the sequence $\{x_n\}$ generated by (1.6) converges strongly to a point in $U\cap EP(\phi)$ under certain conditions.

In this paper, motivated by the above results, we will propose an iterative fixed point algorithm to find a point which is a common solution to an equilibrium problem associated with a bifunction and a constrained convex minimization problem. We will prove strong convergence of the algorithm under certain conditions and identify the limit of the sequence generated by the algorithm as the unique solution of some variational inequality.

### 2. Preliminaries

Let H be a real Hilbert space with inner product $⟨⋅,⋅⟩$ and norm $‖⋅‖$, respectively. Weak and strong convergence are denoted by the symbols $⇀$ and $→$, respectively. In a real Hilbert space H, we have the identity

$$\|\lambda x+(1-\lambda)y\|^2=\lambda\|x\|^2+(1-\lambda)\|y\|^2-\lambda(1-\lambda)\|x-y\|^2$$

for all x,y ∈ H and $λ∈ℝ$. Let C be a nonempty closed convex subset of H. Then the (nearest point or metric) projection on C is defined by

$$P_Cx:=\operatorname*{arg\,min}_{y\in C}\|x-y\|^2,\quad x\in H.$$

Note that PC is nonexpansive and characterized by the inequality (for z ∈ C)

$$z=P_Cx \iff \langle x-z,z-y\rangle\ge 0,\quad \forall y\in C.$$

The following inequality is convenient in applications. Recall that for all x,y in a Hilbert space H we have

$$\|x+y\|^2\le\|x\|^2+2\langle y,x+y\rangle.$$

Next, given a Hilbert space H and a function $\varphi:H\to\mathbb{R}$, we recall that $\varphi$ is weakly lower semicontinuous (weakly l.s.c.) if it is lower semicontinuous with respect to the weak topology of H; that is, $\varphi(\hat u)\le\liminf_{n\to\infty}\varphi(u_n)$ whenever $u_n\rightharpoonup\hat u$.

Lemma 2.1. ([10]) Let H be a real Hilbert space, C a closed convex subset of H and T:C → C a nonexpansive mapping with $Fix(T)≠∅$. If ${xn}$ is a sequence in C such that $xn⇀x$ and $(I−T)xn→y$, then (I-T)x=y. [This is known as the demiclosedness principle of nonexpansive mappings.]

Let $ϕ:C×C→ℝ$ be a bifunction. Throughout the paper we always assume that ϕ satisfies the following (standard) conditions [2]:

(A1) ϕ(x,x)=0 for all x∈ C;

(A2) ϕ is monotone, i.e., $ϕ(x,y)+ϕ(y,x)≤0$, for all x,y∈ C;

(A3) for each $x,y,z\in C$, $\lim_{t\downarrow 0}\phi(tz+(1-t)x,y)\le\phi(x,y)$;

(A4) for each $x\in C$, the mapping $y\mapsto\phi(x,y)$ is convex and weakly lower semicontinuous (l.s.c.).

Under these conditions, it is easy to see that the solution set EP(ϕ) of the equilibrium problem (1.1) is closed and convex.

Following Combettes and Hirstoaga [6], we can define, for each fixed $r>0$, a mapping $Qr:H→C$ by the equation

$$Q_rx=z\tag{2.1}$$

for x∈ H, where z∈ C is the unique solution of the inequality:

$$\phi(z,y)+\frac{1}{r}\langle y-z,z-x\rangle\ge 0,\quad \forall y\in C.$$

Lemma 2.2. ([6]) The mapping Qr possesses the properties:

(i) Qr is firmly nonexpansive, namely,

$$\|Q_rx-Q_ry\|^2\le\langle Q_rx-Q_ry,x-y\rangle,\quad x,y\in H;$$

(ii) $Fix(Qr)=EP(ϕ)$.

Property (ii) shows that EP (1.1) is equivalent to the fixed point problem $Q_rx=x$, and thus EP (1.1) can be solved by fixed point methods.

Definition 2.3. ([21]) A mapping $T:H→H$ is said to be an averaged mapping (av, for short) if $T=(1−α)I+αS$, where $α∈(0,1)$, I is the identity map and S:H→ H is nonexpansive. In this case, we say that T is α-averaged (α-av, for short).

Note that firmly nonexpansive mappings (e.g., projections) are $\tfrac12$-av.

Proposition 2.4. ([4, 23]) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^N$ is averaged, then so is the composite $T_1\cdots T_N$. In particular, if $T_1$ is $\alpha_1$-av and $T_2$ is $\alpha_2$-av, where $\alpha_1,\alpha_2\in(0,1)$, then the composite $T_1T_2$ is $\alpha$-av, where $\alpha=\alpha_1+\alpha_2-\alpha_1\alpha_2$.

Definition 2.5. A nonlinear operator G with domain D(G) and range R(G) both in H is said to be:

(i) β-strongly monotone if there exists $β>0$ such that

$$\langle x-y,Gx-Gy\rangle\ge\beta\|x-y\|^2,\quad x,y\in D(G);$$

(ii) ν-inverse strongly monotone (for short, ν-ism) if there exists $ν>0$ such that

$$\langle x-y,Gx-Gy\rangle\ge\nu\|Gx-Gy\|^2,\quad x,y\in D(G).$$

It can be easily seen that any projection PC is a 1-ism.

Inverse strongly monotone (also referred to as co-coercive) operators have been used extensively in practical problems from various areas, for instance, traffic assignment problems; see, for example, [3, 11] and the references therein.

Proposition 2.6. ([4, 23]) Let T be an operator of H into itself.

• (i) T is nonexpansive if and only if the complement I−T is $\tfrac12$-ism.

• (ii) If T is $\nu$-ism, then for $\mu>0$, $\mu T$ is $\frac{\nu}{\mu}$-ism.

• (iii) T is averaged if and only if the complement I−T is $\nu$-ism for some $\nu>\tfrac12$.

Indeed, for $\alpha\in(0,1)$, T is $\alpha$-av if and only if I−T is $\frac{1}{2\alpha}$-ism.

Lemma 2.7. ([1, 25]) Assume ${an}$ is a sequence of nonnegative real numbers such that

$$a_{n+1}\le(1-\gamma_n)a_n+\gamma_n\nu_n+\mu_n,$$

where ${γn}$ is a sequence in [0,1], ${μn}$ is a sequence of nonnegative real numbers and ${νn}$ is a sequence in $ℝ$ such that

$$\sum_{n=1}^{\infty}\gamma_n=\infty,\qquad \limsup_{n\to\infty}\nu_n\le 0,\qquad \sum_{n=1}^{\infty}\mu_n<\infty.$$

Then $limn→∞an=0$.
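Lemma 2.7 can be checked numerically. The sequences below are illustrative choices made here, not from the paper; they satisfy the three hypotheses: $\gamma_n=\nu_n=\frac{1}{n+1}$ (non-summable $\gamma_n$, $\limsup\nu_n=0$) and $\mu_n=\frac{1}{(n+1)^2}$ (summable).

```python
# Numerical illustration of Lemma 2.7 with illustrative sequences:
#   gamma_n = 1/(n+1)   -> sum gamma_n = infinity
#   nu_n    = 1/(n+1)   -> limsup nu_n = 0 <= 0
#   mu_n    = 1/(n+1)^2 -> sum mu_n < infinity
# The lemma predicts a_n -> 0 regardless of the starting value.
a = 5.0
for n in range(1, 200001):
    gamma = 1.0 / (n + 1)
    nu = 1.0 / (n + 1)
    mu = 1.0 / (n + 1) ** 2
    a = (1 - gamma) * a + gamma * nu + mu
print(a)  # decays roughly like O(log(n)/n) toward 0
```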

### 3. An Iterative Scheme

In this section we introduce an iterative fixed point algorithm for finding a point in U ∩ EP(ϕ), that is, a common solution to the equilibrium and optimization problems (1.1) and (1.3).

In the rest of this paper, we always assume that C is a nonempty closed convex subset of a Hilbert space H and $g:C\to\mathbb{R}$ is a real-valued continuously differentiable, convex function whose gradient $\nabla g$ is L-Lipschitz for some $L\ge 0$. Recall that U denotes the set of solutions of the minimization problem (1.3). Then a point $x^*\in U$ if and only if $x^*\in \mathrm{Fix}(W_\lambda)$, where $W_\lambda$ is defined in (1.5); that is,

$$x^*=P_C(I-\lambda\nabla g)x^*$$

for each $λ>0$. Further helpful properties of the mapping $PC(I−λ∇g)$ are outlined in [23] and summarized below.

Lemma 3.1. Suppose a continuously differentiable, convex function g has an L-Lipschitz continuous gradient $\nabla g$, and let $\lambda\in(0,2/L)$ be given.

• (i) The mapping $P_C(I-\lambda\nabla g)$ is $\alpha$-av, where $\alpha=\frac{2+\lambda L}{4}\in(0,1)$; namely, one can write $P_C(I-\lambda\nabla g)=sI+(1-s)T$, where $s=\frac{2-\lambda L}{4}$ and $T=\frac{1}{2+\lambda L}\big[4P_C(I-\lambda\nabla g)-(2-\lambda L)I\big]$ is nonexpansive.

• (ii) The mapping $h(\lambda)x:=P_C(I-\lambda\nabla g)x$ is uniformly Lipschitz continuous in $\lambda\in[0,\infty)$ over bounded sets of $x\in C$.

Proof. (i) has been proved in [23]. To prove (ii), take $\lambda,\lambda'\in[0,\infty)$ and let $r>0$ be given. Setting $M_r:=\sup\{\|\nabla g(x)\|:\|x\|\le r,\ x\in C\}$ and noting that $P_C$ is nonexpansive, we get

$$\|h(\lambda)x-h(\lambda')x\|=\|P_C(I-\lambda\nabla g)x-P_C(I-\lambda'\nabla g)x\|\le|\lambda-\lambda'|\,\|\nabla g(x)\|\le M_r|\lambda-\lambda'|.$$

Lemma 3.2. ([27, Lemma 3.1]) Suppose $F:C\to H$ is $\alpha$-Lipschitz and $\eta$-strongly monotone and let $0<\mu<\frac{2\eta}{\alpha^2}$. Then, for $\nu\in(0,1)$, the mapping $U_\nu$ defined by

$$U_\nu x:=x-\mu\nu Fx,\quad x\in C$$

is a $(1-\xi\nu)$-contraction from C into H, with $\xi:=1-\sqrt{1-\mu(2\eta-\mu\alpha^2)}\in(0,1]$.

The following is the main result of this paper.

Theorem 3.3. Assume the bifunction $\phi:C\times C\to\mathbb{R}$ satisfies the standard conditions (A1)-(A4), and the objective function $g:C\to\mathbb{R}$ is continuously differentiable and convex such that the gradient $\nabla g$ is L-Lipschitz with $L\ge 0$. Assume $U\cap EP(\phi)\ne\emptyset$. Let $Q_r$ be defined by (2.1) for $r>0$. Let f be a $\kappa$-contraction of C with $\kappa\in[0,1)$ and $F:C\to H$ an $\alpha$-Lipschitz and $\eta$-strongly monotone operator on C with $\alpha>0$ and $\eta>0$. Assume $0<\mu<\frac{2\eta}{\alpha^2}$ and $0<\gamma<\frac{\xi}{\kappa}$ with $\xi=1-\sqrt{1-\mu(2\eta-\mu\alpha^2)}$. Suppose $\{\alpha_n\}$, $\{\beta_n\}$ and $\{r_n\}$ are real sequences satisfying the following conditions:

(B1) $\{\alpha_n\}\subset[0,1]$, $\lim_{n\to\infty}\alpha_n=0$ and $\sum_{n=1}^{\infty}\alpha_n=\infty$;

(B2) $\{\beta_n\}\subset[0,1]$, $\lim_{n\to\infty}\beta_n=0$ and $\sum_{n=1}^{\infty}|\beta_{n+1}-\beta_n|<\infty$;

(B3) $\{r_n\}\subset(0,\infty)$, $\liminf_{n\to\infty}r_n>0$ and $\sum_{n=1}^{\infty}|r_{n+1}-r_n|<\infty$;

(B4) $\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty$ or $\lim_{n\to\infty}\frac{\alpha_n}{\alpha_{n+1}}=1$.

Let {xn} be a sequence generated by the iteration process:

$$\begin{cases}u_n=Q_{r_n}x_n,\\ y_n=P_C\big(\alpha_n\gamma f(x_n)+(I-\alpha_n\mu F)T_nu_n\big),\\ x_{n+1}=(1-\beta_n)y_n+\beta_nT_ny_n,\quad n\ge 1,\end{cases}\tag{3.2}$$

where the initial point $x_1\in C$ is selected arbitrarily, $\{\lambda_n\}\subset(0,\frac{2}{L})$, and $T_n$ is determined by the relation $P_C(I-\lambda_n\nabla g)=s_nI+(1-s_n)T_n$ with $s_n=\frac{2-\lambda_nL}{4}$ (cf. Lemma 3.1 (i)). Namely,

$$T_n=\frac{1}{1-s_n}\big(P_C(I-\lambda_n\nabla g)-s_nI\big).\tag{3.3}$$

Suppose ${λn}$ satisfies the condition

(B5) $0<\underline{\lambda}\le\lambda_n<\frac{2}{L}$ for all n, and $\sum_{n=1}^{\infty}|\lambda_{n+1}-\lambda_n|<\infty$.

Then the sequences $\{x_n\}$ and $\{u_n\}$ defined by (3.2) converge strongly to a point $q\in U\cap EP(\phi)$, where $q=P_{U\cap EP(\phi)}(I-\mu F+\gamma f)(q)$ is the unique solution of the following variational inequality:

$$\langle(\mu F-\gamma f)q,q-x\rangle\le 0,\quad \forall x\in U\cap EP(\phi).$$

To prove Theorem 3.3 we first establish some lemmas.

Lemma 3.4. Let Tn be defined by (3.3). Let $r>0$ and set

$$M(r):=\sup\{4\|\nabla g(y)\|+L\|P_Cy-y\|:\|y\|\le r,\ y\in C\}<\infty.$$

Then, for $y∈C$ such that $‖y‖≤r$, we have

$$\|T_{n+1}y-T_ny\|\le M(r)|\lambda_{n+1}-\lambda_n|.$$

Proof. By definition of Tn, we have for $y∈C$

$$\begin{aligned}T_{n+1}y-T_ny&=\frac{1}{1-s_{n+1}}P_C(y-\lambda_{n+1}\nabla g(y))-\frac{s_{n+1}}{1-s_{n+1}}y-\frac{1}{1-s_n}P_C(y-\lambda_n\nabla g(y))+\frac{s_n}{1-s_n}y\\&=\frac{1}{1-s_{n+1}}\big[P_C(y-\lambda_{n+1}\nabla g(y))-P_C(y-\lambda_n\nabla g(y))\big]\\&\qquad+\Big(\frac{1}{1-s_{n+1}}-\frac{1}{1-s_n}\Big)P_C(y-\lambda_n\nabla g(y))+\Big(\frac{s_n}{1-s_n}-\frac{s_{n+1}}{1-s_{n+1}}\Big)y\\&=\frac{1}{1-s_{n+1}}\big[P_C(y-\lambda_{n+1}\nabla g(y))-P_C(y-\lambda_n\nabla g(y))\big]\\&\qquad+\frac{s_{n+1}-s_n}{(1-s_n)(1-s_{n+1})}\big(P_C(y-\lambda_n\nabla g(y))-y\big).\end{aligned}$$

Since $P_C$ is nonexpansive and $s_n=\frac{2-\lambda_nL}{4}$ (thus $1-s_n=\frac{2+\lambda_nL}{4}\ge\frac12$), it turns out that (noting $\lambda_nL<2$)

$$\begin{aligned}\|T_{n+1}y-T_ny\|&\le 2\|\nabla g(y)\|\,|\lambda_{n+1}-\lambda_n|+L|\lambda_{n+1}-\lambda_n|\big(\|P_C(y-\lambda_n\nabla g(y))-P_Cy\|+\|P_Cy-y\|\big)\\&\le 2\|\nabla g(y)\|\,|\lambda_{n+1}-\lambda_n|+L|\lambda_{n+1}-\lambda_n|\big(\lambda_n\|\nabla g(y)\|+\|P_Cy-y\|\big)\\&\le\big(4\|\nabla g(y)\|+L\|P_Cy-y\|\big)|\lambda_{n+1}-\lambda_n|.\end{aligned}$$

This finishes the proof.

Lemma 3.5. For $x,x′∈C$ and $r,r′>0$, we have

$$\|Q_rx-Q_{r'}x\|\le\Big|1-\frac{r}{r'}\Big|\,\|Q_{r'}x-x\|\tag{3.4}$$

and

$$\|Q_rx-Q_{r'}x'\|\le\|x-x'\|+\Big|1-\frac{r}{r'}\Big|\,\|Q_{r'}x-x\|.$$

Proof. Set $u=Q_rx$ and $u'=Q_{r'}x$. By definition, we have

$$\phi(u,y)+\frac{1}{r}\langle y-u,u-x\rangle\ge 0\quad\text{and}\quad\phi(u',y)+\frac{1}{r'}\langle y-u',u'-x\rangle\ge 0,\quad \forall y\in C.$$

In particular,

$$\phi(u,u')+\frac{1}{r}\langle u'-u,u-x\rangle\ge 0.$$

Similarly, we have

$$\phi(u',u)+\frac{1}{r'}\langle u-u',u'-x\rangle\ge 0.$$

Adding up the last two relations and using the monotonicity $ϕ(u,u′)+ϕ(u′,u)≤0$, we obtain

$$\Big\langle u'-u,\ \frac{1}{r}(u-x)-\frac{1}{r'}(u'-x)\Big\rangle\ge 0.$$

This can be rewritten as

$$\frac{1}{r}\|u'-u\|^2\le\Big(\frac{1}{r}-\frac{1}{r'}\Big)\langle u'-u,u'-x\rangle\le\Big|\frac{1}{r}-\frac{1}{r'}\Big|\,\|u'-u\|\,\|u'-x\|$$

and (3.4) follows immediately.

Next, using the nonexpansivity of $Q_{r'}$, we get

$$\begin{aligned}\|Q_rx-Q_{r'}x'\|&\le\|Q_{r'}x'-Q_{r'}x\|+\|Q_rx-Q_{r'}x\|\\&\le\|x-x'\|+\Big|1-\frac{r}{r'}\Big|\,\|Q_{r'}x-x\|.\end{aligned}$$

The proof of the lemma is complete.

Proof of Theorem 3.3. First we make a convention: for the sake of convenience, we will use $M>0$ to stand for an appropriate constant, which may differ from one estimate to another. The proof is divided into five steps.

Step 1. The sequences $\{x_n\}$ and $\{u_n\}$ are bounded. To see this, we take a point $p\in U\cap EP(\phi)$ and derive from (3.2) that (noting that $Q_rp=p$ and $T_np=p$ for all $r>0$ and $n\ge 1$, and that $P_C$ is nonexpansive)

$$\begin{aligned}\|x_{n+1}-p\|&\le(1-\beta_n)\|y_n-p\|+\beta_n\|T_ny_n-p\|\le\|y_n-p\|\\&\le\|\alpha_n\gamma f(x_n)+(I-\alpha_n\mu F)T_nu_n-p\|\\&=\|\alpha_n\gamma[f(x_n)-f(p)]+\alpha_n[\gamma f(p)-\mu F(p)]+(I-\alpha_n\mu F)T_nu_n-(I-\alpha_n\mu F)p\|.\end{aligned}$$

Now since f is a $\kappa$-contraction, $I-\alpha_n\mu F$ is a $(1-\alpha_n\xi)$-contraction, $T_n$ is nonexpansive, and $\|u_n-p\|=\|Q_{r_n}x_n-p\|\le\|x_n-p\|$, it follows from the last relation that

$$\begin{aligned}\|x_{n+1}-p\|&\le(1-\alpha_n(\xi-\gamma\kappa))\|x_n-p\|+\alpha_n\|\gamma f(p)-\mu F(p)\|\\&\le\max\Big\{\|x_n-p\|,\ \frac{\|\gamma f(p)-\mu F(p)\|}{\xi-\gamma\kappa}\Big\}.\end{aligned}$$

As a result, we get, by induction,

$$\|x_n-p\|\le\max\Big\{\|x_1-p\|,\ \frac{\|\gamma f(p)-\mu F(p)\|}{\xi-\gamma\kappa}\Big\}$$

for all $n≥1$. Hence, ${xn}$ is bounded, which implies that ${un}$, ${yn}$, ${f(xn)}$, ${μF(Tnun)}$ and ${Tnyn}$ are all bounded.

Step 2. Asymptotic regularity of $\{x_n\}$: $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$. To see this, we use the definition of the algorithm (3.2) to derive that, after some manipulations,

$$\begin{aligned}x_{n+2}-x_{n+1}&=(1-\beta_{n+1})y_{n+1}+\beta_{n+1}T_{n+1}y_{n+1}-(1-\beta_n)y_n-\beta_nT_ny_n\\&=(1-\beta_{n+1})(y_{n+1}-y_n)+\beta_{n+1}(T_{n+1}y_{n+1}-T_{n+1}y_n)\\&\qquad+(\beta_n-\beta_{n+1})(y_n-T_{n+1}y_n)+\beta_n(T_{n+1}y_n-T_ny_n).\end{aligned}\tag{3.6}$$

Since Tn+1 is nonexpansive and (yn) is bounded, and by Lemma 3.4, we obtain from (3.6),

$$\|x_{n+2}-x_{n+1}\|\le\|y_{n+1}-y_n\|+\big(|\beta_{n+1}-\beta_n|+\beta_n|\lambda_{n+1}-\lambda_n|\big)M,\tag{3.7}$$

where $M>0$ is a sufficiently large constant.

To estimate $‖yn+1−yn‖$, we again use (3.2) and the nonexpansiveness of PC to get

$$\begin{aligned}\|y_{n+1}-y_n\|&\le\big\|\alpha_{n+1}\gamma f(x_{n+1})+(I-\alpha_{n+1}\mu F)T_{n+1}u_{n+1}-\alpha_n\gamma f(x_n)-(I-\alpha_n\mu F)T_nu_n\big\|\\&=\big\|(\alpha_{n+1}-\alpha_n)[\gamma f(x_{n+1})-\mu F(T_{n+1}u_{n+1})]+\alpha_n\gamma[f(x_{n+1})-f(x_n)]\\&\qquad+(I-\alpha_n\mu F)T_{n+1}u_{n+1}-(I-\alpha_n\mu F)T_nu_n\big\|.\end{aligned}\tag{3.8}$$

Observe, by Lemmas 3.4 and 3.5, that

$$\begin{aligned}\|T_{n+1}u_{n+1}-T_nu_n\|&\le\|T_{n+1}u_{n+1}-T_{n+1}u_n\|+\|T_{n+1}u_n-T_nu_n\|\\&\le\|u_{n+1}-u_n\|+\|T_{n+1}u_n-T_nu_n\|\\&=\|Q_{r_{n+1}}x_{n+1}-Q_{r_n}x_n\|+\|T_{n+1}u_n-T_nu_n\|\\&\le\|x_{n+1}-x_n\|+\Big|1-\frac{r_n}{r_{n+1}}\Big|\,\|Q_{r_{n+1}}x_n-x_n\|+M|\lambda_{n+1}-\lambda_n|\\&\le\|x_{n+1}-x_n\|+\Big(\Big|1-\frac{r_n}{r_{n+1}}\Big|+|\lambda_{n+1}-\lambda_n|\Big)M.\end{aligned}\tag{3.9}$$

Now, since f is a $\kappa$-contraction and $I-\alpha_n\mu F$ is a $(1-\xi\alpha_n)$-contraction, also using (3.9), we get from (3.8) that

$$\begin{aligned}\|y_{n+1}-y_n\|&\le M|\alpha_{n+1}-\alpha_n|+\kappa\gamma\alpha_n\|x_{n+1}-x_n\|\\&\qquad+(1-\xi\alpha_n)\Big[\|x_{n+1}-x_n\|+\Big(\Big|1-\frac{r_n}{r_{n+1}}\Big|+|\lambda_{n+1}-\lambda_n|\Big)M\Big]\\&\le[1-(\xi-\kappa\gamma)\alpha_n]\|x_{n+1}-x_n\|+\Big(|\alpha_{n+1}-\alpha_n|+\Big|1-\frac{r_n}{r_{n+1}}\Big|+|\lambda_{n+1}-\lambda_n|\Big)M.\end{aligned}\tag{3.10}$$

Substituting (3.10) into (3.7) yields

$$\|x_{n+2}-x_{n+1}\|\le(1-\hat{\alpha}_n)\|x_{n+1}-x_n\|+\nu_n+\delta_n,\tag{3.11}$$

where $\hat{\alpha}_n=(\xi-\kappa\gamma)\alpha_n$, $\nu_n=M|\alpha_{n+1}-\alpha_n|$ and

$$\delta_n=\Big(|\beta_{n+1}-\beta_n|+\Big|1-\frac{r_n}{r_{n+1}}\Big|+(1+\beta_n)|\lambda_{n+1}-\lambda_n|\Big)M.$$

By condition (B1), we have $\sum_n\hat{\alpha}_n=\infty$. By conditions (B2)-(B5), we always have $\sum_n\delta_n<\infty$; moreover, $\sum_n\nu_n<\infty$ if $\sum_n|\alpha_{n+1}-\alpha_n|<\infty$, or $\lim_n(\nu_n/\hat{\alpha}_n)=0$ if $\lim_n(\alpha_n/\alpha_{n+1})=1$. So Lemma 2.7 is applicable to (3.11), which yields $\|x_{n+1}-x_n\|\to 0$, as claimed.

Step 3. We claim that

(3a) $‖xn−yn‖→0$,

(3b) $‖xn−un‖→0$,

(3c) $‖PC(I−λn∇g)xn−xn‖→0$.

Indeed, since

$$\begin{aligned}\|x_n-y_n\|&\le\|x_n-x_{n+1}\|+\|x_{n+1}-y_n\|=\|x_n-x_{n+1}\|+\beta_n\|y_n-T_ny_n\|\\&\le\|x_n-x_{n+1}\|+M\beta_n\to 0\quad(\text{as }\beta_n\to 0),\end{aligned}$$

and (3a) follows. To show (3b), we first observe that the firm nonexpansivity of $Q_{r_n}$ implies that (noting $u_n=Q_{r_n}x_n$ and $\mathrm{Fix}(Q_{r_n})=EP(\phi)$)

$$\|u_n-p\|^2\le\|x_n-p\|^2-\|u_n-x_n\|^2,\quad p\in EP(\phi).$$

For the sake of brevity, we shall use the $O(αn)$ notation in our argument below. For $p∈U∩EP(ϕ)$, we have (noting again that PC is nonexpansive)

$$\begin{aligned}\|x_{n+1}-p\|^2&=\|(1-\beta_n)(y_n-p)+\beta_n(T_ny_n-p)\|^2\le\|y_n-p\|^2\\&\le\|\alpha_n\gamma f(x_n)+(I-\alpha_n\mu F)T_nu_n-p\|^2\\&=\|\alpha_n(\gamma f(x_n)-\mu F(p))+(I-\alpha_n\mu F)T_nu_n-(I-\alpha_n\mu F)p\|^2\\&=\alpha_n^2\|\gamma f(x_n)-\mu F(p)\|^2+\|(I-\alpha_n\mu F)T_nu_n-(I-\alpha_n\mu F)p\|^2\\&\qquad+2\alpha_n\langle\gamma f(x_n)-\mu F(p),(I-\alpha_n\mu F)T_nu_n-(I-\alpha_n\mu F)p\rangle\\&\le\alpha_n^2M+(1-\xi\alpha_n)^2\|u_n-p\|^2+2\alpha_nM(1-\xi\alpha_n)\|u_n-p\|\\&\le\|u_n-p\|^2+O(\alpha_n)\\&\le\|x_n-p\|^2-\|u_n-x_n\|^2+O(\alpha_n).\end{aligned}$$

It turns out that

$$\|x_n-u_n\|^2\le\|x_n-p\|^2-\|x_{n+1}-p\|^2+O(\alpha_n)=O(\|x_{n+1}-x_n\|)+O(\alpha_n)\to 0.$$

This proves (3b). To verify (3c), it suffices to verify that $P_C(I-\lambda_n\nabla g)u_n-u_n\to 0$, since $\|x_n-u_n\|\to 0$ by (3b). Since $P_C(I-\lambda_n\nabla g)=s_nI+(1-s_n)T_n$, it follows that

$$\begin{aligned}\|P_C(I-\lambda_n\nabla g)u_n-u_n\|&=(1-s_n)\|T_nu_n-u_n\|\le\|T_nu_n-y_n\|+\|y_n-u_n\|\\&\le\alpha_n\|\gamma f(x_n)-\mu F(T_nu_n)\|+\|y_n-u_n\|=O(\alpha_n)+\|y_n-u_n\|\to 0.\end{aligned}$$

Step 4. We have the following asymptotic variational inequality:

$$\limsup_{n\to\infty}\langle\gamma f(q)-\mu F(q),y_n-q\rangle\le 0,\tag{3.12}$$

where q is the unique fixed point of the contraction $P_{U\cap EP(\phi)}(I-\mu F+\gamma f)$; namely, $q=P_{U\cap EP(\phi)}(I-\mu F+\gamma f)q$. Alternatively, q is the unique solution of the variational inequality

$$\langle\gamma f(q)-\mu F(q),y-q\rangle\le 0,\quad \forall y\in U\cap EP(\phi).\tag{3.13}$$

To prove (3.12), take a subsequence $\{y_{n_i}\}$ of $\{y_n\}$ weakly convergent to a point $\hat{y}\in C$ and such that

$$\limsup_{n\to\infty}\langle\gamma f(q)-\mu F(q),y_n-q\rangle=\lim_{i\to\infty}\langle\gamma f(q)-\mu F(q),y_{n_i}-q\rangle=\langle\gamma f(q)-\mu F(q),\hat{y}-q\rangle.$$

By virtue of VI (3.13), it suffices to show $\hat{y}\in U\cap EP(\phi)$. To see $\hat{y}\in U$, we use (3c) to get (noticing $\|x_n-y_n\|\to 0$)

$$P_C(I-\lambda_{n_i}\nabla g)y_{n_i}-y_{n_i}\to 0.\tag{3.14}$$

We may assume $\lambda_{n_i}\to\hat{\lambda}$; note that $\hat{\lambda}\in(0,2/L]$ due to condition (B5). It is then not hard (see Lemma 3.1 (ii)) to deduce from (3.14) that

$$P_C(I-\hat{\lambda}\nabla g)y_{n_i}-y_{n_i}\to 0.$$

The demiclosedness principle of nonexpansive mappings then ensures that $\hat{y}\in \mathrm{Fix}(P_C(I-\hat{\lambda}\nabla g))=U$. It remains to show $\hat{y}\in EP(\phi)$. Since $u_n=Q_{r_n}x_n$, it follows from the definition of $Q_r$ and the monotonicity of $\phi$ that

$$\frac{1}{r_n}\langle y-u_n,u_n-x_n\rangle\ge-\phi(u_n,y)\ge\phi(y,u_n),\quad y\in C.$$

This results in $\limsup_{n\to\infty}\phi(y,u_n)\le 0$ for each $y\in C$, due to the facts that $\|u_n-x_n\|\to 0$ and $\liminf_n r_n>0$. As $\phi(y,\cdot)$ is weakly l.s.c. and $u_{n_i}\rightharpoonup\hat{y}$, it turns out that $\phi(y,\hat{y})\le 0$ for each $y\in C$. Now set $y_t=ty+(1-t)\hat{y}\in C$ with $t\in(0,1)$. We then have, by the standard conditions (A1)-(A4),

$$0=\phi(y_t,y_t)=\phi(y_t,ty+(1-t)\hat{y})\le t\phi(y_t,y)+(1-t)\phi(y_t,\hat{y})\le t\phi(y_t,y).$$

Hence, $\phi(y_t,y)\ge 0$. Letting $t\downarrow 0$ and using (A3) yields $\phi(\hat{y},y)\ge 0$ for each $y\in C$. This asserts $\hat{y}\in EP(\phi)$, and the proof of Step 4 is complete.

Step 5. Strong convergence of $\{x_n\}$: $x_n\to q$ in norm, where q is the solution of VI (3.13). Setting $z_n:=\alpha_n\gamma f(x_n)+(I-\alpha_n\mu F)T_nu_n$ (thus, $y_n=P_Cz_n$), we derive that

$$\begin{aligned}\|x_{n+1}-q\|^2&=\|(1-\beta_n)(y_n-q)+\beta_n(T_ny_n-q)\|^2\le\|y_n-q\|^2\\&\le\|\alpha_n\gamma f(x_n)+(I-\alpha_n\mu F)T_nu_n-q\|^2\\&=\|[(I-\alpha_n\mu F)T_nu_n-(I-\alpha_n\mu F)q]+\alpha_n(\gamma f(x_n)-\mu F(q))\|^2\\&\le\|(I-\alpha_n\mu F)T_nu_n-(I-\alpha_n\mu F)q\|^2+2\alpha_n\langle\gamma f(x_n)-\mu F(q),z_n-q\rangle\\&\le(1-\xi\alpha_n)^2\|T_nu_n-q\|^2+2\gamma\alpha_n\langle f(x_n)-f(q),x_n-q\rangle\\&\qquad+2\gamma\alpha_n\langle f(x_n)-f(q),z_n-x_n\rangle+2\alpha_n\langle\gamma f(q)-\mu F(q),z_n-q\rangle\\&\le(1-\xi\alpha_n)^2\|u_n-q\|^2+2\gamma\kappa\alpha_n\|x_n-q\|^2\\&\qquad+2\gamma\kappa\alpha_n\|x_n-q\|\,\|z_n-x_n\|+2\alpha_n\langle\gamma f(q)-\mu F(q),z_n-q\rangle\\&\le(1-2(\xi-\gamma\kappa)\alpha_n)\|x_n-q\|^2+\xi^2\alpha_n^2M^2\\&\qquad+2\gamma\kappa\alpha_nM\|z_n-x_n\|+2\alpha_n\langle\gamma f(q)-\mu F(q),z_n-q\rangle.\end{aligned}$$

Here $M≥‖xn−q‖$ for all n. We can rewrite the last relation as

$$\|x_{n+1}-q\|^2\le(1-\tilde{\alpha}_n)\|x_n-q\|^2+\tilde{\alpha}_n\tilde{\delta}_n,\tag{3.16}$$

where $\tilde{\alpha}_n=2(\xi-\gamma\kappa)\alpha_n$ and

$$\tilde{\delta}_n=\frac{\xi^2M^2\alpha_n+2\gamma\kappa M\|z_n-x_n\|+2\langle\gamma f(q)-\mu F(q),z_n-q\rangle}{2(\xi-\gamma\kappa)}.$$

Since $yn=PCzn$ and $Tnun∈C$, we obtain

$$\begin{aligned}\|y_n-z_n\|=\|P_Cz_n-z_n\|&\le\|P_Cz_n-T_nu_n\|+\|T_nu_n-z_n\|\\&\le 2\|z_n-T_nu_n\|=2\alpha_n\|\gamma f(x_n)-\mu F(T_nu_n)\|\to 0.\end{aligned}$$

It turns out from (3a) of Step 3 and (3.12) that $‖zn−xn‖→0$ and

$$\limsup_{n\to\infty}\langle\gamma f(q)-\mu F(q),z_n-q\rangle\le 0.$$

It is now immediately clear that $\sum_{n=1}^{\infty}\tilde{\alpha}_n=\infty$ and $\limsup_{n\to\infty}\tilde{\delta}_n\le 0$. This enables us to apply Lemma 2.7 to the relation (3.16) to arrive at $\|x_n-q\|\to 0$; that is, $x_n\to q$ in norm.

Remark 3.6. The choices of the parameters $\{\alpha_n\}$, $\{\beta_n\}$, $\{r_n\}$ are easy. For instance, any decreasing null sequence $\{\beta_n\}$ satisfies (B2). More precisely, the choices

$$\alpha_n=\frac{1}{n^a},\qquad \beta_n=\frac{1}{n^b},\qquad r_n=r+\frac{1}{n^c},\qquad n\ge 1,$$

where $0<a\le 1$ and $b,c,r>0$, satisfy (B1)-(B4). Also, the choice $\lambda_n=\underline{\lambda}+\frac{1}{n^d}$, where $\underline{\lambda}>0$ and $d>0$, satisfies (B5) for n big enough.

Remark 3.7. Theorem 3.3 is a generalization of [21, Theorem 3.2]. In addition, we used the condition $\liminf_{n\to\infty}\lambda_n>0$ for the stepsizes $\{\lambda_n\}$. However, [21, Theorem 3.2] required that $\lambda_n\to\frac{2}{L}$, which needs the exact value L of the Lipschitz constant of the gradient $\nabla g$ of the objective function g. In practical problems, the exact value of L is often unavailable; consequently, verifying the condition $\lambda_n\to\frac{2}{L}$ can be hard.

### 4. Numerical Test

In this section, we give a numerical example to illustrate the scheme (3.2) given in Theorem 3.3.

Example 4.1. Let $C=[-20,20]\subset\mathbb{R}$ and define $\phi(x,y)=-4x^2+3xy+y^2$ for $x,y\in\mathbb{R}$. It can easily be verified that $\phi$ satisfies the conditions (A1)-(A4). Let us deduce a formula for $Q_rx$, where $r>0$ and $x\in\mathbb{R}$. For $y\in[-20,20]$, we have

$$\phi(z,y)+\frac{1}{r}\langle y-z,z-x\rangle\ge 0\iff ry^2+\big((3r+1)z-x\big)y+xz-(4r+1)z^2\ge 0.$$

Set

$$G(y):=ry^2+\big((3r+1)z-x\big)y+xz-(4r+1)z^2.$$

Then G(y) is a quadratic function of y with coefficients

$$a:=r,\qquad b:=(3r+1)z-x,\qquad c:=xz-(4r+1)z^2.$$

So its discriminant is $\Delta:=b^2-4ac=\big[(5r+1)z-x\big]^2.$ Since $G(y)\ge 0$ for all $y\in C$, it turns out that $\Delta\le 0$; that is, $\big[(5r+1)z-x\big]^2\le 0$. Therefore, $z=\frac{x}{5r+1}$, which yields $Q_r(x)=\frac{x}{5r+1}$.
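The closed form $Q_r(x)=\frac{x}{5r+1}$ can be sanity-checked numerically: the defining inequality of $Q_r$ should hold at $z=\frac{x}{5r+1}$ for every $y\in C$. The sampled values of r, x and y below are arbitrary illustrative choices:

```python
# Numerical check (illustrative) that z = x/(5r+1) satisfies the
# defining inequality of Q_r for phi(z, y) = -4 z^2 + 3 z y + y^2
# on C = [-20, 20]:
#     phi(z, y) + (1/r) * (y - z) * (z - x) >= 0   for all y in C.
def phi(z, y):
    return -4 * z**2 + 3 * z * y + y**2

def resolvent_residual(r, x, y):
    z = x / (5 * r + 1)              # claimed closed form of Q_r(x)
    return phi(z, y) + (y - z) * (z - x) / r

# Sample (r, x) pairs and a grid of y-values in [-20, 20]:
ok = all(
    resolvent_residual(r, x, y) >= -1e-9
    for r in (0.5, 1.0, 3.0)
    for x in (-20.0, -7.0, 0.0, 4.0, 20.0)
    for y in [-20 + 0.5 * k for k in range(81)]
)
print(ok)  # True: the quadratic G(y) is nonnegative on the grid
```

Indeed, at $z=\frac{x}{5r+1}$ the discriminant of $G$ vanishes, so $G(y)=r\,(y-y_0)^2\ge 0$ for all y.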

Thus, by Lemma 2.2, we get $EP(\phi)=\{0\}$. Let $\alpha_n=\frac{1}{n}$, $\beta_n=\frac{1}{10n}$, $\lambda_n=\frac{1}{4}$, $r_n=1$ for all $n\in\mathbb{N}$, $F(x)=\frac{x}{4}$ (hence, $\alpha=\frac{1}{4}$ and $\eta=\frac{1}{4}$), $f(x)=\frac{x}{2}$, $g(x)=x^2$ (so $L=2$), $\mu=2$ and $\gamma=\frac{1}{2}$. Hence $U\cap EP(\phi)=\{0\}$ and $s_n=\frac{2-\lambda_nL}{4}=\frac{3}{8}$. Also, $T_nx=\frac{x}{5}$ for all $x\in[-20,20]$. Indeed,

$$P_C(I-\lambda_n\nabla g)x=P_{[-20,20]}\Big(x-\frac{x}{2}\Big)=\frac{x}{2}=\frac{3}{8}x+\frac{5}{8}T_nx$$

for $x\in[-20,20]$. Then, by Theorem 3.3, the sequences $\{x_n\}$ and $\{u_n\}$ generated by the algorithm

$$\begin{cases}u_n=Q_{r_n}x_n=\dfrac{x_n}{6},\\[1mm] y_n=\dfrac{1}{4n}x_n+\Big(1-\dfrac{1}{2n}\Big)T_n\Big(\dfrac{x_n}{6}\Big)=\dfrac{2n+14}{60n}x_n,\\[1mm] x_{n+1}=\dfrac{100n^2+692n-56}{3000n^2}x_n,\end{cases}\tag{4.1}$$

converge to $0\in U\cap EP(\phi)$, and it is also evident that $0=P_{U\cap EP(\phi)}\big(\tfrac{3}{4}I\big)(0)$.
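With the parameters above, the scheme (3.2) reduces to the scalar recursion (4.1), which is straightforward to run. The sketch below reproduces the decay reported in Table 1; the function name and step count are our own choices:

```python
# Run the closed-form recursion (4.1) from Example 4.1:
#   u_n = x_n / 6,
#   x_{n+1} = (100 n^2 + 692 n - 56) / (3000 n^2) * x_n.
def run(x1, steps=30):
    """Return the lists [x_1, ..., x_steps] and [u_1, ..., u_steps]."""
    xs, us = [x1], [x1 / 6]
    x = x1
    for n in range(1, steps):
        x = (100 * n**2 + 692 * n - 56) / (3000 * n**2) * x
        xs.append(x)
        us.append(x / 6)
    return xs, us

xs, us = run(12.0)
print(xs[1])        # ~2.944, matching Table 1
print(abs(xs[-1]))  # x_30 is of order 1e-35: x_n -> 0 in U ∩ EP(phi)
```

The multiplier $(100n^2+692n-56)/(3000n^2)$ tends to $\tfrac{1}{30}$, which explains the rapid geometric decay visible in Table 1.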

Table 1 shows the values of the sequences $\{x_n\}$ and $\{u_n\}$ generated by the algorithm (4.1) with the initial values $x_1=12$ and $x_1=-18$, respectively, for n up to 30.

Table 1. The values of the sequences $\{x_n\}$ and $\{u_n\}$: numerical results for $x_1=12$ and $x_1=-18$.

| $n$ | $x_n$ ($x_1=12$) | $u_n$ ($x_1=12$) | $x_n$ ($x_1=-18$) | $u_n$ ($x_1=-18$) |
|---|---|---|---|---|
| 1 | 12 | 2 | -18 | -3 |
| 2 | 2.944 | 0.49067 | -4.416 | -0.736 |
| 3 | 0.42394 | 0.070653 | -0.6359 | -0.10598 |
| 15 | 2.2373e-15 | 3.7289e-16 | -3.356e-15 | -5.593e-16 |
| 16 | 1.088e-16 | 1.8133e-17 | -1.632e-16 | -2.72e-17 |
| 17 | 5.1872e-18 | 8.6453e-19 | -7.7808e-18 | -1.2968e-18 |
| 28 | 6.1677e-33 | 1.028e-33 | -9.251e-33 | -1.5419e-33 |
| 29 | 2.5625e-34 | 4.2709e-35 | -3.8438e-34 | -6.4064e-35 |
| 30 | 1.0574e-35 | 0 | -1.5862e-35 | 0 |

Figure 1 presents the behavior of the sequences {xn} and {un} that corresponds to Table 1 and shows that both sequences converge to $0∈U∩EP(ϕ)$.

Figure 1. The convergence of {xn} and {un} with different initial values x1

### Acknowledgements.

A part of this research was carried out while the third author was visiting the University of Alberta. The authors are grateful to Professor Hong-Kun Xu for his comments.

1. K. Aoyama, Y. Kimura, W. Takahashi and M. Toyoda, Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space, Nonlinear Anal., 67(2007), 2350-2360.
2. E. Blum and W. Oettli, From optimization and variational inequalities to equilibrium problems, Math. Student, 63(1994), 123-145.
3. D. P. Bertsekas and E. M. Gafni, Projection methods for variational inequalities with applications to the traffic assignment problem, Math. Programming Stud., 17(1982), 139-159.
4. C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Problems, 20(2004), 103-120.
5. V. Colao, G. Marino and H. K. Xu, An iterative method for finding common solutions of equilibrium and fixed point problems, J. Math. Anal. Appl., 344(2008), 340-352.
6. P. L. Combettes and S. A. Hirstoaga, Equilibrium programming in Hilbert spaces, J. Nonlinear Convex Anal., 6(2005), 117-136.
7. L. C. Ceng, A. Petrusel, J. C. Yao and Y. Yao, Hybrid viscosity extragradient method for systems of variational inequalities and fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces, Fixed Point Theory, 19(2018), 487-502.
8. L. C. Ceng, A. Petrusel, J. C. Yao and Y. Yao, Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions, Fixed Point Theory, 20(2019), 113-133.
9. V. Colao, G. Marino and H. K. Xu, An explicit scheme of equilibrium for a finite family of nonexpansive mappings, J. Math. Anal. Appl., 344(2008), 340-352.
10. K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, Cambridge Studies in Advanced Mathematics, Cambridge Univ. Press, Cambridge, 1990.
11. D. Han and H. K. Lo, Solving non additive traffic assignment problems: a descent method for cocoercive variational inequalities, European J. Oper. Res., 159(2004), 529-544.
12. S. Husain and N. Singh, A hybrid iterative algorithm for a split mixed equilibrium problem and a hierarchical fixed point problem, Appl. Set-Valued Anal. Optim., 1(2019), 149-169.
13. J. S. Jung, A general composite iterative method for equilibrium problems and fixed point problems, J. Comput. Anal. Appl., 12(2010), 124-140.
14. G. Marino and H. K. Xu, A general iterative method for nonexpansive mappings in Hilbert spaces, J. Math. Anal. Appl., 318(2006), 43-52.
15. A. Moudafi, Viscosity approximation methods for fixed point problems, J. Math. Anal. Appl., 241(2000), 46-55.
16. J. W. Peng and J. C. Yao, A viscosity approximation scheme for system of equilibrium problems, nonexpansive mappings and monotone mappings, Nonlinear Anal., 71(12)(2009), 6001-6010.
17. S. Plubtieng and R. Punpaeng, A general iterative method for equilibrium problems and fixed point problems in Hilbert spaces, J. Math. Anal. Appl., 336(2007), 455-469.
18. A. Razani and M. Yazdi, Viscosity approximation method for equilibrium and fixed point problems, Fixed Point Theory, 14(2)(2013), 455-472.
19. A. Razani and M. Yazdi, A new iterative method for generalized equilibrium and fixed point problems of nonexpansive mappings, Bull. Malays. Math. Sci. Soc., 35(4)(2021), 1049-1061.
20. M. Tian, A general iterative algorithm for nonexpansive mappings in Hilbert spaces, Nonlinear Anal., 73(2010), 689-694.
21. M. Tian and L. Liu, Iterative algorithms based on the viscosity approximation method for equilibrium and constrained convex minimization problems, Fixed Point Theory Appl., 2012:201(2012), 1-17.
22. S. Wang, C. Hu and G. Chia, Strong convergence of a new composite iterative method for equilibrium problems and fixed point problems, Appl. Math. Comput., 215(2010), 3891-3898.
23. H. K. Xu, Averaged mappings and the gradient-projection algorithm, J. Optim. Theory Appl., 150(2011), 360-378.
24. H. K. Xu, An iterative approach to quadratic optimization, J. Optim. Theory Appl., 116(2003), 659-678.
25. H. K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc., 66(2002), 240-256.
26. H. K. Xu, Viscosity approximation methods for nonexpansive mappings, J. Math. Anal. Appl., 298(2004), 279-291.
27. I. Yamada, The hybrid steepest descent method for variational inequality problems over the intersection of fixed point sets of nonexpansive mappings, Stud. Comput. Math., North-Holland, Amsterdam, 2001.
28. M. Yazdi and S. Hashemi Sababe, A new extragradient method for equilibrium, split feasibility and fixed point problems, J. Nonlinear Convex Anal., 22(4)(2021), 759-773.