
Kyungpook Mathematical Journal 2022; 62(3): 533-555

Published online September 30, 2022

Copyright © Kyungpook Mathematical Journal.

Halpern Subgradient Method for Pseudomonotone Equilibrium Problems in Hilbert Space

Tran Van Thang∗ and Nguyen Minh Khoa

Electric Power University, Hanoi, Vietnam
e-mail : thangtv@epu.edu.vn and khoanm@epu.edu.vn

Received: November 27, 2020; Revised: June 7, 2021; Accepted: June 8, 2021

In this paper, we introduce a new algorithm for finding a solution of an equilibrium problem in a real Hilbert space. Our paper extends the single projection method for pseudomonotone variational inequalities, from a 2018 paper of Shehu et al., to pseudomonotone equilibrium problems in a real Hilbert space. On the basis of the given algorithm for the equilibrium problem, we develop a new algorithm for finding a common solution of an equilibrium problem and a fixed point problem. The strong convergence of the algorithm is established under mild assumptions. Several fundamental experiments in finite- and infinite-dimensional spaces are provided to illustrate the numerical behavior of the algorithm for the equilibrium problem and to compare it with other algorithms.

Keywords: Equilibrium problem, weakly continuous, subgradient, pseudomonotone, Halpern subgradient method

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. The paper is concerned with a method for finding solutions to equilibrium problems, stated as follows:

$$\text{Find } x^*\in C \text{ such that } f(x^*,y)\ge 0 \quad \forall y\in C, \tag{1.1}$$

where $f: H\times H\to\mathbb{R}$ is a function such that $f(x,\cdot)$ is convex and subdifferentiable on $H$ for every fixed $x\in C$. From now on, we denote the solution set of Problem (1.1) by $\mathrm{Sol}(C,f)$. Problem (1.1) is a general model in the sense that it unifies, in a simple form, numerous known models of optimization problems, nonlinear complementarity problems, and variational inequalities [3, 10, 12, 13]. Equilibrium problems have many direct applications in other fields, such as transportation, electricity markets, and network problems [7, 13, 15, 19, 24, 26, 27]. This may explain why the problem has become an attractive field of study and has received a lot of attention from many authors. Some notable methods for solving it have been proposed; to highlight a few, see [1, 2, 3, 4, 6, 7, 17, 28, 29, 30, 32, 33, 36].

In the special case $f(x,y)=\langle F(x), y-x\rangle$, where $F: C\to H$, Problem (1.1) is equivalent to the following variational inequality problem:

$$\text{Find } x^*\in C \text{ such that } \langle F(x^*), x-x^*\rangle\ge 0 \quad \forall x\in C. \tag{1.2}$$

In order to solve variational inequality problems, many iterative methods have been proposed. If $F$ is strongly monotone, the problems can be solved by the Newton method [34] or by single projection methods [9, 16]. Under the assumption that $F$ is monotone and $L$-Lipschitz continuous, the extragradient method for solving Problem (1.2) was introduced by Korpelevich in [20]. In this method, two projections onto the feasible set $C$ are used at each iteration:

$$x^0\in C, \quad y^k = \mathrm{Pr}_C(x^k-\lambda_kF(x^k)), \quad x^{k+1} = \mathrm{Pr}_C(x^k-\lambda_kF(y^k)),$$

where $\lambda_k\in(0,\frac{1}{L})$ and $\mathrm{Pr}_C$ denotes the Euclidean projection onto $C$. Korpelevich showed that the iterative sequence generated by the algorithm converges in Euclidean spaces. The extragradient method has since been extended and improved by many mathematicians in different ways [11, 22]. Recently, in [33], the authors introduced a single projection method which combines a projection method with the Halpern iteration technique. This method requires only one projection onto the feasible set $C$ at each iteration, and the iterative process is given by

$$x^0\in C, \quad y^k = \mathrm{Pr}_C(x^k-\lambda_kF(x^k)), \quad d^k := x^k-y^k-\lambda_k(F(x^k)-F(y^k)), \quad x^{k+1} = \alpha_kx^0+(1-\alpha_k)(x^k-\gamma\rho_kd^k), \tag{1.3}$$

where $\gamma\in(0,2)$, $\alpha_k\in(0,1)$, $\lim_{k\to\infty}\alpha_k=0$, $\sum_{k=0}^{\infty}\alpha_k=+\infty$, $\lambda_k\in(0,\frac{1}{L})$ and

$$\rho_k = \begin{cases}\dfrac{\langle x^k-y^k, d^k\rangle}{\|d^k\|^2} & \text{if } d^k\ne 0,\\[2mm] 0 & \text{if } d^k=0.\end{cases}$$
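To make the scheme concrete, here is a minimal Python/NumPy sketch of method (1.3) (the paper's own experiments are coded in Matlab); the affine operator $F$, the unit-ball feasible set, and the parameter choices $\gamma=1$, $\lambda_k\equiv 1/(2L)$, $\alpha_k=1/(k+1)$ are illustrative assumptions, not data from the paper.

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= r}."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

rng = np.random.default_rng(0)
n = 5
S = rng.standard_normal((n, n))
M = S - S.T + np.eye(n)              # skew part + identity: monotone, Lipschitz
p = rng.standard_normal(n)
F = lambda x: M @ x + p              # assumed test operator
L = np.linalg.norm(M, 2)             # Lipschitz constant of F

x0 = rng.standard_normal(n)
x = x0.copy()
gamma, lam = 1.0, 1.0 / (2.0 * L)
for k in range(1, 5000):
    alpha = 1.0 / (k + 1)                        # Halpern weights
    y = proj_ball(x - lam * F(x))                # the single projection per iteration
    d = x - y - lam * (F(x) - F(y))
    rho = 0.0 if np.allclose(d, 0.0) else np.dot(x - y, d) / np.dot(d, d)
    x_new = alpha * x0 + (1.0 - alpha) * (x - gamma * rho * d)
    if np.linalg.norm(x_new - x) < 1e-8:         # simple stopping test
        x = x_new
        break
    x = x_new
print("approximate solution of the VI:", x)
```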

Recall that the strong convergence of the iterative sequence generated by this method was established in real Hilbert spaces only when $F$ is pseudomonotone and $L$-Lipschitz continuous. When $F$ is a multivalued mapping from $C$ to $2^H$, Problem (1.2) becomes the following multivalued variational inequality problem:

$$\text{Find } (x^*,w^*)\in C\times F(x^*) \text{ such that } \langle w^*, x-x^*\rangle\ge 0 \quad \forall x\in C. \tag{1.4}$$

Recall that the Hausdorff distance $\rho(A,B)$ between two subsets $A$ and $B$ of $H$ is defined by

$$\rho(A,B) := \max\{d(A,B), d(B,A)\},$$

where $d(A,B) := \sup_{a\in A}\inf_{b\in B}\|a-b\|$. A multivalued mapping $F$ is said to be Lipschitz continuous on $C$ with constant $L$ if

$$\rho(F(x),F(y)) \le L\|x-y\| \quad \forall x,y\in C.$$
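For finite point sets the two suprema in this definition are maxima, which permits a simple numerical illustration (toy data, not from the paper):

```python
import numpy as np

def hausdorff(A, B):
    """rho(A,B) = max{d(A,B), d(B,A)} for finite sets of points in R^n."""
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise ||a-b||
    d_AB = dists.min(axis=1).max()   # sup_{a in A} inf_{b in B} ||a-b||
    d_BA = dists.min(axis=0).max()   # sup_{b in B} inf_{a in A} ||a-b||
    return max(d_AB, d_BA)

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.5], [3.0, 0.0]])
print(hausdorff(A, B))   # 2.0: d(B,A) = 2.0 dominates d(A,B) ~ 1.118
```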

In [8], by replacing the projection onto $C$ in (1.3) at each iteration by an approximate projection or a proximal operator, we improved the method in [33] and proposed a new algorithm for solving Problem (1.4). We proved that the algorithm is strongly convergent under the assumptions of pseudomonotonicity and Lipschitz continuity of the cost mappings.

In this paper, we extend the single projection method for pseudomonotone variational inequalities in [33] (method (1.3)) to solve Problem (1.1) in a real Hilbert space. Under the assumptions that the equilibrium bifunction $f(x,y)$ is pseudomonotone and

$$\rho(\partial_2 f(x,\cdot)(y), \partial_2 f(y,\cdot)(y)) \le L\|x-y\| \quad \forall x\in H, \ y\in C, \tag{1.5}$$

we prove that the sequence $\{x^k\}$ generated by the algorithm converges strongly to a solution of the problem. Note that in many other methods, the assumption of Lipschitz-type continuity with constants $c_1, c_2$ of the bifunction $f(x,y)$ is necessary for obtaining the convergence theorem of the algorithm [1, 17, 18, 30]. In our algorithm, condition (1.5) serves as an alternative to the condition that $f(x,y)$ is Lipschitz-type continuous. On the basis of the given algorithm for the equilibrium problem, we develop a new algorithm for finding a common solution of Problem (1.1) and of fixed point problems. The strong convergence of the algorithm is established under mild assumptions.

The paper is organized as follows. In Section 2 we review some necessary concepts and lemmas that will be used in proving the main results of the paper. In Section 3 we give the Halpern subgradient method for solving Problem (1.1) and prove its convergence. In Section 4, we develop the algorithm from Section 3 for the problem of finding a common solution of Problem (1.1) and fixed point problems. In the last section, several fundamental experiments are provided to illustrate the convergence of the algorithm from Section 3 and to compare it with other algorithms.

Throughout this paper, unless otherwise mentioned, let $H$ denote a Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and the induced norm $\|\cdot\|$.

Definition 2.1. Let $C$ be a nonempty closed convex subset of $H$. The metric projection from $H$ onto $C$ is denoted by $\mathrm{Pr}_C$ and

$$\mathrm{Pr}_C(x) = \mathrm{argmin}\{\|x-y\| : y\in C\} \quad \forall x\in H.$$

It is well known that the metric projection $\mathrm{Pr}_C(\cdot)$ has the following basic property:

$$\langle x-\mathrm{Pr}_C(x), y-\mathrm{Pr}_C(x)\rangle \le 0 \quad \forall x\in H, \ y\in C.$$

Definition 2.2. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A bifunction $f: C\times C\to\mathbb{R}$ is called

  • (i) $\beta$-strongly monotone on $C$ if $f(x,y)+f(y,x)\le-\beta\|x-y\|^2$ for all $x,y\in C$;

  • (ii) monotone on $C$ if $f(x,y)+f(y,x)\le 0$ for all $x,y\in C$;

  • (iii) pseudomonotone on $C$ if $f(x,y)\ge 0 \Rightarrow f(y,x)\le 0$ for all $x,y\in C$.

Definition 2.3. Let $C\subset H$ be a nonempty subset. An operator $S: C\to H$ is called

  • (i) $\beta$-demicontractive on $C$ if $\mathrm{Fix}(S)$ is nonempty and there exists $\beta\in[0,1)$ such that
$$\|Sx-p\|^2 \le \|x-p\|^2+\beta\|x-Sx\|^2 \quad \forall x\in C, \ p\in\mathrm{Fix}(S); \tag{2.1}$$

  • (ii) demiclosed if for any sequence $\{x^k\}\subset C$ with $x^k\rightharpoonup z\in C$ and $(I-S)(x^k)\to 0$, it holds that $z\in\mathrm{Fix}(S)$.

It is well known that if $S$ is $\beta$-demicontractive on $C$ then $S$ is demiclosed, and (2.1) is equivalent to (see [25])

$$\langle x-Sx, x-p\rangle \ge \frac{1}{2}(1-\beta)\|x-Sx\|^2 \quad \forall x\in C, \ p\in\mathrm{Fix}(S). \tag{2.2}$$

To prove the main results in Sections 3 and 4, we shall use the following lemmas.

Lemma 2.4. For every $x,y\in H$, we have the following assertions.

  • (i) $\|x+y\|^2 = \|x\|^2+2\langle x,y\rangle+\|y\|^2$;

  • (ii) $\|x+y\|^2 \le \|x\|^2+2\langle y, x+y\rangle$.

Lemma 2.5. Let $\{a_k\}$ be a sequence of nonnegative real numbers satisfying the following condition:

$$a_{k+1} \le (1-\alpha_k)a_k+\alpha_k\delta_k+\beta_k \quad \forall k\ge 1,$$

where $\alpha_k\in[0,1]$, $\sum_{k=0}^{\infty}\alpha_k=+\infty$, $\limsup_{k\to\infty}\delta_k\le 0$, and $\beta_k\ge 0$ with $\sum_{k=1}^{\infty}\beta_k<\infty$.

Then, $\lim_{k\to\infty}a_k=0$.

The subdifferential of a convex function $g: C\to\mathbb{R}\cup\{+\infty\}$ is defined by

$$\partial g(x) = \{u\in H : \langle u, y-x\rangle \le g(y)-g(x) \ \ \forall y\in C\}.$$
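As a standard illustration of this definition (not specific to the paper), take $C=H$ and $g(x)=\|x\|$. Then

$$\partial g(0) = \{u\in H : \langle u, y\rangle \le \|y\| \ \ \forall y\in H\} = \{u\in H : \|u\|\le 1\},$$

since choosing $y=u$ forces $\|u\|^2\le\|u\|$, i.e. $\|u\|\le 1$, while the Cauchy-Schwarz inequality gives the reverse inclusion.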

In convex programming, we have the following result.

Lemma 2.6. ([31]) Let $C$ be a convex subset of a real Hilbert space $H$ and $g: C\to\mathbb{R}\cup\{+\infty\}$ be subdifferentiable. Then, $x^*$ is a solution to the following convex problem:

$$\min\{g(x) : x\in C\}$$

if and only if $0\in\partial g(x^*)+N_C(x^*)$, where $\partial g$ denotes the subdifferential of $g$ and $N_C(x^*)$ is the outward normal cone of $C$ at $x^*\in C$, that is, $N_C(x^*) = \{u\in H : \langle u, y-x^*\rangle\le 0 \ \forall y\in C\}$.

3.1. Assumption

In this article, in order to find a point in $\mathrm{Sol}(C,f)$, we assume that the bifunction $f: H\times H\to\mathbb{R}$ satisfies the following conditions:

  • A1. $f(x,x)=0$ for all $x\in C$, and $f$ is pseudomonotone and weakly continuous on $H$, i.e., $x^k\rightharpoonup\bar{x}$ and $y^k\rightharpoonup\bar{y}$ imply $f(x^k,y^k)\to f(\bar{x},\bar{y})$;

  • A2. There exists a nonnegative real number $L$ such that $\rho(\partial_2 f(x,\cdot)(y), \partial_2 f(y,\cdot)(y)) \le L\|x-y\|$ for all $x\in H$, $y\in C$;

  • A3. $\mathrm{Sol}(C,f)$ is nonempty;

  • A4. $f(x,\cdot)$ is convex and subdifferentiable on $H$.

Remark 3.1. (a) Let $g: H\to\mathbb{R}$ be convex, subdifferentiable and weakly continuous on $H$. Clearly, $f(x,y):=g(y)-g(x)$ satisfies conditions A1 and A2. It is well known that the following optimization problem

$$\min g(x) \text{ such that } x\in C$$

is equivalent to Problem (1.1).

(b) Let $F(x)$ be a Lipschitz continuous and weakly continuous mapping on $H$, i.e., $x^k\rightharpoonup\bar{x}$ implies $F(x^k)\to F(\bar{x})$. Setting $f(x,y):=\langle F(x), y-x\rangle$, it is easy to see that $f(x,y)$ satisfies conditions A1 and A2, as the following computation shows.
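Indeed (a one-line verification under the assumptions of (b)): since $f(x,\cdot)$ is affine, its subdifferential at any point $y$ is the singleton $\{F(x)\}$, and the Hausdorff distance between two singletons is just the distance between their elements, so

$$\rho(\partial_2 f(x,\cdot)(y), \partial_2 f(y,\cdot)(y)) = \|F(x)-F(y)\| \le L\|x-y\| \quad \forall x\in H, \ y\in C,$$

where $L$ is the Lipschitz constant of $F$; this is exactly condition A2.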

3.2. Algorithm

Algorithm 3.2. Choose a starting point $x^0\in H$, a constant $\bar{L}>L$, and sequences $\{\alpha_k\}$, $\{\lambda_k\}$ and $\{\epsilon_k\}$ such that

$$\alpha_k\in(0,1), \quad \lim_{k\to\infty}\alpha_k=0, \quad \sum_{k=0}^{\infty}\alpha_k=+\infty, \quad 0\le\rho_k\epsilon_k\le\alpha_k^3, \quad \sum_{k=0}^{\infty}(\rho_k\epsilon_k)^{\frac{1}{2}}<\infty, \quad \{\lambda_k\}\subset[a,b]\subset\left(0,\tfrac{1}{\bar{L}}\right). \tag{3.1}$$

Step 1. (k = 0, 1, ...) Find $y^k\in C$ such that

$$y^k = \mathrm{argmin}\left\{\lambda_kf(x^k,y)+\frac{1}{2}\|y-x^k\|^2 : y\in C\right\}.$$

If $x^k-y^k=0$ then STOP.

Step 2. Take $u^k\in\partial_2 f(x^k,\cdot)(y^k)$ satisfying $\langle x^k-y^k-\lambda_ku^k, y^k-x\rangle\ge-\epsilon_k$ for all $x\in C$, and

$$v^k\in B(u^k,\bar{L}\|x^k-y^k\|)\cap\partial_2 f(y^k,\cdot)(y^k),$$

where $B(u^k,\bar{L}\|x^k-y^k\|):=\{x\in H : \|x-u^k\|\le\bar{L}\|x^k-y^k\|\}$.

Step 3. Compute $x^{k+1}=\alpha_kx^0+(1-\alpha_k)z^k$, where $z^k:=x^k-\rho_kd^k$ and

$$d^k := x^k-y^k-\lambda_k(u^k-v^k), \quad \rho_k = \frac{\langle x^k-y^k, d^k\rangle}{\|d^k\|^2}.$$

Step 4. Set k:= k+1, and go to Step 1.
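The following is a runnable Python sketch of Algorithm 3.2 for the Nash-Cournot bifunction $f(x,y)=\langle Px+Qy+q, y-x\rangle$ used in Section 5. Here the feasible set is taken to be a box (so the strongly convex subproblem of Step 1 can be handled by a standard bound-constrained solver), $\epsilon_k=0$ (exact subproblems, as in Test 1), and the data and parameter choices are illustrative assumptions rather than the paper's exact test setup; since $f(x,\cdot)$ is smooth, the subgradients $u^k, v^k$ of Step 2 reduce to the unique gradients.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 5
B0 = rng.standard_normal((n, n))
Q = B0 @ B0.T + np.eye(n)            # symmetric positive definite
P = Q + np.eye(n)                    # P - Q PSD, so f is monotone (hence pseudomonotone)
q = rng.standard_normal(n)
bounds = [(-5.0, 5.0)] * n           # C = [-5,5]^n (a box, for simplicity)

L = np.linalg.norm(P - Q, 2)         # constant in condition A2
Lbar = 1.1 * L                       # \bar{L} > L
lam = 1.0 / (2.0 * Lbar)             # lambda_k in [a,b] subset of (0, 1/\bar{L})

def grad2_f(x, y):
    """Gradient of f(x,.) at y: Px + 2Qy - Qx + q (Q symmetric)."""
    return P @ x + 2.0 * (Q @ y) - Q @ x + q

def step1(x):
    """Step 1: y^k = argmin_{y in C} lam*f(x,y) + 0.5*||y-x||^2."""
    phi = lambda y: lam * np.dot(P @ x + Q @ y + q, y - x) + 0.5 * np.dot(y - x, y - x)
    jac = lambda y: lam * grad2_f(x, y) + (y - x)
    return minimize(phi, x, jac=jac, bounds=bounds, method="L-BFGS-B").x

x0 = rng.standard_normal(n)
x = x0.copy()
for k in range(2000):
    alpha = 1.0 / (k + 2)            # alpha_k -> 0, sum alpha_k = +inf
    y = step1(x)
    if np.linalg.norm(x - y) < 1e-6: # stopping test of Step 1
        break
    u = grad2_f(x, y)                # Step 2: u^k (unique gradient, eps_k = 0)
    v = grad2_f(y, y)                # v^k; here ||u-v|| <= L||x-y|| <= Lbar||x-y||
    d = x - y - lam * (u - v)        # Step 3
    rho = np.dot(x - y, d) / np.dot(d, d)
    z = x - rho * d
    x = alpha * x0 + (1.0 - alpha) * z
print("approximate solution of the equilibrium problem:", x)
```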

Remark 3.3. (a) By Step 1 and Lemma 2.6, we have

$$0\in\lambda_k\partial_2 f(x^k,\cdot)(y^k)+y^k-x^k+N_C(y^k);$$

it follows that there is a $u^k\in\partial_2 f(x^k,\cdot)(y^k)$ such that

$$\langle x^k-y^k-\lambda_ku^k, y^k-x\rangle\ge 0 \quad \forall x\in C,$$

i.e., the $u^k$ in Step 2 always exists.

(b) If $d^k=0$ then

$$\|x^k-y^k\| = \lambda_k\|u^k-v^k\| \le \lambda_k\bar{L}\|x^k-y^k\|.$$

Since $\lambda_k<\frac{1}{\bar{L}}$ for all $k$, we have

$$(1-\lambda_k\bar{L})\|x^k-y^k\|\le 0 \implies x^k=y^k.$$

Thus, whenever $x^k\ne y^k$ we have $d^k\ne 0$, and the $\rho_k$ of Step 3 is well defined.

3.3. Convergence analysis of the algorithm

In this section, we show that the proposed algorithm is strongly convergent if assumptions A1-A4 are satisfied.

Lemma 3.4. Let the sequence $\{x^k\}$ be generated by Algorithm 3.2 and let $x^*\in\mathrm{Sol}(C,f)$. Then,

  • (i) $\|z^k-x^*\|^2 \le \|x^k-x^*\|^2-\|z^k-x^k\|^2+2\rho_k\epsilon_k$;

  • (ii) the sequences $\{x^k\}$ and $\{z^k\}$ are bounded.

Proof. Using Step 2, we have

$$\langle x^k-y^k-\lambda_ku^k, y^k-x\rangle \ge -\epsilon_k \quad \forall x\in C.$$

Replacing $x$ by $x^*\in C$ in the last inequality, we have

$$\langle x^k-y^k-\lambda_ku^k, y^k-x^*\rangle \ge -\epsilon_k.$$

Using $x^*\in\mathrm{Sol}(C,f)$, $f(y^k,y^k)=0$, $v^k\in\partial_2 f(y^k,\cdot)(y^k)$ and the pseudomonotonicity of $f$, we get

$$\lambda_k\langle v^k, y^k-x^*\rangle \ge \lambda_k\left[f(y^k,y^k)-f(y^k,x^*)\right] \ge 0.$$

Adding the last two inequalities, it follows that

$$-\epsilon_k \le \langle y^k-x^*, x^k-y^k-\lambda_ku^k+\lambda_kv^k\rangle = \langle y^k-x^*, d^k\rangle.$$

Using the above inequality and the definition of $z^k$, we have

$$\begin{aligned}
\|z^k-x^*\|^2 &= \|x^k-\rho_kd^k-x^*\|^2 = \|x^k-x^*\|^2-2\rho_k\langle x^k-x^*, d^k\rangle+\rho_k^2\|d^k\|^2 \\
&\le \|x^k-x^*\|^2-2\rho_k\langle x^k-y^k, d^k\rangle+\rho_k^2\|d^k\|^2+2\rho_k\epsilon_k \\
&= \|x^k-x^*\|^2-\rho_k\langle x^k-y^k, d^k\rangle+2\rho_k\epsilon_k,
\end{aligned}$$

and $\rho_k\langle x^k-y^k, d^k\rangle = \|z^k-x^k\|^2$. Therefore,

$$\|z^k-x^*\|^2 \le \|x^k-x^*\|^2-\rho_k\langle x^k-y^k, d^k\rangle+2\rho_k\epsilon_k = \|x^k-x^*\|^2-\|z^k-x^k\|^2+2\rho_k\epsilon_k.$$

This proves (i). We now prove (ii). From (i) and condition (3.1), it follows that

$$\begin{aligned}
\|x^{k+1}-x^*\| &= \|\alpha_kx^0+(1-\alpha_k)z^k-x^*\| \le \alpha_k\|x^0-x^*\|+(1-\alpha_k)\|z^k-x^*\| \\
&\le \alpha_k\|x^0-x^*\|+(1-\alpha_k)\left(\|x^k-x^*\|+\sqrt{2\rho_k\epsilon_k}\right) \\
&\le \max\{\|x^0-x^*\|, \|x^k-x^*\|\}+\sqrt{2\rho_k\epsilon_k} \\
&\le \cdots \le \|x^0-x^*\|+\sum_{i=0}^{\infty}\sqrt{2\rho_i\epsilon_i} < +\infty.
\end{aligned}$$

This implies that $\{x^k\}$ is bounded. Using (i) again, we conclude that the sequence $\{z^k\}$ is also bounded.

Lemma 3.5. Let $x^*\in\mathrm{Sol}(C,f)$. Set $a_k=\|x^k-x^*\|^2$, $b_k=2\langle x^0-x^*, x^{k+1}-x^*\rangle$ and $\beta_k=2\rho_k\epsilon_k$. Then,

  • (i) $a_{k+1} \le (1-\alpha_k)a_k+\alpha_kb_k+\beta_k$;

  • (ii) $\beta_k\ge 0$, $\sum_{k=1}^{\infty}\beta_k<\infty$;

  • (iii) $\lim_{k\to\infty}\frac{\beta_k}{\alpha_k}=0$;

  • (iv) $-1\le\limsup_{k\to\infty}b_k<\infty$.

Proof. Using Lemma 2.4 (ii), we get

$$\begin{aligned}
\|x^{k+1}-x^*\|^2 &= \|\alpha_k(x^0-x^*)+(1-\alpha_k)(z^k-x^*)\|^2 \\
&\le (1-\alpha_k)^2\|z^k-x^*\|^2+2\alpha_k\langle x^0-x^*, x^{k+1}-x^*\rangle \\
&\le (1-\alpha_k)\|z^k-x^*\|^2+2\alpha_k\langle x^0-x^*, x^{k+1}-x^*\rangle.
\end{aligned}$$

Combining the last inequality and Lemma 3.4 (i), we get

$$\begin{aligned}
\|x^{k+1}-x^*\|^2 &\le (1-\alpha_k)\|x^k-x^*\|^2+2\alpha_k\langle x^0-x^*, x^{k+1}-x^*\rangle+(1-\alpha_k)2\rho_k\epsilon_k \\
&\le (1-\alpha_k)\|x^k-x^*\|^2+2\alpha_k\langle x^0-x^*, x^{k+1}-x^*\rangle+2\rho_k\epsilon_k.
\end{aligned}$$

This proves (i). Assertions (ii) and (iii) follow directly from condition (3.1).

Since $\{x^k\}$ is bounded, we have $b_k\le 2\|x^0-x^*\|\,\|x^{k+1}-x^*\|<\infty$, and so $\limsup_{k\to\infty}b_k<\infty$. We now assume, by contradiction, that $\limsup_{k\to\infty}b_k<-1$. Then there exists $k_0\in\mathbb{N}$ such that $b_k<-1$ for all $k\ge k_0$. It follows from (i) that, for $k\ge k_0$,

$$\begin{aligned}
a_{k+1} &\le (1-\alpha_k)a_k+\alpha_kb_k+\beta_k < (1-\alpha_k)a_k-\alpha_k+\beta_k \\
&= a_k-(a_k+1)\alpha_k+\beta_k \le a_k-\alpha_k+\beta_k \\
&\le \cdots \le a_{k_0}-\sum_{i=k_0}^{k}\alpha_i+\sum_{i=k_0}^{k}\beta_i.
\end{aligned}$$

Using this and assertion (ii), one has

$$\limsup_{k\to\infty}a_k \le a_{k_0}-\sum_{i=k_0}^{\infty}\alpha_i+\sum_{i=k_0}^{\infty}\beta_i = -\infty.$$

This contradicts the fact that $a_k\ge 0$ for all $k\in\mathbb{N}$. Therefore, $\limsup_{k\to\infty}b_k\ge-1$.

Theorem 3.6. Let the bifunction $f: H\times H\to\mathbb{R}$ satisfy assumptions A1-A4. Then, the sequence $\{x^k\}$ generated by Algorithm 3.2 converges strongly to the solution $z\in\mathrm{Sol}(C,f)$, where $z=\mathrm{Pr}_{\mathrm{Sol}(C,f)}(x^0)$.

Proof. Set $a_k := \|x^k-z\|^2$. In order to prove this theorem, we consider the following two cases.

Case 1. Suppose that there exists $k_0\in\mathbb{N}$ such that $a_{k+1}\le a_k$ for all $k\ge k_0$. Then $\lim_{k\to\infty}a_k$ exists in $[0,\infty)$. From Step 3, Lemma 3.4 (i) and Lemma 2.4 (ii), it follows that

$$\begin{aligned}
\|x^{k+1}-z\|^2 &= \|(1-\alpha_k)(z^k-z)+\alpha_k(x^0-z)\|^2 \\
&\le (1-\alpha_k)^2\|z^k-z\|^2+2\alpha_k\langle x^0-z, x^{k+1}-z\rangle \\
&\le \|z^k-z\|^2+2\alpha_k\langle x^0-z, x^{k+1}-z\rangle \\
&\le \|x^k-z\|^2-\|z^k-x^k\|^2+2\alpha_k\langle x^0-z, x^{k+1}-z\rangle+2\rho_k\epsilon_k \\
&\le \|x^k-z\|^2-\|z^k-x^k\|^2+2\rho_k\epsilon_k+\alpha_kQ_0,
\end{aligned}$$

where $Q_0:=\sup\{2\langle x^0-z, x^{k+1}-z\rangle : k=0,1,...\}<\infty$. This implies that

$$a_{k+1} \le a_k-\|z^k-x^k\|^2+2\rho_k\epsilon_k+\alpha_kQ_0 \quad \forall k\ge 0. \tag{3.3}$$

Letting $k\to\infty$ in the last inequality and using the assumptions $\lim_{k\to\infty}\alpha_k=0$ and $\lim_{k\to\infty}2\rho_k\epsilon_k=0$, we have $\lim_{k\to\infty}\|z^k-x^k\|=0$. From $v^k\in B(u^k,\bar{L}\|x^k-y^k\|)$ for all $k$, it follows that

$$\begin{aligned}
\langle x^k-y^k, d^k\rangle &= \|x^k-y^k\|^2-\lambda_k\langle x^k-y^k, u^k-v^k\rangle \\
&\ge \|x^k-y^k\|^2-\lambda_k\|x^k-y^k\|\,\|u^k-v^k\| \\
&\ge (1-b\bar{L})\|x^k-y^k\|^2,
\end{aligned} \tag{3.4}$$

and

$$\|d^k\| = \|x^k-y^k-\lambda_k(u^k-v^k)\| \le \|x^k-y^k\|+\lambda_k\|u^k-v^k\| \le (1+\lambda_k\bar{L})\|x^k-y^k\| \le (1+b\bar{L})\|x^k-y^k\|. \tag{3.5}$$

Using Step 3, (3.4) and (3.5), we get $\rho_k\ge\frac{1-b\bar{L}}{(1+b\bar{L})^2}$ and

$$\|x^k-y^k\|^2 \le \frac{1}{1-b\bar{L}}\langle x^k-y^k, d^k\rangle = \frac{1}{(1-b\bar{L})\rho_k}\|z^k-x^k\|^2 \le \frac{(1+b\bar{L})^2}{(1-b\bar{L})^2}\|z^k-x^k\|^2.$$

From the above inequality and $\lim_{k\to\infty}\|z^k-x^k\|=0$, it follows that

$$\lim_{k\to\infty}\|x^k-y^k\|=0.$$

Using this and $\lim_{k\to\infty}\|z^k-x^k\|=0$, we obtain $\lim_{k\to\infty}\|z^k-y^k\|=0$.

By the definition of $x^{k+1}$ and Lemma 3.4 (ii), we have

$$\|x^{k+1}-z^k\| = \alpha_k\|x^0-z^k\| \le \alpha_kQ_1 \to 0 \text{ as } k\to\infty,$$

where $Q_1=\sup\{\|x^0-z^k\| : k=0,1,...\}<+\infty$. This together with $\lim_{k\to\infty}\|z^k-x^k\|=0$ implies that

$$\|x^{k+1}-x^k\| \le \|x^{k+1}-z^k\|+\|z^k-x^k\| \to 0 \text{ as } k\to\infty.$$

Since the sequence $\{x^k\}$ is bounded, there exists a subsequence $\{x^{k_i+1}\}$ such that $x^{k_i+1}\rightharpoonup p$ as $i\to\infty$ and

$$\limsup_{k\to\infty}\langle x^0-z, x^{k+1}-z\rangle = \lim_{i\to\infty}\langle x^0-z, x^{k_i+1}-z\rangle. \tag{3.6}$$

We will show that $p\in\mathrm{Sol}(C,f)$. Indeed, by Step 2, one has

$$\langle x^{k_i+1}-y^{k_i+1}-\lambda_{k_i+1}u^{k_i+1}, y^{k_i+1}-x\rangle \ge -\epsilon_{k_i+1} \quad \forall x\in C.$$

Combining this inequality with $u^{k_i+1}\in\partial_2 f(x^{k_i+1},\cdot)(y^{k_i+1})$, we get

$$\langle x^{k_i+1}-y^{k_i+1}, x-y^{k_i+1}\rangle-\epsilon_{k_i+1} \le \lambda_{k_i+1}\langle u^{k_i+1}, x-y^{k_i+1}\rangle \le \lambda_{k_i+1}\left[f(x^{k_i+1},x)-f(x^{k_i+1},y^{k_i+1})\right].$$

Since $\|x^k-y^k\|\to 0$ as $k\to\infty$ and $\{x^{k_i+1}\}$ is bounded and converges weakly to $p$ as $i\to\infty$, the sequence $\{y^{k_i+1}\}$ is also bounded and $y^{k_i+1}\rightharpoonup p$. For each fixed $x\in C$, taking the limit as $i\to\infty$ and using $\lim_{i\to\infty}\|x^{k_i+1}-y^{k_i+1}\|=0$ and the weak continuity of the bifunction $f(x,y)$, we get

$$f(p,x)\ge 0 \quad \forall x\in C, \tag{3.7}$$

i.e., $p\in\mathrm{Sol}(C,f)$. Using $p\in\mathrm{Sol}(C,f)$ and (3.6), we have

$$\limsup_{k\to\infty}b_k = \limsup_{k\to\infty}2\langle x^0-z, x^{k+1}-z\rangle = 2\lim_{i\to\infty}\langle x^0-z, x^{k_i+1}-z\rangle = 2\langle x^0-z, p-z\rangle \le 0.$$

Applying Lemma 2.5 to Lemma 3.5 (i) and using the last inequality, we deduce

$$\lim_{k\to\infty}\|x^k-z\|=0.$$

Thus, $\{x^k\}$ converges strongly to the solution $z=\mathrm{Pr}_{\mathrm{Sol}(C,f)}(x^0)$.

Case 2. We now assume that there is no $\bar{k}\in\mathbb{N}$ such that $\{a_k\}_{k=\bar{k}}^{\infty}$ is monotonically decreasing. Then there exists an integer $k_0\ge\bar{k}$ such that $a_{k_0}\le a_{k_0+1}$, and there exists a subsequence $\{a_{\tau(k)}\}$ of $\{a_k\}$ such that (see Remark 4.4 of [21])

$$0\le a_k\le a_{\tau(k)+1}, \quad a_{\tau(k)}\le a_{\tau(k)+1} \quad \forall k\ge k_0,$$

where $\tau(k)=\max\{i\in\mathbb{N} : k_0\le i\le k,\ a_i\le a_{i+1}\}$. Using $a_{\tau(k)}\le a_{\tau(k)+1}$ for all $k\ge k_0$ and (3.3), one has

$$0\le\|z^{\tau(k)}-x^{\tau(k)}\|^2 \le a_{\tau(k)}-a_{\tau(k)+1}+\alpha_{\tau(k)}Q_0+2\rho_{\tau(k)}\epsilon_{\tau(k)} \to 0 \text{ as } k\to\infty,$$

and so $\lim_{k\to\infty}\|z^{\tau(k)}-x^{\tau(k)}\|=0$. In a similar way to Case 1, we can show that

$$\lim_{k\to\infty}\|x^{\tau(k)+1}-x^{\tau(k)}\| = \lim_{k\to\infty}\|x^{\tau(k)}-y^{\tau(k)}\| = \lim_{k\to\infty}\|z^{\tau(k)}-y^{\tau(k)}\| = 0.$$

Since $\{x^{\tau(k)}\}$ is bounded, there exists a subsequence of $\{x^{\tau(k)}\}$, still denoted by $\{x^{\tau(k)}\}$, which converges weakly to $p$.

Arguing similarly as in Case 1, we can prove that $p\in\mathrm{Sol}(C,f)$ and

$$\limsup_{k\to\infty}b_{\tau(k)}\le 0. \tag{3.8}$$

From Lemma 3.5 (i) and $a_{\tau(k)}\le a_{\tau(k)+1}$ for all $k\ge k_0$, it follows that

$$\alpha_{\tau(k)}a_{\tau(k)} \le a_{\tau(k)}-a_{\tau(k)+1}+\alpha_{\tau(k)}b_{\tau(k)}+\beta_{\tau(k)} \le \alpha_{\tau(k)}b_{\tau(k)}+\beta_{\tau(k)}.$$

This is equivalent to

$$a_{\tau(k)} \le b_{\tau(k)}+\frac{\beta_{\tau(k)}}{\alpha_{\tau(k)}}.$$

By Lemma 3.5 (iii), (3.8) and the last inequality, we get

$$\limsup_{k\to\infty}a_{\tau(k)} \le \limsup_{k\to\infty}b_{\tau(k)} \le 0.$$

Therefore, $\lim_{k\to\infty}a_{\tau(k)}=0$. As a consequence, we get

$$a_{\tau(k)+1} = \|x^{\tau(k)+1}-z\|^2 \le \left(\|x^{\tau(k)+1}-x^{\tau(k)}\|+\|x^{\tau(k)}-z\|\right)^2 \to 0 \text{ as } k\to\infty.$$

It follows that $\lim_{k\to\infty}a_{\tau(k)+1}=0$. Furthermore, $0\le a_k\le a_{\tau(k)+1}$ for all $k\ge k_0$. Hence, $\lim_{k\to\infty}a_k=0$, i.e., $x^k\to z$ as $k\to\infty$.

Let $S_j$ $(j\in J := \{1,2,...,r\})$ be a finite system of mappings of $H$ into itself. Denote the fixed point set of $S_j$ by

$$\mathrm{Fix}(S_j) := \{x\in H : S_jx = x\}.$$

We consider the problem of finding a common element of the solution set of Problem (1.1) and the sets of fixed points of the finite system of mappings $S_j$ $(j\in J)$, namely:

$$\text{Find } x^*\in\bigcap_{j\in J}\mathrm{Fix}(S_j)\cap\mathrm{Sol}(C,f).$$

4.1. Assumption

In this section, we assume that the bifunction $f: H\times H\to\mathbb{R}$ and the mappings $S_j$ $(j\in J)$ satisfy the following conditions:

B1. $f(x,x)=0$ for all $x\in C$, $f$ is pseudomonotone and weakly continuous on $H$, and $f(x,\cdot)$ is convex and subdifferentiable on $H$;

B2. There exists a nonnegative real number $L$ such that

$$\rho(\partial_2 f(x,\cdot)(y), \partial_2 f(y,\cdot)(y)) \le L\|x-y\| \quad \forall x\in H, \ y\in C;$$

B3. $\bigcap_{j\in J}\mathrm{Fix}(S_j)\cap\mathrm{Sol}(C,f)\ne\emptyset$;

B4. $S_j: H\to H$ is $\beta_j$-demicontractive for every $j\in J$.

4.2. Algorithm

Algorithm 4.1. Choose a starting point $x^0\in H$, a constant $\bar{L}>L$, and sequences $\{\alpha_k\}$, $\{\lambda_k\}$ and $\{\epsilon_k\}$ such that

$$\alpha_k\in(0,1), \quad \lim_{k\to\infty}\alpha_k=0, \quad \sum_{k=0}^{\infty}\alpha_k=+\infty, \quad 0\le\rho_k\epsilon_k\le\alpha_k^3, \quad \sum_{k=0}^{\infty}(\rho_k\epsilon_k)^{\frac{1}{2}}<\infty, \quad \{\lambda_k\}\subset[a,b]\subset\left(0,\tfrac{1}{\bar{L}}\right). \tag{4.1}$$

Step 1*. (k = 0, 1, ...) Find $y^k\in C$ such that

$$y^k = \mathrm{argmin}\left\{\lambda_kf(x^k,y)+\frac{1}{2}\|y-x^k\|^2 : y\in C\right\}.$$

If $x^k-y^k=0$ then STOP.

Step 2*. Take $u^k\in\partial_2 f(x^k,\cdot)(y^k)$ satisfying $\langle x^k-y^k-\lambda_ku^k, y^k-x\rangle\ge-\epsilon_k$ for all $x\in C$, and

$$v^k\in B(u^k,\bar{L}\|x^k-y^k\|)\cap\partial_2 f(y^k,\cdot)(y^k),$$

where $B(u^k,\bar{L}\|x^k-y^k\|):=\{x\in H : \|x-u^k\|\le\bar{L}\|x^k-y^k\|\}$. Set

$$d^k := x^k-y^k-\lambda_k(u^k-v^k), \quad \rho_k = \frac{\langle x^k-y^k, d^k\rangle}{\|d^k\|^2}, \quad z^k := x^k-\rho_kd^k.$$

Step 3*. Compute

$$p^k = \alpha_kx^0+(1-\alpha_k)z^k, \qquad q_j^k = (1-\omega)p^k+\omega S_jp^k \ \text{ with } \ 0<\omega<\frac{1-\beta_j}{2} \ \forall j\in J, \qquad x^{k+1} = q_{j_0}^k, \ \ j_0 = \mathrm{argmax}\{\|q_j^k-p^k\| : j\in J\}.$$

Step 4*. Set k:= k+1, and go to Step 1*.
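Step 3* is the only part that differs from Algorithm 3.2, so the following small Python sketch isolates it: each demicontractive $S_j$ is relaxed by a factor $\omega$, and the iterate moves to the $q_j^k$ farthest from $p^k$. The two toy maps below are nonexpansive (hence $0$-demicontractive, so any $\omega<1/2$ is admissible) and are assumptions for illustration only.

```python
import numpy as np

S_list = [lambda p: 0.5 * p,                    # S_1: contraction toward 0
          lambda p: np.array([p[1], p[0]])]     # S_2: coordinate swap
omega = 0.25                                    # 0 < omega < (1 - beta_j)/2 = 1/2

def step3_star(p):
    qs = [(1.0 - omega) * p + omega * S(p) for S in S_list]      # the q_j^k
    j0 = int(np.argmax([np.linalg.norm(qj - p) for qj in qs]))   # most displaced index
    return qs[j0]                                                # x^{k+1} = q_{j0}^k

print(step3_star(np.array([1.0, -2.0])))        # -> [0.25, -1.25] (S_2 wins)
```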

Lemma 4.2. The sequences $\{p^k\}, \{x^k\}, \{z^k\}$ and $\{y^k\}$ are bounded.

Proof. Let $x^*\in\bigcap_{j\in J}\mathrm{Fix}(S_j)\cap\mathrm{Sol}(C,f)$. Using Step 3* and the $\beta_j$-demicontractivity of the $S_j$, we get

$$\begin{aligned}
\|x^{k+1}-x^*\|^2 &= \|(1-\omega)p^k+\omega S_{j_0}p^k-x^*\|^2 = \|(p^k-x^*)+\omega(S_{j_0}p^k-p^k)\|^2 \\
&= \|p^k-x^*\|^2+2\omega\langle p^k-x^*, S_{j_0}p^k-p^k\rangle+\omega^2\|S_{j_0}p^k-p^k\|^2 \\
&\le \|p^k-x^*\|^2+\omega(\omega+\beta_{j_0}-1)\|S_{j_0}p^k-p^k\|^2 \\
&\le \|p^k-x^*\|^2.
\end{aligned} \tag{4.3}$$

From Lemma 3.4 (i) and the last inequality, it follows that

$$\|z^{k+1}-x^*\| \le \|p^k-x^*\|+\sqrt{2\rho_{k+1}\epsilon_{k+1}}. \tag{4.4}$$

Using Step 3*, condition (4.1) and (4.4), we have

$$\begin{aligned}
\|p^{k+1}-x^*\| &= \|\alpha_{k+1}(x^0-x^*)+(1-\alpha_{k+1})(z^{k+1}-x^*)\| \\
&\le \alpha_{k+1}\|x^0-x^*\|+(1-\alpha_{k+1})\|z^{k+1}-x^*\| \\
&\le \alpha_{k+1}\|x^0-x^*\|+(1-\alpha_{k+1})\left(\|p^k-x^*\|+\sqrt{2\rho_{k+1}\epsilon_{k+1}}\right) \\
&\le \max\{\|p^k-x^*\|, \|x^0-x^*\|\}+\sqrt{2\rho_{k+1}\epsilon_{k+1}} \\
&\le \cdots \le \max\{\|p^0-x^*\|, \|x^0-x^*\|\}+\sum_{i=1}^{k+1}\sqrt{2\rho_i\epsilon_i} < +\infty.
\end{aligned}$$

So, the sequence $\{p^k\}$ is bounded. From (4.3) and (4.4), it follows that the sequences $\{x^k\}$ and $\{z^k\}$ are also bounded.

Lemma 4.3. Let $x^*\in\bigcap_{j\in J}\mathrm{Fix}(S_j)\cap\mathrm{Sol}(C,f)$. Set $a_k=\|x^k-x^*\|^2$, $\beta_k=2\rho_k\epsilon_k$ and $b_k=2\langle x^0-x^*, p^k-x^*\rangle$. Then,

  • (i) $a_{k+1} \le (1-\alpha_k)a_k+\alpha_kb_k+\beta_k$;

  • (ii) $\beta_k\ge 0$, $\sum_{k=1}^{\infty}\beta_k<\infty$;

  • (iii) $\lim_{k\to\infty}\frac{\beta_k}{\alpha_k}=0$;

  • (iv) $-1\le\limsup_{k\to\infty}b_k<\infty$.

Proof. Using Lemma 2.4 (ii), Lemma 3.4 (i) and Step 3*, we get

$$\begin{aligned}
\|p^k-x^*\|^2 &= \|\alpha_k(x^0-x^*)+(1-\alpha_k)(z^k-x^*)\|^2 \\
&\le (1-\alpha_k)\|z^k-x^*\|^2+2\alpha_k\langle x^0-x^*, p^k-x^*\rangle \\
&\le (1-\alpha_k)\|x^k-x^*\|^2+2\alpha_k\langle x^0-x^*, p^k-x^*\rangle+2\rho_k\epsilon_k(1-\alpha_k) \\
&\le (1-\alpha_k)\|x^k-x^*\|^2+2\alpha_k\langle x^0-x^*, p^k-x^*\rangle+2\rho_k\epsilon_k.
\end{aligned}$$

Using the last inequality and (4.3), we have

$$\|x^{k+1}-x^*\|^2 \le (1-\alpha_k)\|x^k-x^*\|^2+2\alpha_k\langle x^0-x^*, p^k-x^*\rangle+2\rho_k\epsilon_k.$$

This proves (i). By arguing similarly as in the proof of Lemma 3.5, we obtain (ii), (iii) and (iv).

Theorem 4.4. Suppose that conditions B1-B4 are satisfied. Let $\{x^k\}$ be the sequence generated by Algorithm 4.1. Then, $\{x^k\}$ converges strongly to the solution

$$z\in\bigcap_{j\in J}\mathrm{Fix}(S_j)\cap\mathrm{Sol}(C,f),$$

where $z=\mathrm{Pr}_{\bigcap_{j\in J}\mathrm{Fix}(S_j)\cap\mathrm{Sol}(C,f)}(x^0)$.

Proof. Set $a_k := \|x^k-z\|^2$. In order to prove this theorem, we consider the following two cases.

Case 1. Suppose that there exists $k_0\in\mathbb{N}$ such that $a_{k+1}\le a_k$ for all $k\ge k_0$. Then $\lim_{k\to\infty}a_k$ exists in $[0,\infty)$.

Using Step 3*, Lemma 2.4 (ii), Lemma 3.4 (i) and (2.2), we obtain

$$\begin{aligned}
\|x^{k+1}-z\|^2 &= \|(1-\omega)p^k+\omega S_{j_0}p^k-z\|^2 \\
&= \|p^k-z\|^2-2\omega\langle p^k-z, p^k-S_{j_0}p^k\rangle+\omega^2\|p^k-S_{j_0}p^k\|^2 \\
&\le \|p^k-z\|^2-\omega(1-\beta_{j_0}-\omega)\|p^k-S_{j_0}p^k\|^2 \\
&= \|\alpha_k(x^0-z)+(1-\alpha_k)(z^k-z)\|^2-\frac{1-\beta_{j_0}-\omega}{\omega}\|x^{k+1}-p^k\|^2 \\
&\le (1-\alpha_k)\|z^k-z\|^2+2\alpha_k\langle x^0-z, p^k-z\rangle-\|x^{k+1}-p^k\|^2 \\
&\le \|z^k-z\|^2+2\alpha_k\langle x^0-z, p^k-z\rangle-\|x^{k+1}-p^k\|^2 \\
&\le \|x^k-z\|^2-\|z^k-x^k\|^2+2\alpha_k\langle x^0-z, p^k-z\rangle-\|x^{k+1}-p^k\|^2+2\rho_k\epsilon_k \\
&\le \|x^k-z\|^2-\|z^k-x^k\|^2+\alpha_kM_0-\|x^{k+1}-p^k\|^2+2\rho_k\epsilon_k,
\end{aligned}$$

where $M_0:=\sup\{2\langle x^0-z, p^k-z\rangle : k=0,1,...\}<\infty$. It follows that

$$a_{k+1} \le a_k-\|z^k-x^k\|^2-\|x^{k+1}-p^k\|^2+\alpha_kM_0+2\rho_k\epsilon_k \quad \forall k\ge 0. \tag{4.5}$$

Passing to the limit as $k\to\infty$ and using the assumptions $\lim_{k\to\infty}\alpha_k=0$ and $\lim_{k\to\infty}2\rho_k\epsilon_k=0$, we have

$$\lim_{k\to\infty}\|z^k-x^k\|=0, \quad \lim_{k\to\infty}\|x^{k+1}-p^k\|=0.$$

In a similar way to the proof of Theorem 3.6, we can show that

$$\lim_{k\to\infty}\|x^k-y^k\|=0.$$

It follows that

$$\|z^k-y^k\| \le \|z^k-x^k\|+\|x^k-y^k\| \to 0 \text{ as } k\to\infty.$$

Using Step 3*, we have

$$\|p^k-z^k\| = \alpha_k\|x^0-z^k\| \le \alpha_kM_1 \to 0 \text{ as } k\to\infty,$$

where $M_1=\sup\{\|x^0-z^k\| : k=0,1,...\}<+\infty$.

Therefore,

$$\|x^{k+1}-x^k\| \le \|x^{k+1}-p^k\|+\|p^k-z^k\|+\|z^k-x^k\| \to 0 \text{ as } k\to\infty.$$

From this and

$$\|x^k-p^k\| \le \|x^{k+1}-x^k\|+\|x^{k+1}-p^k\|,$$

it follows that $\lim_{k\to\infty}\|x^k-p^k\|=0$. Since the sequence $\{x^k\}$ is bounded, there exists a subsequence $\{x^{k_i}\}$ such that $x^{k_i}\rightharpoonup p\in H$, $p^{k_i}\rightharpoonup p$ and

$$\limsup_{k\to\infty}\langle x^0-z, p^k-z\rangle = \lim_{i\to\infty}\langle x^0-z, p^{k_i}-z\rangle. \tag{4.8}$$

Now, we will show that

$$p\in\bigcap_{j\in J}\mathrm{Fix}(S_j)\cap\mathrm{Sol}(C,f).$$

In a similar way to the proof of Theorem 3.6, we can prove that $p\in\mathrm{Sol}(C,f)$. For each $j\in J$, using Step 3*, we have

$$\|p^k-S_jp^k\| = \frac{1}{\omega}\|p^k-q_j^k\| \le \frac{1}{\omega}\|p^k-q_{j_0}^k\| = \frac{1}{\omega}\|x^{k+1}-p^k\|.$$

By $\lim_{k\to\infty}\|x^{k+1}-p^k\|=0$ and the last inequality, we get

$$\|p^k-S_jp^k\| \to 0 \text{ as } k\to\infty.$$

From $\lim_{k\to\infty}\|x^k-p^k\|=0$ and $x^{k_i}\rightharpoonup p$, it follows that $p^{k_i}\rightharpoonup p$. Using this, $\lim_{k\to\infty}\|p^k-S_jp^k\|=0$ and the demiclosedness of $S_j$, we have $p\in\mathrm{Fix}(S_j)$. Therefore,

$$p\in\bigcap_{j\in J}\mathrm{Fix}(S_j)\cap\mathrm{Sol}(C,f).$$

This together with (4.8) implies that

$$\limsup_{k\to\infty}b_k = 2\lim_{i\to\infty}\langle x^0-z, p^{k_i}-z\rangle = 2\langle x^0-z, p-z\rangle \le 0.$$

Using this, Lemma 2.5 and Lemma 4.3, we obtain $\lim_{k\to\infty}\|x^k-z\|=0$.

Case 2. We now assume that there is no $\bar{k}\in\mathbb{N}$ such that $\{a_k\}_{k=\bar{k}}^{\infty}$ is monotonically decreasing. Then there exists an integer $k_0\ge\bar{k}$ such that $a_{k_0}\le a_{k_0+1}$, and there exists a subsequence $\{a_{\tau(k)}\}$ of $\{a_k\}$ such that (see Remark 4.4 of [21])

$$0\le a_k\le a_{\tau(k)+1}, \quad a_{\tau(k)}\le a_{\tau(k)+1} \quad \forall k\ge k_0,$$

where $\tau(k)=\max\{i\in\mathbb{N} : k_0\le i\le k,\ a_i\le a_{i+1}\}$. Using $a_{\tau(k)}\le a_{\tau(k)+1}$ for all $k\ge k_0$ and (4.5), we get

$$\|z^{\tau(k)}-x^{\tau(k)}\|\to 0, \quad \|x^{\tau(k)+1}-p^{\tau(k)}\|\to 0 \text{ as } k\to\infty.$$

In a similar way to Case 1, we can show that

$$\lim_{k\to\infty}\|x^{\tau(k)}-p^{\tau(k)}\| = \lim_{k\to\infty}\|x^{\tau(k)}-y^{\tau(k)}\| = \lim_{k\to\infty}\|z^{\tau(k)}-y^{\tau(k)}\| = 0.$$

Since $\{x^{\tau(k)}\}$ is bounded, there exists a subsequence of $\{x^{\tau(k)}\}$, still denoted by $\{x^{\tau(k)}\}$, which converges weakly to $p\in H$. In a similar way to Case 1, we can prove that $p\in\bigcap_{j\in J}\mathrm{Fix}(S_j)\cap\mathrm{Sol}(C,f)$ and

$$\limsup_{k\to\infty}b_{\tau(k)}\le 0.$$

Using Lemma 4.3 (i) and $a_{\tau(k)}\le a_{\tau(k)+1}$ for all $k\ge k_0$, we have

$$\alpha_{\tau(k)}a_{\tau(k)} \le a_{\tau(k)}-a_{\tau(k)+1}+\alpha_{\tau(k)}b_{\tau(k)}+\beta_{\tau(k)} \le \alpha_{\tau(k)}b_{\tau(k)}+\beta_{\tau(k)}.$$

Since $\alpha_{\tau(k)}>0$, we get

$$a_{\tau(k)} \le b_{\tau(k)}+\frac{\beta_{\tau(k)}}{\alpha_{\tau(k)}}.$$

From Lemma 4.3 (iii) and the last inequality, it follows that

$$\limsup_{k\to\infty}a_{\tau(k)} \le \limsup_{k\to\infty}b_{\tau(k)} \le 0.$$

Hence, $\lim_{k\to\infty}a_{\tau(k)}=0$. It follows that

$$a_{\tau(k)+1} = \|x^{\tau(k)+1}-z\|^2 \le \left(\|x^{\tau(k)+1}-x^{\tau(k)}\|+\|x^{\tau(k)}-z\|\right)^2 \to 0 \text{ as } k\to\infty.$$

Using $0\le a_k\le a_{\tau(k)+1}$ for all $k\ge k_0$, we get $\lim_{k\to\infty}a_k=0$. Hence, $x^k\to z$ as $k\to\infty$.

In this final section, we present some fundamental experiments in finite- and infinite-dimensional spaces to illustrate the numerical behavior of Algorithm 3.2 and to compare it with known algorithms. All programs are coded in Matlab R2018a and run on a PC with an Intel(R) Core(TM) i7-6600X CPU @ 2.60GHz and 16GB of RAM.

Let $H=\mathbb{R}^n$. Consider Problem (1.1), whose feasible region $C$ is a polyhedral convex set given by

$$C = \{x\in\mathbb{R}^n : Ax\le b\},$$

and the bifunction $f:\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}$, often found in Nash-Cournot equilibrium models, of the form (see [29]):

$$f(x,y) = \langle Px+Qy+q, y-x\rangle,$$

where $b\in\mathbb{R}^m$, $q\in\mathbb{R}^n$, $A$ is an $m\times n$ matrix, and $P, Q$ are $n\times n$ matrices such that $Q$ is symmetric positive semidefinite and $Q-P$ is negative semidefinite. The bifunction $f$ satisfies conditions A1, A3 and A4 (see [29]). Now, we will show that condition A2 also holds. Indeed, we have $\partial_2 f(x,\cdot)(y) = \{Px+2Qy-Qx+q\}$ and $\partial_2 f(y,\cdot)(y) = \{Py+Qy+q\}$. It follows that

$$\rho(\partial_2 f(x,\cdot)(y), \partial_2 f(y,\cdot)(y)) = \|(P-Q)(x-y)\| \le \|P-Q\|\,\|x-y\| \quad \forall x,y\in\mathbb{R}^n.$$
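This identity is easy to confirm numerically; the following quick check (random data, illustrative only) verifies that the two gradients differ by exactly $(P-Q)(x-y)$ and that the bound with $L=\|P-Q\|$ holds:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
P = rng.standard_normal((n, n))
Q0 = rng.standard_normal((n, n))
Q = 0.5 * (Q0 + Q0.T)                    # Q symmetric, as the model requires
q = rng.standard_normal(n)
x, y = rng.standard_normal(n), rng.standard_normal(n)

g_xy = P @ x + 2.0 * (Q @ y) - Q @ x + q # gradient of f(x,.) at y
g_yy = P @ y + Q @ y + q                 # gradient of f(y,.) at y
lhs = np.linalg.norm(g_xy - g_yy)
print(np.isclose(lhs, np.linalg.norm((P - Q) @ (x - y))))               # True
print(lhs <= np.linalg.norm(P - Q, 2) * np.linalg.norm(x - y) + 1e-12)  # True
```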

Test 1. Let $n=5$, $m=10$. We perform some experiments to show the numerical behavior of Algorithm 3.2 (the computation results are shown in Fig. 1), where $L=\|P-Q\|$, $\lambda_k=\frac{1}{2L}$, $\epsilon_k=0$ for all $k$, $\alpha_k=\frac{1}{25k+1}$, and the matrices $P, Q, A, b$ are chosen as follows:

Figure 1. Convergence of Algorithm 3.2 with the tolerance $\epsilon=10^{-3}$; the stopping criterion is $\|x^{k+1}-x^k\|\le\epsilon$.

$$A = \begin{pmatrix}
1.1378 & 0.3305 & 1.0301 & 0.5701 & 1.9009\\
0.2146 & 0.9073 & 1.1676 & 1.8277 & 1.9109\\
1.6476 & 0.7412 & 0.4565 & 1.2547 & 1.1941\\
1.6537 & 0.3268 & 0.7278 & 1.6381 & 1.5688\\
1.9957 & 0.7574 & 0.0763 & 0.2750 & 0.7552\\
0.9103 & 1.4555 & 0.8860 & 1.2259 & 1.1001\\
0.2336 & 1.1566 & 1.3679 & 0.8170 & 1.1660\\
1.4287 & 0.0270 & 0.4184 & 0.6063 & 0.8379\\
1.3505 & 0.2998 & 0.8582 & 0.6724 & 1.7862\\
0.9010 & 1.0412 & 1.0278 & 0.0863 & 0.8079
\end{pmatrix}, \quad
b = \begin{pmatrix}2.9916\\ 2.1089\\ 2.3122\\ 2.8907\\ 1.4493\\ 2.9630\\ 1.3412\\ 2.8821\\ 2.9139\\ 1.3500\end{pmatrix},$$

$$P = \begin{pmatrix}
6.0789 & 2.0000 & 0 & 0 & 0\\
2.0000 & 7.9330 & 0 & 0 & 0\\
0 & 0 & 8.0712 & 2.0000 & 0\\
0 & 0 & 2.0000 & 8.5923 & 0\\
0 & 0 & 0 & 0 & 6.5521
\end{pmatrix}, \quad
Q = \begin{pmatrix}
3.7329 & 1.0000 & 0 & 0 & 0\\
1.0000 & 3.5758 & 0 & 0 & 0\\
0 & 0 & 4.2547 & 1.0000 & 0\\
0 & 0 & 1.0000 & 3.9077 & 0\\
0 & 0 & 0 & 0 & 3.4648
\end{pmatrix}.$$

Test 2. This test compares the computational results of Algorithm 3.2 (Alg. 3.2) and the Halpern subgradient extragradient method (HSEM) in [17] for different initial points (Table 1). The data of the algorithms are as follows:

Table 1. The comparative results for different initial points, where $\epsilon=10^{-3}$.

| Init. point | Alg. 3.2 Iter. | Alg. 3.2 CPU-times | HSEM Iter. | HSEM CPU-times |
|---|---|---|---|---|
| (1,1,1,1,1) | 14 | 1.2656 | 14 | 3.8438 |
| (1,0,0,0,0) | 17 | 1.4531 | 28 | 7.2031 |
| (1,0,1,0,1) | 57 | 4.5781 | 10 | 2.0938 |
| (0,0,10,0,5) | 530 | 38.3594 | 63 | 9.5938 |
| (5,0,10,0,5) | 536 | 38.7344 | 336 | 40.7188 |
| (1,2,10,0,5) | 502 | 35.7656 | 120 | 15.4531 |
| (30,2,10,0,5) | 642 | 44.7188 | 342 | 44.3438 |
| (30,2,10,10,5) | 526 | 36.3755 | 799 | 95.2188 |
| (3,2,1,1,5) | 16 | 1.4531 | 30 | 4.0156 |
| (30,2,1,1,50) | 46 | 3.9375 | 1660 | 230.5469 |
| (10,2,1,1,20) | 28 | 2.3906 | 686 | 85 |


$A$ is a matrix of size $m\times n$ with entries generated randomly in $[-2,2]$, and the elements of $b$ are generated randomly in $[1,3]$.

$q$ is the zero vector and the two matrices $P, Q$ are defined as follows:

$$P = \begin{pmatrix}
8.9487 & 2.0000 & 0 & 0 & 0\\
2.0000 & 9.8750 & 0 & 0 & 0\\
0 & 0 & 6.9455 & 2.0000 & 0\\
0 & 0 & 2.0000 & 8.7904 & 0\\
0 & 0 & 0 & 0 & 10.2969
\end{pmatrix}, \quad
Q = \begin{pmatrix}
4.8100 & 1.0000 & 0 & 0 & 0\\
1.0000 & 4.4537 & 0 & 0 & 0\\
0 & 0 & 3.2502 & 1.0000 & 0\\
0 & 0 & 1.0000 & 4.0523 & 0\\
0 & 0 & 0 & 0 & 5.1967
\end{pmatrix}.$$

Alg. 3.2: The parameters are the same as in Test 1; the stopping criterion is $\|x^{k+1}-x^k\|\le\epsilon$.

HSEM: $\lambda_k=\frac{1}{2L}$ for all $k$, $\alpha_k=\frac{1}{k+1}$; the stopping criterion is $\|x^{k+1}-x^k\|\le\epsilon$.

Test 3. In this test, we perform some experiments to compare the numerical behavior of Algorithm 3.2 with the Halpern subgradient extragradient method (HSEM) ([17], Algorithm 3.2) and the extragradient-viscosity method (EVM) ([36], Algorithm 1). Computational results are reported in Table 2. The data of the algorithms are as follows:

Table 2. The comparative results, where the tolerance parameter is $\epsilon=10^{-3}$ and the stopping criterion is $\|x^{k+1}-x^k\|\le\epsilon$.

m = 50:

| n | Alg. 3.2 Iter. | Alg. 3.2 CPU-times | HSEM Iter. | HSEM CPU-times | EVM Iter. | EVM CPU-times |
|---|---|---|---|---|---|---|
| 2 | 354 | 5.2344 | 101 | 3.6719 | 93 | 4.7344 |
| 3 | 232 | 3 | 152 | 2.8281 | 103 | 4.1719 |
| 5 | 433 | 1.2656 | 716 | 3.75 | 111 | 8.0469 |
| 10 | 141 | 2.8438 | 1146 | 2.7344 | 132 | 2.9063 |
| 20 | 80 | 27.3438 | 1285 | 9.1563 | 141 | 0.7031 |

m = 100:

| n | Alg. 3.2 Iter. | Alg. 3.2 CPU-times | HSEM Iter. | HSEM CPU-times | EVM Iter. | EVM CPU-times |
|---|---|---|---|---|---|---|
| 2 | 419 | 5.0469 | 15 | 1.75 | 93 | 8.8281 |
| 3 | 601 | 23.2188 | 75 | 5.8281 | 93 | 3.0313 |
| 5 | 213 | 1.1563 | 109 | 11.3594 | 102 | 5.4531 |
| 10 | 272 | 6.7656 | 264 | 80.8281 | 122 | 3.1563 |
| 20 | 304 | 29.0469 | 230 | 150.5469 | 141 | 5.3125 |


$A$ is a matrix of size $m\times n$ with entries generated randomly in $[-2,2]$, and the elements of $b$ are generated randomly in $[1,3]$.

$q$ is the zero vector; the matrix $P=Q-T$, where the symmetric positive semidefinite matrix $Q$ is built from $Q_1$ and a random orthogonal matrix, and the negative semidefinite matrix $T$ is built from $Q_2$ and another random orthogonal matrix; $Q_1, Q_2$ are random diagonal matrices with diagonal elements in $[1,m]$ and $[-m,0]$, respectively (a sketch of this construction is given after the parameter list below).

Alg. 3.2: The parameters are the same as in Test 2.

HSEM: The parameters are the same as in Test 2.

EVM: $F(x)=x-x^0$, $x^0\in C$, $\beta=\frac{1}{2}$, $\beta_k=\frac{1}{2k+1}$, $S=I$, where $I$ is the identity mapping.
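The following Python sketch shows one way to realize the data generation described above (using the QR factorization of a Gaussian matrix as the source of random orthogonal matrices is an implementation assumption):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 10, 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal matrix
V, _ = np.linalg.qr(rng.standard_normal((n, n)))  # another one
D1 = np.diag(rng.uniform(1.0, m, n))              # diagonal elements in [1, m]
D2 = np.diag(rng.uniform(-m, 0.0, n))             # diagonal elements in [-m, 0]
Q = U @ D1 @ U.T                                  # symmetric positive semidefinite
T = V @ D2 @ V.T                                  # symmetric negative semidefinite
P = Q - T                                         # so Q - P = T is negative semidefinite

A = rng.uniform(-2.0, 2.0, (m, n))                # constraint matrix of C
b = rng.uniform(1.0, 3.0, m)                      # right-hand side of Ax <= b
```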

Test 4. Consider the infinite-dimensional Hilbert space $L^2([0,1])$ with inner product and induced norm

$$\langle x,y\rangle = \int_0^1 x(t)y(t)\,dt \quad \forall x,y\in L^2([0,1]), \qquad \|x\| := \left(\int_0^1|x(t)|^2\,dt\right)^{\frac{1}{2}}.$$

We consider Problem (1.1) with the feasible set $C=\{x\in L^2([0,1]) : \|x\|\le 1\}$ and the bifunction

$$f(x,y) = \langle F(x), y-x\rangle,$$

where $F: L^2([0,1])\to L^2([0,1])$ is of the form $F(x)=\max\{0,x(t)\}$. Obviously, $F$ is monotone and 1-Lipschitz continuous, and $f$ satisfies conditions A1-A4. In this test, we compare the behavior of Algorithm 3.2 (Alg. 3.2) with the adaptive golden ratio algorithm (AGRA) of Malitsky [23] and the subgradient extragradient algorithm (SEA) of Censor et al. ([11], Algorithm 4.1). In all three algorithms we take $x^0(t)=t^2$, $L=2$, and the stopping criterion $\|x^{k+1}-x^k\|\le\epsilon$. The computational results of this test are shown in Table 3.
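In practice an $L^2([0,1])$ experiment is run on a discretization; a minimal Python sketch of the ingredients of Test 4 (the grid size and quadrature rule are implementation assumptions) is:

```python
import numpy as np

N = 1000
t = np.linspace(0.0, 1.0, N)

def inner(x, y):
    return np.trapz(x * y, t)                # <x,y> = int_0^1 x(t)y(t) dt

def norm(x):
    return np.sqrt(inner(x, x))

def proj_C(x):
    """Projection onto C = {x : ||x|| <= 1}: rescale if outside the ball."""
    nx = norm(x)
    return x if nx <= 1.0 else x / nx

F = lambda x: np.maximum(0.0, x)             # F(x) = max{0, x(t)}
x0 = t ** 2                                  # starting point x^0(t) = t^2
print(norm(x0))                              # ||t^2|| = 1/sqrt(5) ~ 0.4472
print(norm(proj_C(3.0 * x0)))                # 1.0 after projection
```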

Table 3. The comparative results for different given parameters, where $\epsilon=10^{-3}$.

| Alg. 3.2: $\lambda_k$ | Iter. | CPU-time | SEA: $\tau_k$ | Iter. | CPU-time | AGRA: $\bar{\lambda}_k$ | Iter. | CPU-time |
|---|---|---|---|---|---|---|---|---|
| $\frac{1}{500L+1}$ | 1 | 0.4375 | $\frac{1}{k+1}$ | 1 | 0.2344 | $\frac{\varphi}{2k+3}$ | 39 | 18.3438 |
| $\frac{1}{4L+1}$ | 47 | 16.2969 | $\frac{1}{5}$ | 26 | 5.875 | $\frac{\varphi}{20k^{1/5}+50}$ | 32 | 16.4844 |
| $\frac{1}{10L+1}$ | 77 | 22.2969 | $\frac{1}{50}$ | 111 | 26.0781 | $\frac{\varphi}{30k^{1/2}+30}$ | 51 | 26.6406 |
| $\frac{1}{L+10}$ | 55 | 16.6563 | $\frac{1}{100}$ | 151 | 61.8906 | $\frac{\varphi}{5k^{1/2}+30}$ | 79 | 39.7969 |
| $\frac{1}{L+1}$ | 24 | 7.1719 | $\frac{1}{100+k}$ | 124 | 3.5313 | $\frac{\varphi}{k^{1/2}+3}$ | 35 | 16.8125 |
| $\frac{1}{100L+1}$ | 123 | 34.1719 | $\frac{1}{50+k}$ | 101 | 35.4375 | $\frac{\varphi}{5k+20}$ | 79 | 54.4688 |
| $\frac{1}{2L^2+1}$ | 47 | 14.7656 | $\frac{1}{5+k}$ | 44 | 14.9219 | $\frac{\varphi}{5k+1}$ | 68 | 38.8281 |
| $\frac{1}{2L^5+1}$ | 133 | 40.2031 | $\frac{1}{5+k}$ | 215 | 5.6406 | $\frac{\varphi}{3k^3+20}$ | 77 | 40.9063 |
| $\frac{1}{20L^{1/5}+1}$ | 83 | 28 | $\frac{1}{20+k}$ | 55 | 1.4531 | $\frac{\varphi}{4k^3+15}$ | 74 | 41.9844 |
| $\frac{1}{L^{1/2}+8}$ | 48 | 15.4844 | $\frac{1}{20+k}$ | 1272 | 25.5313 | $\varphi(5k^3+1)^{-1}$ | 35 | 2.8125 |
| $\frac{1}{L^{1/5}+5}$ | 37 | 16.7188 | $\frac{1}{20+k}$ | 1568 | 21.4219 | $\varphi(5k^{1/2}+30)^{-1}$ | 243 | 21.9219 |

  1. P. N. Anh, A hybrid extragradient method extended to fixed point problems and equilibrium problems, Optimization, 62(2)(2013), 271-283.
  2. P. N. Anh and L. T. H. An, An Armijo-type method for pseudomonotone equilibrium problems and its applications, J. Global Optim., 57(3)(2013), 803-820.
  3. P. N. Anh, T. N. Hai and P. M. Tuan, On ergodic algorithms for equilibrium problems, J. Global Optim., 64(2016), 179-195.
  4. P. N. Anh, N. D. Hien and P. M. Tuan, Computational errors of the extragradient method for equilibrium problems, Bull. Malays. Math. Sci. Soc., 42(2019), 2835-2858.
  5. P. N. Anh and J. K. Kim, Outer approximation algorithms for pseudomonotone equilibrium problems, Comput. Math. Appl., 61(9)(2011), 2588-2595.
  6. P. N. Anh, J. K. Kim and H. G. Hyun, A proximal point-type algorithm for pseudomonotone equilibrium problems, Bull. Korean Math. Soc., 49(4)(2012), 749-759.
  7. P. N. Anh, L. D. Muu, V. H. Nguyen and J. J. Strodiot, Using the Banach contraction principle to implement the proximal point method for multivalued monotone variational inequalities, J. Optim. Theory Appl., 124(2005), 285-306.
  8. P. N. Anh, T. V. Thang and H. T. C. Thach, Halpern projection methods for solving pseudomonotone multivalued variational inequalities in Hilbert spaces, Numer. Algorithms, 87(2021), 335-363.
  9. D. P. Bertsekas and E. M. Gafni, Projection methods for variational inequalities with application to the traffic assignment problem, Math. Programming Stud., 17(1982), 139-159.
  10. E. Blum and W. Oettli, From optimization and variational inequalities to equilibrium problems, Math. Student, 63(1994), 123-145.
  11. Y. Censor, A. Gibali and S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148(2)(2011), 318-335.
  12. P. L. Combettes and S. A. Hirstoaga, Equilibrium programming in Hilbert spaces, J. Nonlinear Convex Anal., 6(1)(2005), 117-136.
  13. P. Daniele, F. Giannessi and A. Maugeri, Equilibrium problems and variational models, Kluwer, 2003.
  14. B. V. Dinh and D. S. Kim, Extragradient algorithms for equilibrium problems and symmetric generalized hybrid mappings, Optim. Lett., 11(2017), 537-553.
  15. F. Flores-Bazán, Existence theory for finite dimensional pseudomonotone equilibrium problems, Acta Appl. Math., 77(2003), 249-297.
  16. M. Fukushima, A relaxed projection method for variational inequalities, Math. Programming, 35(1986), 58-70.
  17. D. V. Hieu, Halpern subgradient extragradient method extended to equilibrium problems, Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM, 111(2017), 823-840.
  18. D. V. Hieu, Modified subgradient extragradient algorithm for pseudomonotone equilibrium problems, Bull. Korean Math. Soc., 55(5)(2018), 1503-1521.
  19. A. N. Iusem and W. Sosa, New existence results for equilibrium problems, Nonlinear Anal., 52(2003), 621-635.
  20. G. M. Korpelevich, Extragradient method for finding saddle points and other problems, Èkonom. i Mat. Metody, 12(1976), 747-756.
  21. P. E. Maingé, A hybrid extragradient-viscosity method for monotone operators and fixed point problems, SIAM J. Control Optim., 47(2008), 1499-1515.
  22. Yu. Malitsky, Projected reflected gradient methods for monotone variational inequalities, SIAM J. Optim., 25(1)(2015), 502-520.
  23. Yu. Malitsky, Golden ratio algorithms for variational inequalities, Math. Program., 184(2020), 383-410.
  24. P. Marcotte, Network design problem with congestion effects: A case of bilevel programming, Math. Programming, 34(2)(1986), 142-162.
  25. G. Marino and H. K. Xu, Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces, J. Math. Anal. Appl., 329(2007), 336-346.
  26. G. Mastroeni, On auxiliary principle for equilibrium problems, Nonconvex Optim. Appl., Kluwer Acad. Publ., Norwell, 2003.
  27. G. Mastroeni, Gap functions for equilibrium problems, J. Global Optim., 27(2003), 411-426.
  28. A. Moudafi, Proximal point algorithm extended to equilibrium problems, J. Nat. Geom., 15(1999), 91-100.
  29. T. D. Quoc, L. D. Muu and V. H. Nguyen, Extragradient algorithms extended to equilibrium problems, Optimization, 57(6)(2008), 749-776.
  30. H. U. Rehman, P. Kumam, Y. J. Cho, Y. I. Suleiman and W. Kumam, Modified Popov's explicit iterative algorithms for solving pseudomonotone equilibrium problems, Optim. Methods Softw., 36(1)(2021), 82-113.
  31. R. T. Rockafellar, Convex Analysis, Princeton University Press, New Jersey, 1970.
  32. P. Santos and S. Scheimberg, An inexact subgradient algorithm for equilibrium problems, Comput. Appl. Math., 30(2011), 91-107.
  33. Y. Shehu, Q. L. Dong and D. Jiang, Single projection method for pseudomonotone variational inequality in Hilbert spaces, Optimization, 68(2018), 385-409.
  34. K. Taji, M. Fukushima and T. Ibaraki, A globally convergent Newton method for solving strongly monotone variational inequalities, Math. Programming, 58(1993), 369-383.
  35. T. V. Thang, Conjugate duality and optimization over weakly efficient set, Acta Math. Vietnam., 42(2)(2017), 337-355.
  36. P. T. Vuong, J. J. Strodiot and V. H. Nguyen, On extragradient-viscosity methods for solving equilibrium and fixed point problems in a Hilbert space, Optimization, 64(2)(2015), 429-451.