Open Access · pISSN 1225-6951 · eISSN 0454-8124
Article

Kyungpook Mathematical Journal 2024; 64(1): 69-94

Published online March 31, 2024

Copyright © Kyungpook Mathematical Journal.

Strong Convergence of a Bregman Projection Method for the Solution of Pseudomonotone Equilibrium Problems in Banach Spaces

Olawale Kazeem Oyewole, Lateef Olakunle Jolaoso, Kazeem Olalekan Aremu

The Technion – Israel Institute of Technology, 32000 Haifa, Israel
e-mail : oyewoleolawalekazeem@gmail.com, oyewoleok@campus.technion.ac.il

School of Mathematics, University of Southampton, SO17 1BJ, United Kingdom
Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P. O. Box 94 Medunsa 0204, Ga-Rankuwa, South Africa
e-mail : jollatanu@yahoo.co.u

Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P. O. Box 94 Medunsa 0204, Ga-Rankuwa, South Africa
School of Mathematics, Usmanu Danfodiyo University Sokoto, P. M. B. 2346, Sokoto, Sokoto State, Nigeria
e-mail : aremu.kazeem@udusok.edu.ng, aremukazeemolalekan@gmail.com

Received: November 1, 2021; Revised: October 5, 2022; Accepted: November 15, 2022

In this paper, we introduce an inertial self-adaptive projection method using Bregman distance techniques for solving pseudomonotone equilibrium problems in reflexive Banach spaces. The algorithm requires only one projection onto the feasible set without any Lipschitz-like condition on the bifunction. Using this method, a strong convergence theorem is proved under some mild conditions. Furthermore, we include numerical experiments to illustrate the behaviour of the new algorithm with respect to the Bregman function and other algorithms in the literature.

Keywords: equilibrium problem, strongly pseudomonotone, strong convergence, Banach space, quasi-ϕ-nonexpansive mapping, fixed point

Let C be a nonempty, closed and convex subset of a reflexive real Banach space E with dual space E*. Throughout this paper, we denote by ‖·‖ and ⟨·,·⟩ the norm and the duality pairing between elements of E and E*, respectively. The Equilibrium Problem (EP) is formulated as:

Find x* ∈ C such that g(x*, y) ≥ 0 for all y ∈ C,   (1.1)

where g : C × C → ℝ is a bifunction satisfying g(x, x) = 0 for all x ∈ C. The EP provides a unified framework for the study of various problems arising in the pure and applied sciences, such as complementarity problems, fixed point problems, optimization problems and variational inequality problems [4, 15, 31]. For instance, if g(x, y) = ⟨F(x), y − x⟩ for all x, y ∈ C, where F : C → E* is a mapping, then the EP becomes the Variational Inequality Problem (VIP) (see [16, 35]), which consists of finding a point x* ∈ C such that

⟨F(x*), y − x*⟩ ≥ 0 for all y ∈ C.   (1.2)

In the study of the EP, research is split into existence results and the development of iterative algorithms for approximating solutions. For more on existence results, we refer the reader to the works [4, 15] and the references therein. Iterative schemes for approximating solutions of equilibrium problems have been studied in both finite and infinite dimensional spaces (see [3, 12, 31]). In recent years, there have been several results on approximating the solution of equilibrium problems involving pseudomonotone and strongly pseudomonotone bifunctions (see [10, 12, 14, 19, 22, 32]). We note that in most of these results, the bifunction g is required to satisfy a certain Lipschitz-type condition on the set C. A bifunction g : C × C → ℝ is said to satisfy a Lipschitz-type condition if there exist constants c₁, c₂ > 0 such that for all x, y, z ∈ C,

g(x, y) + g(y, z) ≥ g(x, z) − c₁‖x − y‖² − c₂‖y − z‖².   (1.3)

The Lipschitz-type condition (1.3) does not hold in general, and even when it does, the constants c₁ and c₂ are often difficult to find. This may affect the efficiency of the method (see [13, 20, 23]). In addition, earlier methods require solving two strongly convex programming problems per iteration, which is inefficient when the feasible set or the bifunction has a complex structure. To reduce some of these drawbacks, Vinh and Gibali [17] recently introduced gradient projection type algorithms for solving the EP with a pseudomonotone bifunction in real Hilbert spaces, as follows:

Algorithm 1.1. Inertial Gradient projection method (IGPM) for EP

Initialization: Take θ ∈ [0, 1) and a positive sequence {β_n}_{n=0}^∞ satisfying

Σ_{n=0}^∞ β_n = +∞,  Σ_{n=0}^∞ β_n² < +∞.   (1.4)

Select initial points x0,x1C and set n=1.

Iterative step: Given x_{n−1} and x_n (n ≥ 1), choose θ_n such that 0 ≤ θ_n ≤ θ̄_n,

where

θ̄_n = min{θ, β_n²/‖x_n − x_{n−1}‖} if x_n ≠ x_{n−1}, and θ̄_n = θ otherwise.

Compute

w_n = x_n + θ_n(x_n − x_{n−1}).

Take g′(x_n) ∈ ∂g(x_n, ·)(x_n) (n ≥ 1). Calculate

η_n = max{1, ‖g′(x_n)‖},  λ_n = β_n/η_n

and

x_{n+1} = P_C(w_n − λ_n g′(x_n)).

Stopping criterion: If x_{n+1} = w_n = x_n for some n ≥ 1, then stop. Otherwise set n := n + 1 and return to the Iterative step.

Although Algorithm 1.1 does not require a Lipschitz-like condition on the pseudomonotone bifunction, its convergence requires condition (1.4), which significantly affects the convergence rate of the algorithm. Rehman et al. [32] proposed an explicit algorithm in real Hilbert spaces which does not require condition (1.4), but instead the Lipschitz-like condition (1.3), as follows:

Algorithm 1.2. Inertial explicit subgradient extragradient method (IESEM)

Initialization: Choose x_{−1}, y_{−1}, x_0, y_0 ∈ H, α_1 > 0, μ ∈ (0, h(θ_n)), and let θ_n ∈ [0, √5 − 2) be a nondecreasing sequence.

Iterative step: Given x_{n−1}, y_{n−1}, x_n, y_n and α_n (n ≥ 0).

Step 1. Construct the half-space

C_n = {w ∈ H : ⟨w_n − α_n v_n − y_n, w − y_n⟩ ≤ 0}, where v_n ∈ ∂f(y_{n−1}, ·)(y_n),

and compute

x_{n+1} = argmin{α_n f(y_n, y) + ½‖w_n − y‖² : y ∈ C_n}, where w_n = x_n + θ_n(x_n − x_{n−1}).

Step 2. Set

α_{n+1} = min{α_n, μ(‖y_{n−1} − y_n‖² + ‖x_{n+1} − y_n‖²) / (2[f(y_{n−1}, x_{n+1}) − f(y_{n−1}, y_n) − f(y_n, x_{n+1})]₊)}

and compute

y_{n+1} = argmin{α_{n+1} f(y_n, y) + ½‖w_{n+1} − y‖² : y ∈ C},

where w_{n+1} = x_{n+1} + θ_{n+1}(x_{n+1} − x_n).

Stopping criterion: If x_{n+1} = w_n and y_n = y_{n−1} for some n ≥ 0, then stop. Otherwise set n := n + 1 and return to Step 1.

The authors proved that the sequences {x_n} and {y_n} generated by Algorithm 1.2 converge weakly to a solution of the EP.

Let us also mention that the inertial extrapolation term in Algorithms 1.1 and 1.2 is used as a means of speeding up the convergence properties of the algorithms. This method was first introduced by Polyak [30] and has been adopted by many other authors, see for instance [11, 19, 22, 27, 32].

Motivated by the above results, in this paper we seek an iterative method for solving pseudomonotone EPs in reflexive Banach spaces that involves neither condition (1.4) nor a Lipschitz-like condition. We introduce a new inertial self-adaptive Bregman projection method which does not require the bifunction to satisfy a Lipschitz-like condition, and whose convergence is proved without condition (1.4). We also use Bregman distance techniques, which generalize the Euclidean distance commonly used by other authors.

The rest of the paper is organized as follows. In Section 2, we collect some basic definitions and preliminary results required in our main results. In Section 3, we introduce our algorithm and prove a strong convergence result for the sequence generated by the algorithm. In Section 4, we give an application of our result to variational inequality problems. We provide some numerical reports and also compare the performance of our method with other methods in the literature in Section 5. We give some concluding remarks in Section 6.

In this section, we give some definitions and preliminary results which will be used in our convergence analysis. Let C be a nonempty, closed and convex subset of a real Banach space E with norm ‖·‖ and dual space E*. We denote the weak and strong convergence of a sequence {x_n} ⊂ E to a point x ∈ E by x_n ⇀ x and x_n → x, respectively.

A function f : E → (−∞, +∞] is said to be

  • proper, if dom(f) := {x ∈ E : f(x) < +∞} ≠ ∅;

  • strongly convex with convexity constant ρ > 0, if

    f(x) − f(y) ≥ ⟨∇f(y), x − y⟩ + (ρ/2)‖x − y‖² for all x, y ∈ E;

  • Gâteaux differentiable at x ∈ E, if there exists an element of E*, denoted by ∇f(x) or f′(x), such that

    lim_{t→0} (f(x + ty) − f(x))/t = ⟨y, ∇f(x)⟩ for all y ∈ E,

    where ∇f(x) is called the Gâteaux differential or gradient of f at x. We note that ∇f(∇f*(x*)) = x* for all x* ∈ E*;

  • Fréchet differentiable at x, if the limit above exists uniformly for y in the unit sphere of E.

Lemma 2.1. ([33]) Let f : E → ℝ be uniformly Fréchet differentiable and bounded on bounded subsets of E. Then ∇f is uniformly continuous on bounded subsets of E from the strong topology of E to the strong topology of E*.

The subdifferential of f at a point x, denoted by ∂f(x), is defined by

∂f(x) := {x* ∈ E* : f(y) − f(x) ≥ ⟨y − x, x*⟩ for all y ∈ E}.

Every element x* ∈ ∂f(x) is called a subgradient of f at x. If f is continuously differentiable, then ∂f(x) = {∇f(x)}, where ∇f(x) is the gradient of f at x. The Fenchel conjugate of f is the convex functional f* : E* → ℝ ∪ {+∞} defined by f*(x*) = sup{⟨x*, x⟩ − f(x) : x ∈ E}. Let E be a reflexive Banach space; the function f is said to be Legendre if and only if it satisfies the following two conditions (see [1]):

  • int dom(f) ≠ ∅ and ∂f is single-valued on its domain;

  • int dom(f*) ≠ ∅ and ∂f* is single-valued on its domain.

Let f be a strictly convex and Gâteaux differentiable function. The function D_f : dom(f) × int dom(f) → [0, ∞) defined by

Df(x, y) = f(x) − f(y) − ⟨x − y, ∇f(y)⟩

is called the Bregman distance with respect to the function f. It is worth mentioning that the bifunction Df is not a metric in the usual sense, because it satisfies neither the symmetry nor the triangle inequality property. However, it possesses the following important property, called the three points identity:

Df(x, y) + Df(y, z) − Df(x, z) = ⟨∇f(z) − ∇f(y), x − y⟩,   (2.1)

for x ∈ dom(f) and y, z ∈ int dom(f). Also, from the strong convexity of f and the definition of the Bregman distance, we have

Df(x, y) ≥ (ρ/2)‖x − y‖².   (2.2)

The Bregman distance function has been widely used by many authors in the literature (see [1, 7, 8] and the references therein).

Remark 2.2. Practically important examples of Bregman distance functions can be found in [2]. For example, if f(x) = ½‖x‖², then Df(x, y) = ½‖x − y‖², which is the squared Euclidean distance. Also, if f(x) = Σ_{j=1}^m x_j log(x_j), the negative Shannon entropy on the positive orthant ℝ₊₊^m := {x ∈ ℝ^m : x_j > 0}, we obtain the Kullback-Leibler cross entropy defined by

Df(x, y) = Σ_{j=1}^m x_j log(x_j/y_j) − Σ_{j=1}^m x_j + Σ_{j=1}^m y_j.
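As a quick numerical sanity check (our own illustration, not from the paper), the closed form above can be compared against the defining formula Df(x, y) = f(x) − f(y) − ⟨∇f(y), x − y⟩ for the negative-entropy kernel; the vector length and random seed below are arbitrary:

```python
import numpy as np

def bregman(f, grad_f, x, y):
    # Generic Bregman distance D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>.
    return f(x) - f(y) - np.dot(grad_f(y), x - y)

# Negative Shannon entropy kernel on the positive orthant.
f = lambda x: np.sum(x * np.log(x))
grad_f = lambda x: np.log(x) + 1.0

def kl(x, y):
    # Closed-form Kullback-Leibler cross entropy from the remark.
    return np.sum(x * np.log(x / y) - x + y)

rng = np.random.default_rng(0)
x, y = rng.uniform(0.1, 2.0, 5), rng.uniform(0.1, 2.0, 5)
print(abs(bregman(f, grad_f, x, y) - kl(x, y)) < 1e-12)  # True
print(kl(x, y) >= 0.0)  # True: Bregman distances are nonnegative for convex f
```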

Definition 2.3. ([5]) Let C be a nonempty, closed and convex subset of a reflexive real Banach space E. The Bregman projection of x ∈ int dom(f) onto C ⊂ int dom(f) is the unique vector Π_C x ∈ C which satisfies

Df(Π_C x, x) = inf{Df(y, x) : y ∈ C}.

Lemma 2.4. ([9]) Let C be a nonempty, closed and convex subset of E and x ∈ E. Let f : E → ℝ be a Gâteaux differentiable and totally convex function. Then

  (i) q = Π_C x if and only if ⟨∇f(x) − ∇f(q), y − q⟩ ≤ 0 for all y ∈ C;

  (ii) Df(y, Π_C x) + Df(Π_C x, x) ≤ Df(y, x) for all y ∈ C.

Definition 2.5. ([6, 9]) The bifunction ν_f : int dom(f) × [0, +∞) → [0, +∞) defined by

ν_f(x, t) := inf{Df(y, x) : y ∈ dom(f), ‖y − x‖ = t}

is called the modulus of total convexity at x. The function f is called totally convex at x ∈ int dom(f) if ν_f(x, t) is positive for any t > 0. The modulus of total convexity of f on C is defined by

ν_f(C, t) := inf{ν_f(x, t) : x ∈ C ∩ int dom(f)}.

The function f is called totally convex on bounded subsets if ν_f(C, t) > 0 for any nonempty bounded subset C and any t > 0. Also, f is said to be coercive if lim_{‖x‖→+∞} f(x)/‖x‖ = +∞.

Proposition 2.6. ([6]) If x ∈ int dom(f), then the following are equivalent:

  • the function f is totally convex at x;

  • f is sequentially consistent at x, i.e., for any sequence {y_n} ⊂ dom(f),

lim_{n→∞} Df(y_n, x) = 0  ⟹  lim_{n→∞} ‖y_n − x‖ = 0.

We also recall (see [6]) that the function f is called sequentially consistent if, for any two sequences {x_n} and {y_n} in E such that the first one is bounded,

lim_{n→∞} Df(x_n, y_n) = 0  ⟹  lim_{n→∞} ‖x_n − y_n‖ = 0.

Proposition 2.7. ([6]) If dom(f) contains at least two points, then the function f is totally convex on bounded sets if and only if the function f is sequentially consistent.

Proposition 2.8. ([34]) Let f : E → ℝ be a Gâteaux differentiable and totally convex function. If x̄ ∈ E and the sequence {Df(x_n, x̄)} is bounded, then the sequence {x_n} is also bounded.

Lemma 2.9. ([29]) If f : E → (−∞, +∞] is a proper lower semicontinuous function, then f* : E* → (−∞, +∞] is a proper weak* lower semicontinuous and convex function. Thus, for all y ∈ E, we have

Df(y, ∇f*(Σ_{i=1}^N λ_i ∇f(x_i))) ≤ Σ_{i=1}^N λ_i Df(y, x_i),

where {x_i}_{i=1}^N ⊂ E and {λ_i}_{i=1}^N ⊂ (0, 1) with Σ_{i=1}^N λ_i = 1.

A Banach space E is said to satisfy the Opial property [28] if, for any sequence {x_n} ⊂ E such that x_n ⇀ x for some x ∈ E, we have

liminf_{n→∞} ‖x_n − x‖ < liminf_{n→∞} ‖x_n − y‖

for all y ∈ E with y ≠ x. We note that all Hilbert spaces, all finite dimensional Banach spaces and the Banach spaces ℓ_p (1 ≤ p < ∞) satisfy the Opial property. It is also worth mentioning that not every Banach space satisfies the Opial property (see [18]). In order to extend this property to cover all Banach spaces, Huang et al. [25] established the following lemma.

Lemma 2.10. ([25]) Let E be a Banach space and f : E → (−∞, +∞] be a proper strictly convex function that is Gâteaux differentiable, and let {x_n} be a sequence in E such that x_n ⇀ x for some x ∈ E. Then

liminf_{n→∞} Df(x, x_n) < liminf_{n→∞} Df(y, x_n)

for all y ∈ dom f with y ≠ x.

Lemma 2.11. ([17]) Let {a_n} and {b_n} be two nonnegative real sequences such that

a_{n+1} ≤ a_n − b_n.

Then {a_n} is bounded and Σ_{n=1}^∞ b_n < ∞.

Definition 2.12. Let C be a nonempty, closed and convex subset of a Banach space E and let g : C × C → ℝ be a bifunction; g is said to be:

  (i) strongly γ-monotone on C, if there exists γ > 0 such that

    g(x, y) + g(y, x) ≤ −γ‖x − y‖² for all x, y ∈ C;

  (ii) monotone on C, if

    g(x, y) + g(y, x) ≤ 0 for all x, y ∈ C;

  (iii) strongly γ-pseudomonotone on C, if there exists γ > 0 such that for any x, y ∈ C,

    g(x, y) ≥ 0 ⟹ g(y, x) ≤ −γ‖x − y‖²;

  (iv) pseudomonotone on C, if

    g(x, y) ≥ 0 ⟹ g(y, x) ≤ 0 for all x, y ∈ C.

It is easy to see that (i) ⟹ (ii) ⟹ (iv) and (i) ⟹ (iii) ⟹ (iv), but the converse implications are not always true; see, for instance, [17, 20, 23].
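For a concrete one-dimensional illustration (our own example, not from the paper), the bifunction g(x, y) = e^{−x}(y − x) on C = ℝ is pseudomonotone but not monotone, so (iv) does not imply (ii); a brute-force check:

```python
import math, random

# g(x, y) = F(x)(y - x) with F(x) = exp(-x): since exp(-x) > 0,
# g(x, y) >= 0 iff y >= x, and then g(y, x) = exp(-y)(x - y) <= 0,
# so g is pseudomonotone; monotonicity fails because exp is decreasing.
def g(x, y):
    return math.exp(-x) * (y - x)

random.seed(1)
pairs = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(10_000)]

# Pseudomonotonicity: g(x, y) >= 0 must force g(y, x) <= 0.
assert all(g(y, x) <= 1e-12 for x, y in pairs if g(x, y) >= 0)

# Monotonicity fails: g(x, y) + g(y, x) > 0 at x = 0, y = 1.
print(g(0, 1) + g(1, 0))  # 1 - 1/e ≈ 0.632 > 0
```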

In this section, we give a precise statement of our algorithm and discuss some of its elementary properties and its convergence analysis. Let C be a nonempty, closed and convex subset of a reflexive Banach space E and let f : E → ℝ ∪ {+∞} be a bounded Legendre function which is uniformly Fréchet differentiable, strongly coercive, strongly convex and totally convex on bounded subsets of E. Let g : C × C → ℝ be a bifunction satisfying the following conditions:

  (A1) g(x, ·) is convex and lower semicontinuous for every x ∈ C;

  (A2) g is pseudomonotone on C;

  (A3) EP(g, C) ≠ ∅;

  (A4) if {x_n}_{n=0}^∞ ⊂ E is bounded, then the sequence {h(x_n)}_{n=0}^∞, where h(x_n) ∈ ∂g(x_n, ·)(x_n), is bounded;

  (A5) for some L > 0, the mapping h(x) ∈ ∂g(x, ·)(x) is L-Lipschitz continuous. However, knowledge of L is not required for the execution of our method.

Remark 3.1. We note that condition (A4) is a standard assumption and holds when g(x,·) is bounded on bounded sets (see, for instance [9, Proposition 1.1.11]).

Next, we present our algorithm as follows:

Algorithm 3.2. Modified Bregman Popov Extragradient Method for EP
Initialization: Choose x_0, x_1, y_0, y_1 ∈ C, α_1 > 0, μ ∈ (0, ρ(√2 − 1)) and θ ∈ (0, 1).

Iterative step: Given x_{n−1}, x_n, y_{n−1}, y_n and α_n, calculate x_{n+1}, y_{n+1} and α_{n+1} for each n ≥ 1 as follows:

w_n = ∇f*(∇f(x_n) + θ(∇f(x_{n−1}) − ∇f(x_n))),
x_{n+1} = Π_{T_n} ∇f*(∇f(w_n) − α_n h(y_n)),
α_{n+1} = min{α_n, μ‖y_n − y_{n−1}‖/‖h(y_n) − h(y_{n−1})‖} if ‖h(y_n) − h(y_{n−1})‖ > 0, and α_{n+1} = α_n otherwise,
y_{n+1} = Π_C ∇f*(∇f(x_{n+1}) − α_{n+1} h(y_n)),   (3.1)

where

T_n = {y ∈ E : ⟨∇f(w_n) − α_n h(y_{n−1}) − ∇f(y_n), y − y_n⟩ ≤ 0}

and h(x) ∈ ∂g(x, ·)(x) for each x ∈ C.

Stopping criterion: If x_{n+1} = w_n and y_{n+1} = y_n = y_{n−1} for some n ≥ 1, then stop. Otherwise set n := n + 1 and return to the Iterative step.
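To make the structure of one iteration concrete, here is a minimal Euclidean sketch of Algorithm 3.2 (our own illustration): with f(x) = ½‖x‖² we have ∇f = ∇f* = Id and ρ = 1, so both Bregman projections reduce to Euclidean projections, and T_n is a half-space with a closed-form projection. The test problem g(x, y) = ⟨x, y − x⟩ on C = {x : ‖x‖ ≤ 3}, so that h(x) = x and the solution is x* = 0, is our choice, not from the paper:

```python
import numpy as np

def proj_C(x):                       # projection onto the ball of radius 3
    n = np.linalg.norm(x)
    return x if n <= 3 else 3 * x / n

def proj_halfspace(z, c, y):         # projection onto T = {u : <c, u - y> <= 0}
    s = np.dot(c, z - y)
    return z if s <= 0 else z - s * c / np.dot(c, c)

h = lambda x: x                      # h(x) in dg(x, .)(x) for g(x, y) = <x, y - x>
theta, mu, alpha = 0.1, 0.3, 0.2     # theta in (0,1), mu in (0, rho(sqrt(2)-1))
x_prev, x = np.array([2.0]), np.array([1.0])
y_prev, y = np.array([1.5]), np.array([1.0])

for _ in range(300):
    w = x + theta * (x_prev - x)                       # inertial step
    c = w - alpha * h(y_prev) - y                      # normal vector of T_n
    x_next = proj_halfspace(w - alpha * h(y), c, y)    # single projection onto T_n
    d = np.linalg.norm(h(y) - h(y_prev))
    alpha_next = min(alpha, mu * np.linalg.norm(y - y_prev) / d) if d > 0 else alpha
    y_next = proj_C(x_next - alpha_next * h(y))        # single projection onto C
    x_prev, x, y_prev, y, alpha = x, x_next, y, y_next, alpha_next

print(float(abs(x[0])) < 1e-3)  # the iterates approach the solution x* = 0
```

Note that each pass performs only one projection onto the feasible set C, in line with the stated advantage of the method.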

Remark 3.3. The sequence {α_n} given in (3.1) is nonincreasing and

lim_{n→∞} α_n = α ≥ min{α_1, μ/L}.

Proof: It follows from the definition of {α_n} that α_{n+1} ≤ α_n; thus {α_n} is nonincreasing. Now, since ‖h(y_n) − h(y_{n−1})‖ ≤ L‖y_n − y_{n−1}‖ for some L > 0, we get that

μ‖y_n − y_{n−1}‖/‖h(y_n) − h(y_{n−1})‖ ≥ μ/L, whenever ‖h(y_n) − h(y_{n−1})‖ > 0.

This together with (3.1) implies

α_n ≥ min{α_1, μ/L}.

Thus, the sequence {α_n} is bounded below. The conclusion follows.

The following inequality is satisfied by {x_n} and {y_n}.

Lemma 3.4. Let {x_n} and {y_n} be defined by Algorithm 3.2. Then for every x* ∈ EP(g, C) the following inequality holds:

Df(x*, x_{n+1}) ≤ Df(x*, x_n) − (1 − √2μα_n/(ρα_{n+1})) Df(x_{n+1}, y_n)
− (1 − (1 + √2)μα_n/(ρα_{n+1})) Df(y_n, w_n) + θ(Df(x*, x_{n−1}) − Df(x*, x_n)).

Proof: Since x* ∈ EP(g, C) and x_{n+1} = Π_{T_n} ∇f*(∇f(w_n) − α_n h(y_n)), we have by Lemma 2.4 (i) that

⟨∇f(w_n) − α_n h(y_n) − ∇f(x_{n+1}), x* − x_{n+1}⟩ ≤ 0,   (3.3)

this implies

⟨∇f(w_n) − ∇f(x_{n+1}), x* − x_{n+1}⟩ ≤ α_n⟨h(y_n), x* − x_{n+1}⟩.   (3.4)

Using the three points identity (2.1) and the subdifferential inequality for g in its second variable in (3.3), we obtain

Df(x*, x_{n+1}) = Df(x*, w_n) − Df(x_{n+1}, w_n) + ⟨∇f(w_n) − ∇f(x_{n+1}), x* − x_{n+1}⟩
≤ Df(x*, w_n) − Df(x_{n+1}, w_n) + α_n⟨h(y_n), x* − x_{n+1}⟩
= Df(x*, w_n) − Df(x_{n+1}, w_n) + α_n⟨h(y_n), y_n − x_{n+1}⟩ + α_n⟨h(y_n), x* − y_n⟩
≤ Df(x*, w_n) − Df(x_{n+1}, w_n) + α_n⟨h(y_n), y_n − x_{n+1}⟩ + α_n g(y_n, x*)
= Df(x*, w_n) − Df(x_{n+1}, w_n) + α_n⟨h(y_{n−1}), y_n − x_{n+1}⟩
+ α_n⟨h(y_n) − h(y_{n−1}), y_n − x_{n+1}⟩ + α_n g(y_n, x*).   (3.5)

Note that

α_n⟨h(y_{n−1}), y_n − x_{n+1}⟩ = ⟨∇f(w_n) − α_n h(y_{n−1}) − ∇f(y_n), x_{n+1} − y_n⟩ + ⟨∇f(y_n) − ∇f(w_n), x_{n+1} − y_n⟩.   (3.6)

Since x_{n+1} ∈ T_n, we have ⟨∇f(w_n) − α_n h(y_{n−1}) − ∇f(y_n), x_{n+1} − y_n⟩ ≤ 0, which implies that

α_n⟨h(y_{n−1}), y_n − x_{n+1}⟩ ≤ ⟨∇f(y_n) − ∇f(w_n), x_{n+1} − y_n⟩.

Using the three points identity (2.1), we have

⟨∇f(y_n) − ∇f(w_n), x_{n+1} − y_n⟩ = Df(x_{n+1}, w_n) − Df(x_{n+1}, y_n) − Df(y_n, w_n).

Substituting this into (3.6), we get

α_n⟨h(y_{n−1}), y_n − x_{n+1}⟩ ≤ Df(x_{n+1}, w_n) − Df(x_{n+1}, y_n) − Df(y_n, w_n).   (3.7)

From (3.7) and (3.5), we obtain that

Df(x*, x_{n+1}) ≤ Df(x*, w_n) − Df(x_{n+1}, y_n) − Df(y_n, w_n) + α_n g(y_n, x*) + α_n⟨h(y_{n−1}) − h(y_n), x_{n+1} − y_n⟩.   (3.8)

Observe from the definition of αn, that

α_n⟨h(y_{n−1}) − h(y_n), x_{n+1} − y_n⟩ ≤ α_n‖h(y_{n−1}) − h(y_n)‖ ‖x_{n+1} − y_n‖
≤ (μα_n/α_{n+1}) ‖y_{n−1} − y_n‖ ‖x_{n+1} − y_n‖
≤ (μα_n/α_{n+1}) [ (1/(2√2))‖y_{n−1} − y_n‖² + (1/√2)‖x_{n+1} − y_n‖² ]
≤ (μα_n/(2√2 α_{n+1})) [ (2 + √2)‖y_n − w_n‖² + √2‖w_n − y_{n−1}‖² ] + (μα_n/(√2 α_{n+1}))‖x_{n+1} − y_n‖²
≤ ((1 + √2)μα_n/(ρα_{n+1})) Df(y_n, w_n) + (μα_n/(ρα_{n+1})) Df(w_n, y_{n−1}) + (√2 μα_n/(ρα_{n+1})) Df(x_{n+1}, y_n),   (3.9)

where we have used the inequalities 2ab ≤ (1/√2)a² + √2b² and (a + b)² ≤ √2a² + (2 + √2)b² in separate steps, together with the strong convexity bound Df(x, y) ≥ (ρ/2)‖x − y‖².

By substituting (3.9) into (3.8), we obtain

Df(x*, x_{n+1}) ≤ Df(x*, w_n) − (1 − √2μα_n/(ρα_{n+1})) Df(x_{n+1}, y_n) − (1 − (1 + √2)μα_n/(ρα_{n+1})) Df(y_n, w_n) + α_n g(y_n, x*).   (3.10)

Since x* ∈ EP(g, C) and y_n ∈ C, we have g(x*, y_n) ≥ 0; it then follows from the pseudomonotonicity of g that g(y_n, x*) ≤ 0. Therefore, we obtain from (3.10) that

Df(x*, x_{n+1}) ≤ Df(x*, w_n) − (1 − √2μα_n/(ρα_{n+1})) Df(x_{n+1}, y_n) − (1 − (1 + √2)μα_n/(ρα_{n+1})) Df(y_n, w_n).   (3.11)

Observe from (3.1) and Lemma 2.9, that

Df(x*, w_n) = Df(x*, ∇f*((1 − θ)∇f(x_n) + θ∇f(x_{n−1}))) ≤ (1 − θ)Df(x*, x_n) + θDf(x*, x_{n−1}).   (3.12)

It therefore follows from (3.11) and (3.12), that

Df(x*, x_{n+1}) ≤ Df(x*, x_n) − (1 − √2μα_n/(ρα_{n+1})) Df(x_{n+1}, y_n) − (1 − (1 + √2)μα_n/(ρα_{n+1})) Df(y_n, w_n) + θ(Df(x*, x_{n−1}) − Df(x*, x_n)).   (3.13)

The proof is complete.

Now, we present the weak convergence theorem for Algorithm 3.2.

Theorem 3.5. Assume that conditions (A1)-(A5) are satisfied and Σ_{n=1}^∞ θ Df(x_n, x_{n−1}) < +∞. Then the sequences {x_n} and {y_n} generated by Algorithm 3.2 converge weakly to a solution x* ∈ EP(g, C).

Proof. Let x* ∈ EP(g, C). First, we show that {x_n} is bounded. Indeed, we have from Lemma 3.4 that (3.13) can be written as a_{n+1} ≤ a_n − b_n, where

a_n = Df(x*, x_n) + (μα_n/(ρα_{n+1})) Df(x_n, y_{n−1}) + θ Df(x*, x_{n−1})

and

b_n = (1 − (1 + √2)μα_n/(ρα_{n+1})) Df(y_n, w_n) + (1 − (1 + √2)μα_n/(ρα_{n+1}) − μα_{n+1}/(ρα_{n+2})) Df(x_{n+1}, y_n).

Then, by Lemma 2.11, {a_n} is bounded, which implies that {x_n} is bounded, lim_{n→∞} Df(x_{n+1}, y_n) = 0 and lim_{n→∞} Df(y_n, w_n) = 0. Consequently, ‖y_n − w_n‖ → 0 and ‖x_{n+1} − y_n‖ → 0 as n → ∞. Again, we see from (3.1) and Lemma 2.9 that

Df(x_n, w_n) ≤ (1 − θ)Df(x_n, x_n) + θDf(x_n, x_{n−1}) = θDf(x_n, x_{n−1}).

Thus, we have that Df(x_n, w_n) → 0 as n → ∞, which implies by the bound Df(x, y) ≥ (ρ/2)‖x − y‖² that ‖x_n − w_n‖ → 0 as n → ∞. Consequently, one gets that ‖x_{n+1} − x_n‖ → 0 as n → ∞. Furthermore,

‖y_{n+1} − y_n‖ ≤ ‖y_{n+1} − x_{n+1}‖ + ‖x_{n+1} − x_n‖ + ‖x_n − y_n‖ → 0 as n → ∞.

Since f is uniformly Fréchet differentiable, we have that ‖∇f(y_{n+1}) − ∇f(x_{n+1})‖ → 0 as n → ∞.

From the boundedness of {x_n}, there exists a subsequence {x_{n_k}} of {x_n} such that x_{n_k} ⇀ x̄. Consequently, {w_{n_k}} and {y_{n_k}} both converge weakly to x̄. It follows easily that x̄ ∈ C. From the definition of y_{n+1} and Lemma 2.4 (i), we have

⟨∇f(x_{n+1}) − α_{n+1} h(y_n) − ∇f(y_{n+1}), y − y_{n+1}⟩ ≤ 0, for all y ∈ C,

that is

⟨∇f(x_{n+1}) − ∇f(y_{n+1}), y − y_{n+1}⟩ ≤ α_{n+1}⟨h(y_n), y − y_{n+1}⟩, for all y ∈ C,

and

⟨(∇f(x_{n+1}) − ∇f(y_{n+1}))/α_{n+1}, y − y_{n+1}⟩ + ⟨h(y_n), y_{n+1} − y_n⟩ ≤ ⟨h(y_n), y − y_n⟩, for all y ∈ C.

Hence, we have by the definition of the subdifferential that, for all y ∈ C,

⟨(∇f(x_{n_k+1}) − ∇f(y_{n_k+1}))/α_{n_k+1}, y − y_{n_k+1}⟩ + ⟨h(y_{n_k}), y_{n_k+1} − y_{n_k}⟩ ≤ g(y_{n_k}, y).   (3.17)

Observe that (∇f(x_{n_k+1}) − ∇f(y_{n_k+1}))/α_{n_k+1} → 0 as k → ∞, since α_{n_k} → α > 0. Hence, by passing to the limit in (3.17), we obtain that g(x̄, y) ≥ 0 for all y ∈ C; thus x̄ ∈ EP(g, C).

Finally, we show that the weak limit x̄ is unique. Assume the contrary; then there exists a subsequence {x_{n_i}} such that x_{n_i} ⇀ x̂ with x̂ ≠ x̄. Following similar arguments as above, we have that x̂ ∈ EP(g, C). It follows from the Bregman Opial-like property of E (Lemma 2.10) that

lim_{n→∞} Df(x̄, x_n) = lim_{k→∞} Df(x̄, x_{n_k}) = liminf_{k→∞} Df(x̄, x_{n_k})
< liminf_{k→∞} Df(x̂, x_{n_k}) = lim_{k→∞} Df(x̂, x_{n_k}) = lim_{n→∞} Df(x̂, x_n).

Interchanging the roles of x̄ and x̂ yields the reverse strict inequality; thus we arrive at a contradiction. Therefore, x̄ = x̂.

We are now in a position to establish the strong convergence of Algorithm 3.2; however, we replace condition (A2) of the assumptions by (A2*): the bifunction g : C × C → ℝ is strongly pseudomonotone.

Strong convergence theorem for Algorithm 3.2.

Theorem 3.6. Assume that conditions (A1), (A2*) and (A3)-(A5) are satisfied. Then the sequences {x_n} and {y_n} of Algorithm 3.2 converge strongly to the unique solution of EP(g, C).

Proof: Let x* ∈ EP(g, C). By the definition of x_{n+1} and Lemma 2.4 (i), we have

⟨∇f(w_n) − α_n h(y_n) − ∇f(x_{n+1}), x* − x_{n+1}⟩ ≤ 0,

alternatively

⟨∇f(w_n) − ∇f(x_{n+1}), x* − x_{n+1}⟩ ≤ α_n⟨h(y_n), x* − x_{n+1}⟩.

By using the three points identity (2.1), we have

Df(x*, x_{n+1}) + Df(x_{n+1}, w_n) − Df(x*, w_n) ≤ α_n⟨h(y_n), x* − x_{n+1}⟩,

that is

Df(x*, x_{n+1}) ≤ Df(x*, w_n) − Df(x_{n+1}, w_n) + α_n⟨h(y_n), x* − x_{n+1}⟩.   (3.18)

Since x* ∈ EP(g, C), we have g(x*, x) ≥ 0 for all x ∈ C. Using the fact that g is strongly pseudomonotone, we obtain g(x, x*) ≤ −γ‖x − x*‖². Taking x = y_n ∈ C, we get g(y_n, x*) ≤ −γ‖y_n − x*‖². Now, using the definition of the subdifferential of g at y_n, we have

⟨h(y_n), x* − x_{n+1}⟩ = ⟨h(y_n), x* − y_n⟩ + ⟨h(y_n), y_n − x_{n+1}⟩ ≤ g(y_n, x*) + ⟨h(y_n), y_n − x_{n+1}⟩ ≤ −γ‖y_n − x*‖² + ⟨h(y_n), y_n − x_{n+1}⟩.   (3.19)

Substituting (3.18) into (3.19), we get

Df(x*, x_{n+1}) ≤ Df(x*, w_n) − Df(x_{n+1}, w_n) − γα_n‖y_n − x*‖² + α_n⟨h(y_n), y_n − x_{n+1}⟩,   (3.20)

again by using (2.1), we get

Df(x*, x_{n+1}) ≤ Df(x*, w_n) − Df(y_n, w_n) − Df(x_{n+1}, y_n) + ⟨∇f(w_n) − ∇f(y_n), x_{n+1} − y_n⟩ − γα_n‖y_n − x*‖² + α_n⟨h(y_n), y_n − x_{n+1}⟩
= Df(x*, w_n) − Df(y_n, w_n) − Df(x_{n+1}, y_n) + ⟨∇f(w_n) − α_n h(y_n) − ∇f(y_n), x_{n+1} − y_n⟩ − γα_n‖y_n − x*‖².   (3.21)

Observe that

⟨∇f(w_n) − α_n h(y_n) − ∇f(y_n), x_{n+1} − y_n⟩ = ⟨∇f(w_n) − α_n h(y_{n−1}) − ∇f(y_n), x_{n+1} − y_n⟩ + α_n⟨h(y_{n−1}) − h(y_n), x_{n+1} − y_n⟩,   (3.22)

and ⟨∇f(w_n) − α_n h(y_{n−1}) − ∇f(y_n), x_{n+1} − y_n⟩ ≤ 0, since x_{n+1} ∈ T_n. Hence,

⟨∇f(w_n) − α_n h(y_n) − ∇f(y_n), x_{n+1} − y_n⟩ ≤ α_n⟨h(y_{n−1}) − h(y_n), x_{n+1} − y_n⟩
≤ α_n‖h(y_{n−1}) − h(y_n)‖ ‖x_{n+1} − y_n‖
≤ (μα_n/α_{n+1}) ‖y_{n−1} − y_n‖ ‖x_{n+1} − y_n‖
≤ (μα_n/α_{n+1}) [ (1/(2√2))‖y_{n−1} − y_n‖² + (1/√2)‖x_{n+1} − y_n‖² ]
≤ (μα_n/(2√2 α_{n+1})) [ (2 + √2)‖y_n − w_n‖² + √2‖w_n − y_{n−1}‖² ] + (μα_n/(√2 α_{n+1}))‖x_{n+1} − y_n‖²
≤ ((1 + √2)μα_n/(ρα_{n+1})) Df(y_n, w_n) + (μα_n/(ρα_{n+1})) Df(w_n, y_{n−1}) + (√2 μα_n/(ρα_{n+1})) Df(x_{n+1}, y_n).   (3.23)

Using (3.23) in (3.21), we get

Df(x*, x_{n+1}) ≤ Df(x*, w_n) − (1 − √2μα_n/(ρα_{n+1})) Df(x_{n+1}, y_n) − (1 − (1 + √2)μα_n/(ρα_{n+1})) Df(y_n, w_n) + (μα_n/(ρα_{n+1})) Df(w_n, y_{n−1}) − γα_n‖y_n − x*‖².   (3.24)

By using (3.12), we obtain

Df(x*, x_{n+1}) ≤ Df(x*, x_n) − (1 − √2μα_n/(ρα_{n+1})) Df(x_{n+1}, y_n) − (1 − (1 + √2)μα_n/(ρα_{n+1})) Df(y_n, w_n) + (μα_n/(ρα_{n+1})) Df(w_n, y_{n−1}) + θ(Df(x*, x_{n−1}) − Df(x*, x_n)) − γα_n‖y_n − x*‖².   (3.25)

Following a similar argument as in Theorem 3.5, we obtain that {x_n} is bounded. It also follows that ‖w_n − x_n‖ → 0, ‖y_n − x_n‖ → 0, ‖x_{n+1} − y_n‖ → 0 and ‖x_{n+1} − x_n‖ → 0 as n → ∞. Since f is continuous on bounded sets, coercive and uniformly Fréchet differentiable, we get by Lemma 2.1 that ‖∇f(x_{n+1}) − ∇f(w_n)‖ → 0 and |f(x_{n+1}) − f(w_n)| → 0 as n → ∞. Therefore,

Df(x*, w_n) − Df(x*, x_{n+1}) = f(x*) − f(w_n) − ⟨∇f(w_n), x* − w_n⟩ − f(x*) + f(x_{n+1}) + ⟨∇f(x_{n+1}), x* − x_{n+1}⟩
= f(x_{n+1}) − f(w_n) + ⟨∇f(x_{n+1}), x* − x_{n+1}⟩ − ⟨∇f(w_n), x* − w_n⟩
= f(x_{n+1}) − f(w_n) + ⟨∇f(x_{n+1}) − ∇f(w_n), x* − w_n⟩ + ⟨∇f(x_{n+1}), w_n − x_{n+1}⟩.   (3.26)

Thus, passing to the limit in (3.26), we get

lim_{n→∞} (Df(x*, w_n) − Df(x*, x_{n+1})) = 0.

Also,

Df(w_n, y_{n−1}) = f(w_n) − f(y_{n−1}) − ⟨∇f(y_{n−1}), w_n − y_{n−1}⟩ → 0 as n → ∞.

It follows from (3.24), that

γα_n‖y_n − x*‖² ≤ Df(x*, w_n) − Df(x*, x_{n+1}) + (μα_n/(ρα_{n+1})) Df(w_n, y_{n−1}) → 0.

Thus, since γα_n ≥ γα > 0, we get that

lim_{n→∞} ‖y_n − x*‖ = 0.

This implies that {y_n} converges strongly to x*. Consequently, {x_n} converges strongly to x* ∈ EP(g, C).

In this section, we give an application of our main result to variational inequality problems. First, we give an adaptation of our method to the Variational Inequality Problem (VIP). Suppose the function g : C × C → ℝ in (1.1) is given by

g(x, y) := ⟨F(x), y − x⟩ if x ∈ C, and g(x, y) := +∞ otherwise,   (4.1)

where F : C → E*; then the EP (1.1) reduces to the classical variational inequality problem (1.2), that is, finding a point x* ∈ C such that

⟨F(x*), x − x*⟩ ≥ 0 for all x ∈ C.

Denote by VIP(C, F) the solution set of the VIP (1.2). Variational inequalities play an important role in studying a wide class of unilateral, obstacle and equilibrium problems arising in several branches of the pure and applied sciences in a unified and general framework (see [17, 21] and the references therein). Accordingly, this problem has been studied extensively by several authors (see [24, 25, 26]).

The mapping F : C → E* is said to be strongly γ-pseudomonotone if there exists γ > 0 such that for any x, y ∈ C,

⟨Fx, y − x⟩ ≥ 0 ⟹ ⟨Fy, y − x⟩ ≥ γ‖x − y‖².

By this adaptation, Algorithm 3.2 provides a new method for variational inequalities. In fact, we obtain the following Popov-type subgradient extragradient method for solving the VIP.

Algorithm 4.1. Modified Bregman Popov Extragradient Method for VIP
Initialization: Choose x_0, x_1, y_0, y_1 ∈ C, α_1 > 0, μ ∈ (0, ρ(√2 − 1)) and θ ∈ (0, 1).

Iterative step: Given x_{n−1}, x_n, y_{n−1}, y_n and α_n, calculate x_{n+1}, y_{n+1} and α_{n+1} for each n ≥ 1 as follows:

w_n = ∇f*(∇f(x_n) + θ(∇f(x_{n−1}) − ∇f(x_n))),
x_{n+1} = Π_{T_n} ∇f*(∇f(w_n) − α_n F(y_n)),
α_{n+1} = min{α_n, μ‖y_n − y_{n−1}‖/‖F(y_n) − F(y_{n−1})‖} if ‖F(y_n) − F(y_{n−1})‖ > 0, and α_{n+1} = α_n otherwise,
y_{n+1} = Π_C ∇f*(∇f(x_{n+1}) − α_{n+1} F(y_n)),

where

T_n = {y ∈ E : ⟨∇f(w_n) − α_n F(y_{n−1}) − ∇f(y_n), y − y_n⟩ ≤ 0}.

Stopping criterion: If x_{n+1} = w_n = y_n for some n ≥ 1, then stop. Otherwise set n := n + 1 and return to the Iterative step.

Theorem 4.2. Let C be a nonempty, closed and convex subset of a reflexive real Banach space E with dual E*. Assume that F : C → E* is a strongly γ-pseudomonotone operator which is bounded on bounded sets and such that VIP(C, F) ≠ ∅. Let {x_n} be the sequence generated by Algorithm 4.1; then {x_n} converges strongly to an element of VIP(C, F).

Proof. For each x, y ∈ C, let the bifunction g : C × C → ℝ be given by (4.1). It follows by hypothesis that the assumptions (A1)-(A4) are satisfied. By the conclusion of Theorem 3.6, {x_n} converges strongly to some x* ∈ EP(g, C), and hence x* ∈ VIP(C, F).

In this section, we report some numerical experiments to illustrate the performance of our method for some well-known Bregman distances. We first list some functions with their corresponding distances.

  • Squared Euclidean distance (SED) with dom f = ℝⁿ,

    f(x) = ½xᵀx,  ∇f(x) = x,  Df(x, y) = ½‖x − y‖₂².

  • General quadratic kernel (GQK) with dom f = ℝⁿ,

    f(x) = ½xᵀAx,  ∇f(x) = Ax,  Df(x, y) = ½(x − y)ᵀA(x − y),

    where

    • A is symmetric positive definite;

    • in some applications, A is positive semidefinite, but not positive definite.

  • Relative entropy (RE) with dom f = ℝ₊ⁿ,

    f(x) = Σ_{i=1}^n x_i log x_i,  ∇f(x) = (log x₁ + 1, …, log x_n + 1)ᵀ,  Df(x, y) = Σ_{i=1}^n [x_i log(x_i/y_i) − x_i + y_i].

    Df(x,y) is called the Kullback–Leibler distance.

  • Logarithmic barrier (LB) with dom f = ℝ₊₊ⁿ,

    f(x) = −Σ_{i=1}^n log x_i,  ∇f(x) = (−1/x₁, …, −1/x_n)ᵀ,  Df(x, y) = Σ_{i=1}^n [x_i/y_i − log(x_i/y_i) − 1],

    Df(x,y) in this example is called Itakura-Saito divergence.

Example 5.1. We consider the EP (1.1) with the bifunction g : ℝᴺ × ℝᴺ → ℝ defined by

g(x, y) = ⟨Px + Qy + q, y − x⟩,

where q is a vector in RN, P and Q are N×N matrices such that P is symmetric and positive semidefinite and Q-P is negative semidefinite. The feasible set C is defined by

C = {x = (x₁, x₂, …, x_N)ᵀ ∈ ℝᴺ : ‖x‖ ≤ 1 and x_i ≥ 0, i = 1, 2, …, N}.

Note that g is monotone (hence pseudomonotone) and the unique solution of the EP is x̄ = (0, 0, …, 0)ᵀ (see [31]). The entries of the matrices P and Q are generated randomly, while the entries of q are generated randomly and uniformly in [−2, 2]. We compare the performance of Algorithm 3.2 for the various kinds of convex function f listed above. We test the algorithm for N = 10, 30, 50, 100 with μ = 0.35 and α_1 = 0.24, and the initial points x_0, x_1 are generated randomly in ℝᴺ. The projection onto the feasible set is computed explicitly, and we study the convergence of the sequence generated by Algorithm 3.2 using Error = ‖x_{n+1} − w_n‖² + ‖y_n − x_n‖² < 10⁻⁴ as the stopping criterion. The numerical results are shown in Table 1 and Figure 1.
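A sketch of how such test data can be generated (our own construction; the paper does not specify the random mechanism beyond these properties): taking P = BBᵀ and Q = P − SSᵀ guarantees that P is symmetric positive semidefinite and Q − P is negative semidefinite, which makes g monotone, since g(x, y) + g(y, x) = −⟨(P − Q)(x − y), x − y⟩ ≤ 0:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10
B = rng.standard_normal((N, N))
P = B @ B.T                          # symmetric positive semidefinite
S = rng.standard_normal((N, N))
Q = P - S @ S.T                      # Q - P = -S S^T is negative semidefinite
q = rng.uniform(-2, 2, N)            # entries uniform in [-2, 2]

def g(x, y):
    # Bifunction of Example 5.1: g(x, y) = <Px + Qy + q, y - x>.
    return (P @ x + Q @ y + q) @ (y - x)

# Monotonicity (hence pseudomonotonicity) holds up to rounding error.
for _ in range(1000):
    x, y = rng.standard_normal(N), rng.standard_normal(N)
    assert g(x, y) + g(y, x) <= 1e-8
```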

Figure 1. Example 5.1, from top to bottom: N = 10, 30, 50, 100.

Table 1. Computational results for Example 5.1.

                 SED      MD       KLD      ISD
N=10   Iter.     17       21       21       15
       Time (s)  0.0521   0.1353   0.0709   0.0393
N=30   Iter.     16       10       22       14
       Time (s)  0.1658   0.2733   0.2486   0.1423
N=50   Iter.     16       17       23       14
       Time (s)  0.3395   0.9916   0.4868   0.3484
N=100  Iter.     20       9        24       13
       Time (s)  0.9158   1.2521   1.4161   0.6622


Example 5.2. In this example, we consider the EP (1.1) with the bifunction g : C × C → ℝ defined by

g(x, y) = Σ_{i=1}^N (x_i + 1 + y_i)(y_i − x_i) and C = {x ∈ ℝ₊ᴺ : Σ_{i=1}^N x_i = 1}.

We compare the performance of Algorithm 3.2 using the convex functions given above. The initial values x_0, x_1, y_0, y_1 are generated randomly in ℝᴺ, where N = 10, 30, 50 and 100. We choose μ = 0.63 and α_1 = 0.5, and study the convergence of the algorithm using Error = ‖x_{n+1} − w_n‖² + ‖y_n − x_n‖² < 10⁻⁴ as the stopping criterion. The results are reported in Table 2 and Figure 2.

Example 5.3. Let E = ℓ₂(ℝ) be the linear space whose elements are all 2-summable sequences {x_i}_{i=1}^∞ of scalars in ℝ, that is,

ℓ₂(ℝ) := {x = (x₁, x₂, …, x_i, …) : x_i ∈ ℝ and Σ_{i=1}^∞ |x_i|² < ∞},

with the inner product ⟨·,·⟩ : ℓ₂ × ℓ₂ → ℝ defined by ⟨x, y⟩ := Σ_{i=1}^∞ x_i y_i and the norm ‖·‖ : ℓ₂ → ℝ by ‖x‖ := (Σ_{i=1}^∞ |x_i|²)^{1/2}, where x = {x_i}_{i=1}^∞ and y = {y_i}_{i=1}^∞.

Let f : ℓ₂ → ℝ be given by f(x) = ½‖x‖² for all x ∈ ℓ₂; then ∇f(x) = x. Let C = {x ∈ E : ‖x‖ ≤ 3}, and let h(x) ∈ ∂g(x, ·)(x) be given by h(x) = x(5 − ‖x‖). The projection onto C is easily computed as

P_C(x) = x if ‖x‖ ≤ 3, and P_C(x) = 3x/‖x‖ otherwise.
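For finitely supported elements of ℓ₂ (truncated to ℝ³ here for illustration), this projection can be written directly; a small sketch:

```python
import numpy as np

def proj_C(x, radius=3.0):
    # Metric projection onto the ball C = {x : ||x|| <= radius}:
    # identity inside the ball, radial rescaling outside.
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

x = np.array([1.0, 0.0, -2.0])      # norm sqrt(5) <= 3: left unchanged
assert np.allclose(proj_C(x), x)

z = np.array([4.0, 3.0, 0.0])       # norm 5 > 3: rescaled onto the sphere
p = proj_C(z)
assert np.isclose(np.linalg.norm(p), 3.0)
assert np.allclose(p, np.array([2.4, 1.8, 0.0]))
```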

In this experiment for Algorithm 3.2, we choose μ = 0.63, α_1 = 0.5 and θ = 1/7. We also compare the algorithm for θ ≠ 0 and θ = 0 (without the inertial term) for varying values of x_0 and x_1 as follows:

Case 1: x_0 = (1, 0, 0, …, 0, …) and x_1 = (−2, 0, 0, …, 0, …);
Case 2: x_0 = (2, 0, 0, …, 0, …) and x_1 = (1, 0, 0, …, 0, …);
Case 3: x_0 = (2, 0, 0, …, 0, …) and x_1 = (−1.5, 0, 0, …, 0, …);
Case 4: x_0 = (1, 0, 1, …, 0, …) and x_1 = (−2, 0, 1, …, 0, …).

For Example 5.3, we chose Error = ‖x_{n+1} − w_n‖² + ‖y_n − x_n‖² and Error = ‖x_{n+1} − x_n‖² + ‖y_n − x_n‖², respectively, for the accelerated and unaccelerated algorithms. The comparisons are shown in Figure 3.

Figure 3. Example 5.3, Top left: Case 1, Top right: Case 2, Bottom left: Case 3, Bottom right: Case 4.


Example 5.4. In this example we make a comparison of Algorithm 3.2 and Algorithm 1.1. Let E = ℝ and g : C × C → ℝ be given by g(x, y) = (2.5 − |x|)(y − x). Let C = {x ∈ E : |x| ≤ 3}. Then the projection onto C is easily computed as

P_C(x) = x if |x| ≤ 3, and P_C(x) = 3x/|x| otherwise.

For Algorithm 3.2 we choose θ = 1/3 and α_1 = 2.5, and β_n = 1/(2n + 3) for Algorithm 1.1. The execution of this example is terminated when E_n = |x_{n+1} − w_n| ≤ 10⁻⁴. The results are reported in Figure 4 for various values of the initial points x_0, y_0, x_1 and y_1:

Figure 4. Example 5.4, Top left: Case 1, Top right: Case 2, Bottom left: Case 3, Bottom right: Case 4.

Case 1: x_0 = 0.56, x_1 = 0.76, y_0 = 0.98 and y_1 = 0.65;
Case 2: x_0 = 0.91, x_1 = 0.75, y_0 = 0.98 and y_1 = 0.12;
Case 3: x_0 = 1.01, x_1 = −1.76, y_0 = 1.12 and y_1 = 0.65;
Case 4: x_0 = 1.56, x_1 = 2.06, y_0 = −0.98 and y_1 = −0.65.

This paper presented a Popov-inspired subgradient extragradient algorithm for obtaining solutions of an equilibrium problem. The method uses a step size which is carefully selected for easy computation and does not depend on a Lipschitz-type condition. Based on this method, we stated and proved weak and strong convergence theorems under certain monotonicity and standard assumptions. Numerical illustrations displayed the efficiency of this method compared with previously obtained results in this direction.

Figure 2. Example 5.2, from top to bottom: N = 10, 30, 50, 100.

Table 2. Computational results for Example 5.2.

                 SED      MD       KLD      ISD
N=10   Iter.     22       5        25       18
       Time (s)  0.0108   0.0479   0.3417   0.0344
N=30   Iter.     23       19       67       19
       Time (s)  0.0132   0.4386   0.0646   0.0196
N=50   Iter.     23       24       56       20
       Time (s)  0.0068   1.3560   0.0507   0.0089
N=100  Iter.     27       31       26       22
       Time (s)  0.0115   2.9870   0.0164   0.0131

We would like to thank the anonymous referee, whose constructive criticism of the first draft of the manuscript has helped improve the quality of the article.

  1. H. H. Bauschke, J. M. Borwein and P. L. Combettes. Essential smoothness, essential strict convexity and Legendre functions in Banach spaces, Commun. Contemp. Math., 3(2001), 615-647.
  2. A. Beck, First-Order Methods in Optimization, Society for Industrial and Applied Mathematics, Philadelphia, 2017.
  3. G. Bigi, M. Castellani, M. Pappalardo and M. Passacantando. Existence and solution methods for equilibria, Eur. J. Oper. Res., 227(1)(2013), 1-11.
  4. E. Blum and W. Oettli. From optimization and variational inequalities to equilibrium problems, Math. Student, 63(1994), 123-145.
  5. L. M. Bregman. The relaxation method for finding common points of convex sets and its application to the solution of problems in convex programming, USSR Comput. Math. Math. Phys., 7(3)(1967), 200-217.
  6. D. Butnariu and E. Resmerita. Bregman distances, totally convex functions and a method for solving operator equations in Banach spaces, Abstr. Appl. Anal., 2006(2006), 1-39.
  7. D. Butnariu, Y. Censor and S. Reich. Iterating averaging of entropic projections for solving stochastic convex feasibility problems, Comput. Optim. Appl., 8(1)(1997), 21-39.
  8. D. Butnariu, A. N. Iusem and C. Zǎlinescu. On uniform convexity, total convexity and convergence of the proximal point and outer Bregman projection algorithms in Banach spaces, J. Convex Anal., 10(1)(2003), 35-61.
  9. D. Butnariu and A. N. Iusem, Totally convex functions for fixed points computation and infinite dimensional optimization, Kluwer Academic Publishers, Dordrecht, 2000.
  10. V. H. Dang. Convergence analysis of a new algorithm for strongly pseudomontone equilibrium problems, Numer. Algorithms, 77(2018), 983-1001.
  11. V. H. Dang. New inertial algorithm for a class of equilibrium problems, Numer. Algorithms, 80(2019), 1413-1436.
  12. B. V. Dinh and L. D. Muu. A projection algorithm for solving pseudo-monotone equilibrium problems and it’s application to a class of bilevel equilibria, Optimization, 64(2015), 559-575.
  13. P. M. Duc, L. D. Muu and N. V. Quy. Solution-existence and algorithms with their convergence rate for strongly pseudo-monotone equilibrium problems, Pac. J. Optim., 12(2016), 833-845.
  14. G. M. Eskandani, M. Raeisi and T. M. Rassias. A hybrid extragradient method for solving pseudomontone equilibrium problems using bregman distance, J. Fixed Point Theory Appl., 20(2018), 27.
  15. K. Fan, A minimax inequality and applications, Academic Press, New York, 1972.
  16. G. Fichera. Sul problema elastostatico di Signorini con ambigue condizioni al contorno, Atti Accad. Naz. Lincei, VIII. Ser., Rend., Cl. Sci. Fis. Mat. Nat., 34(1963), 138-142.
  17. A. Gibali. A new bregman projection method for solving variational inequalities in Hilbert spaces, Pure Appl. Funct. Anal., 3(3)(2018), 403-415.
  18. J. P. Gossez and E. L. Dozo. Some geometric properties related to the fixed point theory for nonexpansive mappings, Pacific J. Math., 40(3)(1972), 565-573.
  19. D. V. Hieu. An inertial-like proximal algorithm for equilibrium problems, Math. Methods Oper. Res., 88(2018), 399-415.
  20. D. V. Hieu. Convergence analysis of a new algorithm for strongly pseudomontone equilibrium problems, Numer. Algorithms, 77(2018), 983-1001.
  21. D. V. Hieu and P. Cholamjiak. Modified extragradient method with bregman distance for variational inequalities, Appl. Anal., 101(2020), 655-670.
  22. D. V. Hieu. New inertial algorithm for a class of equilibrium problems, Numer. Algorithms, 80(2018), 1-24.
  23. D. V. Hieu. New extragradient method for a class of equilibrium problems, App. Anal., 97(5)(2018), 811-824.
  24. L. O. Jolaoso, F. U. Ogbuisi and O. T. Mewomo. An iterative method for solving minimization, variational inequality and fixed point problems in reflexive Banach spaces, Adv. Pure Appl. Math., 9(3)(2018), 167-184.
  25. Y. Y. Huang, J. C. Jeng, T. Y. Kuo and C. C. Hong. Fixed point and weak convergence theorems for point dependent λ-hybrid mappings in banach spaces, Fixed Point Theory Appl., 2011(2011), 105.
  26. Yu. V. Malistky and V. V. Semenov. An extragradient algorithm for monotone variational inequalities, Cybernet. Systems Anal., 50(2014), 271-277.
  27. A. Moudafi and M. Oliny. Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math., 155(2003), 447-454.
  28. Z. Opial. Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc., 73(1967), 591-597.
  29. R. R. Phelps, Convex Functions. monotone operators and differentiability, Lecture Notes in Mathematics, Berlin, 1993.
  30. B. T. Polyak. Some methods of speeding up the convergence of iteration methods, U.S.S.R Comput. Math. Math. Phys., 4(5)(1964), 1-17.
  31. T. D. Quoc, L. D. Muu and N. V. Hien. Extragradient algorithms extended to equilibrium problems, Optimization, 57(2008), 749-776.
  32. H. Rehman, P. Kumam, Y. -J. Cho, Y. I. Suleiman and W. Kumam. Modified popov's explicit iterative algorithms for solving pseudomonotone equilibrium problems, Optim. Meth. Softw., 36(2020), 82-113.
  33. S. Reich and S. Sabach. A strong convergence theorem for a proximal-type algorithm in reflexive banach spaces, J. Nonlinear Convex Anal., 10(2009), 471-485.
  34. S. Reich and S. Sabach. Two strong convergence theorems for a proximal method in reflexive banach spaces, Numer. Funct. Anal. Optim., 31(2010), 24-44.
  35. G. Stampacchia. Formes bilineaires coercitives sur les ensembles convexes, C. R. Acad. Sci. Paris, 258(1964), 4413-4416.