On directionally differentiable multiobjective programming problems with vanishing constraints

In this paper, a class of directionally differentiable multiobjective programming problems with inequality, equality and vanishing constraints is considered. Under both the Abadie constraint qualification and the modified Abadie constraint qualification, Karush–Kuhn–Tucker type necessary optimality conditions are established for such nondifferentiable vector optimization problems by using a nonlinear version of the Gordan theorem of the alternative for convex functions. Further, sufficient optimality conditions for such directionally differentiable multiobjective programming problems with vanishing constraints are proved under convexity hypotheses. Furthermore, a vector Wolfe dual problem is defined for the considered directionally differentiable multiobjective programming problem with vanishing constraints, and several duality theorems are established, also under appropriate convexity hypotheses.

Keywords: Directionally differentiable multiobjective programming problems with vanishing constraints · Pareto solution · Karush–Kuhn–Tucker necessary optimality conditions · Wolfe vector dual · Convex function

Mathematics Subject Classification: 90C29 · 90C30 · 90C46 · 90C25 · 49K10

1 Introduction

Multiobjective optimization problems, also known as vector optimization problems or multicriteria optimization problems, are extremum problems involving more than one objective function to be optimized. Many real-life problems can be formulated as multiobjective programming problems, including problems in human decision making, economics, financial investment, portfolio selection, resource allocation, information transfer, engineering design, mechanics, control theory, etc. During the past five decades, the field of multiobjective programming has grown remarkably in different directions in the settings of optimality conditions and duality theory.
One of the classes of nondifferentiable multicriteria optimization problems studied in the recent past is the class of directionally differentiable vector optimization problems, for which many authors have established the aforesaid fundamental results in optimization theory (see, for example, Ahmad (2011), Antczak (2002, 2009), Arana-Jiménez et al. (2013), Dinh et al. (2005), Ishizuka (1992), Kharbanda et al. (2015), Mishra and Noor (2006), Mishra et al. (2008, 2015), Slimani and Radjef (2010), Ye (1991), and others).

Tadeusz Antczak, tadeusz.antczak@wmii.uni.lodz.pl, Faculty of Mathematics, University of Lodz, Banacha 22, 90-238 Łódź, Poland. Annals of Operations Research.

Recently, a special class of optimization problems, known as mathematical programming problems with vanishing constraints, was introduced by Achtziger and Kanzow (2008); it serves as a unified framework for several applications in structural and topology optimization. Since optimization problems with vanishing constraints, in their general form, are quite a new class of mathematical programming problems, only very few works have been published on this subject so far (see, for example, Achtziger et al. (2013), Antczak (2022), Dorsch et al. (2012), Dussault et al. (2019), Guu et al. (2017), Hoheisel and Kanzow (2008, 2009), Hoheisel et al. (2012), Hu et al. (2014, 2020), Izmailov and Solodov (2009), Khare and Nath (2019), Mishra et al. (2015, 2016), Tung (2022)). However, to the best of our knowledge, there are no works on optimality conditions for (convex) directionally differentiable multiobjective programming problems with vanishing constraints in the literature. The main purpose of this paper is, therefore, to develop optimality conditions for a new class of nondifferentiable multiobjective programming problems with vanishing constraints.
Namely, this paper presents a study of both necessary and sufficient optimality conditions for convex directionally differentiable vector optimization problems with inequality, equality and vanishing constraints. Considering the concept of a (weak) Pareto solution, we establish Karush–Kuhn–Tucker type necessary optimality conditions which are formulated in terms of directional derivatives. In proving the aforesaid necessary optimality conditions, we use a nonlinear version of the Gordan alternative theorem for convex functions and also the Abadie constraint qualification. Further, we illustrate the fact that the necessary optimality conditions may fail to hold under the aforesaid constraint qualification. Therefore, we introduce the VC-Abadie constraint qualification and, under this constraint qualification, which is weaker than the classical one, we present the Karush–Kuhn–Tucker type necessary optimality conditions for the considered directionally differentiable multiobjective programming problem. Further, we prove the sufficiency of the aforesaid necessary optimality conditions for such nondifferentiable vector optimization problems under appropriate convexity hypotheses. The optimality results established in the paper are illustrated by an example of a convex directionally differentiable multiobjective programming problem with vanishing constraints. Furthermore, for the considered directionally differentiable vector optimization problem with vanishing constraints, we define its vector Wolfe dual problem and prove several duality theorems, also under convexity hypotheses.

2 Preliminaries

In this section, we provide some definitions and results that we shall use in the sequel. The following convention for equalities and inequalities will be used throughout the paper.
For any $x = (x_1, x_2, \ldots, x_n)^T$, $y = (y_1, y_2, \ldots, y_n)^T$ in $R^n$, we define:

(i) $x = y$ if and only if $x_i = y_i$ for all $i = 1, 2, \ldots, n$;
(ii) $x < y$ if and only if $x_i < y_i$ for all $i = 1, 2, \ldots, n$;
(iii) $x \leqq y$ if and only if $x_i \leqq y_i$ for all $i = 1, 2, \ldots, n$;
(iv) $x \leq y$ if and only if $x \leqq y$ and $x \neq y$.

Throughout the paper, we will use the same notation for row and column vectors when the interpretation is obvious.

Definition 2.1 The affine hull of the set $C$ of points $x_1, \ldots, x_k \in C$ is defined by
$$\operatorname{aff} C = \left\{ \sum_{i=1}^{k} \alpha_i x_i : \alpha_i \in R, \ \sum_{i=1}^{k} \alpha_i = 1 \right\}.$$

Definition 2.2 (Hiriart-Urruty & Lemaréchal, 1993) The relative interior of the set $C$ (denoted by $\operatorname{relint} C$) is defined as
$$\operatorname{relint} C = \{ x \in C : B(x, r) \cap \operatorname{aff} C \subseteq C \ \text{for some} \ r > 0 \},$$
where $B(x, r) := \{ y \in R^n : \|x - y\| \leqq r \}$ is the ball of radius $r$ around $x$ with respect to some norm on $R^n$.

Remark 2.3 (Rockafellar, 1970) The definition of the relative interior of a nonempty convex set $C$ can be reduced to the following:
$$\operatorname{relint} C = \{ x \in C : \forall y \in C \ \exists \lambda > 1 \ \text{s.t.} \ \lambda x + (1 - \lambda) y \in C \}.$$

Definition 2.4 It is said that $\varphi : C \rightarrow R$, where $C \subset R^n$ is a nonempty convex set, is convex on $C$ if the inequality
$$\varphi(u + \lambda(x - u)) \leqq \lambda \varphi(x) + (1 - \lambda)\varphi(u) \quad (1)$$
holds for all $x, u \in C$ and any $\lambda \in [0, 1]$. It is said that $\varphi$ is strictly convex on $C$ if the inequality $\varphi(u + \lambda(x - u)) < \lambda \varphi(x) + (1 - \lambda)\varphi(u)$ holds for all $x, u \in C$, $x \neq u$, and any $\lambda \in (0, 1)$.

Definition 2.5 We say that a mapping $\varphi : X \rightarrow R$ defined on a nonempty set $X \subseteq R^n$ is directionally differentiable at $u \in X$ in a direction $v \in R^n$ if the limit
$$\varphi^{+}(u; v) = \lim_{\alpha \rightarrow 0^{+}} \frac{\varphi(u + \alpha v) - \varphi(u)}{\alpha} \quad (2)$$
exists and is finite. We say that $\varphi$ is directionally differentiable (or Dini differentiable) at $u$ if its directional derivative $\varphi^{+}(u; v)$ exists and is finite for all $v \in R^n$.

Proposition 2.6 (Jahn, 2004) Let a mapping $\varphi : R^n \rightarrow R$ be convex. Then, at every $u \in R^n$ and in every direction $v \in R^n$, the directional derivative $\varphi^{+}(u; v)$ exists.
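The one-sided limit in (2) is easy to probe numerically. The following minimal sketch (not part of the paper; the function names and the step size are our own choices) approximates the Dini directional derivative of the convex but nondifferentiable function $\varphi(x) = |x|$, for which $\varphi^{+}(0; v) = |v|$.

```python
def dini_derivative(phi, u, v, alpha=1e-7):
    """One-sided difference quotient approximating phi^+(u; v) from (2)."""
    return (phi(u + alpha * v) - phi(u)) / alpha

phi = abs  # convex on R, but not differentiable at 0

# At u = 0 both one-sided directional derivatives equal 1, even though
# the ordinary (two-sided) derivative does not exist there.
print(dini_derivative(phi, 0.0, 1.0))   # ~ 1.0
print(dini_derivative(phi, 0.0, -1.0))  # ~ 1.0
```

This also illustrates why Definition 2.5 is the right notion here: the limit is taken only as $\alpha \downarrow 0$, so it exists for convex functions at kink points as well.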
Moreover, since the convex function $\varphi$ has a directional derivative at $u$ in the direction $x - u$ for any $x \in R^n$, the following inequality holds:
$$\varphi(x) - \varphi(u) \geqq \varphi^{+}(u; x - u). \quad (3)$$

Lemma 2.7 (Jahn, 2004) Let $X \subseteq R^n$ be open, $u \in X$ be given, $f, g : X \rightarrow R$ and $v \in R^n$. Further, assume that the directional derivatives of $f$ and $g$ at $u$ in the direction $v$ exist, i.e. $f^{+}(u; v)$ and $g^{+}(u; v)$ both exist. Then the directional derivative of $f \cdot g$ at $u$ in the direction $v$ exists and
$$(f \cdot g)^{+}(u; v) = f(u) g^{+}(u; v) + f^{+}(u; v) g(u).$$

Giorgi (2002) proved the following theorem of the alternative for convex functions, which may be considered as a nonlinear version of the Gordan theorem presented by Mangasarian (1969) in the linear case.

Theorem 2.8 (Giorgi, 2002) Let $C \subset R^n$ be a nonempty convex set, $F : C \rightarrow R^k$, $\Phi : C \rightarrow R^m$ be convex functions and $\Psi : R^n \rightarrow R^q$ be a linear function. Let us assume that there exists $x_0 \in \operatorname{relint} C$ such that $\Phi_j(x_0) < 0$, $j = 1, \ldots, m$, and $\Psi_s(x_0) \leqq 0$, $s = 1, \ldots, q$. Then, the system
$$F_i(x) < 0, \ i = 1, \ldots, k, \quad \Phi_j(x) \leqq 0, \ j = 1, \ldots, m, \quad \Psi_s(x) \leqq 0, \ s = 1, \ldots, q \quad (4)$$
admits no solution $x \in C$ if and only if there exists a vector $(\lambda, \theta, \beta) \in R_{+}^{k} \times R_{+}^{m} \times R^{q}$, $\lambda \neq 0$, such that
$$\lambda^{T} F(x) + \theta^{T} \Phi(x) + \beta^{T} \Psi(x) \geqq 0, \quad \forall x \in C.$$

Definition 2.9 The cone of sequential linear directions (also known as the sequential radial cone) to a set $Q \subset R^n$ at $x \in Q$ is the set denoted by $Z(Q; x)$ and defined by
$$Z(Q; x) := \{ v \in R^n : \exists (\alpha_k) \subset R_{+}, \ \alpha_k \downarrow 0, \ \text{such that} \ x + \alpha_k v \in Q, \ \forall k \in N \}.$$

Definition 2.10 The tangent cone to a set $Q \subset R^n$ at $x \in \operatorname{cl} Q$ is the set denoted by $T(Q; x)$ and defined by
$$T(Q; x) := \left\{ v \in R^n : \exists (x_k) \subseteq Q, \ (\alpha_k) \subset R_{+} \ \text{such that} \ \alpha_k \downarrow 0 \ \wedge \ x_k \rightarrow x \ \wedge \ \frac{x_k - x}{\alpha_k} \rightarrow v \right\} = \{ v \in R^n : \exists v_k \rightarrow v, \ \alpha_k \downarrow 0 \ \text{such that} \ x + \alpha_k v_k \in Q, \ \forall k \in N \},$$
where $\operatorname{cl} Q$ denotes the closure of $Q$. Note that the aforesaid cones are nonempty, $T(Q; x)$ is closed, it may not be convex, and $Z(Q; x) \subset T(Q; x)$.
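The product rule of Lemma 2.7 can be sanity-checked numerically. The sketch below (our own illustration, with an arbitrary choice of $f$, $g$ and test point) compares a difference quotient for $(f \cdot g)^{+}(u; v)$ against $f(u)g^{+}(u; v) + f^{+}(u; v)g(u)$.

```python
def dini(phi, u, v, a=1e-7):
    # one-sided difference quotient for the directional derivative, as in (2)
    return (phi(u + a * v) - phi(u)) / a

f = abs                      # directionally differentiable everywhere
g = lambda x: x * x + 1.0    # smooth, hence directionally differentiable
u, v = 0.5, 1.0

lhs = dini(lambda x: f(x) * g(x), u, v)            # (f*g)^+(u; v)
rhs = f(u) * dini(g, u, v) + dini(f, u, v) * g(u)  # right-hand side of Lemma 2.7
print(abs(lhs - rhs))  # only a small discretization error remains
```

The agreement up to discretization error is expected here because both one-sided derivatives exist at the chosen point.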
3 Multiobjective programming with vanishing constraints

In the paper, we consider the following constrained multiobjective programming problem (MPVC) with vanishing constraints:

$V$-minimize $f(x) := (f_1(x), \ldots, f_p(x))$
subject to $g_j(x) \leqq 0$, $j = 1, \ldots, m$,
$h_s(x) = 0$, $s = 1, \ldots, q$,      (MPVC)
$H_t(x) \geqq 0$, $t = 1, \ldots, r$,
$H_t(x) G_t(x) \leqq 0$, $t = 1, \ldots, r$,
$x \in C$,

where $f_i : R^n \rightarrow R$, $i \in I = \{1, \ldots, p\}$, $g_j : R^n \rightarrow R$, $j \in J = \{1, \ldots, m\}$, $h_s : R^n \rightarrow R$, $s \in S = \{1, \ldots, q\}$, $H_t : R^n \rightarrow R$, $G_t : R^n \rightarrow R$, $t \in T = \{1, \ldots, r\}$, are real-valued functions and $C \subseteq R^n$ is a nonempty open convex set. For the purpose of simplifying our presentation, we will next introduce some notations which will be used frequently throughout this paper. Let
$$\Omega = \{ x \in C : g_j(x) \leqq 0, \ j \in J, \ h_s(x) = 0, \ s \in S, \ H_t(x) \geqq 0, \ H_t(x) G_t(x) \leqq 0, \ t \in T \}$$
be the set of all feasible solutions of (MPVC). Further, we denote by $J(\bar{x}) := \{ j \in J : g_j(\bar{x}) = 0 \}$ the set of inequality constraint indices that are active at $\bar{x} \in \Omega$ and by $J^{<}(\bar{x}) := \{ j \in \{1, \ldots, m\} : g_j(\bar{x}) < 0 \}$ the set of inequality constraint indices that are inactive at $\bar{x} \in \Omega$. Then, $J(\bar{x}) \cup J^{<}(\bar{x}) = J$.

Before studying optimality in multiobjective programming, one has to define clearly the well-known concepts of optimality and solutions in a multiobjective programming problem. The (weak) Pareto optimality concept in multiobjective programming associates the concept of a solution with a property that seems intuitively natural.

Definition 3.1 A feasible point $\bar{x}$ is said to be a Pareto solution (an efficient solution) of (MPVC) if and only if there exists no other $x \in \Omega$ such that $f(x) \leq f(\bar{x})$.

Definition 3.2 A feasible point $\bar{x}$ is said to be a weak Pareto solution (a weakly efficient solution, a weak minimum) of (MPVC) if and only if there exists no other $x \in \Omega$ such that $f(x) < f(\bar{x})$.

As follows from the definition of (weak) Pareto optimality, $\bar{x}$ is nonimprovable with respect to the vector cost function $f$.
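Definitions 3.1 and 3.2 compare objective vectors through the componentwise orders (iv) and (ii) of Section 2. A small sketch (our own helper names; it filters finitely many candidate objective vectors rather than the whole feasible set $\Omega$) makes the two dominance notions concrete:

```python
def dominates(fa, fb):
    """fa <= fb componentwise with fa != fb (the order in Definition 3.1)."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def strictly_dominates(fa, fb):
    """fa < fb componentwise (the order in Definition 3.2)."""
    return all(a < b for a, b in zip(fa, fb))

def pareto_points(values):
    """Indices of objective vectors not dominated by any other one."""
    return [i for i, fa in enumerate(values)
            if not any(dominates(fb, fa) for j, fb in enumerate(values) if j != i)]

vals = [(1.0, 2.0), (2.0, 1.0), (2.0, 2.0)]
print(pareto_points(vals))  # [0, 1]: the vector (2, 2) is dominated by both others
```

Note that a weak Pareto point need not be a Pareto point: $(1, 2)$ is not strictly dominated by anything here, yet it would still have to survive the (weaker) componentwise-with-one-strict test to be efficient.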
The quality of nonimprovability provides a complete solution if $\bar{x}$ is unique. However, usually this is not the case, and then one has to find the entire set of all Pareto optimal solutions of a multiobjective programming problem.

Now, for any feasible solution $\bar{x}$, let us denote the following index sets:
$$T_{+}(\bar{x}) = \{ t \in T : H_t(\bar{x}) > 0 \}, \quad T_{0}(\bar{x}) = \{ t \in T : H_t(\bar{x}) = 0 \}.$$
Further, let us divide the index set $T_{+}(\bar{x})$ into the following index subsets:
$$T_{+0}(\bar{x}) = \{ t \in T : H_t(\bar{x}) > 0, \ G_t(\bar{x}) = 0 \}, \quad T_{+-}(\bar{x}) = \{ t \in T : H_t(\bar{x}) > 0, \ G_t(\bar{x}) < 0 \}.$$
Similarly, the index set $T_{0}(\bar{x})$ can be partitioned into the following three index subsets:
$$T_{0+}(\bar{x}) = \{ t \in T : H_t(\bar{x}) = 0, \ G_t(\bar{x}) > 0 \}, \quad T_{00}(\bar{x}) = \{ t \in T : H_t(\bar{x}) = 0, \ G_t(\bar{x}) = 0 \}, \quad T_{0-}(\bar{x}) = \{ t \in T : H_t(\bar{x}) = 0, \ G_t(\bar{x}) < 0 \}.$$
Moreover, we denote by $T_{HG}(\bar{x})$ the set of indices $t \in T$ defined by
$$T_{HG}(\bar{x}) = T_{00}(\bar{x}) \cup T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+0}(\bar{x}).$$

Before proving the necessary optimality conditions for the considered directionally differentiable multiobjective programming problem with vanishing constraints, we introduce the Abadie constraint qualification for this multicriteria optimization problem. In order to introduce the aforesaid constraint qualification, for $\bar{x} \in \Omega$, we define the sets $Q^{l}(\bar{x})$, $l = 1, \ldots, p$, and $Q(\bar{x})$ as follows:
$$Q^{l}(\bar{x}) := \{ x \in C : f_i(x) \leqq f_i(\bar{x}), \ \forall i = 1, \ldots, p, \ i \neq l, \ g_j(x) \leqq 0, \ \forall j \in J, \ h_s(x) = 0, \ \forall s \in S, \ H_t(x) \geqq 0, \ H_t(x) G_t(x) \leqq 0, \ \forall t \in T \},$$
$$Q(\bar{x}) := \{ x \in C : f_i(x) \leqq f_i(\bar{x}), \ \forall i = 1, \ldots, p, \ g_j(x) \leqq 0, \ \forall j \in J, \ h_s(x) = 0, \ \forall s \in S, \ H_t(x) \geqq 0, \ H_t(x) G_t(x) \leqq 0, \ \forall t \in T \}.$$

Now, we give the definition of the almost linearizing cone for the considered multiobjective programming problem (MPVC) with vanishing constraints.
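The five index subsets above depend only on the signs of $H_t(\bar{x})$ and $G_t(\bar{x})$, and for a feasible point $H_t(\bar{x}) > 0$ forces $G_t(\bar{x}) \leqq 0$, so each $t$ lands in exactly one subset. A small sketch (our own function names; a numerical tolerance stands in for exact zero tests) of the classification:

```python
def partition_vanishing_indices(H_vals, G_vals, tol=1e-9):
    """Classify t by the signs of H_t(xbar), G_t(xbar) into T+0, T+-, T0+, T00, T0-."""
    sets = {"T+0": [], "T+-": [], "T0+": [], "T00": [], "T0-": []}
    for t, (h, g) in enumerate(zip(H_vals, G_vals)):
        if h > tol:                      # t in T+(xbar); feasibility forces g <= 0
            sets["T+0" if abs(g) <= tol else "T+-"].append(t)
        elif g > tol:
            sets["T0+"].append(t)
        elif g < -tol:
            sets["T0-"].append(t)
        else:
            sets["T00"].append(t)
    return sets

# H and G values at a hypothetical feasible point, one index per case:
print(partition_vanishing_indices([1.0, 2.0, 0.0, 0.0, 0.0],
                                  [0.0, -1.0, 3.0, 0.0, -2.0]))
```

With these sample values each subset receives exactly one index, which is a convenient smoke test for the case analysis used throughout Section 3.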
It is a generalization of the almost linearizing cone introduced by Preda and Chitescu (1999) for a directionally differentiable multiobjective optimization problem with inequality constraints only.

Definition 3.3 The almost linearizing cone $L(\Omega, \bar{x})$ to the set $\Omega$ at $\bar{x} \in \Omega$ is defined by
$$L(\Omega, \bar{x}) = \{ v \in R^n : f_i^{+}(\bar{x}; v) \leqq 0, \ \forall i \in I, \ g_j^{+}(\bar{x}; v) \leqq 0, \ j \in J(\bar{x}), \ h_s^{+}(\bar{x}; v) = 0, \ \forall s \in S, \ H_t^{+}(\bar{x}; v) \geqq 0, \ t \in T_{0}(\bar{x}), \ (H_t G_t)^{+}(\bar{x}; v) \leqq 0, \ t \in T_{HG}(\bar{x}) \}.$$

Now, we prove a result which gives the formulation of the almost linearizing cone to each of the sets $Q^{l}(\bar{x})$, $l = 1, \ldots, p$.

Proposition 3.4 Let $\bar{x} \in \Omega$ be a Pareto solution of the considered multiobjective programming problem (MPVC) with vanishing constraints. Then, the almost linearizing cone to each set $Q^{l}(\bar{x})$, $l = 1, \ldots, p$, at $\bar{x}$, denoted by $L(Q^{l}(\bar{x}); \bar{x})$, is given by
$$L(Q^{l}(\bar{x}); \bar{x}) := \{ v \in R^n : f_i^{+}(\bar{x}; v) \leqq 0, \ \forall i \in I, \ i \neq l, \ g_j^{+}(\bar{x}; v) \leqq 0, \ \forall j \in J(\bar{x}), \ h_s^{+}(\bar{x}; v) = 0, \ \forall s \in S, \ H_t^{+}(\bar{x}; v) = 0, \ \forall t \in T_{0+}(\bar{x}), \ H_t^{+}(\bar{x}; v) \geqq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \ G_t^{+}(\bar{x}; v) \leqq 0, \ \forall t \in T_{+0}(\bar{x}) \}. \quad (5)$$

Proof Let us assume that $\bar{x} \in \Omega$ is a Pareto solution of the considered multiobjective programming problem (MPVC) with vanishing constraints. Then, by the definitions of the almost linearizing cone and of the index sets, we get
$$L(Q^{l}(\bar{x}); \bar{x}) := \{ v \in R^n : f_i^{+}(\bar{x}; v) \leqq 0, \ \forall i \in I, \ i \neq l, \ g_j^{+}(\bar{x}; v) \leqq 0, \ \forall j \in J(\bar{x}), \ h_s^{+}(\bar{x}; v) = 0, \ \forall s \in S, \ H_t^{+}(\bar{x}; v) \geqq 0, \ \forall t \in T_{0}(\bar{x}), \ (H_t G_t)^{+}(\bar{x}; v) \leqq 0, \ \forall t \in T_{0}(\bar{x}) \cup T_{+0}(\bar{x}) \}. \quad (6)$$
Note that, by Lemma 2.7, one has
$$(H_t G_t)^{+}(\bar{x}; v) = G_t(\bar{x}) H_t^{+}(\bar{x}; v) + H_t(\bar{x}) G_t^{+}(\bar{x}; v). \quad (7)$$
Then, by the definition of the index sets, (7) gives
$$(H_t G_t)^{+}(\bar{x}; v) = \begin{cases} G_t(\bar{x}) H_t^{+}(\bar{x}; v) & \text{if } t \in T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}), \\ 0 & \text{if } t \in T_{00}(\bar{x}), \\ H_t(\bar{x}) G_t^{+}(\bar{x}; v) & \text{if } t \in T_{+0}(\bar{x}). \end{cases} \quad (8)$$
Combining (6)-(8), we get (5). This completes the proof of this proposition.
Remark 3.5 Note that the almost linearizing cone to $Q(\bar{x})$ at $\bar{x} \in Q(\bar{x})$ is given by
$$L(Q(\bar{x}); \bar{x}) = \bigcap_{l=1}^{p} L(Q^{l}(\bar{x}); \bar{x}). \quad (9)$$
Indeed, by (5), we get (9). In other words, the formulation of $L(Q(\bar{x}); \bar{x})$ is given by
$$L(Q(\bar{x}); \bar{x}) := \{ v \in R^n : f_i^{+}(\bar{x}; v) \leqq 0, \ \forall i \in I, \ g_j^{+}(\bar{x}; v) \leqq 0, \ \forall j \in J(\bar{x}), \ h_s^{+}(\bar{x}; v) = 0, \ \forall s \in S, \ H_t^{+}(\bar{x}; v) = 0, \ \forall t \in T_{0+}(\bar{x}), \ H_t^{+}(\bar{x}; v) \geqq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \ G_t^{+}(\bar{x}; v) \leqq 0, \ \forall t \in T_{+0}(\bar{x}) \}. \quad (10)$$

Proposition 3.6 If $f_i^{+}(\bar{x}; \cdot)$, $i \in I$, $g_j^{+}(\bar{x}; \cdot)$, $j \in J(\bar{x})$, $h_s^{+}(\bar{x}; \cdot)$, $s \in S$, $-H_t^{+}(\bar{x}; \cdot)$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x})$, $G_t^{+}(\bar{x}; \cdot)$, $t \in T_{+0}(\bar{x})$, are convex on $R^n$, then $L(Q(\bar{x}); \bar{x})$ is a closed convex cone.

Proof Since the directional derivative is a positively homogeneous function, if $\alpha \geqq 0$ and $v \in L(Q(\bar{x}); \bar{x})$, one has $\alpha v \in L(Q(\bar{x}); \bar{x})$. This means that $L(Q(\bar{x}); \bar{x})$ is a cone. Now, we prove that it is a convex cone. Let $v_1, v_2 \in L(Q(\bar{x}); \bar{x})$ and $\alpha \in [0, 1]$. By the convexity assumption, it follows that
$$f_i^{+}(\bar{x}; \alpha v_1 + (1 - \alpha) v_2) \leqq \alpha f_i^{+}(\bar{x}; v_1) + (1 - \alpha) f_i^{+}(\bar{x}; v_2) \leqq 0, \ i \in I,$$
$$g_j^{+}(\bar{x}; \alpha v_1 + (1 - \alpha) v_2) \leqq \alpha g_j^{+}(\bar{x}; v_1) + (1 - \alpha) g_j^{+}(\bar{x}; v_2) \leqq 0, \ j \in J(\bar{x}),$$
$$h_s^{+}(\bar{x}; \alpha v_1 + (1 - \alpha) v_2) \leqq \alpha h_s^{+}(\bar{x}; v_1) + (1 - \alpha) h_s^{+}(\bar{x}; v_2) \leqq 0, \ s \in S,$$
$$-H_t^{+}(\bar{x}; \alpha v_1 + (1 - \alpha) v_2) \leqq -\alpha H_t^{+}(\bar{x}; v_1) - (1 - \alpha) H_t^{+}(\bar{x}; v_2) \leqq 0, \ t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x}),$$
$$G_t^{+}(\bar{x}; \alpha v_1 + (1 - \alpha) v_2) \leqq \alpha G_t^{+}(\bar{x}; v_1) + (1 - \alpha) G_t^{+}(\bar{x}; v_2) \leqq 0, \ t \in T_{+0}(\bar{x}).$$
The above inequalities imply that $\alpha v_1 + (1 - \alpha) v_2 \in L(Q(\bar{x}); \bar{x})$, which means that $L(Q(\bar{x}); \bar{x})$ is a convex cone. Now, we prove the closedness of $L(Q(\bar{x}); \bar{x})$. In order to prove this property, we take a sequence $\{v_r\} \subset L(Q(\bar{x}); \bar{x})$ such that $v_r \rightarrow v$ as $r \rightarrow \infty$. Since $v_r \in L(Q(\bar{x}); \bar{x})$ for any integer $r$, by the continuity of the convex functions $f_i^{+}(\bar{x}; \cdot)$, $i \in I$, we have
$$\lim_{r \rightarrow \infty} f_i^{+}(\bar{x}; v_r) = f_i^{+}(\bar{x}; v) \leqq 0, \ \forall i \in I.$$
Similarly, we obtain $g_j^{+}(\bar{x}; v) \leqq 0$, $j \in J(\bar{x})$, $h_s^{+}(\bar{x}; v) = 0$, $s \in S$, $H_t^{+}(\bar{x}; v) = 0$, $t \in T_{0+}(\bar{x})$, $H_t^{+}(\bar{x}; v) \geqq 0$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x})$, $G_t^{+}(\bar{x}; v) \leqq 0$, $t \in T_{+0}(\bar{x})$. This means that the set $L(Q(\bar{x}); \bar{x})$ is closed.

Remark 3.7 Based on the result established in the above proposition, we conclude that $L(Q^{l}(\bar{x}); \bar{x})$, $l = 1, \ldots, p$, are also closed convex cones.

Proposition 3.8 If, for each $v \in Z(Q(\bar{x}); \bar{x})$, the Dini directional derivatives $f_i^{+}(\bar{x}; v)$, $i \in I$, $g_j^{+}(\bar{x}; v)$, $j \in J(\bar{x})$, $h_s^{+}(\bar{x}; v)$, $s \in S$, $H_t^{+}(\bar{x}; v)$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x})$, $G_t^{+}(\bar{x}; v)$, $t \in T_{+0}(\bar{x})$, exist, then
$$\bigcap_{l=1}^{p} Z(Q^{l}(\bar{x}); \bar{x}) \subset L(Q(\bar{x}); \bar{x}). \quad (11)$$

Proof Firstly, we prove that, for each $l = 1, \ldots, p$,
$$Z(Q^{l}(\bar{x}); \bar{x}) \subset L(Q^{l}(\bar{x}); \bar{x}). \quad (12)$$
For each $l = 1, \ldots, p$, we take $v \in Z(Q^{l}(\bar{x}); \bar{x})$. Then, by Definition 2.9, there exists $(\alpha_k) \subset R_{+}$, $\alpha_k \downarrow 0$, such that $\bar{x} + \alpha_k v \in Q^{l}(\bar{x})$ for all $k \in N$. Therefore, for each $l = 1, \ldots, p$, since $\bar{x} + \alpha_k v \in Q^{l}(\bar{x})$, we have
$$f_i(\bar{x} + \alpha_k v) \leqq f_i(\bar{x}), \ \forall i = 1, \ldots, p, \ i \neq l,$$
$$g_j(\bar{x} + \alpha_k v) \leqq 0 = g_j(\bar{x}), \ \forall j \in J(\bar{x}),$$
$$h_s(\bar{x} + \alpha_k v) = 0 = h_s(\bar{x}), \ \forall s \in S,$$
$$H_t(\bar{x} + \alpha_k v) \geqq 0 = H_t(\bar{x}), \ \forall t \in T_{0}(\bar{x}),$$
$$H_t(\bar{x} + \alpha_k v) G_t(\bar{x} + \alpha_k v) \leqq 0 = H_t(\bar{x}) G_t(\bar{x}), \ \forall t \in T_{HG}(\bar{x}).$$
Then, by Definition 2.5, we have
$$f_i^{+}(\bar{x}; v) = \lim_{\alpha_k \downarrow 0} \frac{f_i(\bar{x} + \alpha_k v) - f_i(\bar{x})}{\alpha_k} \leqq 0, \ \forall i = 1, \ldots, p, \ i \neq l, \quad (13)$$
$$g_j^{+}(\bar{x}; v) = \lim_{\alpha_k \downarrow 0} \frac{g_j(\bar{x} + \alpha_k v) - g_j(\bar{x})}{\alpha_k} \leqq 0, \ \forall j \in J(\bar{x}), \quad (14)$$
$$h_s^{+}(\bar{x}; v) = \lim_{\alpha_k \downarrow 0} \frac{h_s(\bar{x} + \alpha_k v) - h_s(\bar{x})}{\alpha_k} = 0, \ \forall s \in S, \quad (15)$$
$$H_t^{+}(\bar{x}; v) = \lim_{\alpha_k \downarrow 0} \frac{H_t(\bar{x} + \alpha_k v) - H_t(\bar{x})}{\alpha_k} \geqq 0, \ \forall t \in T_{0}(\bar{x}), \quad (16)$$
$$(H_t G_t)^{+}(\bar{x}; v) = \lim_{\alpha_k \downarrow 0} \frac{(H_t G_t)(\bar{x} + \alpha_k v) - (H_t G_t)(\bar{x})}{\alpha_k} \leqq 0, \ \forall t \in T_{HG}(\bar{x}). \quad (17)$$
By Lemma 2.7, one has
$$(H_t G_t)^{+}(\bar{x}; v) = \begin{cases} G_t(\bar{x}) H_t^{+}(\bar{x}; v) & \text{if } t \in T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}), \\ 0 & \text{if } t \in T_{00}(\bar{x}), \\ H_t(\bar{x}) G_t^{+}(\bar{x}; v) & \text{if } t \in T_{+0}(\bar{x}). \end{cases} \quad (18)$$
Thus, (16)-(18) yield
$$H_t^{+}(\bar{x}; v) = 0, \ \forall t \in T_{0+}(\bar{x}), \quad (19)$$
$$H_t^{+}(\bar{x}; v) \geqq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \quad (20)$$
$$G_t^{+}(\bar{x}; v) \leqq 0, \ \forall t \in T_{+0}(\bar{x}). \quad (21)$$
Hence, we conclude by (13)-(15) and (19)-(21) that $v \in L(Q^{l}(\bar{x}); \bar{x})$ for each $l = 1, \ldots, p$. Therefore, since we have shown that $Z(Q^{l}(\bar{x}); \bar{x}) \subset L(Q^{l}(\bar{x}); \bar{x})$ for each $l = 1, \ldots, p$, we have by (9) that
$$\bigcap_{l=1}^{p} Z(Q^{l}(\bar{x}); \bar{x}) \subset \bigcap_{l=1}^{p} L(Q^{l}(\bar{x}); \bar{x}) = L(Q(\bar{x}); \bar{x}),$$
as was to be shown.

Note that, in general, the converse inclusion of (11) does not hold. Therefore, in order to prove the necessary optimality condition for efficiency in (MPVC), we give the definition of the Abadie constraint qualification.

Definition 3.9 It is said that the Abadie constraint qualification (ACQ) holds at $\bar{x} \in \Omega$ for (MPVC) iff
$$L(Q(\bar{x}); \bar{x}) \subset \bigcap_{l=1}^{p} Z(Q^{l}(\bar{x}); \bar{x}). \quad (22)$$

Remark 3.10 By (11), condition (22) means that the Abadie constraint qualification (ACQ) holds at $\bar{x}$ for (MPVC) iff
$$L(Q(\bar{x}); \bar{x}) = \bigcap_{l=1}^{p} Z(Q^{l}(\bar{x}); \bar{x}).$$

Now, we state a necessary condition for efficiency in (MPVC).

Theorem 3.11 Let $\bar{x} \in \Omega$ be an efficient solution of (MPVC) and, for each $v \in Z(C; \bar{x})$, let the directional derivatives $f_i^{+}(\bar{x}; v)$, $i = 1, \ldots, p$, $g_j^{+}(\bar{x}; v)$, $j \in J(\bar{x})$, $h_s^{+}(\bar{x}; v)$, $s \in S$, $H_t^{+}(\bar{x}; v)$, $t \in T_{0}(\bar{x})$, $G_t^{+}(\bar{x}; v)$, $t \in T_{+0}(\bar{x})$, exist. Further, we assume that $g_j$, $j \in J^{<}(\bar{x})$, $H_t$, $t \in T_{+}(\bar{x})$, $G_t$, $t \in T_{+-}(\bar{x})$, are continuous at $\bar{x}$. If the Abadie constraint qualification (ACQ) holds at $\bar{x}$ for (MPVC), then, for each $l = 1, \ldots, p$, the system
$$f_i^{+}(\bar{x}; v) \leqq 0, \ i = 1, \ldots, p, \ i \neq l, \quad f_l^{+}(\bar{x}; v) < 0, \quad (23)$$
$$g_j^{+}(\bar{x}; v) \leqq 0, \ j \in J(\bar{x}), \quad (24)$$
$$h_s^{+}(\bar{x}; v) = 0, \ s \in S, \quad (25)$$
$$-H_t^{+}(\bar{x}; v) \leqq 0, \ t \in T_{0}(\bar{x}), \quad (26)$$
$$(H_t G_t)^{+}(\bar{x}; v) \leqq 0, \ t \in T_{HG}(\bar{x}) \quad (27)$$
has no solution $v \in R^n$.

Proof We proceed by contradiction.
Suppose, contrary to the result, that there exists $l_0 \in \{1, \ldots, p\}$ such that the system
$$f_i^{+}(\bar{x}; v) \leqq 0, \ i = 1, \ldots, p, \ i \neq l_0, \quad f_{l_0}^{+}(\bar{x}; v) < 0, \quad (28)$$
$$g_j^{+}(\bar{x}; v) \leqq 0, \ j \in J(\bar{x}), \quad (29)$$
$$h_s^{+}(\bar{x}; v) = 0, \ s \in S, \quad (30)$$
$$-H_t^{+}(\bar{x}; v) \leqq 0, \ t \in T_{0}(\bar{x}), \quad (31)$$
$$(H_t G_t)^{+}(\bar{x}; v) \leqq 0, \ t \in T_{HG}(\bar{x}) \quad (32)$$
has a solution $\widetilde{v} \in R^n$. Then, by (8), the system
$$f_i^{+}(\bar{x}; v) \leqq 0, \ i = 1, \ldots, p, \ i \neq l_0, \quad f_{l_0}^{+}(\bar{x}; v) < 0, \quad (33)$$
$$g_j^{+}(\bar{x}; v) \leqq 0, \ \forall j \in J(\bar{x}), \quad (34)$$
$$h_s^{+}(\bar{x}; v) = 0, \ \forall s \in S, \quad (35)$$
$$H_t^{+}(\bar{x}; v) = 0, \ \forall t \in T_{0+}(\bar{x}), \quad (36)$$
$$H_t^{+}(\bar{x}; v) \geqq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \quad (37)$$
$$G_t^{+}(\bar{x}; v) \leqq 0, \ \forall t \in T_{+0}(\bar{x}) \quad (38)$$
has the solution $\widetilde{v} \in R^n$. Hence, it is obvious that $\widetilde{v} \in L(Q(\bar{x}); \bar{x})$. By assumption, (ACQ) is satisfied at $\bar{x}$ for (MPVC). Then, by Definition 3.9, $\widetilde{v} \in \bigcap_{l=1}^{p} Z(Q^{l}(\bar{x}); \bar{x})$. Thus, $\widetilde{v} \in Z(Q^{l_0}(\bar{x}); \bar{x})$. Therefore, by Definition 2.9, there exists $(\alpha_k) \subset R_{+}$, $\alpha_k \downarrow 0$, such that $\bar{x} + \alpha_k \widetilde{v} \in Q^{l_0}(\bar{x})$ for all $k \in N$. Hence, $\bar{x} + \alpha_k \widetilde{v} \in C$ and, moreover,
$$f_i(\bar{x} + \alpha_k \widetilde{v}) \leqq f_i(\bar{x}), \ \forall i = 1, \ldots, p, \ i \neq l_0, \quad (39)$$
$$g_j(\bar{x} + \alpha_k \widetilde{v}) \leqq 0, \ \forall j \in J(\bar{x}), \quad (40)$$
$$h_s(\bar{x} + \alpha_k \widetilde{v}) = 0, \ \forall s = 1, \ldots, q, \quad (41)$$
$$H_t(\bar{x} + \alpha_k \widetilde{v}) = 0, \ \forall t \in T_{0+}(\bar{x}), \quad (42)$$
$$H_t(\bar{x} + \alpha_k \widetilde{v}) \geqq 0, \ \forall t \in T_{0}(\bar{x}), \quad (43)$$
$$G_t(\bar{x} + \alpha_k \widetilde{v}) \leqq 0, \ \forall t \in T_{+0}(\bar{x}). \quad (44)$$
By the definition of the index sets, one has $g_j(\bar{x}) < 0$, $j \in J^{<}(\bar{x})$, $H_t(\bar{x}) > 0$, $t \in T_{+}(\bar{x})$, $G_t(\bar{x}) < 0$, $t \in T_{+-}(\bar{x})$. Therefore, by the continuity of $g_j$, $j \in J^{<}(\bar{x})$, $H_t$, $t \in T_{+}(\bar{x})$, $G_t$, $t \in T_{+-}(\bar{x})$, at $\bar{x}$, there exists $k_0 \in N$ such that, for all $k > k_0$,
$$g_j(\bar{x} + \alpha_k \widetilde{v}) \leqq 0, \ \forall j \notin J(\bar{x}), \quad (45)$$
$$H_t(\bar{x} + \alpha_k \widetilde{v}) \geqq 0, \ \forall t \in T_{+}(\bar{x}), \quad (46)$$
$$G_t(\bar{x} + \alpha_k \widetilde{v}) \leqq 0, \ \forall t \in T_{+-}(\bar{x}). \quad (47)$$
Thus, we conclude by (40)-(47) that there exists $\delta > 0$ such that $\bar{x} + \alpha_k \widetilde{v} \in \Omega \cap B(\bar{x}; \delta)$, where $B(\bar{x}; \delta)$ denotes the open ball of radius $\delta$ around $\bar{x}$. On the other hand, it follows from the assumption that $\bar{x} \in \Omega$ is an efficient solution of (MPVC).
Hence, by Definition 3.1, there exists a number $\delta > 0$ such that there is no $x \in \Omega \cap B(\bar{x}; \delta)$ satisfying
$$f_i(x) \leqq f_i(\bar{x}), \ i = 1, \ldots, p, \quad (48)$$
$$f_i(x) < f_i(\bar{x}) \ \text{for some} \ i \in \{1, \ldots, p\}. \quad (49)$$
Hence, since $\bar{x} + \alpha_k \widetilde{v} \in \Omega \cap B(\bar{x}; \delta)$ and (39) holds, by (48) and (49) we conclude that, for all $k \in N$, the inequality
$$f_{l_0}(\bar{x} + \alpha_k \widetilde{v}) > f_{l_0}(\bar{x})$$
holds. Then, by Definition 2.5, the inequality above implies that $f_{l_0}^{+}(\bar{x}; \widetilde{v}) \geqq 0$, which contradicts (28). Hence, the proof of this theorem is completed.

Remark 3.12 As follows from the proof of Theorem 3.11, if the system (23)-(27) has no solution $v \in R^n$, then, for each $l = 1, \ldots, p$, the system
$$f_i^{+}(\bar{x}; v) \leqq 0, \ i = 1, \ldots, p, \ i \neq l, \quad f_l^{+}(\bar{x}; v) < 0, \quad (50)$$
$$g_j^{+}(\bar{x}; v) \leqq 0, \ \forall j \in J(\bar{x}), \quad (51)$$
$$h_s^{+}(\bar{x}; v) = 0, \ \forall s \in S, \quad (52)$$
$$H_t^{+}(\bar{x}; v) = 0, \ \forall t \in T_{0+}(\bar{x}), \quad (53)$$
$$H_t^{+}(\bar{x}; v) \geqq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \quad (54)$$
$$G_t^{+}(\bar{x}; v) \leqq 0, \ \forall t \in T_{+0}(\bar{x}) \quad (55)$$
has no solution $v \in R^n$.

Let us define the functions $F = (F_1, \ldots, F_p) : R^n \rightarrow R^p$, $\Phi = (\Phi_1, \ldots, \Phi_{|J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + |T_{+0}(\bar{x})|}) : R^n \rightarrow R^{|J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + |T_{+0}(\bar{x})|}$ and $\Psi = (\Psi_1, \ldots, \Psi_{q + |T_{0+}(\bar{x})|}) : R^n \rightarrow R^{q + |T_{0+}(\bar{x})|}$ as follows:
$$F_i(v) := f_i^{+}(\bar{x}; v), \ i \in I, \quad (56)$$
$$\Phi_{\alpha}(v) := \begin{cases} g_l^{+}(\bar{x}; v) & \text{for } l \in J(\bar{x}), \ \alpha = 1, \ldots, |J(\bar{x})|, \\ -H_l^{+}(\bar{x}; v) & \text{for } l \in T_{00}(\bar{x}), \ \alpha = |J(\bar{x})| + 1, \ldots, |J(\bar{x})| + |T_{00}(\bar{x})|, \\ -H_l^{+}(\bar{x}; v) & \text{for } l \in T_{0-}(\bar{x}), \ \alpha = |J(\bar{x})| + |T_{00}(\bar{x})| + 1, \ldots, |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})|, \\ G_l^{+}(\bar{x}; v) & \text{for } l \in T_{+0}(\bar{x}), \ \alpha = |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + 1, \ldots, |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + |T_{+0}(\bar{x})|, \end{cases} \quad (57)$$
$$\Psi_{\beta}(v) := \begin{cases} h_l^{+}(\bar{x}; v) & \text{for } l = 1, \ldots, q, \ \beta = 1, \ldots, q, \\ H_l^{+}(\bar{x}; v) & \text{for } l \in T_{0+}(\bar{x}), \ \beta = q + 1, \ldots, q + |T_{0+}(\bar{x})|. \end{cases} \quad (58)$$
We are now in a position to formulate the Karush–Kuhn–Tucker necessary optimality conditions for a feasible solution $\bar{x}$ to be an efficient solution of (MPVC) under the Abadie constraint qualification (ACQ).
Theorem 3.13 (Karush–Kuhn–Tucker type necessary optimality conditions) Let $\bar{x} \in \Omega$ be an efficient solution of the considered multiobjective programming problem (MPVC) with vanishing constraints. We also assume that $f_i$, $i \in I$, $g_j$, $j \in J$, $h_s$, $s \in S$, $H_t$, $t \in T$, $G_t$, $t \in T$, are directionally differentiable at $\bar{x}$; that $f_i^{+}(\bar{x}; \cdot)$, $i \in I$, $g_j^{+}(\bar{x}; \cdot)$, $j \in J(\bar{x})$, $-H_t^{+}(\bar{x}; \cdot)$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x})$, $G_t^{+}(\bar{x}; \cdot)$, $t \in T_{+0}(\bar{x})$, are convex functions; that $h_s^{+}(\bar{x}; \cdot)$, $s \in S$, $H_t^{+}(\bar{x}; \cdot)$, $t \in T_{0+}(\bar{x})$, are linear functions; that $g_j$, $j \in J^{<}(\bar{x})$, $H_t$, $t \in T_{+}(\bar{x})$, $G_t$, $t \in T_{0}(\bar{x}) \cup T_{+-}(\bar{x})$, are continuous at $\bar{x}$; and, moreover, that the Abadie constraint qualification (ACQ) is satisfied at $\bar{x}$ for (MPVC). If there exists $v_0 \in \operatorname{relint} Z(C; \bar{x})$ such that $\Phi(v_0) < 0$ and $\Psi(v_0) \leqq 0$, then there exist Lagrange multipliers $\lambda \in R^p$, $\mu \in R^m$, $\xi \in R^q$, $\vartheta^{H} \in R^r$ and $\vartheta^{G} \in R^r$ such that the following conditions hold:
$$\sum_{i=1}^{p} \lambda_i f_i^{+}(\bar{x}; v) + \sum_{j=1}^{m} \mu_j g_j^{+}(\bar{x}; v) + \sum_{s=1}^{q} \xi_s h_s^{+}(\bar{x}; v) - \sum_{t=1}^{r} \vartheta_t^{H} H_t^{+}(\bar{x}; v) + \sum_{t=1}^{r} \vartheta_t^{G} G_t^{+}(\bar{x}; v) \geqq 0, \ \forall v \in Z(C; \bar{x}), \quad (59)$$
$$\mu_j g_j(\bar{x}) = 0, \ j \in J, \quad (60)$$
$$\vartheta_t^{H} H_t(\bar{x}) = 0, \ t \in T, \quad (61)$$
$$\vartheta_t^{G} G_t(\bar{x}) = 0, \ t \in T, \quad (62)$$
$$\lambda \geq 0, \ \mu \geqq 0, \quad (63)$$
$$\vartheta_t^{H} = 0, \ t \in T_{+}(\bar{x}), \quad \vartheta_t^{H} \geqq 0, \ t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \quad \vartheta_t^{H} \ \text{free}, \ t \in T_{0+}(\bar{x}), \quad (64)$$
$$\vartheta_t^{G} = 0, \ t \in T_{00}(\bar{x}) \cup T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x}), \quad \vartheta_t^{G} \geqq 0, \ t \in T_{+0}(\bar{x}). \quad (65)$$

Proof Let $\bar{x} \in \Omega$ be an efficient solution of the considered multiobjective programming problem (MPVC) with vanishing constraints. Since (ACQ) is satisfied at $\bar{x}$ for (MPVC), by Remark 3.12 the system (50)-(55) has no solution $v \in R^n$. By (56)-(58), it follows that the system
$$F_i(v) < 0, \ i = 1, \ldots, p, \quad \Phi_{\alpha}(v) \leqq 0, \ \alpha = 1, \ldots, |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + |T_{+0}(\bar{x})|, \quad \Psi_{\beta}(v) \leqq 0, \ \beta = 1, \ldots, q + |T_{0+}(\bar{x})|$$
admits no solution $v \in Z(C; \bar{x})$. Then, by Theorem 2.8, there exists a vector $(\lambda, \theta, \beta)$ with $\lambda \in R_{+}^{p}$, $\lambda \neq 0$, $\theta \geqq 0$ and $\beta$ free, such that
$$\lambda^{T} F(v) + \theta^{T} \Phi(v) + \beta^{T} \Psi(v) \geqq 0, \ \forall v \in Z(C; \bar{x}).$$
Hence, by (56)-(58) (changing the signs of the free components of $\beta$ corresponding to $T_{0+}(\bar{x})$ if necessary), one has
$$\sum_{i=1}^{p} \lambda_i f_i^{+}(\bar{x}; v) + \sum_{j \in J(\bar{x})} \theta_j g_j^{+}(\bar{x}; v) + \sum_{s=1}^{q} \beta_s h_s^{+}(\bar{x}; v) - \sum_{t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x})} \theta_t H_t^{+}(\bar{x}; v) - \sum_{t \in T_{0+}(\bar{x})} \beta_t H_t^{+}(\bar{x}; v) + \sum_{t \in T_{+0}(\bar{x})} \theta_t G_t^{+}(\bar{x}; v) \geqq 0, \ \forall v \in Z(C; \bar{x}). \quad (66)$$
Let us set
$$\mu_j = \begin{cases} \theta_j & \text{if } j \in J(\bar{x}), \\ 0 & \text{if } j \notin J(\bar{x}), \end{cases} \quad (67)$$
$$\xi_s = \beta_s, \ s = 1, \ldots, q, \quad (68)$$
$$\vartheta_t^{H} = \begin{cases} \theta_{\alpha} & \text{if } t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \ \alpha = |J(\bar{x})| + 1, \ldots, |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})|, \\ \beta_{\beta} & \text{if } t \in T_{0+}(\bar{x}), \ \beta = q + 1, \ldots, q + |T_{0+}(\bar{x})|, \\ 0 & \text{if } t \in T_{+}(\bar{x}), \end{cases} \quad (69)$$
$$\vartheta_t^{G} = \begin{cases} \theta_{\alpha} & \text{if } t \in T_{+0}(\bar{x}), \ \alpha = |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + 1, \ldots, |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + |T_{+0}(\bar{x})|, \\ 0 & \text{if } t \in T_{00}(\bar{x}) \cup T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x}). \end{cases} \quad (70)$$
If we use (67)-(70) in (66), then we get the Karush–Kuhn–Tucker optimality condition (59). Moreover, note that (67)-(70) imply the Karush–Kuhn–Tucker optimality conditions (60)-(65). Hence, the proof of this theorem is finished.

Note that, in general, the Abadie constraint qualification may not be fulfilled at an efficient solution of (MPVC) if $T_{00}(\bar{x}) \neq \emptyset$. Based on the definition of the index sets, we substitute the constraint $H_t(x) G_t(x) \leqq 0$, $t \in T$, by the constraints
$$H_t(x) = 0, \ G_t(x) \leqq 0, \ t \in T_{0+}(\bar{x}),$$
$$H_t(x) \geqq 0, \ G_t(x) \leqq 0, \ t \in T_{00}(\bar{x}) \cup T_{+0}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x}),$$
in which the index sets depend on $\bar{x}$. Then, we define the following vector optimization problem derived from (MPVC), some of the constraints of which depend on the point $\bar{x}$:

$V$-minimize $f(x) := (f_1(x), \ldots, f_p(x))$
subject to $g_j(x) \leqq 0$, $j = 1, \ldots, m$,
$h_s(x) = 0$, $s = 1, \ldots, q$,
$H_t(x) \geqq 0$, $t = 1, \ldots, r$,      (MP($\bar{x}$))
$H_t(x) = 0$, $G_t(x) \leqq 0$, $t \in T_{0+}(\bar{x})$,
$H_t(x) \geqq 0$, $G_t(x) \leqq 0$, $t \in T_{00}(\bar{x}) \cup T_{+0}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x})$,
$x \in C$.
In order to introduce the modified Abadie constraint qualification, for $\bar{x} \in \Omega$, we define the sets $\widetilde{Q}^{l}(\bar{x})$, $l = 1, \ldots, p$, and $\widetilde{Q}(\bar{x})$ as follows:
$$\widetilde{Q}^{l}(\bar{x}) := \{ x \in C : f_i(x) \leqq f_i(\bar{x}), \ \forall i = 1, \ldots, p, \ i \neq l, \ g_j(x) \leqq 0, \ \forall j = 1, \ldots, m, \ h_s(x) = 0, \ \forall s = 1, \ldots, q, \ H_t(x) = 0, \ G_t(x) \leqq 0, \ \forall t \in T_{0+}(\bar{x}), \ H_t(x) \geqq 0, \ G_t(x) \leqq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{+0}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x}) \},$$
$$\widetilde{Q}(\bar{x}) := \{ x \in C : f_i(x) \leqq f_i(\bar{x}), \ \forall i = 1, \ldots, p, \ g_j(x) \leqq 0, \ \forall j = 1, \ldots, m, \ h_s(x) = 0, \ \forall s = 1, \ldots, q, \ H_t(x) = 0, \ G_t(x) \leqq 0, \ \forall t \in T_{0+}(\bar{x}), \ H_t(x) \geqq 0, \ G_t(x) \leqq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{+0}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x}) \}.$$
Then, the almost linearizing cone to the sets $\widetilde{Q}^{l}(\bar{x})$ is defined by
$$L(\widetilde{Q}^{l}(\bar{x}); \bar{x}) := \{ v \in R^n : f_i^{+}(\bar{x}; v) \leqq 0, \ \forall i \in I, \ i \neq l, \ g_j^{+}(\bar{x}; v) \leqq 0, \ \forall j \in J(\bar{x}), \ h_s^{+}(\bar{x}; v) = 0, \ \forall s \in S, \ H_t^{+}(\bar{x}; v) = 0, \ \forall t \in T_{0+}(\bar{x}), \ H_t^{+}(\bar{x}; v) \geqq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \ G_t^{+}(\bar{x}; v) \leqq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{+0}(\bar{x}) \}. \quad (71)$$
Hence, the almost linearizing cone to the set $\widetilde{Q}(\bar{x})$ is given as follows:
$$L(\widetilde{Q}(\bar{x}); \bar{x}) = \bigcap_{l=1}^{p} L(\widetilde{Q}^{l}(\bar{x}); \bar{x}). \quad (72)$$

Remark 3.14 Note that the only difference between $L(Q(\bar{x}); \bar{x})$ and $L(\widetilde{Q}(\bar{x}); \bar{x})$ is that the inequality $G_t^{+}(\bar{x}; v) \leqq 0$, $\forall t \in T_{00}(\bar{x})$, is added in $L(\widetilde{Q}(\bar{x}); \bar{x})$ in comparison to $L(Q(\bar{x}); \bar{x})$. In particular, we always have the relation
$$L(\widetilde{Q}(\bar{x}); \bar{x}) \subset L(Q(\bar{x}); \bar{x}). \quad (73)$$

Proposition 3.15 Let $\bar{x}$ be a feasible solution of (MPVC). Then
$$\bigcap_{l=1}^{p} Z(\widetilde{Q}^{l}(\bar{x}); \bar{x}) \subset L(Q(\bar{x}); \bar{x}). \quad (74)$$

Proof By Proposition 3.8, it follows that
$$\bigcap_{l=1}^{p} Z(Q^{l}(\bar{x}); \bar{x}) \subset L(Q(\bar{x}); \bar{x}). \quad (75)$$
Moreover, as follows from the proof of Proposition 3.8, one has
$$Z(\widetilde{Q}^{l}(\bar{x}); \bar{x}) \subset L(\widetilde{Q}^{l}(\bar{x}); \bar{x}), \ \forall l = 1, \ldots, p. \quad (76)$$
Thus, (76) and (72) yield
$$\bigcap_{l=1}^{p} Z(\widetilde{Q}^{l}(\bar{x}); \bar{x}) \subset \bigcap_{l=1}^{p} L(\widetilde{Q}^{l}(\bar{x}); \bar{x}) = L(\widetilde{Q}(\bar{x}); \bar{x}). \quad (77)$$
Since $\widetilde{Q}^{l}(\bar{x}) \subseteq Q^{l}(\bar{x})$, $l = 1, \ldots, p$, one has
$$Z(\widetilde{Q}^{l}(\bar{x}); \bar{x}) \subseteq Z(Q^{l}(\bar{x}); \bar{x}), \ \forall l = 1, \ldots, p, \quad (78)$$
$$L(\widetilde{Q}(\bar{x}); \bar{x}) \subseteq L(Q(\bar{x}); \bar{x}). \quad (79)$$
Combining (75)-(79), we get (74).
Now, we are ready to introduce the modified Abadie constraint qualification, which we name the VC-Abadie constraint qualification.

Definition 3.16 Let $\bar{x} \in \Omega$ be an efficient solution of (MPVC). Then, the VC-Abadie constraint qualification (VC-ACQ) holds at $\bar{x}$ for (MPVC) iff
$$L(\widetilde{Q}(\bar{x}); \bar{x}) \subseteq \bigcap_{l=1}^{p} Z(Q^{l}(\bar{x}); \bar{x}). \quad (80)$$

Now, we define the Abadie constraint qualification for (MP($\bar{x}$)) and we show that it implies the VC-Abadie constraint qualification (VC-ACQ) at $\bar{x}$ for (MPVC), even in cases in which the Abadie constraint qualification (ACQ) is not satisfied.

Definition 3.17 Let $\bar{x} \in \Omega$ be a (weakly) efficient solution of (MPVC). Then, the modified Abadie constraint qualification (MACQ) holds at $\bar{x}$ for (MP($\bar{x}$)) iff
$$L(\widetilde{Q}(\bar{x}); \bar{x}) \subseteq \bigcap_{l=1}^{p} Z(\widetilde{Q}^{l}(\bar{x}); \bar{x}). \quad (81)$$

We now give a sufficient condition for the VC-Abadie constraint qualification to be satisfied at an efficient solution of (MPVC).

Lemma 3.18 Let $\bar{x} \in \Omega$ be an efficient solution of (MPVC). If the modified Abadie constraint qualification (MACQ) holds at $\bar{x}$ for (MP($\bar{x}$)), then the VC-Abadie constraint qualification (VC-ACQ) holds at $\bar{x}$ for (MPVC).

Proof Assume that $\bar{x} \in \Omega$ is an efficient solution of (MPVC) and, moreover, that the modified Abadie constraint qualification (MACQ) holds at $\bar{x}$ for (MP($\bar{x}$)). Then, by Definition 3.17, it follows that
$$L(\widetilde{Q}(\bar{x}); \bar{x}) \subseteq \bigcap_{l=1}^{p} Z(\widetilde{Q}^{l}(\bar{x}); \bar{x}). \quad (82)$$
Since $\widetilde{Q}^{l}(\bar{x}) \subseteq Q^{l}(\bar{x})$, $l = 1, \ldots, p$, we have that
$$Z(\widetilde{Q}^{l}(\bar{x}); \bar{x}) \subseteq Z(Q^{l}(\bar{x}); \bar{x}), \ l = 1, \ldots, p, \quad (83)$$
$$L(\widetilde{Q}^{l}(\bar{x}); \bar{x}) \subseteq L(Q^{l}(\bar{x}); \bar{x}), \ l = 1, \ldots, p. \quad (84)$$
Hence, (84) implies
$$L(\widetilde{Q}(\bar{x}); \bar{x}) = \bigcap_{l=1}^{p} L(\widetilde{Q}^{l}(\bar{x}); \bar{x}) \subseteq \bigcap_{l=1}^{p} L(Q^{l}(\bar{x}); \bar{x}) = L(Q(\bar{x}); \bar{x}). \quad (85)$$
Then, (83) gives
$$\bigcap_{l=1}^{p} Z(\widetilde{Q}^{l}(\bar{x}); \bar{x}) \subseteq \bigcap_{l=1}^{p} Z(Q^{l}(\bar{x}); \bar{x}). \quad (86)$$
Thus, by (82), (85) and (86), we get
$$L(\widetilde{Q}(\bar{x}); \bar{x}) \subseteq \bigcap_{l=1}^{p} Z(Q^{l}(\bar{x}); \bar{x}),$$
as was to be shown.

Since the VC-Abadie constraint qualification (VC-ACQ) is weaker than the Abadie constraint qualification (ACQ), the necessary optimality conditions (59)-(65) may not hold under (VC-ACQ) alone.
Therefore, in the next theorem, we formulate the Karush–Kuhn–Tucker necessary optimality conditions for a feasible solution $\bar{x}$ to be an efficient solution in (MPVC) under the VC-Abadie constraint qualification (VC-ACQ).

Theorem 3.19 (Karush–Kuhn–Tucker type necessary optimality conditions) Let $\bar{x} \in \Omega$ be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. We also assume that $f_i$, $i \in I$, $g_j$, $j \in J$, $h_s$, $s \in S$, $H_t$, $t \in T$, $G_t$, $t \in T$, are directionally differentiable functions at $\bar{x}$; that $f_i^{+}(\bar{x}; \cdot)$, $i \in I$, $g_j^{+}(\bar{x}; \cdot)$, $j \in J(\bar{x})$, $-H_t^{+}(\bar{x}; \cdot)$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x})$, $G_t^{+}(\bar{x}; \cdot)$, $t \in T_{00}(\bar{x}) \cup T_{+0}(\bar{x})$, are convex functions; that $h_s^{+}(\bar{x}; \cdot)$, $s \in S$, $H_t^{+}(\bar{x}; \cdot)$, $t \in T_{0+}(\bar{x})$, are linear functions; and that $g_j$, $j \in J^{<}(\bar{x})$, $H_t$, $t \in T_{+}(\bar{x})$, $G_t$, $t \in T_{0}(\bar{x}) \cup T_{+-}(\bar{x})$, are continuous functions at $\bar{x}$. Moreover, let the VC-Abadie constraint qualification (VC-ACQ) be satisfied at $\bar{x}$ for (MPVC). If there exists $v_0 \in \operatorname{relint} Z(C; \bar{x})$ such that $\Phi(v_0) < 0$ and $\Psi(v_0) \le 0$ (where $\Phi$ and $\Psi$ denote the functions from the nonlinear version of the Gordan theorem of the alternative used above), then there exist Lagrange multipliers $\bar\lambda \in \mathbb{R}^{p}$, $\bar\mu \in \mathbb{R}^{m}$, $\bar\xi \in \mathbb{R}^{q}$, $\bar\vartheta^{H} \in \mathbb{R}^{r}$ and $\bar\vartheta^{G} \in \mathbb{R}^{r}$ such that the following conditions hold:

$$\sum_{i=1}^{p} \bar\lambda_i f_i^{+}(\bar{x}; v) + \sum_{j=1}^{m} \bar\mu_j g_j^{+}(\bar{x}; v) + \sum_{s=1}^{q} \bar\xi_s h_s^{+}(\bar{x}; v) - \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t^{+}(\bar{x}; v) + \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t^{+}(\bar{x}; v) \ge 0,\quad \forall v \in Z(C; \bar{x}), \tag{87}$$

$$\bar\mu_j g_j(\bar{x}) = 0,\quad j \in J, \tag{88}$$

$$\bar\vartheta_t^{H} H_t(\bar{x}) = 0,\quad t \in T, \tag{89}$$

$$\bar\vartheta_t^{G} G_t(\bar{x}) = 0,\quad t \in T, \tag{90}$$

$$\bar\lambda \ge 0,\ \bar\lambda \ne 0,\quad \bar\mu \ge 0, \tag{91}$$

$$\bar\vartheta_t^{H} = 0,\ t \in T_{+}(\bar{x}),\quad \bar\vartheta_t^{H} \ge 0,\ t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}),\quad \bar\vartheta_t^{H} \text{ free},\ t \in T_{0+}(\bar{x}), \tag{92}$$

$$\bar\vartheta_t^{G} = 0,\ t \in T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x}),\quad \bar\vartheta_t^{G} \ge 0,\ t \in T_{00}(\bar{x}) \cup T_{+0}(\bar{x}). \tag{93}$$

Now, we prove the sufficiency of the Karush–Kuhn–Tucker optimality conditions for the considered multiobjective programming problem (MPVC) with vanishing constraints under appropriate convexity hypotheses.
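The sign and complementarity restrictions (88)–(93) on the multipliers are purely combinatorial, so a candidate multiplier vector can be screened mechanically once $H_t(\bar{x})$ and $G_t(\bar{x})$ are known. A minimal sketch of such a screen for the $H$- and $G$-multipliers follows (the helper and its name are our own illustration, not part of the paper):

```python
def check_vc_multiplier_signs(theta_H, theta_G, H_vals, G_vals, tol=1e-12):
    """Check conditions (89)-(90) and (92)-(93) for given multipliers.

    theta_H, theta_G : multipliers for the H_t and G_t constraints,
    H_vals, G_vals   : values H_t(xbar), G_t(xbar) at the candidate point.
    """
    for tH, tG, H, G in zip(theta_H, theta_G, H_vals, G_vals):
        h_zero, g_zero = abs(H) <= tol, abs(G) <= tol
        # complementarity (89), (90)
        if abs(tH * H) > tol or abs(tG * G) > tol:
            return False
        # (92): theta_H = 0 on T_+, >= 0 on T_00 and T_0-, free on T_0+
        if not h_zero and abs(tH) > tol:
            return False
        if h_zero and G <= tol and tH < -tol:
            return False
        # (93): theta_G >= 0 on T_00 and T_+0, = 0 on the remaining sets
        if g_zero and tG < -tol:
            return False
        if not g_zero and abs(tG) > tol:
            return False
    return True
```

A candidate such as $\bar\vartheta^H_1 = \bar\vartheta^G_1 = 1/4$ at an index in $T_{00}(\bar{x})$ passes this screen, while any negative $\bar\vartheta^H_t$ on $T_{00}(\bar{x}) \cup T_{0-}(\bar{x})$ is rejected.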
Theorem 3.20 Let $\bar{x}$ be a feasible solution in (MPVC) at which the Karush–Kuhn–Tucker type necessary optimality conditions (59)–(65) are satisfied with Lagrange multipliers $\bar\lambda \in \mathbb{R}^{p}$, $\bar\mu \in \mathbb{R}^{m}$, $\bar\xi \in \mathbb{R}^{q}$, $\bar\vartheta^{H} \in \mathbb{R}^{r}$ and $\bar\vartheta^{G} \in \mathbb{R}^{r}$. Further, we assume that $f_i$, $i \in I$, $g_j$, $j \in J(\bar{x})$, $h_s$, $s \in S^{+}(\bar{x}) := \{s \in S : \bar\xi_s > 0\}$, $-h_s$, $s \in S^{-}(\bar{x}) := \{s \in S : \bar\xi_s < 0\}$, $-H_t$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x})$, and $G_t$, $t \in T_{+0}(\bar{x})$, are convex on $\Omega$. Then $\bar{x}$ is a weak Pareto solution in (MPVC).

Proof We proceed by contradiction. Suppose, contrary to the result, that $\bar{x}$ is not a weak Pareto solution in (MPVC). Thus, by Definition 3.1, there exists $\tilde{x} \in \Omega$ such that

$$f(\tilde{x}) < f(\bar{x}). \tag{94}$$

By assumption, each $f_i$ is convex at $\bar{x}$ on $\Omega$. Hence, by Proposition 2.6, (94) yields

$$f_i^{+}(\bar{x}; \tilde{x} - \bar{x}) < 0,\quad i = 1, \ldots, p. \tag{95}$$

Since $\bar\lambda \ge 0$, $\bar\lambda \ne 0$, the inequalities (95) give

$$\sum_{i=1}^{p} \bar\lambda_i f_i^{+}(\bar{x}; \tilde{x} - \bar{x}) < 0. \tag{96}$$

From $\tilde{x}, \bar{x} \in \Omega$ and the definition of $J(\bar{x})$, it follows that

$$g_j(\tilde{x}) \le g_j(\bar{x}) = 0,\quad j \in J(\bar{x}), \tag{97}$$

$$h_s(\tilde{x}) = h_s(\bar{x}) = 0,\quad s \in S, \tag{98}$$

$$-H_t(\tilde{x}) \le -H_t(\bar{x}) = 0,\quad t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x}), \tag{99}$$

$$G_t(\tilde{x}) \le G_t(\bar{x}) = 0,\quad t \in T_{+0}(\bar{x}). \tag{100}$$

By assumption, $g_j$, $j \in J(\bar{x})$, $h_s$, $s \in S^{+}(\bar{x})$, $-h_s$, $s \in S^{-}(\bar{x})$, $-H_t$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x})$, $G_t$, $t \in T_{+0}(\bar{x})$, are convex on $\Omega$. Then, by Proposition 2.6, (97)–(100) imply, respectively,

$$g_j^{+}(\bar{x}; \tilde{x} - \bar{x}) \le 0,\quad j \in J(\bar{x}), \tag{101}$$

$$h_s^{+}(\bar{x}; \tilde{x} - \bar{x}) \le 0,\quad s \in S^{+}(\bar{x}), \tag{102}$$

$$-h_s^{+}(\bar{x}; \tilde{x} - \bar{x}) \le 0,\quad s \in S^{-}(\bar{x}), \tag{103}$$

$$H_t^{+}(\bar{x}; \tilde{x} - \bar{x}) \ge 0,\quad t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x}), \tag{104}$$

$$G_t^{+}(\bar{x}; \tilde{x} - \bar{x}) \le 0,\quad t \in T_{+0}(\bar{x}). \tag{105}$$

Taking into account that $\bar\mu_j = 0$, $j \in J^{<}(\bar{x})$, $\bar\xi_s = 0$, $s \notin S^{+}(\bar{x}) \cup S^{-}(\bar{x})$, $\bar\vartheta_t^{H} = 0$, $t \in T_{+}(\bar{x})$, $\bar\vartheta_t^{G} = 0$, $t \in T_{00}(\bar{x}) \cup T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x})$, the foregoing inequalities yield, respectively,

$$\sum_{j=1}^{m} \bar\mu_j g_j^{+}(\bar{x}; \tilde{x} - \bar{x}) \le 0, \tag{106}$$

$$\sum_{s=1}^{q} \bar\xi_s h_s^{+}(\bar{x}; \tilde{x} - \bar{x}) \le 0, \tag{107}$$

$$\sum_{t=1}^{r} \bar\vartheta_t^{H} H_t^{+}(\bar{x}; \tilde{x} - \bar{x}) \ge 0, \tag{108}$$

$$\sum_{t=1}^{r} \bar\vartheta_t^{G} G_t^{+}(\bar{x}; \tilde{x} - \bar{x}) \le 0. \tag{109}$$

Combining (96) and (106)–(109), we get that the inequality

$$\sum_{i=1}^{p} \bar\lambda_i f_i^{+}(\bar{x}; \tilde{x} - \bar{x}) + \sum_{j=1}^{m} \bar\mu_j g_j^{+}(\bar{x}; \tilde{x} - \bar{x}) + \sum_{s=1}^{q} \bar\xi_s h_s^{+}(\bar{x}; \tilde{x} - \bar{x}) - \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t^{+}(\bar{x}; \tilde{x} - \bar{x}) + \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t^{+}(\bar{x}; \tilde{x} - \bar{x}) < 0$$

holds, contradicting the Karush–Kuhn–Tucker type necessary optimality condition (59). This means that $\bar{x}$ is a weak Pareto solution in (MPVC). □

In order to prove the sufficient optimality conditions for a feasible solution $\bar{x}$ to be a Pareto solution in (MPVC), stronger convexity assumptions need to be imposed on the objective functions.

Theorem 3.21 Let $\bar{x}$ be a feasible solution in (MPVC) at which the Karush–Kuhn–Tucker type necessary optimality conditions (59)–(65) are satisfied with Lagrange multipliers $\bar\lambda \in \mathbb{R}^{p}$, $\bar\mu \in \mathbb{R}^{m}$, $\bar\xi \in \mathbb{R}^{q}$, $\bar\vartheta^{H} \in \mathbb{R}^{r}$ and $\bar\vartheta^{G} \in \mathbb{R}^{r}$. Further, we assume that $f_i$, $i \in I$, are strictly convex on $\Omega$, and that $g_j$, $j \in J(\bar{x})$, $h_s$, $s \in S^{+}(\bar{x}) := \{s \in S : \bar\xi_s > 0\}$, $-h_s$, $s \in S^{-}(\bar{x}) := \{s \in S : \bar\xi_s < 0\}$, $-H_t$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x})$, $G_t$, $t \in T_{+0}(\bar{x})$, are convex on $\Omega$. Then $\bar{x}$ is a Pareto solution in (MPVC).

Remark 3.22 In Theorem 3.21, all objective functions $f_i$, $i \in I$, are assumed to be strictly convex on $\Omega$ in order to prove that $\bar{x} \in \Omega$ is a Pareto solution in (MPVC). However, as follows from the proof of that theorem, it is sufficient to assume that at least one objective function $f_i$, $i \in I$, is strictly convex on $\Omega$, provided that the Lagrange multiplier $\bar\lambda_i$ associated with this objective function is greater than 0.

Remark 3.23 If $\bar{x}$ is a feasible solution at which the Karush–Kuhn–Tucker type necessary optimality conditions (87)–(93) hold in place of (59)–(65), then the functions $G_t$, $t \in T_{00}(\bar{x})$, should also be assumed to be convex on $\Omega$ in the sufficient optimality conditions.

Now, we illustrate the results established in this paper by an example of a convex directionally differentiable multiobjective programming problem with vanishing constraints.
Example 3.24 Consider the directionally differentiable multiobjective programming problem with vanishing constraints defined by

$$\begin{array}{ll} V\text{-minimize} & f(x) = (f_1(x), f_2(x)) = (|x_1| - x_2,\ x_1 + |x_2|) \\ \text{subject to} & H(x) = x_2 \ge 0, \\ & H(x)\,G(x) = x_2(-x_1 - x_2) \le 0. \end{array} \qquad \text{(MPVC1)}$$

Note that $\Omega = \{(x_1, x_2) \in \mathbb{R}^{2} : x_2 \ge 0,\ x_2(-x_1 - x_2) \le 0\}$, that $\bar{x} = (0, 0)$ is a feasible solution in (MPVC1), and that $T_{00}(\bar{x}) = \{1\}$. Now, we define the sets $Q^{1}(\bar{x})$, $Q^{2}(\bar{x})$, $Q(\bar{x})$, $\widetilde{Q}(\bar{x})$. By definition, we have

$$Q^{1}(\bar{x}) = \{(x_1, x_2) \in \mathbb{R}^{2} : x_1 + |x_2| \le 0,\ x_2 \ge 0,\ x_2(-x_1 - x_2) \le 0\},$$

$$Q^{2}(\bar{x}) = \{(x_1, x_2) \in \mathbb{R}^{2} : |x_1| - x_2 \le 0,\ x_2 \ge 0,\ x_2(-x_1 - x_2) \le 0\},$$

$$Q(\bar{x}) = \{(x_1, x_2) \in \mathbb{R}^{2} : x_1 + |x_2| \le 0,\ |x_1| - x_2 \le 0,\ x_2 \ge 0,\ x_2(-x_1 - x_2) \le 0\},$$

$$\widetilde{Q}(\bar{x}) = \{(x_1, x_2) \in \mathbb{R}^{2} : x_1 + |x_2| \le 0,\ |x_1| - x_2 \le 0,\ x_2 \ge 0,\ -x_1 - x_2 \le 0\}.$$

Further, by Definition 2.9 and the definition of the almost linearizing cone (see (5), (10)), we have, respectively,

$$Z\big(Q^{1}(\bar{x}); \bar{x}\big) = \{(v_1, v_2) \in \mathbb{R}^{2} : v_1 + |v_2| \le 0,\ v_2 \ge 0,\ v_2(-v_1 - v_2) \le 0\},$$

$$Z\big(Q^{2}(\bar{x}); \bar{x}\big) = \{(v_1, v_2) \in \mathbb{R}^{2} : |v_1| - v_2 \le 0,\ v_2 \ge 0,\ v_2(-v_1 - v_2) \le 0\},$$

$$\mathcal{L}\big(Q(\bar{x}); \bar{x}\big) = \{(v_1, v_2) \in \mathbb{R}^{2} : v_1 + |v_2| \le 0,\ |v_1| - v_2 \le 0,\ v_2 \ge 0\},$$

$$\mathcal{L}\big(\widetilde{Q}(\bar{x}); \bar{x}\big) = \{(v_1, v_2) \in \mathbb{R}^{2} : v_1 + |v_2| \le 0,\ |v_1| - v_2 \le 0,\ v_2 \ge 0,\ -v_1 - v_2 \le 0\}.$$

Note that the Abadie constraint qualification (ACQ) is not satisfied at $\bar{x} = (0, 0)$ for (MPVC1), since the relation $\mathcal{L}(Q(\bar{x}); \bar{x}) \subset \bigcap_{l=1}^{2} Z(Q^{l}(\bar{x}); \bar{x})$ does not hold. But the VC-Abadie constraint qualification (VC-ACQ) holds at $\bar{x} = (0, 0)$ for (MPVC1), since the relation $\mathcal{L}(\widetilde{Q}(\bar{x}); \bar{x}) \subset \bigcap_{l=1}^{2} Z(Q^{l}(\bar{x}); \bar{x})$ is satisfied. As even this example shows, the VC-Abadie constraint qualification (VC-ACQ) is weaker than the Abadie constraint qualification (ACQ). Moreover, the Karush–Kuhn–Tucker type necessary optimality conditions (87)–(93) are fulfilled at $\bar{x}$ with the Lagrange multipliers $\bar\lambda_1 = \frac{1}{2}$, $\bar\lambda_2 = \frac{1}{4}$, $\bar\vartheta_1^{H} = \frac{1}{4}$, $\bar\vartheta_1^{G} = \frac{1}{4}$. Further, note that the functions constituting (MPVC1) are convex on $\Omega$ and that the objective function $f_1$ is strictly convex on $\Omega$. Hence, by Theorem 3.21, $\bar{x} = (0, 0)$ is a Pareto solution in (MPVC1).

Note that the optimality conditions established in the literature (see, for example, Achtziger et al., 2013; Dorsch et al., 2012; Dussault et al., 2019; Hoheisel & Kanzow, 2007, 2008, 2009; Hoheisel et al., 2012; Izmailov & Solodov, 2009) are not applicable to the considered multiobjective programming problem (MPVC1) with vanishing constraints, since the results established in the above-mentioned works have been proved for scalar optimization problems with vanishing constraints. Moreover, the results presented in Guu et al. (2017) and Mishra et al. (2015) have been established for differentiable multiobjective programming problems with vanishing constraints only and, therefore, they are not useful for finding (weak) Pareto solutions in such nondifferentiable vector optimization problems as the directionally differentiable multiobjective programming problem (MPVC1) with vanishing constraints.

4 Wolfe duality

In this section, for the considered vector optimization problem (MPVC) with vanishing constraints, we define its vector Wolfe dual problem. Then we prove several duality results between problems (MPVC) and (WDVC) under convexity assumptions imposed on the functions constituting them.

We now define the vector-valued Lagrange function $L$ for (MPVC) as follows:

$$L\big(y, \mu, \xi, \vartheta^{H}, \vartheta^{G}\big) := \big(f_1(y), \ldots, f_p(y)\big) + \Bigg( \sum_{j=1}^{m} \mu_j g_j(y) + \sum_{s=1}^{q} \xi_s h_s(y) - \sum_{t=1}^{r} \vartheta_t^{H} H_t(y) + \sum_{t=1}^{r} \vartheta_t^{G} G_t(y) \Bigg) e,$$

where $e = [1, \ldots, 1]^{T} \in \mathbb{R}^{p}$. Then, we re-write the above definition of the vector-valued Lagrange function $L$ componentwise as follows:

$$L\big(y, \mu, \xi, \vartheta^{H}, \vartheta^{G}\big) := \big(L_1\big(y, \mu, \xi, \vartheta^{H}, \vartheta^{G}\big), \ldots, L_p\big(y, \mu, \xi, \vartheta^{H}, \vartheta^{G}\big)\big)$$

$$:= \Bigg( f_1(y) + \sum_{j=1}^{m} \mu_j g_j(y) + \sum_{s=1}^{q} \xi_s h_s(y) - \sum_{t=1}^{r} \vartheta_t^{H} H_t(y) + \sum_{t=1}^{r} \vartheta_t^{G} G_t(y),\ \ldots,\ f_p(y) + \sum_{j=1}^{m} \mu_j g_j(y) + \sum_{s=1}^{q} \xi_s h_s(y) - \sum_{t=1}^{r} \vartheta_t^{H} H_t(y) + \sum_{t=1}^{r} \vartheta_t^{G} G_t(y) \Bigg).$$
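Because every component of $L$ is shifted by the same scalar penalty term (the vector $e$ above), the vector-valued Lagrange function is cheap to evaluate numerically. The sketch below is our own illustration, not part of the paper; it assembles $L$ from the constraint data and confirms, on the data of Example 3.24, that the penalty vanishes at a point where the complementarity conditions (88)–(90) hold:

```python
def vector_lagrange(f_vals, g_vals, h_vals, H_vals, G_vals,
                    mu, xi, theta_H, theta_G):
    """Vector-valued Wolfe Lagrange function: each component f_i(y)
    is shifted by one and the same scalar penalty term."""
    penalty = (sum(m * g for m, g in zip(mu, g_vals))
               + sum(x * h for x, h in zip(xi, h_vals))
               - sum(tH * H for tH, H in zip(theta_H, H_vals))
               + sum(tG * G for tG, G in zip(theta_G, G_vals)))
    return [fi + penalty for fi in f_vals]

# Data of (MPVC1) at y = xbar = (0, 0): f = (0, 0), H = 0, G = 0,
# and there are no g or h constraints.
L = vector_lagrange([0.0, 0.0], [], [], [0.0], [0.0],
                    mu=[], xi=[], theta_H=[0.25], theta_G=[0.25])
# The penalty term is zero by complementarity, so L equals f(xbar) = (0, 0).
```

When the multipliers satisfy (88)–(90) at $\bar{x}$, the penalty term is zero and $L(\bar{x}, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}) = f(\bar{x})$; this is exactly the equality exploited in the strong duality argument of Section 4.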
For $x \in \Omega$, we define the following vector Wolfe dual problem related to the considered multiobjective programming problem (MPVC) with vanishing constraints:

$$\begin{array}{ll} V\text{-maximize} & L\big(y, \mu, \xi, \vartheta^{H}, \vartheta^{G}\big) \\[2pt] \text{subject to} & \displaystyle\sum_{i=1}^{p} \lambda_i f_i^{+}(y; x - y) + \sum_{j=1}^{m} \mu_j g_j^{+}(y; x - y) + \sum_{s=1}^{q} \xi_s h_s^{+}(y; x - y) \\ & \displaystyle\quad - \sum_{t=1}^{r} \vartheta_t^{H} H_t^{+}(y; x - y) + \sum_{t=1}^{r} \vartheta_t^{G} G_t^{+}(y; x - y) \ge 0, \\[2pt] & \lambda \ge 0,\quad \displaystyle\sum_{i=1}^{p} \lambda_i = 1,\quad \mu \ge 0, \\[2pt] & \vartheta_t^{H} = \theta_t - w_t G_t(x),\quad \theta_t \ge 0,\quad t = 1, \ldots, r, \\ & \vartheta_t^{G} = w_t H_t(x),\quad w_t \ge 0,\quad t = 1, \ldots, r. \end{array} \qquad (\text{WDVC}(x))$$

Let

$$\Delta(x) = \big\{\big(y, \lambda, \mu, \xi, \vartheta^{H}, \vartheta^{G}, w, \theta\big) : \text{verifying the constraints of (WDVC}(x))\big\}$$

be the set of all feasible solutions in (WDVC($x$)). Further, we define the set $Y(x)$ as follows: $Y(x) = \{y \in X : (y, \lambda, \mu, \xi, \vartheta^{H}, \vartheta^{G}, w, \theta) \in \Delta(x)\}$, and $J^{+}(x) := \{j \in J : \mu_j > 0\}$.

Remark 4.1 In the Wolfe dual problem (WDVC($x$)) given above, the significance of $w_t$ and $\theta_t$ is the same as that of $v_t$ and $\beta_t$ in Theorem 1 of Achtziger and Kanzow (2008).

Now, along the lines of Hu et al. (2020), we define the following vector dual problem in the sense of Wolfe related to the considered multicriteria optimization problem (MPVC) with vanishing constraints:

$$\begin{array}{ll} V\text{-maximize} & L\big(y, \mu, \xi, \vartheta^{H}, \vartheta^{G}\big) \\ \text{subject to} & \big(y, \lambda, \mu, \xi, \vartheta^{H}, \vartheta^{G}, w, \theta\big) \in \Delta, \end{array} \qquad (\text{WDVC})$$

where the set of all feasible solutions in (WDVC) is defined by $\Delta = \bigcap_{x \in \Omega} \Delta(x)$. Further, let us define the set $Y$ by $Y = \{y \in X : (y, \lambda, \mu, \xi, \vartheta^{H}, \vartheta^{G}, w, \theta) \in \Delta\}$.

Theorem 4.2 (Weak duality) Let $x$ and $(y, \lambda, \mu, \xi, \vartheta^{H}, \vartheta^{G}, w, \theta)$ be any feasible solutions in (MPVC) and (WDVC), respectively. Further, we assume that one of the following hypotheses is fulfilled:

A) $f_i$, $i = 1, \ldots, p$, $g_j$, $j \in J^{+}(x)$, $h_s$, $s \in S^{+}(x)$, $-h_s$, $s \in S^{-}(x)$, $-H_t$, $t \in T_{00}(x) \cup T_{0-}(x) \cup T_{0+}^{+}(x)$, where $T_{0+}^{+}(x) := \{t \in T_{0+}(x) : \vartheta_t^{H} > 0\}$, $H_t$, $t \in T_{0+}^{-}(x) := \{t \in T_{0+}(x) : \vartheta_t^{H} < 0\}$, $G_t$, $t \in T_{+0}(x)$, are convex on $\Omega \cup Y$;

B) the vectorial Lagrange function $L(\cdot, \mu, \xi, \vartheta^{H}, \vartheta^{G})$ is convex on $\Omega \cup Y$.

Then $f(x) \nless L(y, \mu, \xi, \vartheta^{H}, \vartheta^{G})$.

Proof We proceed by contradiction.
Suppose, contrary to the result, that $f(x) < L(y, \mu, \xi, \vartheta^{H}, \vartheta^{G})$. Hence, by the definition of the Lagrange function $L$, the aforesaid inequality gives

$$f_i(x) < f_i(y) + \sum_{j=1}^{m} \mu_j g_j(y) + \sum_{s=1}^{q} \xi_s h_s(y) - \sum_{t=1}^{r} \vartheta_t^{H} H_t(y) + \sum_{t=1}^{r} \vartheta_t^{G} G_t(y),\quad i = 1, \ldots, p. \tag{110}$$

Thus, by $(y, \lambda, \mu, \xi, \vartheta^{H}, \vartheta^{G}, w, \theta) \in \Delta$, it follows that

$$\sum_{i=1}^{p} \lambda_i f_i(x) < \sum_{i=1}^{p} \lambda_i f_i(y) + \sum_{j=1}^{m} \mu_j g_j(y) + \sum_{s=1}^{q} \xi_s h_s(y) - \sum_{t=1}^{r} \vartheta_t^{H} H_t(y) + \sum_{t=1}^{r} \vartheta_t^{G} G_t(y). \tag{111}$$

A) We first prove the theorem under hypothesis A). From the convexity assumptions, by Proposition 2.6, the inequalities

$$f_i(x) - f_i(y) \ge f_i^{+}(y; x - y),\quad i = 1, \ldots, p, \tag{112}$$

$$0 \ge g_j(x) \ge g_j(y) + g_j^{+}(y; x - y),\quad j \in J^{+}(x), \tag{113}$$

$$0 = h_s(x) \ge h_s(y) + h_s^{+}(y; x - y),\quad s \in S^{+}(x), \tag{114}$$

$$0 = -h_s(x) \ge -h_s(y) - h_s^{+}(y; x - y),\quad s \in S^{-}(x), \tag{115}$$

$$0 = -H_t(x) \ge -H_t(y) - H_t^{+}(y; x - y),\quad t \in T_{00}(x) \cup T_{0-}(x) \cup T_{0+}^{+}(x), \tag{116}$$

$$0 = H_t(x) \ge H_t(y) + H_t^{+}(y; x - y),\quad t \in T_{0+}^{-}(x), \tag{117}$$

$$0 = G_t(x) \ge G_t(y) + G_t^{+}(y; x - y),\quad t \in T_{+0}(x) \tag{118}$$

hold. Multiplying (112)–(118) by the corresponding Lagrange multipliers and then adding both sides of the resulting inequalities, we have, respectively,

$$\sum_{i=1}^{p} \lambda_i f_i(x) \ge \sum_{i=1}^{p} \lambda_i f_i(y) + \sum_{i=1}^{p} \lambda_i f_i^{+}(y; x - y), \tag{119}$$

$$0 \ge \sum_{j \in J^{+}(x)} \mu_j g_j(y) + \sum_{j \in J^{+}(x)} \mu_j g_j^{+}(y; x - y), \tag{120}$$

$$0 \ge \sum_{s \in S^{+}(x)} \xi_s h_s(y) + \sum_{s \in S^{+}(x)} \xi_s h_s^{+}(y; x - y), \tag{121}$$

$$0 \ge -\sum_{s \in S^{-}(x)} (-\xi_s) h_s(y) - \sum_{s \in S^{-}(x)} (-\xi_s) h_s^{+}(y; x - y), \tag{122}$$

$$0 \ge -\sum_{t \in T_{00}(x) \cup T_{0-}(x) \cup T_{0+}^{+}(x)} \vartheta_t^{H} H_t(y) - \sum_{t \in T_{00}(x) \cup T_{0-}(x) \cup T_{0+}^{+}(x)} \vartheta_t^{H} H_t^{+}(y; x - y), \tag{123}$$

$$0 \ge -\sum_{t \in T_{0+}^{-}(x)} \vartheta_t^{H} H_t(y) + \sum_{t \in T_{0+}^{-}(x)} \big(-\vartheta_t^{H}\big) H_t^{+}(y; x - y), \tag{124}$$

$$0 \ge \sum_{t \in T_{+0}(x)} \vartheta_t^{G} G_t(y) + \sum_{t \in T_{+0}(x)} \vartheta_t^{G} G_t^{+}(y; x - y). \tag{125}$$

Taking into account the Lagrange multipliers equal to 0, by (119)–(125), we obtain that the inequality

$$\sum_{i=1}^{p} \lambda_i f_i(x) \ge \sum_{i=1}^{p} \lambda_i f_i(y) + \sum_{j=1}^{m} \mu_j g_j(y) + \sum_{s=1}^{q} \xi_s h_s(y) - \sum_{t=1}^{r} \vartheta_t^{H} H_t(y) + \sum_{t=1}^{r} \vartheta_t^{G} G_t(y) + \sum_{i=1}^{p} \lambda_i f_i^{+}(y; x - y) + \sum_{j=1}^{m} \mu_j g_j^{+}(y; x - y) + \sum_{s=1}^{q} \xi_s h_s^{+}(y; x - y) - \sum_{t=1}^{r} \vartheta_t^{H} H_t^{+}(y; x - y) + \sum_{t=1}^{r} \vartheta_t^{G} G_t^{+}(y; x - y) \tag{126}$$

holds. By (111) and (126), we get that the inequality

$$\sum_{i=1}^{p} \lambda_i f_i^{+}(y; x - y) + \sum_{j=1}^{m} \mu_j g_j^{+}(y; x - y) + \sum_{s=1}^{q} \xi_s h_s^{+}(y; x - y) - \sum_{t=1}^{r} \vartheta_t^{H} H_t^{+}(y; x - y) + \sum_{t=1}^{r} \vartheta_t^{G} G_t^{+}(y; x - y) < 0$$

holds, which contradicts the first constraint of (WDVC).

B) Now, we prove the theorem under hypothesis B). From $x \in \Omega$ and $(y, \lambda, \mu, \xi, \vartheta^{H}, \vartheta^{G}, w, \theta) \in \Delta$, it follows that

$$g_j(x) = 0,\ \mu_j \ge 0,\quad j \in J(x), \tag{127}$$

$$g_j(x) < 0,\ \mu_j \ge 0,\quad j \notin J(x), \tag{128}$$

$$h_s(x) = 0,\ \xi_s \in \mathbb{R},\quad s \in S, \tag{129}$$

$$-H_t(x) = 0,\ \vartheta_t^{H} \in \mathbb{R},\quad t \in T_{0}(x), \tag{130}$$

$$-H_t(x) < 0,\ \vartheta_t^{H} \ge 0,\quad t \in T_{+}(x), \tag{131}$$

$$G_t(x) = 0,\ \vartheta_t^{G} \ge 0,\quad t \in T_{00}(x) \cup T_{+0}(x), \tag{132}$$

$$\vartheta_t^{G} G_t(x) \le 0,\quad t \in T_{0+}(x) \cup T_{0-}(x) \cup T_{+-}(x). \tag{133}$$

By (127)–(133), we obtain

$$\sum_{j=1}^{m} \mu_j g_j(x) + \sum_{s=1}^{q} \xi_s h_s(x) - \sum_{t=1}^{r} \vartheta_t^{H} H_t(x) + \sum_{t=1}^{r} \vartheta_t^{G} G_t(x) \le 0. \tag{134}$$

Since (110) is fulfilled, by (134), we get

$$f_i(x) + \sum_{j=1}^{m} \mu_j g_j(x) + \sum_{s=1}^{q} \xi_s h_s(x) - \sum_{t=1}^{r} \vartheta_t^{H} H_t(x) + \sum_{t=1}^{r} \vartheta_t^{G} G_t(x) < L_i\big(y, \mu, \xi, \vartheta^{H}, \vartheta^{G}\big),\quad i = 1, \ldots, p.$$

Then, by the definition of the vector-valued Lagrange function $L$, it follows that

$$L_i\big(x, \mu, \xi, \vartheta^{H}, \vartheta^{G}\big) < L_i\big(y, \mu, \xi, \vartheta^{H}, \vartheta^{G}\big),\quad i = 1, \ldots, p. \tag{135}$$

By hypothesis B), the vector-valued Lagrange function $L(\cdot, \mu, \xi, \vartheta^{H}, \vartheta^{G})$ is directionally differentiable and convex on $\Omega \cup Y$. Then, by Proposition 2.6, the inequalities

$$L_i\big(x, \mu, \xi, \vartheta^{H}, \vartheta^{G}\big) - L_i\big(y, \mu, \xi, \vartheta^{H}, \vartheta^{G}\big) \ge L_i^{+}\big(y, \mu, \xi, \vartheta^{H}, \vartheta^{G}; x - y\big),\quad i = 1, \ldots, p \tag{136}$$

are satisfied. Combining (135) and (136), we obtain

$$L_i^{+}\big(y, \mu, \xi, \vartheta^{H}, \vartheta^{G}; x - y\big) < 0,\quad i = 1, \ldots, p. \tag{137}$$

Multiplying the inequalities (137) by the corresponding Lagrange multipliers $\lambda_i$, $i = 1, \ldots, p$, we have

$$\sum_{i=1}^{p} \lambda_i L_i^{+}\big(y, \mu, \xi, \vartheta^{H}, \vartheta^{G}; x - y\big) < 0.$$

Then, by the definition of the vector-valued Lagrange function $L$, one has

$$\sum_{i=1}^{p} \lambda_i f_i^{+}(y; x - y) + \sum_{i=1}^{p} \lambda_i \Bigg[ \sum_{j=1}^{m} \mu_j g_j^{+}(y; x - y) + \sum_{s=1}^{q} \xi_s h_s^{+}(y; x - y) - \sum_{t=1}^{r} \vartheta_t^{H} H_t^{+}(y; x - y) + \sum_{t=1}^{r} \vartheta_t^{G} G_t^{+}(y; x - y) \Bigg] < 0. \tag{138}$$

By $(y, \lambda, \mu, \xi, \vartheta^{H}, \vartheta^{G}, w, \theta) \in \Delta$, it follows that $\sum_{i=1}^{p} \lambda_i = 1$. Thus, (138) implies that the inequality

$$\sum_{i=1}^{p} \lambda_i f_i^{+}(y; x - y) + \sum_{j=1}^{m} \mu_j g_j^{+}(y; x - y) + \sum_{s=1}^{q} \xi_s h_s^{+}(y; x - y) - \sum_{t=1}^{r} \vartheta_t^{H} H_t^{+}(y; x - y) + \sum_{t=1}^{r} \vartheta_t^{G} G_t^{+}(y; x - y) < 0$$

holds, which contradicts the first constraint of (WDVC). This completes the proof of the theorem under both hypothesis A) and hypothesis B). □

If stronger assumptions are imposed on the functions constituting (MPVC), then the following result is true:

Theorem 4.3 (Weak duality) Let $x$ and $(y, \lambda, \mu, \xi, \vartheta^{H}, \vartheta^{G}, w, \theta)$ be any feasible solutions in (MPVC) and (WDVC), respectively. Further, we assume that one of the following hypotheses is fulfilled:

A) $f_i$, $i = 1, \ldots, p$, are strictly convex on $\Omega \cup Y$, and $g_j$, $j \in J^{+}(x)$, $h_s$, $s \in S^{+}(x)$, $-h_s$, $s \in S^{-}(x)$, $-H_t$, $t \in T_{00}(x) \cup T_{0-}(x) \cup T_{0+}^{+}(x)$, $H_t$, $t \in T_{0+}^{-}(x)$, $G_t$, $t \in T_{+0}(x)$, are convex on $\Omega \cup Y$;

B) the vector-valued Lagrange function $L(\cdot, \mu, \xi, \vartheta^{H}, \vartheta^{G})$ is strictly convex on $\Omega \cup Y$.

Then $f(x) \nleq L(y, \mu, \xi, \vartheta^{H}, \vartheta^{G})$.

Theorem 4.4 (Strong duality) Let $\bar{x} \in \Omega$ be a Pareto solution (a weak Pareto solution) in (MPVC) and the Abadie constraint qualification be satisfied at $\bar{x}$. Then, there exist Lagrange multipliers $\bar\lambda \in \mathbb{R}^{p}$, $\bar\mu \in \mathbb{R}^{m}$, $\bar\xi \in \mathbb{R}^{q}$, $\bar\vartheta^{H} \in \mathbb{R}^{r}$, $\bar\vartheta^{G} \in \mathbb{R}^{r}$ and $\bar{w} \in \mathbb{R}^{r}$, $\bar\theta \in \mathbb{R}^{r}$ such that $(\bar{x}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$ is feasible in (WDVC). If, in addition, all hypotheses of the weak duality theorem, Theorem 4.3 (respectively, Theorem 4.2), are satisfied, then $(\bar{x}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$ is an efficient solution (a weakly efficient solution) of a maximum type in (WDVC).
Proof By assumption, $\bar{x}$ is a Pareto solution in (MPVC) and the Abadie constraint qualification is satisfied at $\bar{x}$. Then, there exist Lagrange multipliers $\bar\lambda \in \mathbb{R}^{p}$, $\bar\mu \in \mathbb{R}^{m}$, $\bar\xi \in \mathbb{R}^{q}$, $\bar\vartheta^{H} \in \mathbb{R}^{r}$, $\bar\vartheta^{G} \in \mathbb{R}^{r}$ such that the Karush–Kuhn–Tucker necessary optimality conditions are fulfilled. Hence, we conclude that $(\bar{x}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$, where $\bar{w}_t$ and $\bar\theta_t$ satisfy the conditions

$$\bar\vartheta_t^{H} = \bar\theta_t - \bar{w}_t G_t(\bar{x}),\quad \bar\theta_t \ge 0,\quad t = 1, \ldots, r,$$

$$\bar\vartheta_t^{G} = \bar{w}_t H_t(\bar{x}),\quad \bar{w}_t \ge 0,\quad t = 1, \ldots, r,$$

is feasible in (WDVC). Now, we prove that $(\bar{x}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$ is an efficient solution of a maximum type in (WDVC). We proceed by contradiction. Suppose, contrary to the result, that it is not an efficient solution of a maximum type in (WDVC). Then, by definition, there exists $(\tilde{y}, \tilde\lambda, \tilde\mu, \tilde\xi, \tilde\vartheta^{H}, \tilde\vartheta^{G}, \tilde{w}, \tilde\theta) \in \Delta$ such that the inequality

$$L\big(\tilde{y}, \tilde\mu, \tilde\xi, \tilde\vartheta^{H}, \tilde\vartheta^{G}\big) \ge L\big(\bar{x}, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}\big)$$

holds. Then, by the Karush–Kuhn–Tucker necessary optimality conditions, we conclude that

$$L\big(\tilde{y}, \tilde\mu, \tilde\xi, \tilde\vartheta^{H}, \tilde\vartheta^{G}\big) \ge f(\bar{x})$$

holds, which is a contradiction to the weak duality theorem (Theorem 4.3). Hence, we conclude that $(\bar{x}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$ is an efficient solution of a maximum type in (WDVC). □

The next two theorems give sufficient conditions for $\bar{y}$, where $(\bar{y}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$ is a feasible solution in (WDVC), to be a Pareto solution in (MPVC).

Theorem 4.5 (Converse duality) Let $x$ be any feasible solution in (MPVC) and $(\bar{y}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$ be an efficient solution of a maximum type (a weakly efficient solution of a maximum type) in the Wolfe dual problem (WDVC) such that $\bar{y} \in \Omega$. Further, we assume that $f_i$, $i = 1, \ldots, p$, are strictly convex (convex) on $\Omega \cup Y$, and that $g_j$, $j \in J^{+}(x)$, $h_s$, $s \in S^{+}(x)$, $-h_s$, $s \in S^{-}(x)$, $-H_t$, $t \in T_{00}(x) \cup T_{0-}(x) \cup T_{0+}^{+}(x)$, $H_t$, $t \in T_{0+}^{-}(x)$, $G_t$, $t \in T_{+0}(x)$, are convex on $\Omega \cup Y$. Then $\bar{y}$ is a Pareto solution (a weak Pareto solution) in (MPVC).

Proof We proceed by contradiction. Suppose, contrary to the result, that $\bar{y} \in \Omega$ is not an efficient solution in (MPVC). Hence, by Definition 3.1, there exists $\tilde{x} \in \Omega$ such that

$$f(\tilde{x}) \le f(\bar{y}). \tag{139}$$

From the convexity hypotheses, by Proposition 2.6, the inequalities

$$f_i(\tilde{x}) - f_i(\bar{y}) > f_i^{+}(\bar{y}; \tilde{x} - \bar{y}),\quad i = 1, \ldots, p, \tag{140}$$

$$g_j(\tilde{x}) \ge g_j(\bar{y}) + g_j^{+}(\bar{y}; \tilde{x} - \bar{y}),\quad j \in J^{+}(\tilde{x}), \tag{141}$$

$$h_s(\tilde{x}) \ge h_s(\bar{y}) + h_s^{+}(\bar{y}; \tilde{x} - \bar{y}),\quad s \in S^{+}(\tilde{x}), \tag{142}$$

$$-h_s(\tilde{x}) \ge -h_s(\bar{y}) - h_s^{+}(\bar{y}; \tilde{x} - \bar{y}),\quad s \in S^{-}(\tilde{x}), \tag{143}$$

$$-H_t(\tilde{x}) \ge -H_t(\bar{y}) - H_t^{+}(\bar{y}; \tilde{x} - \bar{y}),\quad t \in T_{00}(\tilde{x}) \cup T_{0-}(\tilde{x}) \cup T_{0+}^{+}(\tilde{x}), \tag{144}$$

$$H_t(\tilde{x}) \ge H_t(\bar{y}) + H_t^{+}(\bar{y}; \tilde{x} - \bar{y}),\quad t \in T_{0+}^{-}(\tilde{x}), \tag{145}$$

$$G_t(\tilde{x}) \ge G_t(\bar{y}) + G_t^{+}(\bar{y}; \tilde{x} - \bar{y}),\quad t \in T_{+0}(\tilde{x}) \tag{146}$$

hold. Multiplying (140)–(146) by the corresponding Lagrange multipliers and then adding both sides of the resulting inequalities, we have, respectively,

$$\sum_{i=1}^{p} \bar\lambda_i f_i(\tilde{x}) > \sum_{i=1}^{p} \bar\lambda_i f_i(\bar{y}) + \sum_{i=1}^{p} \bar\lambda_i f_i^{+}(\bar{y}; \tilde{x} - \bar{y}), \tag{147}$$

$$\sum_{j \in J^{+}(\tilde{x})} \bar\mu_j g_j(\tilde{x}) \ge \sum_{j \in J^{+}(\tilde{x})} \bar\mu_j g_j(\bar{y}) + \sum_{j \in J^{+}(\tilde{x})} \bar\mu_j g_j^{+}(\bar{y}; \tilde{x} - \bar{y}), \tag{148}$$

$$\sum_{s \in S^{+}(\tilde{x})} \bar\xi_s h_s(\tilde{x}) \ge \sum_{s \in S^{+}(\tilde{x})} \bar\xi_s h_s(\bar{y}) + \sum_{s \in S^{+}(\tilde{x})} \bar\xi_s h_s^{+}(\bar{y}; \tilde{x} - \bar{y}), \tag{149}$$

$$-\sum_{s \in S^{-}(\tilde{x})} \big(-\bar\xi_s\big) h_s(\tilde{x}) \ge -\sum_{s \in S^{-}(\tilde{x})} \big(-\bar\xi_s\big) h_s(\bar{y}) - \sum_{s \in S^{-}(\tilde{x})} \big(-\bar\xi_s\big) h_s^{+}(\bar{y}; \tilde{x} - \bar{y}), \tag{150}$$

$$-\sum_{t \in T_{00}(\tilde{x}) \cup T_{0-}(\tilde{x}) \cup T_{0+}^{+}(\tilde{x})} \bar\vartheta_t^{H} H_t(\tilde{x}) \ge -\sum_{t \in T_{00}(\tilde{x}) \cup T_{0-}(\tilde{x}) \cup T_{0+}^{+}(\tilde{x})} \bar\vartheta_t^{H} H_t(\bar{y}) - \sum_{t \in T_{00}(\tilde{x}) \cup T_{0-}(\tilde{x}) \cup T_{0+}^{+}(\tilde{x})} \bar\vartheta_t^{H} H_t^{+}(\bar{y}; \tilde{x} - \bar{y}), \tag{151}$$

$$-\sum_{t \in T_{0+}^{-}(\tilde{x})} \bar\vartheta_t^{H} H_t(\tilde{x}) \ge -\sum_{t \in T_{0+}^{-}(\tilde{x})} \bar\vartheta_t^{H} H_t(\bar{y}) + \sum_{t \in T_{0+}^{-}(\tilde{x})} \big(-\bar\vartheta_t^{H}\big) H_t^{+}(\bar{y}; \tilde{x} - \bar{y}), \tag{152}$$

$$\sum_{t \in T_{+0}(\tilde{x})} \bar\vartheta_t^{G} G_t(\tilde{x}) \ge \sum_{t \in T_{+0}(\tilde{x})} \bar\vartheta_t^{G} G_t(\bar{y}) + \sum_{t \in T_{+0}(\tilde{x})} \bar\vartheta_t^{G} G_t^{+}(\bar{y}; \tilde{x} - \bar{y}). \tag{153}$$

By $\tilde{x}, \bar{y} \in \Omega$, we have, respectively,

$$g_j(\tilde{x}) \le 0,\quad g_j(\bar{y}) \le 0,\quad j \in J, \tag{154}$$

$$h_s(\tilde{x}) = h_s(\bar{y}),\quad s \in S^{+}(\tilde{x}) \cup S^{-}(\tilde{x}), \tag{155}$$

$$\left. \begin{array}{ll} H_t(\tilde{x}) > 0,\ \bar\vartheta_t^{H} = \bar\theta_t - \bar{w}_t G_t(\tilde{x}) \ge 0, & t \in T_{+}(\tilde{x}) \\ H_t(\tilde{x}) = 0,\ \bar\vartheta_t^{H} = \bar\theta_t - \bar{w}_t G_t(\tilde{x}) \in \mathbb{R}, & t \in T_{0}(\tilde{x}) \end{array} \right\} \ \Longrightarrow\ \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t(\tilde{x}) \ge 0, \tag{156}$$

$$\left. \begin{array}{ll} G_t(\tilde{x}) > 0,\ \bar\vartheta_t^{G} = \bar{w}_t H_t(\tilde{x}) = 0, & t \in T_{0+}(\tilde{x}) \\ G_t(\tilde{x}) = 0,\ \bar\vartheta_t^{G} = \bar{w}_t H_t(\tilde{x}) = 0, & t \in T_{00}(\tilde{x}) \\ G_t(\tilde{x}) < 0,\ \bar\vartheta_t^{G} = \bar{w}_t H_t(\tilde{x}) = 0, & t \in T_{0-}(\tilde{x}) \\ G_t(\tilde{x}) = 0,\ \bar\vartheta_t^{G} = \bar{w}_t H_t(\tilde{x}) \ge 0, & t \in T_{+0}(\tilde{x}) \\ G_t(\tilde{x}) < 0,\ \bar\vartheta_t^{G} = \bar{w}_t H_t(\tilde{x}) \ge 0, & t \in T_{+-}(\tilde{x}) \end{array} \right\} \ \Longrightarrow\ \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t(\tilde{x}) \le 0. \tag{157}$$

Hence, using (156) and (157) together with $(\bar{y}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta) \in \Delta$, we obtain

$$-\sum_{t=1}^{r} \bar\vartheta_t^{H} H_t(\tilde{x}) \le -\sum_{t=1}^{r} \bar\vartheta_t^{H} H_t(\bar{y}), \tag{158}$$

$$\sum_{t=1}^{r} \bar\vartheta_t^{G} G_t(\tilde{x}) \le \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t(\bar{y}). \tag{159}$$

Combining (147)–(159) and taking into account the Lagrange multipliers equal to 0, we obtain, respectively,

$$\sum_{i=1}^{p} \bar\lambda_i f_i^{+}(\bar{y}; \tilde{x} - \bar{y}) < 0, \tag{160}$$

$$\sum_{j \in J^{+}(\tilde{x})} \bar\mu_j g_j^{+}(\bar{y}; \tilde{x} - \bar{y}) \le 0, \tag{161}$$

$$\sum_{s \in S^{+}(\tilde{x}) \cup S^{-}(\tilde{x})} \bar\xi_s h_s^{+}(\bar{y}; \tilde{x} - \bar{y}) \le 0, \tag{162}$$

$$-\sum_{t \in T_{00}(\tilde{x}) \cup T_{0-}(\tilde{x}) \cup T_{0+}^{+}(\tilde{x}) \cup T_{0+}^{-}(\tilde{x})} \bar\vartheta_t^{H} H_t^{+}(\bar{y}; \tilde{x} - \bar{y}) \le 0, \tag{163}$$

$$\sum_{t \in T_{+0}(\tilde{x})} \bar\vartheta_t^{G} G_t^{+}(\bar{y}; \tilde{x} - \bar{y}) \le 0. \tag{164}$$

Taking into account the Lagrange multipliers equal to 0 and then combining (160)–(164), we get that the inequality

$$\sum_{i=1}^{p} \bar\lambda_i f_i^{+}(\bar{y}; \tilde{x} - \bar{y}) + \sum_{j=1}^{m} \bar\mu_j g_j^{+}(\bar{y}; \tilde{x} - \bar{y}) + \sum_{s=1}^{q} \bar\xi_s h_s^{+}(\bar{y}; \tilde{x} - \bar{y}) - \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t^{+}(\bar{y}; \tilde{x} - \bar{y}) + \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t^{+}(\bar{y}; \tilde{x} - \bar{y}) < 0$$

holds, which is a contradiction to the first constraint of (WDVC). This completes the proof of the theorem. □

Theorem 4.6 (Strict converse duality) Let $\bar{x}$ be a feasible solution in (MPVC) and $(\bar{y}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$ be a feasible solution in (WDVC) such that $f(\bar{x}) = L(\bar{y}, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G})$. Further, we assume that one of the following hypotheses is fulfilled:

A) $f_i$, $i = 1, \ldots, p$, are strictly convex on $\Omega \cup Y$, and $g_j$, $j \in J^{+}(\bar{x})$, $h_s$, $s \in S^{+}(\bar{x})$, $-h_s$, $s \in S^{-}(\bar{x})$, $-H_t$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}^{+}(\bar{x})$, $H_t$, $t \in T_{0+}^{-}(\bar{x})$, $G_t$, $t \in T_{+0}(\bar{x})$, are convex on $\Omega \cup Y$;

B) the vector-valued Lagrange function $L(\cdot, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G})$ is strictly convex on $\Omega \cup Y$.

Then $\bar{x}$ is a Pareto solution in (MPVC) and $(\bar{y}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$ is an efficient solution of a maximum type in (WDVC).

Proof We proceed by contradiction. Suppose, contrary to the result, that $\bar{x} \in \Omega$ is not a Pareto solution in (MPVC).
Hence, by Definition 3.1, there exists $\tilde{x} \in \Omega$ such that

$$f(\tilde{x}) \le f(\bar{x}). \tag{165}$$

Using (165) together with the assumption $f(\bar{x}) = L(\bar{y}, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G})$, we obtain

$$f(\tilde{x}) \le L\big(\bar{y}, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}\big).$$

Since all hypotheses of Theorem 4.3 are fulfilled, the above relation contradicts weak duality. This means that $\bar{x}$ is a Pareto solution in (MPVC). Further, the efficiency of a maximum type of $(\bar{y}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$ in (WDVC) follows from the weak duality theorem (Theorem 4.3). □

A restricted version of converse duality for the problems (MPVC) and (WDVC) gives a sufficient condition for the uniqueness of an efficient solution in (MPVC) and of an efficient solution of a maximum type in (WDVC).

Theorem 4.7 (Restricted converse duality) Let $\bar{x}$ be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints, $(\bar{y}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$ be an efficient solution of a maximum type in its vector Wolfe dual problem (WDVC), and the VC-Abadie constraint qualification (VC-ACQ) be satisfied at $\bar{x}$ for (MPVC). Further, we assume that one of the following hypotheses is fulfilled:

A) $f_i$, $i = 1, \ldots, p$, are strictly convex on $\Omega \cup Y$, and $g_j$, $j \in J^{+}(\bar{x})$, $h_s$, $s \in S^{+}(\bar{x})$, $-h_s$, $s \in S^{-}(\bar{x})$, $-H_t$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}^{+}(\bar{x})$, $H_t$, $t \in T_{0+}^{-}(\bar{x})$, $G_t$, $t \in T_{+0}(\bar{x})$, are convex on $\Omega \cup Y$;

B) the vectorial Lagrange function $L(\cdot, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G})$ is strictly convex on $\Omega \cup Y$.

Then $\bar{x} = \bar{y}$.

Proof By means of contradiction, suppose that $\bar{x} \ne \bar{y}$. Since $\bar{x}$ is an efficient solution in (MPVC), by Theorem 3.19, there exist Lagrange multipliers $\bar\lambda \in \mathbb{R}^{p}$, $\bar\mu \in \mathbb{R}^{m}$, $\bar\xi \in \mathbb{R}^{q}$, $\bar\vartheta^{H} \in \mathbb{R}^{r}$ and $\bar\vartheta^{G} \in \mathbb{R}^{r}$, not all equal to 0, such that (87)–(93) are fulfilled. Thus, by (87)–(93), it follows that

$$f(\bar{x}) = L\big(\bar{x}, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}\big).$$

By assumption, $(\bar{y}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$ is an efficient solution of a maximum type in the vector Wolfe dual problem (WDVC). Thus, one has

$$L\big(\bar{x}, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}\big) = L\big(\bar{y}, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}\big).$$
Combining the two above relations, we get

$$f(\bar{x}) = L\big(\bar{y}, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}\big). \tag{166}$$

Thus, (166) gives

$$\bar\lambda_i f_i(\bar{x}) = \bar\lambda_i L_i\big(\bar{y}, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}\big),\quad i = 1, \ldots, p. \tag{167}$$

Adding both sides of (167) and using the definition of the vectorial Lagrange function $L$, we get

$$\sum_{i=1}^{p} \bar\lambda_i f_i(\bar{x}) = \sum_{i=1}^{p} \bar\lambda_i f_i(\bar{y}) + \sum_{i=1}^{p} \bar\lambda_i \Bigg[ \sum_{j=1}^{m} \bar\mu_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar\xi_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t(\bar{y}) + \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t(\bar{y}) \Bigg]. \tag{168}$$

By $(\bar{y}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta) \in \Delta$, one has $\sum_{i=1}^{p} \bar\lambda_i = 1$. Thus, (168) implies

$$\sum_{i=1}^{p} \bar\lambda_i f_i(\bar{x}) = \sum_{i=1}^{p} \bar\lambda_i f_i(\bar{y}) + \sum_{j=1}^{m} \bar\mu_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar\xi_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t(\bar{y}) + \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t(\bar{y}). \tag{169}$$

Proof under hypothesis A). Using hypothesis A), by Proposition 2.6, the inequalities

$$f_i(\bar{x}) - f_i(\bar{y}) > f_i^{+}(\bar{y}; \bar{x} - \bar{y}),\quad i = 1, \ldots, p, \tag{170}$$

$$0 \ge g_j(\bar{x}) \ge g_j(\bar{y}) + g_j^{+}(\bar{y}; \bar{x} - \bar{y}),\quad j \in J^{+}(\bar{x}), \tag{171}$$

$$0 = h_s(\bar{x}) \ge h_s(\bar{y}) + h_s^{+}(\bar{y}; \bar{x} - \bar{y}),\quad s \in S^{+}(\bar{x}), \tag{172}$$

$$0 = -h_s(\bar{x}) \ge -h_s(\bar{y}) - h_s^{+}(\bar{y}; \bar{x} - \bar{y}),\quad s \in S^{-}(\bar{x}), \tag{173}$$

$$0 = -H_t(\bar{x}) \ge -H_t(\bar{y}) - H_t^{+}(\bar{y}; \bar{x} - \bar{y}),\quad t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}^{+}(\bar{x}), \tag{174}$$

$$0 = H_t(\bar{x}) \ge H_t(\bar{y}) + H_t^{+}(\bar{y}; \bar{x} - \bar{y}),\quad t \in T_{0+}^{-}(\bar{x}), \tag{175}$$

$$0 = G_t(\bar{x}) \ge G_t(\bar{y}) + G_t^{+}(\bar{y}; \bar{x} - \bar{y}),\quad t \in T_{+0}(\bar{x}) \tag{176}$$

hold. By the feasibility of $(\bar{y}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$ in (WDVC), (170)–(176) yield, respectively,

$$\sum_{i=1}^{p} \bar\lambda_i f_i(\bar{x}) > \sum_{i=1}^{p} \bar\lambda_i f_i(\bar{y}) + \sum_{i=1}^{p} \bar\lambda_i f_i^{+}(\bar{y}; \bar{x} - \bar{y}), \tag{177}$$

$$\sum_{j \in J^{+}(\bar{x})} \bar\mu_j g_j(\bar{y}) + \sum_{j \in J^{+}(\bar{x})} \bar\mu_j g_j^{+}(\bar{y}; \bar{x} - \bar{y}) \le 0, \tag{178}$$

$$\sum_{s \in S^{+}(\bar{x})} \bar\xi_s h_s(\bar{y}) + \sum_{s \in S^{+}(\bar{x})} \bar\xi_s h_s^{+}(\bar{y}; \bar{x} - \bar{y}) \le 0, \tag{179}$$

$$-\sum_{s \in S^{-}(\bar{x})} \big(-\bar\xi_s\big) h_s(\bar{y}) - \sum_{s \in S^{-}(\bar{x})} \big(-\bar\xi_s\big) h_s^{+}(\bar{y}; \bar{x} - \bar{y}) \le 0, \tag{180}$$

$$\sum_{t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}^{+}(\bar{x})} \bar\vartheta_t^{H} \big(-H_t(\bar{y})\big) - \sum_{t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}^{+}(\bar{x})} \bar\vartheta_t^{H} H_t^{+}(\bar{y}; \bar{x} - \bar{y}) \le 0, \tag{181}$$

$$\sum_{t \in T_{0+}^{-}(\bar{x})} \big(-\bar\vartheta_t^{H}\big) H_t(\bar{y}) + \sum_{t \in T_{0+}^{-}(\bar{x})} \big(-\bar\vartheta_t^{H}\big) H_t^{+}(\bar{y}; \bar{x} - \bar{y}) \le 0, \tag{182}$$

$$\sum_{t \in T_{+0}(\bar{x})} \bar\vartheta_t^{G} G_t(\bar{y}) + \sum_{t \in T_{+0}(\bar{x})} \bar\vartheta_t^{G} G_t^{+}(\bar{y}; \bar{x} - \bar{y}) \le 0. \tag{183}$$

Thus, the above inequalities, together with the Lagrange multipliers equal to 0, yield

$$\sum_{i=1}^{p} \bar\lambda_i f_i(\bar{x}) > \sum_{i=1}^{p} \bar\lambda_i f_i(\bar{y}) + \sum_{j=1}^{m} \bar\mu_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar\xi_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t(\bar{y}) + \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t(\bar{y}) + \sum_{i=1}^{p} \bar\lambda_i f_i^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{j=1}^{m} \bar\mu_j g_j^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{s=1}^{q} \bar\xi_s h_s^{+}(\bar{y}; \bar{x} - \bar{y}) - \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t^{+}(\bar{y}; \bar{x} - \bar{y}). \tag{184}$$

Hence, by the first constraint of (WDVC), (184) yields that the inequality

$$\sum_{i=1}^{p} \bar\lambda_i f_i(\bar{x}) > \sum_{i=1}^{p} \bar\lambda_i f_i(\bar{y}) + \sum_{j=1}^{m} \bar\mu_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar\xi_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t(\bar{y}) + \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t(\bar{y})$$

holds, contradicting (169). This completes the proof of the theorem under hypothesis A).

Proof under hypothesis B). Now, we assume that the vector-valued Lagrange function $L(\cdot, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G})$ is strictly convex on $\Omega \cup Y$. Hence, by the definition of strict convexity, we get

$$L_i\big(\bar{x}, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}\big) - L_i\big(\bar{y}, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}\big) > L_i^{+}\big(\bar{y}, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}; \bar{x} - \bar{y}\big),\quad i = 1, \ldots, p.$$

Then, by the definition of $L$, one has

$$f_i(\bar{x}) > f_i(\bar{y}) + \sum_{j=1}^{m} \bar\mu_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar\xi_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t(\bar{y}) + \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t(\bar{y}) + f_i^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{j=1}^{m} \bar\mu_j g_j^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{s=1}^{q} \bar\xi_s h_s^{+}(\bar{y}; \bar{x} - \bar{y}) - \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t^{+}(\bar{y}; \bar{x} - \bar{y}),\quad i = 1, \ldots, p.$$

Multiplying each of the above inequalities by the corresponding Lagrange multiplier $\bar\lambda_i$, $i = 1, \ldots, p$, and summing the resulting inequalities, and then using $\sum_{i=1}^{p} \bar\lambda_i = 1$ (which holds by the feasibility of $(\bar{y}, \bar\lambda, \bar\mu, \bar\xi, \bar\vartheta^{H}, \bar\vartheta^{G}, \bar{w}, \bar\theta)$ in (WDVC)), we obtain

$$\sum_{i=1}^{p} \bar\lambda_i f_i(\bar{x}) > \sum_{i=1}^{p} \bar\lambda_i f_i(\bar{y}) + \sum_{j=1}^{m} \bar\mu_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar\xi_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t(\bar{y}) + \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t(\bar{y}) + \sum_{i=1}^{p} \bar\lambda_i f_i^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{j=1}^{m} \bar\mu_j g_j^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{s=1}^{q} \bar\xi_s h_s^{+}(\bar{y}; \bar{x} - \bar{y}) - \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t^{+}(\bar{y}; \bar{x} - \bar{y}).$$

Using the first constraint of (WDVC), we get that the inequality

$$\sum_{i=1}^{p} \bar\lambda_i f_i(\bar{x}) > \sum_{i=1}^{p} \bar\lambda_i f_i(\bar{y}) + \sum_{j=1}^{m} \bar\mu_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar\xi_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar\vartheta_t^{H} H_t(\bar{y}) + \sum_{t=1}^{r} \bar\vartheta_t^{G} G_t(\bar{y})$$

holds, contradicting (169). This completes the proof of the theorem under hypothesis B). □

5 Conclusions

This paper has studied a new class of nonsmooth vector optimization problems, namely directionally differentiable multiobjective programming problems with vanishing constraints. Under the Abadie constraint qualification, Karush–Kuhn–Tucker type necessary optimality conditions have been established for such nondifferentiable vector optimization problems in terms of the right directional derivatives of the involved functions.
The nonlinear version of the Gordan alternative theorem has been used in proving these necessary optimality conditions. However, the Abadie constraint qualification may fail for such multicriteria optimization problems, and then the aforesaid necessary optimality conditions may not hold. Therefore, we have introduced the modified Abadie constraint qualification for the considered multiobjective programming problem with vanishing constraints. Under this modified constraint qualification, which is weaker than the standard Abadie constraint qualification, we have proved weaker necessary optimality conditions of the Karush–Kuhn–Tucker type for such nondifferentiable vector optimization problems with vanishing constraints. The sufficiency of the Karush–Kuhn–Tucker necessary optimality conditions has also been proved for the considered directionally differentiable multiobjective programming problem with vanishing constraints under appropriate convexity hypotheses. Furthermore, for the considered directionally differentiable multiobjective programming problem with vanishing constraints, its vector Wolfe dual problem has been defined along the lines of Hu et al. (2020). Several duality theorems have then been established between the primal directionally differentiable multiobjective programming problem with vanishing constraints and its vector Wolfe dual problem under convexity hypotheses. Thus, the above-mentioned optimality conditions and duality results have been derived for a completely new class of directionally differentiable vector optimization problems in comparison with the results existing in the literature, namely for directionally differentiable multiobjective programming problems with vanishing constraints.
Hence, the results established in the literature, generally for scalar differentiable extremum problems with vanishing constraints, have been generalized and extended to directionally differentiable multiobjective programming problems with vanishing constraints. It seems that the techniques employed in this paper can be used in proving similar results for other classes of nonsmooth mathematical programming problems with vanishing constraints. We shall investigate these problems in subsequent papers.

Declarations

Conflict of interest No potential conflict of interest was reported by the author.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

Achtziger, W., Hoheisel, T., & Kanzow, C. (2013). A smoothing-regularization approach to mathematical programs with vanishing constraints. Computational Optimization and Applications, 55, 733–767.
Achtziger, W., & Kanzow, C. (2008). Mathematical programs with vanishing constraints: Optimality conditions and constraint qualifications. Mathematical Programming, 114, 69–99.
Ahmad, I. (2011). Efficiency and duality in nondifferentiable multiobjective programming involving directional derivative.
Applied Mathematics, 2, 452–460.
Antczak, T. (2002). Multiobjective programming under d-invexity. European Journal of Operational Research, 137, 28–36.
Antczak, T. (2009). Optimality conditions and duality for nondifferentiable multiobjective programming problems involving d-r-type I functions. Journal of Computational and Applied Mathematics, 225, 236–250.
Antczak, T. (2022). Optimality conditions and Mond–Weir duality for a class of differentiable semi-infinite multiobjective programming problems with vanishing constraints. 4OR, 20(3), 417–442.
Arana-Jiménez, M., Ruiz-Garzón, G., Osuna-Gómez, R., & Hernández-Jiménez, B. (2013). Duality and a characterization of pseudoinvexity for Pareto and weak Pareto solutions in nondifferentiable multiobjective programming. Journal of Optimization Theory and Applications, 156, 266–277.
Dinh, N., Lee, G. M., & Tuan, L. A. (2005). Generalized Lagrange multipliers for nonconvex directionally differentiable programs. In V. Jeyakumar & A. Rubinov (Eds.), Continuous optimization. Springer.
Dorsch, D., Shikhman, V., & Stein, O. (2012). Mathematical programs with vanishing constraints: Critical point theory. Journal of Global Optimization, 52, 591–605.
Dussault, J. P., Haddou, M., & Migot, T. (2019). Mathematical programs with vanishing constraints: Constraint qualifications, their applications and a new regularization method. Optimization, 68, 509–538.
Florenzano, M., & Le Van, C. (2001). Finite dimensional convexity and optimization. Studies in Economic Theory. Springer.
Giorgi, G., et al. (2002). Osservazioni sui teoremi dell'alternativa non lineari implicanti relazioni di uguaglianza e vincolo insiemistico. In G. Crespi (Ed.), Optimality in economics, finance and industry (pp. 171–183). Milan: Datanova.
Guu, S.-M., Singh, Y., & Mishra, S. K. (2017). On strong KKT type sufficient optimality conditions for multiobjective semi-infinite programming problems with vanishing constraints.
Journal of Inequalities and Applications, 2017, 282.
Hiriart-Urruty, J.-B., & Lemaréchal, C. (1993). Convex analysis and minimization algorithms I. Grundlehren der mathematischen Wissenschaften. Springer.
Hoheisel, T., & Kanzow, C. (2007). First- and second-order optimality conditions for mathematical programs with vanishing constraints. Applications of Mathematics, 52, 495–514.
Hoheisel, T., & Kanzow, C. (2008). Stationary conditions for mathematical programs with vanishing constraints using weak constraint qualifications. Journal of Mathematical Analysis and Applications, 337, 292–310.
Hoheisel, T., & Kanzow, C. (2009). On the Abadie and Guignard constraint qualifications for mathematical programmes with vanishing constraints. Optimization, 58, 431–448.
Hoheisel, T., Kanzow, C., & Schwartz, A. (2012). Mathematical programs with vanishing constraints: A new regularization approach with strong convergence properties. Optimization, 61, 619–636.
Hu, Q. J., Chen, Y., Zhu, Z. B., & Zhang, B. S. (2014). Notes on some convergence properties for a smoothing-regularization approach to mathematical programs with vanishing constraints. Abstract and Applied Analysis, 2014, 1–7.
Hu, Q., Wang, J., & Chen, Y. (2020). New dualities for mathematical programs with vanishing constraints. Annals of Operations Research, 287, 233–255.
Ishizuka, Y. (1992). Optimality conditions for directionally differentiable multi-objective programming problems. Journal of Optimization Theory and Applications, 72, 91–111.
Izmailov, A. F., & Solodov, M. V. (2009). Mathematical programs with vanishing constraints: Optimality conditions, sensitivity, and relaxation method. Journal of Optimization Theory and Applications, 142, 501–532.
Jahn, J. (2004). Vector optimization: Theory, applications and extensions. Springer.
Jiménez, B., & Novo, V. (2002). Alternative theorems and necessary optimality conditions for directionally differentiable multiobjective programs.
Journal of Convex Analysis, 9, 97–116.
Kharbanda, P., Agarwal, D., & Sinha, D. (2015). Multiobjective programming under (ϕ, d)-V-type I univexity. Opsearch, 52, 168–185.
Khare, A., & Nath, T. (2019). Enhanced Fritz John stationarity, new constraint qualifications and local error bound for mathematical programs with vanishing constraints. Journal of Mathematical Analysis and Applications, 472, 1042–1077.
Mangasarian, O. L. (1969). Nonlinear programming. McGraw-Hill.
Mishra, S. K., & Noor, M. A. (2006). Some nondifferentiable multiobjective programming problems. Journal of Mathematical Analysis and Applications, 316, 472–482.
Mishra, S. K., Rautela, J. S., & Pant, R. P. (2008). On nondifferentiable multiobjective programming involving type-I α-invex functions. Applied Mathematics & Information Sciences, 2, 317–331.
Mishra, S. K., Singh, V., & Laha, V. (2016). On duality for mathematical programs with vanishing constraints. Annals of Operations Research, 243, 249–272.
Mishra, S. K., Singh, V., Laha, V., & Mohapatra, R. N. (2015). On constraint qualifications for multiobjective optimization problems with vanishing constraints. In H. Xu, S. Wang, & S.-Y. Wu (Eds.), Optimization methods (pp. 95–135). Springer.
Mishra, S. K., Wang, S. Y., & Lai, K. K. (2004). Optimality and duality in nondifferentiable and multiobjective programming under generalized d-invexity. Journal of Global Optimization, 29, 425–438.
Preda, V., & Chitescu, I. (1999). On constraint qualification in multiobjective optimization problems: Semidifferentiable case. Journal of Optimization Theory and Applications, 100, 417–433.
Rockafellar, R. T. (1970). Convex analysis. Princeton University Press.
Slimani, H., & Radjef, M. S. (2010). Nondifferentiable multiobjective programming under generalized d-invexity. European Journal of Operational Research, 202, 32–41.
Tung, L. T. (2022).
Karush–Kuhn–Tucker optimality conditions and duality for multiobjective semi-infinite programming with vanishing constraints. Annals of Operations Research, 311, 1307–1334.
Ye, Y. L. (1991). d-invexity and optimality conditions. Journal of Mathematical Analysis and Applications, 162, 242–249.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

On directionally differentiable multiobjective programming problems with vanishing constraints

Annals of Operations Research, Volume 328(2), September 1, 2023



Publisher: Springer Journals
Copyright: © The Author(s) 2023
ISSN: 0254-5330
eISSN: 1572-9338
DOI: 10.1007/s10479-023-05368-5

Tadeusz Antczak, Faculty of Mathematics, University of Lodz, Banacha 22, 90-238 Łódź, Poland (tadeusz.antczak@wmii.uni.lodz.pl)

One of the classes of nondifferentiable multicriteria optimization problems studied in the recent past is the class of directionally differentiable vector optimization problems, for which many authors have established the aforesaid fundamental results in optimization theory (see, for example, (Ahmad, 2011; Antczak, 2002, 2009; Arana-Jiménez et al., 2013; Dinh et al., 2005; Ishizuka, 1992; Kharbanda et al., 2015; Mishra & Noor, 2006; Mishra et al., 2008, 2015; Slimani & Radjef, 2010; Ye, 1991) and others). Recently, a special class of optimization problems, known as mathematical programming problems with vanishing constraints, was introduced by Achtziger and Kanzow (2008); it serves as a unified framework for several applications in structural and topology optimization. Since optimization problems with vanishing constraints, in their general form, are quite a new class of mathematical programming problems, only very few works have been published on this subject so far (see, for example, (Achtziger et al., 2013; Antczak, 2022; Dorsch et al., 2012; Dussault et al., 2019; Guu et al., 2017; Hoheisel & Kanzow, 2008, 2009; Hoheisel et al., 2012; Hu et al., 2014, 2020; Izmailov & Solodov, 2009; Khare & Nath, 2019; Mishra et al., 2015, 2016; Tung, 2022)). However, to the best of our knowledge, there are no works on optimality conditions for (convex) directionally differentiable multiobjective programming problems with vanishing constraints in the literature. The main purpose of this paper is, therefore, to develop optimality conditions for a new class of nondifferentiable multiobjective programming problems with vanishing constraints.
Namely, this paper studies both necessary and sufficient optimality conditions for convex directionally differentiable vector optimization problems with inequality, equality and vanishing constraints. Considering the concept of a (weak) Pareto solution, we establish Karush–Kuhn–Tucker type necessary optimality conditions which are formulated in terms of directional derivatives. In proving the aforesaid necessary optimality conditions, we use a nonlinear version of the Gordan alternative theorem for convex functions and also the Abadie constraint qualification. Further, we illustrate the case in which the necessary optimality conditions may not hold under the aforesaid constraint qualification. Therefore, we introduce the VC-Abadie constraint qualification and, under this constraint qualification, which is weaker than the classical one, we present the Karush–Kuhn–Tucker type necessary optimality conditions for the considered directionally differentiable multiobjective programming problem. Further, we prove the sufficiency of the aforesaid necessary optimality conditions for such nondifferentiable vector optimization problems under appropriate convexity hypotheses. The optimality results established in the paper are illustrated by the example of a convex directionally differentiable multiobjective programming problem with vanishing constraints. Furthermore, for the considered directionally differentiable vector optimization problem with vanishing constraints, we define its vector Wolfe dual problem and we prove several duality theorems, also under convexity hypotheses.

2 Preliminaries

In this section, we provide some definitions and results that we shall use in the sequel. The following convention for equalities and inequalities will be used throughout the paper.
For any $x = (x_1, x_2, \ldots, x_n)^T$, $y = (y_1, y_2, \ldots, y_n)^T$ in $\mathbb{R}^n$, we define:

(i) $x = y$ if and only if $x_i = y_i$ for all $i = 1, 2, \ldots, n$;
(ii) $x < y$ if and only if $x_i < y_i$ for all $i = 1, 2, \ldots, n$;
(iii) $x \leqq y$ if and only if $x_i \leqq y_i$ for all $i = 1, 2, \ldots, n$;
(iv) $x \leq y$ if and only if $x \leqq y$ and $x \neq y$.

Throughout the paper, we will use the same notation for row and column vectors when the interpretation is obvious.

Definition 2.1 The affine hull of a set $C \subseteq \mathbb{R}^n$ is the set of all affine combinations of points of $C$, that is,
$$\operatorname{aff} C = \left\{ \sum_{i=1}^{k} \alpha_i x_i : x_1, \ldots, x_k \in C, \ \alpha_i \in \mathbb{R}, \ \sum_{i=1}^{k} \alpha_i = 1 \right\}.$$

Definition 2.2 (Hiriart-Urruty & Lemaréchal, 1993) The relative interior of the set $C$ (denoted by $\operatorname{relint} C$) is defined as
$$\operatorname{relint} C = \{ x \in C : B(x, r) \cap \operatorname{aff} C \subseteq C \text{ for some } r > 0 \},$$
where $B(x, r) := \{ y \in \mathbb{R}^n : \| x - y \| \leqq r \}$ is the ball of radius $r$ around $x$ with respect to some norm on $\mathbb{R}^n$.

Remark 2.3 (Rockafellar, 1970) The definition of the relative interior of a nonempty convex set $C$ can be reduced to the following:
$$\operatorname{relint} C = \{ x \in C : \forall y \in C \ \exists \lambda > 1 \text{ s.t. } \lambda x + (1 - \lambda) y \in C \}.$$

Definition 2.4 It is said that $\varphi : C \rightarrow \mathbb{R}$, where $C \subset \mathbb{R}^n$ is a nonempty convex set, is convex on $C$ if the inequality
$$\varphi(u + \lambda (x - u)) \leqq \lambda \varphi(x) + (1 - \lambda) \varphi(u) \quad (1)$$
holds for all $x, u \in C$ and any $\lambda \in [0, 1]$. It is said that $\varphi$ is strictly convex on $C$ if the inequality
$$\varphi(u + \lambda (x - u)) < \lambda \varphi(x) + (1 - \lambda) \varphi(u)$$
holds for all $x, u \in C$, $x \neq u$, and any $\lambda \in (0, 1)$.

Definition 2.5 We say that a mapping $\varphi : X \rightarrow \mathbb{R}$ defined on a nonempty set $X \subseteq \mathbb{R}^n$ is directionally differentiable at $u \in X$ in a direction $v \in \mathbb{R}^n$ if the limit
$$\varphi^{+}(u; v) = \lim_{\alpha \rightarrow 0^{+}} \frac{\varphi(u + \alpha v) - \varphi(u)}{\alpha} \quad (2)$$
exists and is finite. We say that $\varphi$ is directionally differentiable (or Dini differentiable) at $u$ if its directional derivative $\varphi^{+}(u; v)$ exists and is finite for all $v \in \mathbb{R}^n$.

Proposition 2.6 (Jahn, 2004) Let a mapping $\varphi : \mathbb{R}^n \rightarrow \mathbb{R}$ be convex. Then, at every $u \in \mathbb{R}^n$ and in every direction $v \in \mathbb{R}^n$, the directional derivative $\varphi^{+}(u; v)$ exists.
Moreover, since the convex function $\varphi$ has a directional derivative in the direction $x - u$ for any $x \in \mathbb{R}^n$, the following inequality
$$\varphi(x) - \varphi(u) \geqq \varphi^{+}(u; x - u) \quad (3)$$
holds.

Lemma 2.7 (Jahn, 2004) Let $X \subseteq \mathbb{R}^n$ be open, $u \in X$ be given, $f, g : X \rightarrow \mathbb{R}$ and $v \in \mathbb{R}^n$. Further, assume that the directional derivatives of $f$ and $g$ at $u$ in the direction $v$ exist, i.e., $f^{+}(u; v)$ and $g^{+}(u; v)$ both exist. Then the directional derivative of $f \cdot g$ at $u$ in the direction $v$ exists and
$$(f \cdot g)^{+}(u; v) = f(u) g^{+}(u; v) + f^{+}(u; v) g(u).$$

Giorgi (2002) proved the following theorem of the alternative for convex functions, which may be considered as a nonlinear version of the Gordan theorem presented by Mangasarian (1969) in the linear case.

Theorem 2.8 (Giorgi, 2002) Let $C \subset \mathbb{R}^n$ be a nonempty convex set, let $F = (F_1, \ldots, F_k) : C \rightarrow \mathbb{R}^k$ and $\Phi = (\Phi_1, \ldots, \Phi_m) : C \rightarrow \mathbb{R}^m$ be convex functions and let $\Psi = (\Psi_1, \ldots, \Psi_q) : \mathbb{R}^n \rightarrow \mathbb{R}^q$ be a linear function. Let us assume that there exists $x_0 \in \operatorname{relint} C$ such that $\Phi_j(x_0) < 0$, $j = 1, \ldots, m$, and $\Psi_s(x_0) = 0$, $s = 1, \ldots, q$. Then, the system
$$\begin{cases} F_i(x) < 0, & i = 1, \ldots, k, \\ \Phi_j(x) \leqq 0, & j = 1, \ldots, m, \\ \Psi_s(x) = 0, & s = 1, \ldots, q \end{cases} \quad (4)$$
admits no solution $x \in C$ if and only if there exists a vector $(\lambda, \theta, \beta) \in \mathbb{R}^k_{+} \times \mathbb{R}^m_{+} \times \mathbb{R}^q$, $\lambda \neq 0$, such that
$$\lambda^T F(x) + \theta^T \Phi(x) + \beta^T \Psi(x) \geqq 0, \quad \forall x \in C.$$

Definition 2.9 The cone of sequential linear directions (also known as the sequential radial cone) to a set $Q \subset \mathbb{R}^n$ at $x \in Q$ is the set denoted by $Z(Q; x)$ and defined by
$$Z(Q; x) := \{ v \in \mathbb{R}^n : \exists (\alpha_k) \subset \mathbb{R}_{+}, \ \alpha_k \downarrow 0, \text{ such that } x + \alpha_k v \in Q, \ \forall k \in \mathbb{N} \}.$$

Definition 2.10 The tangent cone to a set $Q \subset \mathbb{R}^n$ at $x \in \operatorname{cl} Q$ is the set denoted by $T(Q; x)$ and defined by
$$T(Q; x) := \left\{ v \in \mathbb{R}^n : \exists (x_k) \subseteq Q, \ (\alpha_k) \subset \mathbb{R}_{+} \text{ such that } \alpha_k \downarrow 0 \ \wedge \ x_k \rightarrow x \ \wedge \ \frac{x_k - x}{\alpha_k} \rightarrow v \right\}$$
$$= \{ v \in \mathbb{R}^n : \exists v_k \rightarrow v, \ \alpha_k \downarrow 0 \text{ such that } x + \alpha_k v_k \in Q, \ \forall k \in \mathbb{N} \},$$
where $\operatorname{cl} Q$ denotes the closure of $Q$. Note that the aforesaid cones are nonempty, $T(Q; x)$ is closed (though it may not be convex), and $Z(Q; x) \subset T(Q; x)$.
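The objects above are easy to probe numerically. The sketch below (a rough check using illustrative function choices, not the paper's data) approximates the directional derivative of Definition 2.5 by a one-sided difference quotient and then checks inequality (3) and the product rule of Lemma 2.7.

```python
# Numerical probe of Definition 2.5, inequality (3) and Lemma 2.7.
# The concrete functions below are illustrative choices, not from the paper.

def ddiff(phi, u, v, a=1e-6):
    """One-sided difference quotient approximating phi^+(u; v)."""
    return (phi(u + a * v) - phi(u)) / a

f = lambda x: abs(x) + 2.0      # convex, nondifferentiable at u = 0
g = lambda x: 3.0 * x + 1.0     # smooth

u, v = 0.0, 1.0

# Definition 2.5: the one-sided derivatives of |.| at 0 in directions +1 and -1
# are both equal to 1, so f is directionally differentiable at 0 although it is
# not Gateaux differentiable there.
print(round(ddiff(f, u, 1.0), 4), round(ddiff(f, u, -1.0), 4))

# Inequality (3): f(x) - f(u) >= f^+(u; x - u) for the convex f.
x = 2.0
print(f(x) - f(u) >= ddiff(f, u, x - u) - 1e-9)

# Lemma 2.7: (f*g)^+(u; v) = f(u) g^+(u; v) + f^+(u; v) g(u).
lhs = ddiff(lambda z: f(z) * g(z), u, v)
rhs = f(u) * ddiff(g, u, v) + ddiff(f, u, v) * g(u)
print(abs(lhs - rhs) < 1e-4)
```

The two one-sided derivatives at the kink agree here only by coincidence of the example; in general they differ, which is exactly why the one-sided limit in (2) is taken.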
3 Multiobjective programming with vanishing constraints

In the paper, we consider the following constrained multiobjective programming problem (MPVC) with vanishing constraints defined by
$$\begin{array}{rl} V\text{-minimize} & f(x) := (f_1(x), \ldots, f_p(x)) \\ \text{subject to} & g_j(x) \leqq 0, \quad j = 1, \ldots, m, \\ & h_s(x) = 0, \quad s = 1, \ldots, q, \\ & H_t(x) \geqq 0, \quad t = 1, \ldots, r, \\ & H_t(x) G_t(x) \leqq 0, \quad t = 1, \ldots, r, \\ & x \in C, \end{array} \quad \text{(MPVC)}$$
where $f_i : \mathbb{R}^n \rightarrow \mathbb{R}$, $i \in I = \{1, \ldots, p\}$, $g_j : \mathbb{R}^n \rightarrow \mathbb{R}$, $j \in J = \{1, \ldots, m\}$, $h_s : \mathbb{R}^n \rightarrow \mathbb{R}$, $s \in S = \{1, \ldots, q\}$, $H_t : \mathbb{R}^n \rightarrow \mathbb{R}$, $G_t : \mathbb{R}^n \rightarrow \mathbb{R}$, $t \in T = \{1, \ldots, r\}$, are real-valued functions and $C \subseteq \mathbb{R}^n$ is a nonempty open convex set. For the purpose of simplifying our presentation, we will next introduce some notations which will be used frequently throughout this paper. Let
$$\Omega = \{ x \in C : g_j(x) \leqq 0, \ j \in J, \ h_s(x) = 0, \ s \in S, \ H_t(x) \geqq 0, \ H_t(x) G_t(x) \leqq 0, \ t \in T \}$$
be the set of all feasible solutions for (MPVC). Further, we denote by $J(\bar{x}) := \{ j \in J : g_j(\bar{x}) = 0 \}$ the set of inequality constraint indices that are active at $\bar{x} \in \Omega$ and by $J^{-}(\bar{x}) = \{ j \in \{1, \ldots, m\} : g_j(\bar{x}) < 0 \}$ the set of inequality constraint indices that are inactive at $\bar{x} \in \Omega$. Then, $J(\bar{x}) \cup J^{-}(\bar{x}) = J$.

Before studying optimality in multiobjective programming, one has to define clearly the well-known concepts of optimality and solutions in a multiobjective programming problem. (Weak) Pareto optimality in multiobjective programming associates the concept of a solution with some property that seems intuitively natural.

Definition 3.1 A feasible point $\bar{x}$ is said to be a Pareto solution (an efficient solution) in (MPVC) if and only if there exists no other $x \in \Omega$ such that $f(x) \leq f(\bar{x})$.

Definition 3.2 A feasible point $\bar{x}$ is said to be a weak Pareto solution (a weakly efficient solution, a weak minimum) in (MPVC) if and only if there exists no other $x \in \Omega$ such that $f(x) < f(\bar{x})$.

As follows from the definition of (weak) Pareto optimality, $\bar{x}$ is nonimprovable with respect to the vector cost function $f$.
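To fix ideas, membership in the feasible set of (MPVC) and the active index set $J(\bar{x})$ can be sketched in code. The two-variable problem data below are illustrative and not taken from the paper.

```python
# Sketch: membership in the feasible set of (MPVC) and the active set J(xbar).
# Problem data are illustrative only.

TOL = 1e-9

g = [lambda x: x[0] - 1.0, lambda x: -x[1]]   # g_j(x) <= 0
h = [lambda x: x[0] + x[1] - 1.0]             # h_s(x) = 0
H = [lambda x: x[1]]                          # H_t(x) >= 0
G = [lambda x: x[0]]                          # H_t(x) * G_t(x) <= 0

def is_feasible(x):
    return (all(gj(x) <= TOL for gj in g)
            and all(abs(hs(x)) <= TOL for hs in h)
            and all(Ht(x) >= -TOL for Ht in H)
            and all(Ht(x) * Gt(x) <= TOL for Ht, Gt in zip(H, G)))

def active_g(x):
    """Indices j with g_j(x) = 0 (within tolerance)."""
    return [j for j, gj in enumerate(g) if abs(gj(x)) <= TOL]

xbar = (1.0, 0.0)
print(is_feasible(xbar), active_g(xbar))   # True [0, 1]
```

At this point $H_1(\bar{x}) = 0$ while $G_1(\bar{x}) > 0$, so the product constraint is satisfied only because $H_1$ vanishes: this is the characteristic behaviour that gives vanishing constraints their name.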
The quality of nonimprovability provides a complete solution if $\bar{x}$ is unique. Usually, however, this is not the case, and then one has to find the entire set of all Pareto solutions of the multiobjective programming problem.

Now, for any feasible solution $\bar{x}$, let us denote the following index sets:
$$T_{+}(\bar{x}) = \{ t \in T : H_t(\bar{x}) > 0 \},$$
$$T_{0}(\bar{x}) = \{ t \in T : H_t(\bar{x}) = 0 \}.$$
Further, let us divide the index set $T_{+}(\bar{x})$ into the following index subsets:
$$T_{+0}(\bar{x}) = \{ t \in T : H_t(\bar{x}) > 0, \ G_t(\bar{x}) = 0 \},$$
$$T_{+-}(\bar{x}) = \{ t \in T : H_t(\bar{x}) > 0, \ G_t(\bar{x}) < 0 \}.$$
Similarly, the index set $T_{0}(\bar{x})$ can be partitioned into the following three index subsets:
$$T_{0+}(\bar{x}) = \{ t \in T : H_t(\bar{x}) = 0, \ G_t(\bar{x}) > 0 \},$$
$$T_{00}(\bar{x}) = \{ t \in T : H_t(\bar{x}) = 0, \ G_t(\bar{x}) = 0 \},$$
$$T_{0-}(\bar{x}) = \{ t \in T : H_t(\bar{x}) = 0, \ G_t(\bar{x}) < 0 \}.$$
Moreover, we denote by $T_{HG}(\bar{x})$ the set of indices $t \in T$ defined by $T_{HG}(\bar{x}) = T_{00}(\bar{x}) \cup T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+0}(\bar{x})$.

Before proving the necessary optimality conditions for the considered directionally differentiable multiobjective programming problem with vanishing constraints, we introduce the Abadie constraint qualification for this multicriteria optimization problem. In order to introduce the aforesaid constraint qualification, for $\bar{x} \in \Omega$, we define the sets $Q^l(\bar{x})$, $l = 1, \ldots, p$, and $Q(\bar{x})$ as follows:
$$Q^l(\bar{x}) := \{ x \in C : f_i(x) \leqq f_i(\bar{x}), \ \forall i = 1, \ldots, p, \ i \neq l, \ g_j(x) \leqq 0, \ \forall j \in J, \ h_s(x) = 0, \ \forall s \in S, \ H_t(x) \geqq 0, \ \forall t \in T, \ H_t(x) G_t(x) \leqq 0, \ \forall t \in T \},$$
$$Q(\bar{x}) := \{ x \in C : f_i(x) \leqq f_i(\bar{x}), \ \forall i = 1, \ldots, p, \ g_j(x) \leqq 0, \ \forall j \in J, \ h_s(x) = 0, \ \forall s \in S, \ H_t(x) \geqq 0, \ \forall t \in T, \ H_t(x) G_t(x) \leqq 0, \ \forall t \in T \}.$$

Now, we give the definition of the almost linearizing cone for the considered multiobjective programming problem (MPVC) with vanishing constraints.
It is a generalization of the almost linearizing cone introduced by Preda and Chitescu (1999) for a directionally differentiable multiobjective optimization problem with inequality constraints only.

Definition 3.3 The almost linearizing cone $L(\Omega, \bar{x})$ to the set $\Omega$ at $\bar{x} \in \Omega$ is defined by
$$L(\Omega, \bar{x}) = \{ v \in \mathbb{R}^n : f_i^{+}(\bar{x}; v) \leqq 0, \ \forall i \in I, \ g_j^{+}(\bar{x}; v) \leqq 0, \ j \in J(\bar{x}), \ h_s^{+}(\bar{x}; v) = 0, \ \forall s \in S, \ H_t^{+}(\bar{x}; v) \geqq 0, \ t \in T_0(\bar{x}), \ (H_t G_t)^{+}(\bar{x}; v) \leqq 0, \ t \in T_{HG}(\bar{x}) \}.$$

Now, we prove the result which gives the formulation of the almost linearizing cone to the sets $Q^l(\bar{x})$, $l = 1, \ldots, p$.

Proposition 3.4 Let $\bar{x} \in \Omega$ be a Pareto solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. Then, the almost linearizing cone to each set $Q^l(\bar{x})$, $l = 1, \ldots, p$, at $\bar{x}$, denoted by $L(Q^l(\bar{x}); \bar{x})$, is given by
$$L(Q^l(\bar{x}); \bar{x}) := \{ v \in \mathbb{R}^n : f_i^{+}(\bar{x}; v) \leqq 0, \ \forall i \in I, \ i \neq l, \ g_j^{+}(\bar{x}; v) \leqq 0, \ \forall j \in J(\bar{x}), \ h_s^{+}(\bar{x}; v) = 0, \ \forall s \in S, \ H_t^{+}(\bar{x}; v) = 0, \ \forall t \in T_{0+}(\bar{x}), \ H_t^{+}(\bar{x}; v) \geqq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \ G_t^{+}(\bar{x}; v) \leqq 0, \ \forall t \in T_{+0}(\bar{x}) \}. \quad (5)$$

Proof Let us assume that $\bar{x} \in \Omega$ is a Pareto solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. Then, by the definitions of the almost linearizing cone and of the index sets, we get
$$L(Q^l(\bar{x}); \bar{x}) := \{ v \in \mathbb{R}^n : f_i^{+}(\bar{x}; v) \leqq 0, \ \forall i \in I, \ i \neq l, \ g_j^{+}(\bar{x}; v) \leqq 0, \ \forall j \in J(\bar{x}), \ h_s^{+}(\bar{x}; v) = 0, \ \forall s \in S, \ H_t^{+}(\bar{x}; v) \geqq 0, \ \forall t \in T_0(\bar{x}), \ (H_t G_t)^{+}(\bar{x}; v) \leqq 0, \ \forall t \in T_0(\bar{x}) \cup T_{+0}(\bar{x}) \}. \quad (6)$$
Note that, by Lemma 2.7, one has
$$(H_t G_t)^{+}(\bar{x}; v) = G_t(\bar{x}) H_t^{+}(\bar{x}; v) + H_t(\bar{x}) G_t^{+}(\bar{x}; v). \quad (7)$$
Then, by the definition of the index sets, (7) gives
$$(H_t G_t)^{+}(\bar{x}; v) = \begin{cases} G_t(\bar{x}) H_t^{+}(\bar{x}; v) & \text{if } t \in T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}), \\ 0 & \text{if } t \in T_{00}(\bar{x}), \\ H_t(\bar{x}) G_t^{+}(\bar{x}; v) & \text{if } t \in T_{+0}(\bar{x}). \end{cases} \quad (8)$$
Combining (6)-(8), we get (5). This completes the proof of this proposition.
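The case analysis in (8) is driven entirely by the partition of $T$ into the five index subsets introduced above. A small computational sketch of that partition, with illustrative values of $H_t(\bar{x})$, $G_t(\bar{x})$ (not the paper's example):

```python
# Sketch: partition of T into T_{+0}, T_{+-}, T_{0+}, T_{00}, T_{0-} from the
# values H_t(xbar), G_t(xbar).  (Feasibility forces G_t(xbar) <= 0 whenever
# H_t(xbar) > 0, so the "+" branch only splits on G = 0 versus G < 0.)

TOL = 1e-9

def partition(Hvals, Gvals):
    sets = {"+0": [], "+-": [], "0+": [], "00": [], "0-": []}
    for t, (Ht, Gt) in enumerate(zip(Hvals, Gvals)):
        if Ht > TOL:                                  # t in T_{+}(xbar)
            sets["+0" if abs(Gt) <= TOL else "+-"].append(t)
        elif Gt > TOL:                                # t in T_0(xbar)
            sets["0+"].append(t)
        elif Gt < -TOL:
            sets["0-"].append(t)
        else:
            sets["00"].append(t)
    return sets

# H(xbar) = (1, 1, 0, 0, 0), G(xbar) = (0, -2, 3, 0, -1):
p = partition([1.0, 1.0, 0.0, 0.0, 0.0], [0.0, -2.0, 3.0, 0.0, -1.0])
print(p)   # {'+0': [0], '+-': [1], '0+': [2], '00': [3], '0-': [4]}
```

With this partition, the three branches of (8) are simply the cases `"0+"`/`"0-"`, `"00"`, and `"+0"`.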
Remark 3.5 Note that the almost linearizing cone to $Q(\bar{x})$ at $\bar{x} \in Q(\bar{x})$ is given by
$$L(Q(\bar{x}); \bar{x}) = \bigcap_{l=1}^{p} L(Q^l(\bar{x}); \bar{x}). \quad (9)$$
Indeed, by (5), we get (9). In other words, the formulation of $L(Q(\bar{x}); \bar{x})$ is given by
$$L(Q(\bar{x}); \bar{x}) := \{ v \in \mathbb{R}^n : f_i^{+}(\bar{x}; v) \leqq 0, \ \forall i \in I, \ g_j^{+}(\bar{x}; v) \leqq 0, \ \forall j \in J(\bar{x}), \ h_s^{+}(\bar{x}; v) = 0, \ \forall s \in S, \ H_t^{+}(\bar{x}; v) = 0, \ \forall t \in T_{0+}(\bar{x}), \ H_t^{+}(\bar{x}; v) \geqq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \ G_t^{+}(\bar{x}; v) \leqq 0, \ \forall t \in T_{+0}(\bar{x}) \}. \quad (10)$$

Proposition 3.6 If $f_i^{+}(\bar{x}; \cdot)$, $i \in I$, $g_j^{+}(\bar{x}; \cdot)$, $j \in J(\bar{x})$, $h_s^{+}(\bar{x}; \cdot)$, $s \in S$, $-H_t^{+}(\bar{x}; \cdot)$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x})$, $G_t^{+}(\bar{x}; \cdot)$, $t \in T_{+0}(\bar{x})$, are convex on $\mathbb{R}^n$, then $L(Q(\bar{x}); \bar{x})$ is a closed convex cone.

Proof Since the directional derivative is a positively homogeneous function, if $\alpha \geqq 0$ and $v \in L(Q(\bar{x}); \bar{x})$, one has $\alpha v \in L(Q(\bar{x}); \bar{x})$. This means that $L(Q(\bar{x}); \bar{x})$ is a cone. Now, we prove that it is a convex cone. Let $v_1, v_2 \in L(Q(\bar{x}); \bar{x})$ and $\alpha \in [0, 1]$. By the convexity assumption, it follows that
$$f_i^{+}(\bar{x}; \alpha v_1 + (1 - \alpha) v_2) \leqq \alpha f_i^{+}(\bar{x}; v_1) + (1 - \alpha) f_i^{+}(\bar{x}; v_2) \leqq 0, \quad i \in I,$$
$$g_j^{+}(\bar{x}; \alpha v_1 + (1 - \alpha) v_2) \leqq \alpha g_j^{+}(\bar{x}; v_1) + (1 - \alpha) g_j^{+}(\bar{x}; v_2) \leqq 0, \quad j \in J(\bar{x}),$$
$$h_s^{+}(\bar{x}; \alpha v_1 + (1 - \alpha) v_2) \leqq \alpha h_s^{+}(\bar{x}; v_1) + (1 - \alpha) h_s^{+}(\bar{x}; v_2) \leqq 0, \quad s \in S,$$
$$-H_t^{+}(\bar{x}; \alpha v_1 + (1 - \alpha) v_2) \leqq -\alpha H_t^{+}(\bar{x}; v_1) - (1 - \alpha) H_t^{+}(\bar{x}; v_2) \leqq 0, \quad t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x}),$$
$$G_t^{+}(\bar{x}; \alpha v_1 + (1 - \alpha) v_2) \leqq \alpha G_t^{+}(\bar{x}; v_1) + (1 - \alpha) G_t^{+}(\bar{x}; v_2) \leqq 0, \quad t \in T_{+0}(\bar{x}).$$
The above inequalities imply that $\alpha v_1 + (1 - \alpha) v_2 \in L(Q(\bar{x}); \bar{x})$, which means that $L(Q(\bar{x}); \bar{x})$ is a convex cone. Now, we prove the closedness of $L(Q(\bar{x}); \bar{x})$. In order to prove this property, we take a sequence $\{v_r\} \subset L(Q(\bar{x}); \bar{x})$ such that $v_r \rightarrow v$ as $r \rightarrow \infty$. Since $v_r \in L(Q(\bar{x}); \bar{x})$ for any integer $r$, by the continuity of the convex functions $f_i^{+}(\bar{x}; \cdot)$, $i \in I$, we have
$$\lim_{r \rightarrow \infty} f_i^{+}(\bar{x}; v_r) = f_i^{+}(\bar{x}; v) \leqq 0, \quad \forall i \in I.$$
Similarly, we obtain $g_j^{+}(\bar{x}; v) \leqq 0$, $j \in J(\bar{x})$, $h_s^{+}(\bar{x}; v) = 0$, $s \in S$, $H_t^{+}(\bar{x}; v) = 0$, $t \in T_{0+}(\bar{x})$, $H_t^{+}(\bar{x}; v) \geqq 0$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x})$, $G_t^{+}(\bar{x}; v) \leqq 0$, $t \in T_{+0}(\bar{x})$. This means that the set $L(Q(\bar{x}); \bar{x})$ is closed.

Remark 3.7 Based on the result established in the above proposition, we conclude that $L(Q^l(\bar{x}); \bar{x})$, $l = 1, \ldots, p$, are also closed convex cones.

Proposition 3.8 If, for each $v \in Z(Q(\bar{x}); \bar{x})$, the Dini directional derivatives $f_i^{+}(\bar{x}; v)$, $i \in I$, $g_j^{+}(\bar{x}; v)$, $j \in J(\bar{x})$, $h_s^{+}(\bar{x}; v)$, $s \in S$, $H_t^{+}(\bar{x}; v)$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x})$, $G_t^{+}(\bar{x}; v)$, $t \in T_{+0}(\bar{x})$, exist, then
$$\bigcap_{l=1}^{p} Z(Q^l(\bar{x}); \bar{x}) \subset L(Q(\bar{x}); \bar{x}). \quad (11)$$

Proof Firstly, we prove that, for each $l = 1, \ldots, p$,
$$Z(Q^l(\bar{x}); \bar{x}) \subset L(Q^l(\bar{x}); \bar{x}). \quad (12)$$
Therefore, for each $l = 1, \ldots, p$, we take $v \in Z(Q^l(\bar{x}); \bar{x})$. Then, by Definition 2.9, there exists $(\alpha_k) \subset \mathbb{R}_{+}$, $\alpha_k \downarrow 0$, such that $\bar{x} + \alpha_k v \in Q^l(\bar{x})$ for all $k \in \mathbb{N}$. Therefore, for each $l = 1, \ldots, p$, since $\bar{x} + \alpha_k v \in Q^l(\bar{x})$, we have
$$f_i(\bar{x} + \alpha_k v) \leqq f_i(\bar{x}), \quad \forall i = 1, \ldots, p, \ i \neq l,$$
$$g_j(\bar{x} + \alpha_k v) \leqq 0 = g_j(\bar{x}), \quad \forall j \in J(\bar{x}),$$
$$h_s(\bar{x} + \alpha_k v) = 0 = h_s(\bar{x}), \quad \forall s \in S,$$
$$H_t(\bar{x} + \alpha_k v) \geqq 0 = H_t(\bar{x}), \quad \forall t \in T_0(\bar{x}),$$
$$H_t(\bar{x} + \alpha_k v) G_t(\bar{x} + \alpha_k v) \leqq 0 = H_t(\bar{x}) G_t(\bar{x}), \quad \forall t \in T_{HG}(\bar{x}).$$
Then, by Definition 2.5, we have
$$f_i^{+}(\bar{x}; v) = \lim_{\alpha_k \downarrow 0} \frac{f_i(\bar{x} + \alpha_k v) - f_i(\bar{x})}{\alpha_k} \leqq 0, \quad \forall i = 1, \ldots, p, \ i \neq l, \quad (13)$$
$$g_j^{+}(\bar{x}; v) = \lim_{\alpha_k \downarrow 0} \frac{g_j(\bar{x} + \alpha_k v) - g_j(\bar{x})}{\alpha_k} \leqq 0, \quad \forall j \in J(\bar{x}), \quad (14)$$
$$h_s^{+}(\bar{x}; v) = \lim_{\alpha_k \downarrow 0} \frac{h_s(\bar{x} + \alpha_k v) - h_s(\bar{x})}{\alpha_k} = 0, \quad \forall s \in S, \quad (15)$$
$$H_t^{+}(\bar{x}; v) = \lim_{\alpha_k \downarrow 0} \frac{H_t(\bar{x} + \alpha_k v) - H_t(\bar{x})}{\alpha_k} \geqq 0, \quad \forall t \in T_0(\bar{x}), \quad (16)$$
$$(G_t H_t)^{+}(\bar{x}; v) = \lim_{\alpha_k \downarrow 0} \frac{(G_t H_t)(\bar{x} + \alpha_k v) - (G_t H_t)(\bar{x})}{\alpha_k} \leqq 0, \quad \forall t \in T_{HG}(\bar{x}). \quad (17)$$
By Lemma 2.7, one has
$$(H_t G_t)^{+}(\bar{x}; v) = \begin{cases} G_t(\bar{x}) H_t^{+}(\bar{x}; v) & \text{if } t \in T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}), \\ 0 & \text{if } t \in T_{00}(\bar{x}), \\ H_t(\bar{x}) G_t^{+}(\bar{x}; v) & \text{if } t \in T_{+0}(\bar{x}). \end{cases} \quad (18)$$
Thus, (16)-(18) yield
$$H_t^{+}(\bar{x}; v) = 0, \quad \forall t \in T_{0+}(\bar{x}), \quad (19)$$
$$H_t^{+}(\bar{x}; v) \geqq 0, \quad \forall t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \quad (20)$$
$$G_t^{+}(\bar{x}; v) \leqq 0, \quad \forall t \in T_{+0}(\bar{x}). \quad (21)$$
Hence, we conclude by (13)-(15) and (19)-(21) that $v \in L(Q^l(\bar{x}); \bar{x})$ for each $l = 1, \ldots, p$. Therefore, since we have shown that $Z(Q^l(\bar{x}); \bar{x}) \subset L(Q^l(\bar{x}); \bar{x})$ for each $l = 1, \ldots, p$, we have by (9) that
$$\bigcap_{l=1}^{p} Z(Q^l(\bar{x}); \bar{x}) \subset \bigcap_{l=1}^{p} L(Q^l(\bar{x}); \bar{x}) = L(Q(\bar{x}); \bar{x}),$$
as was to be shown.

Note that, in general, the converse inclusion of (11) does not hold. Therefore, in order to prove the necessary optimality condition for efficiency in (MPVC), we give the definition of the Abadie constraint qualification.

Definition 3.9 It is said that the Abadie constraint qualification holds at $\bar{x} \in \Omega$ for (MPVC) iff
$$L(Q(\bar{x}); \bar{x}) \subset \bigcap_{l=1}^{p} Z(Q^l(\bar{x}); \bar{x}). \quad (22)$$

Remark 3.10 By (11), (22) means that the Abadie constraint qualification (ACQ) holds at $\bar{x}$ for (MPVC) iff
$$L(Q(\bar{x}); \bar{x}) = \bigcap_{l=1}^{p} Z(Q^l(\bar{x}); \bar{x}).$$

Now, we state a necessary condition for efficiency in (MPVC).

Theorem 3.11 Let $\bar{x} \in \Omega$ be an efficient solution in (MPVC) and, for each $v \in Z(C; \bar{x})$, let the directional derivatives $f_i^{+}(\bar{x}; v)$, $i = 1, \ldots, p$, $g_j^{+}(\bar{x}; v)$, $j \in J(\bar{x})$, $h_s^{+}(\bar{x}; v)$, $s \in S$, $H_t^{+}(\bar{x}; v)$, $t \in T_0(\bar{x})$, $G_t^{+}(\bar{x}; v)$, $t \in T_{+0}(\bar{x})$, exist. Further, we assume that $g_j$, $j \in J^{-}(\bar{x})$, $H_t$, $t \in T_{+}(\bar{x})$, $G_t$, $t \in T_{+-}(\bar{x})$, are continuous functions at $\bar{x}$. If the Abadie constraint qualification (ACQ) holds at $\bar{x}$ for (MPVC), then, for each $l = 1, \ldots, p$, the system
$$f_i^{+}(\bar{x}; v) \leqq 0, \ i = 1, \ldots, p, \ i \neq l, \quad f_l^{+}(\bar{x}; v) < 0, \quad (23)$$
$$g_j^{+}(\bar{x}; v) \leqq 0, \quad j \in J(\bar{x}), \quad (24)$$
$$h_s^{+}(\bar{x}; v) = 0, \quad s \in S, \quad (25)$$
$$-H_t^{+}(\bar{x}; v) \leqq 0, \quad t \in T_0(\bar{x}), \quad (26)$$
$$(H_t G_t)^{+}(\bar{x}; v) \leqq 0, \quad t \in T_{HG}(\bar{x}) \quad (27)$$
has no solution $v \in \mathbb{R}^n$.

Proof We proceed by contradiction.
Suppose, contrary to the result, that there exists $l_0 \in \{1, \ldots, p\}$ such that the system
$$f_i^{+}(\bar{x}; v) \leqq 0, \ i = 1, \ldots, p, \ i \neq l_0, \quad f_{l_0}^{+}(\bar{x}; v) < 0, \quad (28)$$
$$g_j^{+}(\bar{x}; v) \leqq 0, \quad j \in J(\bar{x}), \quad (29)$$
$$h_s^{+}(\bar{x}; v) = 0, \quad s \in S, \quad (30)$$
$$-H_t^{+}(\bar{x}; v) \leqq 0, \quad t \in T_0(\bar{x}), \quad (31)$$
$$(H_t G_t)^{+}(\bar{x}; v) \leqq 0, \quad t \in T_{HG}(\bar{x}) \quad (32)$$
has a solution $\tilde{v} \in \mathbb{R}^n$. Then, by (8), the system
$$f_i^{+}(\bar{x}; v) \leqq 0, \ i = 1, \ldots, p, \ i \neq l_0, \quad f_{l_0}^{+}(\bar{x}; v) < 0, \quad (33)$$
$$g_j^{+}(\bar{x}; v) \leqq 0, \quad \forall j \in J(\bar{x}), \quad (34)$$
$$h_s^{+}(\bar{x}; v) = 0, \quad \forall s \in S, \quad (35)$$
$$H_t^{+}(\bar{x}; v) = 0, \quad \forall t \in T_{0+}(\bar{x}), \quad (36)$$
$$H_t^{+}(\bar{x}; v) \geqq 0, \quad \forall t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \quad (37)$$
$$G_t^{+}(\bar{x}; v) \leqq 0, \quad \forall t \in T_{+0}(\bar{x}) \quad (38)$$
has a solution $\tilde{v} \in \mathbb{R}^n$. Hence, it is obvious that $\tilde{v} \in L(Q(\bar{x}); \bar{x})$. By assumption, (ACQ) is satisfied at $\bar{x}$ for (MPVC). Then, by Definition 3.9, $\tilde{v} \in \bigcap_{l=1}^{p} Z(Q^l(\bar{x}); \bar{x})$. Thus, $\tilde{v} \in Z(Q^{l_0}(\bar{x}); \bar{x})$. Therefore, by Definition 2.9, there exists $(\alpha_k) \subset \mathbb{R}_{+}$, $\alpha_k \downarrow 0$, such that $\bar{x} + \alpha_k \tilde{v} \in Q^{l_0}(\bar{x})$ for all $k \in \mathbb{N}$. Hence, $\bar{x} + \alpha_k \tilde{v} \in C$ and, moreover,
$$f_i(\bar{x} + \alpha_k \tilde{v}) \leqq f_i(\bar{x}), \quad \forall i = 1, \ldots, p, \ i \neq l_0, \quad (39)$$
$$g_j(\bar{x} + \alpha_k \tilde{v}) \leqq 0, \quad \forall j \in J(\bar{x}), \quad (40)$$
$$h_s(\bar{x} + \alpha_k \tilde{v}) = 0, \quad \forall s = 1, \ldots, q, \quad (41)$$
$$H_t(\bar{x} + \alpha_k \tilde{v}) = 0, \quad \forall t \in T_{0+}(\bar{x}), \quad (42)$$
$$H_t(\bar{x} + \alpha_k \tilde{v}) \geqq 0, \quad \forall t \in T_0(\bar{x}), \quad (43)$$
$$G_t(\bar{x} + \alpha_k \tilde{v}) \leqq 0, \quad \forall t \in T_{+0}(\bar{x}). \quad (44)$$
By the definition of the index sets, one has $g_j(\bar{x}) < 0$, $j \in J^{-}(\bar{x})$, $H_t(\bar{x}) > 0$, $t \in T_{+}(\bar{x})$, $G_t(\bar{x}) < 0$, $t \in T_{+-}(\bar{x})$. Therefore, by the continuity of $g_j$, $j \in J^{-}(\bar{x})$, $H_t$, $t \in T_{+}(\bar{x})$, $G_t$, $t \in T_{+-}(\bar{x})$, at $\bar{x}$, there exists $k_0 \in \mathbb{N}$ such that, for all $k > k_0$,
$$g_j(\bar{x} + \alpha_k \tilde{v}) \leqq 0, \quad \forall j \notin J(\bar{x}), \quad (45)$$
$$H_t(\bar{x} + \alpha_k \tilde{v}) \geqq 0, \quad \forall t \in T_{+}(\bar{x}), \quad (46)$$
$$G_t(\bar{x} + \alpha_k \tilde{v}) \leqq 0, \quad \forall t \in T_{+-}(\bar{x}). \quad (47)$$
Thus, we conclude by (40)-(47) that there exists $\delta > 0$ such that $\bar{x} + \alpha_k \tilde{v} \in \Omega \cap B(\bar{x}; \delta)$, where $B(\bar{x}; \delta)$ denotes the open ball of radius $\delta$ around $\bar{x}$. On the other hand, it follows from the assumption that $\bar{x} \in \Omega$ is an efficient solution in (MPVC).
Hence, by Definition 3.1, there exists a number $\delta > 0$ such that there is no $x \in \Omega \cap B(\bar{x}; \delta)$ satisfying
$$f_i(x) \leqq f_i(\bar{x}), \quad i = 1, \ldots, p, \quad (48)$$
$$f_i(x) < f_i(\bar{x}) \quad \text{for some } i \in \{1, \ldots, p\}. \quad (49)$$
Hence, since $\bar{x} + \alpha_k \tilde{v} \in \Omega \cap B(\bar{x}; \delta)$ and (39) holds, by (48) and (49), we conclude that, for all $k \in \mathbb{N}$, the inequality
$$f_{l_0}(\bar{x} + \alpha_k \tilde{v}) > f_{l_0}(\bar{x})$$
holds. Then, by Definition 2.5, the inequality above implies that the inequality $f_{l_0}^{+}(\bar{x}; \tilde{v}) \geqq 0$ holds, which is a contradiction to (28). Hence, the proof of this theorem is completed.

Remark 3.12 As follows from the proof of Theorem 3.11, if the system (23)-(27) has no solution $v \in \mathbb{R}^n$, then, for each $l = 1, \ldots, p$, the system
$$f_i^{+}(\bar{x}; v) \leqq 0, \ i = 1, \ldots, p, \ i \neq l, \quad f_l^{+}(\bar{x}; v) < 0, \quad (50)$$
$$g_j^{+}(\bar{x}; v) \leqq 0, \quad \forall j \in J(\bar{x}), \quad (51)$$
$$h_s^{+}(\bar{x}; v) = 0, \quad \forall s \in S, \quad (52)$$
$$H_t^{+}(\bar{x}; v) = 0, \quad \forall t \in T_{0+}(\bar{x}), \quad (53)$$
$$H_t^{+}(\bar{x}; v) \geqq 0, \quad \forall t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \quad (54)$$
$$G_t^{+}(\bar{x}; v) \leqq 0, \quad \forall t \in T_{+0}(\bar{x}) \quad (55)$$
has no solution $v \in \mathbb{R}^n$.

Let us define the functions $F = (F_1, \ldots, F_p) : \mathbb{R}^n \rightarrow \mathbb{R}^p$, $\Phi = (\Phi_1, \ldots, \Phi_{|J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + |T_{+0}(\bar{x})|}) : \mathbb{R}^n \rightarrow \mathbb{R}^{|J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + |T_{+0}(\bar{x})|}$ and $\Psi = (\Psi_1, \ldots, \Psi_{q + |T_{0+}(\bar{x})|}) : \mathbb{R}^n \rightarrow \mathbb{R}^{q + |T_{0+}(\bar{x})|}$ as follows:
$$F_i(v) := f_i^{+}(\bar{x}; v), \quad i \in I, \quad (56)$$
$$\Phi_{\alpha}(v) := \begin{cases} g_l^{+}(\bar{x}; v) & \text{for } l \in J(\bar{x}), \ \alpha = 1, \ldots, |J(\bar{x})|, \\ -H_l^{+}(\bar{x}; v) & \text{for } l \in T_{00}(\bar{x}), \ \alpha = |J(\bar{x})| + 1, \ldots, |J(\bar{x})| + |T_{00}(\bar{x})|, \\ -H_l^{+}(\bar{x}; v) & \text{for } l \in T_{0-}(\bar{x}), \ \alpha = |J(\bar{x})| + |T_{00}(\bar{x})| + 1, \ldots, |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})|, \\ G_l^{+}(\bar{x}; v) & \text{for } l \in T_{+0}(\bar{x}), \ \alpha = |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + 1, \ldots, |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + |T_{+0}(\bar{x})|, \end{cases} \quad (57)$$
$$\Psi_{\beta}(v) := \begin{cases} h_l^{+}(\bar{x}; v) & \text{for } l = 1, \ldots, q, \ \beta = 1, \ldots, q, \\ H_l^{+}(\bar{x}; v) & \text{for } l \in T_{0+}(\bar{x}), \ \beta = q + 1, \ldots, q + |T_{0+}(\bar{x})|. \end{cases} \quad (58)$$

We are now in a position to formulate the Karush–Kuhn–Tucker necessary optimality conditions for a feasible solution $\bar{x}$ to be an efficient solution in (MPVC) under the Abadie constraint qualification (ACQ).
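The stacking in (57) is pure bookkeeping: the components of $\Phi$ are, in order, $g_j^{+}$ for $j \in J(\bar{x})$, then $-H_t^{+}$ for $t \in T_{00}(\bar{x})$ and $T_{0-}(\bar{x})$, then $G_t^{+}$ for $t \in T_{+0}(\bar{x})$. A minimal sketch of this construction, with each directional derivative represented as a function of the direction $v$ (the data are illustrative only):

```python
# Sketch of the stacking in (57): Phi lists g_j^+ (j in J(xbar)), then
# -H_t^+ (t in T_00 and T_{0-}), then G_t^+ (t in T_{+0}), in that order.
# The concrete functions and index sets are illustrative only.

def build_Phi(gplus, Hplus, Gplus, J_act, T_00, T_0m, T_p0):
    """Return the ordered list of component functions Phi_alpha of (57)."""
    comps = [gplus[j] for j in J_act]
    comps += [lambda v, t=t: -Hplus[t](v) for t in T_00 + T_0m]
    comps += [Gplus[t] for t in T_p0]
    return comps

# Two inequality constraints (the first active) and two vanishing constraints
# with T_00 = [0] and T_{+0} = [1]:
gplus = [lambda v: v[0], lambda v: -v[1]]
Hplus = [lambda v: v[1], lambda v: v[0] + v[1]]
Gplus = [lambda v: v[0] - v[1], lambda v: v[0]]

Phi = build_Phi(gplus, Hplus, Gplus, J_act=[0], T_00=[0], T_0m=[], T_p0=[1])
print([c((1.0, 2.0)) for c in Phi])   # [1.0, -2.0, 1.0]
```

$\Psi$ of (58) would be assembled the same way from $h_s^{+}$ and $H_t^{+}$, $t \in T_{0+}(\bar{x})$.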
Theorem 3.13 (Karush–Kuhn–Tucker type necessary optimality conditions) Let $\bar{x} \in \Omega$ be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. We also assume that $f_i$, $i \in I$, $g_j$, $j \in J$, $h_s$, $s \in S$, $H_t$, $t \in T$, $G_t$, $t \in T$, are directionally differentiable functions at $\bar{x}$; that $f_i^+(\bar{x}; \cdot)$, $i \in I$, $g_j^+(\bar{x}; \cdot)$, $j \in J(\bar{x})$, $-H_t^+(\bar{x}; \cdot)$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x})$, $G_t^+(\bar{x}; \cdot)$, $t \in T_{+0}(\bar{x})$, are convex functions; that $h_s^+(\bar{x}; \cdot)$, $s \in S$, $H_t^+(\bar{x}; \cdot)$, $t \in T_{0+}(\bar{x})$, are linear functions; that $g_j$, $j \notin J(\bar{x})$, $H_t$, $t \in T_+(\bar{x})$, $G_t$, $t \in T_0(\bar{x}) \cup T_{+-}(\bar{x})$, are continuous functions at $\bar{x}$; and, moreover, that the Abadie constraint qualification (ACQ) is satisfied at $\bar{x}$ for (MPVC). If there exists $v_0 \in \operatorname{relint} Z(C; \bar{x})$ such that $\Phi(v_0) < 0$ and $\Psi(v_0) = 0$, then there exist Lagrange multipliers $\bar{\lambda} \in \mathbb{R}^p$, $\bar{\mu} \in \mathbb{R}^m$, $\bar{\xi} \in \mathbb{R}^q$, $\bar{\vartheta}^H \in \mathbb{R}^r$ and $\bar{\vartheta}^G \in \mathbb{R}^r$ such that the following conditions
$$\sum_{i=1}^{p} \bar{\lambda}_i f_i^+(\bar{x}; v) + \sum_{j=1}^{m} \bar{\mu}_j g_j^+(\bar{x}; v) + \sum_{s=1}^{q} \bar{\xi}_s h_s^+(\bar{x}; v) - \sum_{t=1}^{r} \bar{\vartheta}_t^H H_t^+(\bar{x}; v) + \sum_{t=1}^{r} \bar{\vartheta}_t^G G_t^+(\bar{x}; v) \geq 0, \quad \forall v \in Z(C; \bar{x}), \tag{59}$$
$$\bar{\mu}_j g_j(\bar{x}) = 0, \quad j \in J, \tag{60}$$
$$\bar{\vartheta}_t^H H_t(\bar{x}) = 0, \quad t \in T, \tag{61}$$
$$\bar{\vartheta}_t^G G_t(\bar{x}) = 0, \quad t \in T, \tag{62}$$
$$\bar{\lambda} \geq 0, \quad \bar{\mu} \geq 0, \tag{63}$$
$$\bar{\vartheta}_t^H = 0, \ t \in T_+(\bar{x}), \quad \bar{\vartheta}_t^H \geq 0, \ t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \quad \bar{\vartheta}_t^H \ \text{free}, \ t \in T_{0+}(\bar{x}), \tag{64}$$
$$\bar{\vartheta}_t^G = 0, \ t \in T_{00}(\bar{x}) \cup T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x}), \quad \bar{\vartheta}_t^G \geq 0, \ t \in T_{+0}(\bar{x}) \tag{65}$$
hold.

Proof Let $\bar{x} \in \Omega$ be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. Since (ACQ) is satisfied at $\bar{x}$ for (MPVC), by Remark 3.12, the system (50)-(55) has no solution $v \in \mathbb{R}^n$. By (56)-(58), it follows that the system
$$\begin{cases} F_i(v) < 0, & i = 1, \dots, p, \\ \Phi_\alpha(v) \leq 0, & \alpha = 1, \dots, |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + |T_{+0}(\bar{x})|, \\ \Psi_\beta(v) = 0, & \beta = 1, \dots, q + |T_{0+}(\bar{x})| \end{cases}$$
admits no solution. Then, by Theorem 2.8, there exists a vector $(\bar{\lambda}, \bar{\theta}, \bar{\beta}) \in \mathbb{R}^p_+ \times \mathbb{R}^{|J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + |T_{+0}(\bar{x})|}_+ \times \mathbb{R}^{q + |T_{0+}(\bar{x})|}$, $\bar{\lambda} \neq 0$, such that
$$\bar{\lambda}^T F(v) + \bar{\theta}^T \Phi(v) + \bar{\beta}^T \Psi(v) \geq 0, \quad \forall v \in \mathbb{R}^n.$$
Hence, by (56)-(58), one has
$$\sum_{i=1}^{p} \bar{\lambda}_i f_i^+(\bar{x}; v) + \sum_{j \in J(\bar{x})} \bar{\theta}_j g_j^+(\bar{x}; v) + \sum_{s=1}^{q} \bar{\beta}_s h_s^+(\bar{x}; v) - \sum_{t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x})} \bar{\theta}_t H_t^+(\bar{x}; v) + \sum_{t \in T_{0+}(\bar{x})} \bar{\beta}_t H_t^+(\bar{x}; v) + \sum_{t \in T_{+0}(\bar{x})} \bar{\theta}_t G_t^+(\bar{x}; v) \geq 0, \quad \forall v \in X. \tag{66}$$
Let us set
$$\bar{\mu}_j = \begin{cases} \bar{\theta}_\alpha & \text{if } j \in J(\bar{x}),\ \alpha = 1, \dots, |J(\bar{x})|, \\ 0 & \text{if } j \notin J(\bar{x}), \end{cases} \tag{67}$$
$$\bar{\xi}_s = \bar{\beta}_s, \quad s = 1, \dots, q, \tag{68}$$
$$\bar{\vartheta}_t^H = \begin{cases} \bar{\theta}_\alpha & \text{if } t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}),\ \alpha = |J(\bar{x})| + 1, \dots, |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})|, \\ \bar{\beta}_\beta & \text{if } t \in T_{0+}(\bar{x}),\ \beta = q + 1, \dots, q + |T_{0+}(\bar{x})|, \\ 0 & \text{if } t \in T_+(\bar{x}), \end{cases} \tag{69}$$
$$\bar{\vartheta}_t^G = \begin{cases} \bar{\theta}_\alpha & \text{if } t \in T_{+0}(\bar{x}),\ \alpha = |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + 1, \dots, |J(\bar{x})| + |T_{00}(\bar{x})| + |T_{0-}(\bar{x})| + |T_{+0}(\bar{x})|, \\ 0 & \text{if } t \in T_{00}(\bar{x}) \cup T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x}). \end{cases} \tag{70}$$
If we use (67)-(70) in (66), then we get the Karush–Kuhn–Tucker optimality condition (59). Moreover, note that (67)-(70) imply the Karush–Kuhn–Tucker optimality conditions (60)-(65). Hence, the proof of this theorem is finished.

Note that, in general, the Abadie constraint qualification may not be fulfilled at an efficient solution in (MPVC) if $T_0(\bar{x}) \neq \emptyset$. Based on the definition of the index sets, we substitute the constraint
$$H_t(x) G_t(x) \leq 0, \quad t \in T,$$
by the constraints
$$H_t(x) = 0, \ G_t(x) \leq 0, \quad t \in T_{0+}(\bar{x}),$$
$$H_t(x) \geq 0, \ G_t(x) \leq 0, \quad t \in T_{00}(\bar{x}) \cup T_{+0}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x}),$$
in which the index sets depend on $\bar{x}$. Then, we define the following vector optimization problem derived from (MPVC), some of the constraints of which depend on the optimal point $\bar{x}$:

V-minimize $f(x) := (f_1(x), \dots, f_p(x))$
subject to $g_j(x) \leq 0$, $j = 1, \dots, m$,
$h_s(x) = 0$, $s = 1, \dots, q$,
$H_t(x) \geq 0$, $t = 1, \dots, r$,   (MP($\bar{x}$))
$H_t(x) = 0$, $G_t(x) \leq 0$, $t \in T_{0+}(\bar{x})$,
$H_t(x) \geq 0$, $G_t(x) \leq 0$, $t \in T_{00}(\bar{x}) \cup T_{+0}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x})$,
$x \in C$.
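The constraints of (MP($\bar{x}$)) are assembled from the decomposition of $T$ into the index sets $T_+$, $T_0$ and their refinements $T_{+0}$, $T_{+-}$, $T_{0+}$, $T_{00}$, $T_{0-}$ at the point $\bar{x}$. The following Python sketch (our own illustration, not part of the paper; function and key names are ours) classifies these sets from the values $H_t(\bar{x})$, $G_t(\bar{x})$:

```python
# Illustrative sketch (not from the paper): classify the vanishing-constraint
# index sets of (MPVC) at a feasible point from the values H_t(x), G_t(x).
def index_sets(H, G, tol=1e-12):
    """H, G: constraint values H_t(x), G_t(x), t = 1, ..., r (0-based here).
    Returns the decomposition T_+, T_0 and its refinements."""
    sets = {"T+": [], "T0": [], "T+0": [], "T+-": [], "T0+": [], "T00": [], "T0-": []}
    for t, (h, g) in enumerate(zip(H, G)):
        if h > tol:                      # H_t(x) > 0
            sets["T+"].append(t)
            # feasibility H_t G_t <= 0 forces G_t <= 0 here
            sets["T+0" if abs(g) <= tol else "T+-"].append(t)
        else:                            # H_t(x) = 0 (H_t < 0 is infeasible)
            sets["T0"].append(t)
            if g > tol:
                sets["T0+"].append(t)
            elif g < -tol:
                sets["T0-"].append(t)
            else:
                sets["T00"].append(t)
    return sets

# A point with H(x) = (2, 0, 0, 1) and G(x) = (0, 0, 3, -1):
s = index_sets([2.0, 0.0, 0.0, 1.0], [0.0, 0.0, 3.0, -1.0])
```

For this sample point, `s["T+0"] == [0]`, `s["T00"] == [1]`, `s["T0+"] == [2]` and `s["T+-"] == [3]`, so the constraint $H_3(x) = 0$, $G_3(x) \leq 0$ (index 2 above) would be imposed via $T_{0+}(\bar{x})$ in (MP($\bar{x}$)).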
In order to introduce the modified Abadie constraint qualification, for $\bar{x} \in \Omega$, we define the sets $\widetilde{Q}^l(\bar{x})$, $l = 1, \dots, p$, and $\widetilde{Q}(\bar{x})$ as follows:
$$\widetilde{Q}^l(\bar{x}) := \left\{x \in C : \begin{array}{l} f_i(x) \leq f_i(\bar{x}), \ \forall i = 1, \dots, p,\ i \neq l, \\ g_j(x) \leq 0, \ \forall j = 1, \dots, m, \\ h_s(x) = 0, \ \forall s = 1, \dots, q, \\ H_t(x) = 0, \ G_t(x) \leq 0, \ \forall t \in T_{0+}(\bar{x}), \\ H_t(x) \geq 0, \ G_t(x) \leq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{+0}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x}) \end{array}\right\},$$
$$\widetilde{Q}(\bar{x}) := \left\{x \in C : \begin{array}{l} f_i(x) \leq f_i(\bar{x}), \ \forall i = 1, \dots, p, \\ g_j(x) \leq 0, \ \forall j = 1, \dots, m, \\ h_s(x) = 0, \ \forall s = 1, \dots, q, \\ H_t(x) = 0, \ G_t(x) \leq 0, \ \forall t \in T_{0+}(\bar{x}), \\ H_t(x) \geq 0, \ G_t(x) \leq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{+0}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x}) \end{array}\right\}.$$
Then, the almost linearizing cone for the sets $\widetilde{Q}^l(\bar{x})$ is defined by
$$L(\widetilde{Q}^l(\bar{x}); \bar{x}) := \left\{v \in \mathbb{R}^n : \begin{array}{l} f_i^+(\bar{x}; v) \leq 0, \ \forall i \in I,\ i \neq l, \quad g_j^+(\bar{x}; v) \leq 0, \ \forall j \in J(\bar{x}), \quad h_s^+(\bar{x}; v) = 0, \ \forall s \in S, \\ H_t^+(\bar{x}; v) = 0, \ \forall t \in T_{0+}(\bar{x}), \quad H_t^+(\bar{x}; v) \geq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \\ G_t^+(\bar{x}; v) \leq 0, \ \forall t \in T_{00}(\bar{x}) \cup T_{+0}(\bar{x}) \end{array}\right\}. \tag{71}$$
Hence, the almost linearizing cone for the set $\widetilde{Q}(\bar{x})$ is given as follows:
$$L(\widetilde{Q}(\bar{x}); \bar{x}) = \bigcap_{l=1}^{p} L(\widetilde{Q}^l(\bar{x}); \bar{x}). \tag{72}$$

Remark 3.14 Note that the only difference between $L(Q(\bar{x}); \bar{x})$ and $L(\widetilde{Q}(\bar{x}); \bar{x})$ is that we add the inequality $G_t^+(\bar{x}; v) \leq 0$, $\forall t \in T_{00}(\bar{x})$, in $L(\widetilde{Q}(\bar{x}); \bar{x})$ in comparison to $L(Q(\bar{x}); \bar{x})$. In particular, we always have the relation
$$L(\widetilde{Q}(\bar{x}); \bar{x}) \subset L(Q(\bar{x}); \bar{x}). \tag{73}$$

Proposition 3.15 Let $\bar{x}$ be a feasible solution in (MPVC). Then
$$\bigcap_{l=1}^{p} Z(\widetilde{Q}^l(\bar{x}); \bar{x}) \subset L(Q(\bar{x}); \bar{x}). \tag{74}$$

Proof By Proposition 3.8, it follows that
$$\bigcap_{l=1}^{p} Z(Q^l(\bar{x}); \bar{x}) \subset L(Q(\bar{x}); \bar{x}). \tag{75}$$
Moreover, as it follows from the proof of Proposition 3.8, one has
$$Z(\widetilde{Q}^l(\bar{x}); \bar{x}) \subset L(\widetilde{Q}^l(\bar{x}); \bar{x}), \quad \forall l = 1, \dots, p. \tag{76}$$
Thus, (76) and (72) yield
$$\bigcap_{l=1}^{p} Z(\widetilde{Q}^l(\bar{x}); \bar{x}) \subset \bigcap_{l=1}^{p} L(\widetilde{Q}^l(\bar{x}); \bar{x}) = L(\widetilde{Q}(\bar{x}); \bar{x}). \tag{77}$$
Since $\widetilde{Q}^l(\bar{x}) \subseteq Q^l(\bar{x})$, $l = 1, \dots, p$, one has
$$Z(\widetilde{Q}^l(\bar{x}); \bar{x}) \subseteq Z(Q^l(\bar{x}); \bar{x}), \quad \forall l = 1, \dots, p, \tag{78}$$
$$L(\widetilde{Q}(\bar{x}); \bar{x}) \subseteq L(Q(\bar{x}); \bar{x}). \tag{79}$$
Combining (75)–(79), we get (74).
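The cones $Z(\cdot\,; \bar{x})$ appearing above are defined sequentially (Definition 2.9): $v$ belongs to the cone of attainable directions of a set $Q$ at $\bar{x}$ when $\bar{x} + \alpha_k v \in Q$ along some sequence $\alpha_k \downarrow 0$. The following Python sketch (our own numerical proxy, not from the paper; the toy set and all names are ours) tests this membership condition by sampling step lengths:

```python
# Hedged numerical sketch of Definition 2.9: v is (roughly) an attainable
# direction of Q at xbar when xbar + a_k v stays in Q for a_k tending to 0.
# Here Q is the toy set {x in R^2 : x2 >= 0, x2(-x1 - x2) <= 0} (our choice).
def in_Q(x, tol=1e-12):
    x1, x2 = x
    return x2 >= -tol and x2 * (-x1 - x2) <= tol

def attainable(v, xbar=(0.0, 0.0), steps=12):
    """Check xbar + a v in Q along a = 1/2, 1/4, ... — a finite proxy only."""
    return all(
        in_Q((xbar[0] + a * v[0], xbar[1] + a * v[1]))
        for a in (0.5 ** k for k in range(1, steps + 1))
    )

# The direction (-1, 1) stays feasible for every sampled step length, while
# (-2, 1) violates x2(-x1 - x2) <= 0 for all small a > 0:
print(attainable((-1.0, 1.0)), attainable((-2.0, 1.0)))  # prints: True False
```

This finite sampling can only falsify membership or give evidence for it; the cone itself is defined by the existence of a genuine sequence $\alpha_k \downarrow 0$.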
Now, we are ready to introduce the modified Abadie constraint qualification, which we name the VC-Abadie constraint qualification.

Definition 3.16 Let $\bar{x} \in \Omega$ be an efficient solution in (MPVC). Then, the VC-Abadie constraint qualification (VC-ACQ) holds at $\bar{x}$ for (MPVC) iff
$$L(\widetilde{Q}(\bar{x}); \bar{x}) \subseteq \bigcap_{l=1}^{p} Z(\widetilde{Q}^l(\bar{x}); \bar{x}). \tag{80}$$

Now, we define the Abadie constraint qualification for (MP($\bar{x}$)) and we show that the VC-Abadie constraint qualification (VC-ACQ) then holds at $\bar{x}$ for (MPVC), even in a case in which the Abadie constraint qualification (ACQ) is not satisfied.

Definition 3.17 Let $\bar{x} \in \Omega$ be a (weakly) efficient solution in (MPVC). Then, the modified Abadie constraint qualification (MACQ) holds at $\bar{x}$ for (MP($\bar{x}$)) iff
$$L(Q(\bar{x}); \bar{x}) \subseteq \bigcap_{l=1}^{p} Z(\widetilde{Q}^l(\bar{x}); \bar{x}). \tag{81}$$

We now give the sufficient condition for the VC-Abadie constraint qualification to be satisfied at an efficient solution in (MPVC).

Lemma 3.18 Let $\bar{x} \in \Omega$ be an efficient solution in (MPVC). If the modified Abadie constraint qualification (MACQ) holds at $\bar{x}$ for (MP($\bar{x}$)), then the VC-Abadie constraint qualification (VC-ACQ) holds at $\bar{x}$ for (MPVC).

Proof Assume that $\bar{x} \in \Omega$ is an efficient solution in (MPVC) and, moreover, that the modified Abadie constraint qualification (MACQ) holds at $\bar{x}$ for (MP($\bar{x}$)). Then, by Definition 3.17, it follows that
$$L(Q(\bar{x}); \bar{x}) \subseteq \bigcap_{l=1}^{p} Z(\widetilde{Q}^l(\bar{x}); \bar{x}). \tag{82}$$
Since $\widetilde{Q}^l(\bar{x}) \subseteq Q^l(\bar{x})$, $l = 1, \dots, p$, we have that
$$Z(\widetilde{Q}^l(\bar{x}); \bar{x}) \subseteq Z(Q^l(\bar{x}); \bar{x}), \quad l = 1, \dots, p, \tag{83}$$
$$L(\widetilde{Q}^l(\bar{x}); \bar{x}) \subseteq L(Q^l(\bar{x}); \bar{x}), \quad l = 1, \dots, p. \tag{84}$$
Hence, (84) implies
$$L(\widetilde{Q}(\bar{x}); \bar{x}) = \bigcap_{l=1}^{p} L(\widetilde{Q}^l(\bar{x}); \bar{x}) \subseteq \bigcap_{l=1}^{p} L(Q^l(\bar{x}); \bar{x}) = L(Q(\bar{x}); \bar{x}). \tag{85}$$
Then, (83) gives
$$\bigcap_{l=1}^{p} Z(\widetilde{Q}^l(\bar{x}); \bar{x}) \subseteq \bigcap_{l=1}^{p} Z(Q^l(\bar{x}); \bar{x}). \tag{86}$$
Thus, by (82), (85) and (86), we get
$$L(\widetilde{Q}(\bar{x}); \bar{x}) \subseteq \bigcap_{l=1}^{p} Z(\widetilde{Q}^l(\bar{x}); \bar{x}),$$
as was to be shown.

Since the VC-Abadie constraint qualification (VC-ACQ) is weaker than the Abadie constraint qualification (ACQ), the necessary optimality conditions (59)-(65) may not hold.
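The constraint qualifications and the optimality conditions that follow are all expressed through one-sided directional derivatives $f^+(\bar{x}; v)$. As a quick numerical illustration (our own sketch, not part of the paper; the test function is our choice), for a convex function the derivative $f^+(\bar{x}; v) = \lim_{t \downarrow 0} (f(\bar{x} + tv) - f(\bar{x}))/t$ can be approximated by a difference quotient:

```python
# Hedged sketch of Definition 2.5: one-sided directional derivative of a
# convex, nondifferentiable function via a difference quotient. We use
# f(x) = |x1| - x2 (our choice), whose exact derivative at xbar = (0, 0)
# in direction v is |v1| - v2.
def f(x):
    return abs(x[0]) - x[1]

def dir_deriv(f, xbar, v, t=1e-8):
    # forward quotient approximates the limit t -> 0+
    return (f((xbar[0] + t * v[0], xbar[1] + t * v[1])) - f(xbar)) / t

xbar = (0.0, 0.0)
for v in [(1.0, 2.0), (-1.0, 2.0), (0.0, 1.0)]:
    approx = dir_deriv(f, xbar, v)
    exact = abs(v[0]) - v[1]
    assert abs(approx - exact) < 1e-6  # all three directions give -1 here
```

For positively homogeneous functions like this one, the quotient is exact for every $t > 0$; in general a small $t$ gives only an approximation of the limit.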
Therefore, in the next theorem, we formulate the Karush–Kuhn–Tucker necessary optimality conditions for a feasible solution $\bar{x}$ to be an efficient solution in (MPVC) under the VC-Abadie constraint qualification (VC-ACQ).

Theorem 3.19 (Karush–Kuhn–Tucker type necessary optimality conditions) Let $\bar{x} \in \Omega$ be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. We also assume that $f_i$, $i \in I$, $g_j$, $j \in J$, $h_s$, $s \in S$, $H_t$, $t \in T$, $G_t$, $t \in T$, are directionally differentiable functions at $\bar{x}$; that $f_i^+(\bar{x}; \cdot)$, $i \in I$, $g_j^+(\bar{x}; \cdot)$, $j \in J(\bar{x})$, $-H_t^+(\bar{x}; \cdot)$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x})$, $G_t^+(\bar{x}; \cdot)$, $t \in T_{00}(\bar{x}) \cup T_{+0}(\bar{x})$, are convex functions; that $h_s^+(\bar{x}; \cdot)$, $s \in S$, $H_t^+(\bar{x}; \cdot)$, $t \in T_{0+}(\bar{x})$, are linear functions; that $g_j$, $j \notin J(\bar{x})$, $H_t$, $t \in T_+(\bar{x})$, $G_t$, $t \in T_0(\bar{x}) \cup T_{+-}(\bar{x})$, are continuous functions at $\bar{x}$; and, moreover, that the VC-Abadie constraint qualification (VC-ACQ) is satisfied at $\bar{x}$ for (MPVC). If there exists $v_0 \in \operatorname{relint} Z(C; \bar{x})$ such that $\Phi(v_0) < 0$ and $\Psi(v_0) = 0$, then there exist Lagrange multipliers $\bar{\lambda} \in \mathbb{R}^p$, $\bar{\mu} \in \mathbb{R}^m$, $\bar{\xi} \in \mathbb{R}^q$, $\bar{\vartheta}^H \in \mathbb{R}^r$ and $\bar{\vartheta}^G \in \mathbb{R}^r$ such that the following conditions
$$\sum_{i=1}^{p} \bar{\lambda}_i f_i^+(\bar{x}; v) + \sum_{j=1}^{m} \bar{\mu}_j g_j^+(\bar{x}; v) + \sum_{s=1}^{q} \bar{\xi}_s h_s^+(\bar{x}; v) - \sum_{t=1}^{r} \bar{\vartheta}_t^H H_t^+(\bar{x}; v) + \sum_{t=1}^{r} \bar{\vartheta}_t^G G_t^+(\bar{x}; v) \geq 0, \quad \forall v \in Z(C; \bar{x}), \tag{87}$$
$$\bar{\mu}_j g_j(\bar{x}) = 0, \quad j \in J, \tag{88}$$
$$\bar{\vartheta}_t^H H_t(\bar{x}) = 0, \quad t \in T, \tag{89}$$
$$\bar{\vartheta}_t^G G_t(\bar{x}) = 0, \quad t \in T, \tag{90}$$
$$\bar{\lambda} \geq 0, \quad \bar{\mu} \geq 0, \tag{91}$$
$$\bar{\vartheta}_t^H = 0, \ t \in T_+(\bar{x}), \quad \bar{\vartheta}_t^H \geq 0, \ t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}), \quad \bar{\vartheta}_t^H \ \text{free}, \ t \in T_{0+}(\bar{x}), \tag{92}$$
$$\bar{\vartheta}_t^G = 0, \ t \in T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x}), \quad \bar{\vartheta}_t^G \geq 0, \ t \in T_{+0}(\bar{x}) \cup T_{00}(\bar{x}) \tag{93}$$
hold.

Now, we prove the sufficiency of the Karush–Kuhn–Tucker optimality conditions for the considered multiobjective programming problem (MPVC) with vanishing constraints under appropriate convexity hypotheses.
Theorem 3.20 Let $\bar{x}$ be a feasible solution in (MPVC) at which the Karush–Kuhn–Tucker type necessary optimality conditions (59)–(65) are satisfied with Lagrange multipliers $\bar{\lambda} \in \mathbb{R}^p$, $\bar{\mu} \in \mathbb{R}^m$, $\bar{\xi} \in \mathbb{R}^q$, $\bar{\vartheta}^H \in \mathbb{R}^r$ and $\bar{\vartheta}^G \in \mathbb{R}^r$. Further, we assume that $f_i$, $i \in I$, $g_j$, $j \in J(\bar{x})$, $h_s$, $s \in S^+(\bar{x}) := \{s \in S : \bar{\xi}_s > 0\}$, $-h_s$, $s \in S^-(\bar{x}) := \{s \in S : \bar{\xi}_s < 0\}$, $-H_t$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x})$, $G_t$, $t \in T_{+0}(\bar{x})$, are convex on $\Omega$. Then $\bar{x}$ is a weak Pareto solution in (MPVC).

Proof We proceed by contradiction. Suppose, contrary to the result, that $\bar{x}$ is not a weak Pareto solution in (MPVC). Thus, by Definition 3.1, there exists $\widetilde{x} \in \Omega$ such that
$$f(\widetilde{x}) < f(\bar{x}). \tag{94}$$
By assumption, each $f_i$ is convex at $\bar{x}$ on $\Omega$. Hence, by Proposition 2.6, (94) yields
$$f_i^+(\bar{x}; \widetilde{x} - \bar{x}) < 0, \quad i = 1, \dots, p. \tag{95}$$
Since $\bar{\lambda} \geq 0$, $\bar{\lambda} \neq 0$, the inequalities (95) give
$$\sum_{i=1}^{p} \bar{\lambda}_i f_i^+(\bar{x}; \widetilde{x} - \bar{x}) < 0. \tag{96}$$
From $\widetilde{x}, \bar{x} \in \Omega$ and the definition of $J(\bar{x})$, it follows that
$$g_j(\widetilde{x}) \leq g_j(\bar{x}) = 0, \quad j \in J(\bar{x}), \tag{97}$$
$$h_s(\widetilde{x}) = h_s(\bar{x}) = 0, \quad s \in S, \tag{98}$$
$$-H_t(\widetilde{x}) \leq -H_t(\bar{x}) = 0, \quad t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x}), \tag{99}$$
$$G_t(\widetilde{x}) \leq G_t(\bar{x}) = 0, \quad t \in T_{+0}(\bar{x}). \tag{100}$$
By assumption, $g_j$, $j \in J(\bar{x})$, $h_s$, $s \in S^+(\bar{x})$, $-h_s$, $s \in S^-(\bar{x})$, $-H_t$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x})$, $G_t$, $t \in T_{+0}(\bar{x})$, are convex on $\Omega$. Then, by Proposition 2.6, (97)-(100) imply, respectively,
$$g_j^+(\bar{x}; \widetilde{x} - \bar{x}) \leq 0, \quad j \in J(\bar{x}), \tag{101}$$
$$h_s^+(\bar{x}; \widetilde{x} - \bar{x}) \leq 0, \quad s \in S^+(\bar{x}), \tag{102}$$
$$-h_s^+(\bar{x}; \widetilde{x} - \bar{x}) \leq 0, \quad s \in S^-(\bar{x}), \tag{103}$$
$$H_t^+(\bar{x}; \widetilde{x} - \bar{x}) \geq 0, \quad t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x}), \tag{104}$$
$$G_t^+(\bar{x}; \widetilde{x} - \bar{x}) \leq 0, \quad t \in T_{+0}(\bar{x}). \tag{105}$$
Taking into account that $\bar{\mu}_j = 0$, $j \notin J(\bar{x})$, $\bar{\xi}_s = 0$, $s \notin S^+(\bar{x}) \cup S^-(\bar{x})$, $\bar{\vartheta}_t^H = 0$, $t \in T_+(\bar{x})$, $\bar{\vartheta}_t^G = 0$, $t \in T_{00}(\bar{x}) \cup T_{0+}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{+-}(\bar{x})$, the foregoing inequalities yield, respectively,
$$\sum_{j=1}^{m} \bar{\mu}_j g_j^+(\bar{x}; \widetilde{x} - \bar{x}) \leq 0, \tag{106}$$
$$\sum_{s=1}^{q} \bar{\xi}_s h_s^+(\bar{x}; \widetilde{x} - \bar{x}) \leq 0, \tag{107}$$
$$-\sum_{t=1}^{r} \bar{\vartheta}_t^H H_t^+(\bar{x}; \widetilde{x} - \bar{x}) \leq 0, \tag{108}$$
$$\sum_{t=1}^{r} \bar{\vartheta}_t^G G_t^+(\bar{x}; \widetilde{x} - \bar{x}) \leq 0. \tag{109}$$
Combining (96) and (106)-(109), we get that the inequality
$$\sum_{i=1}^{p} \bar{\lambda}_i f_i^+(\bar{x}; \widetilde{x} - \bar{x}) + \sum_{j=1}^{m} \bar{\mu}_j g_j^+(\bar{x}; \widetilde{x} - \bar{x}) + \sum_{s=1}^{q} \bar{\xi}_s h_s^+(\bar{x}; \widetilde{x} - \bar{x}) - \sum_{t=1}^{r} \bar{\vartheta}_t^H H_t^+(\bar{x}; \widetilde{x} - \bar{x}) + \sum_{t=1}^{r} \bar{\vartheta}_t^G G_t^+(\bar{x}; \widetilde{x} - \bar{x}) < 0$$
holds, contradicting the Karush–Kuhn–Tucker type necessary optimality condition (59). This means that $\bar{x}$ is a weak Pareto solution in (MPVC).

In order to prove the sufficient optimality conditions for a feasible solution $\bar{x}$ to be a Pareto solution in (MPVC), stronger convexity assumptions need to be imposed on the objective functions.

Theorem 3.21 Let $\bar{x}$ be a feasible solution in (MPVC) at which the Karush–Kuhn–Tucker type necessary optimality conditions (59)-(65) are satisfied with Lagrange multipliers $\bar{\lambda} \in \mathbb{R}^p$, $\bar{\mu} \in \mathbb{R}^m$, $\bar{\xi} \in \mathbb{R}^q$, $\bar{\vartheta}^H \in \mathbb{R}^r$ and $\bar{\vartheta}^G \in \mathbb{R}^r$. Further, we assume that $f_i$, $i \in I$, are strictly convex on $\Omega$, and that $g_j$, $j \in J(\bar{x})$, $h_s$, $s \in S^+(\bar{x}) := \{s \in S : \bar{\xi}_s > 0\}$, $-h_s$, $s \in S^-(\bar{x}) := \{s \in S : \bar{\xi}_s < 0\}$, $-H_t$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}(\bar{x})$, $G_t$, $t \in T_{+0}(\bar{x})$, are convex on $\Omega$. Then $\bar{x}$ is a Pareto solution in (MPVC).

Remark 3.22 In Theorem 3.21, all objective functions $f_i$, $i \in I$, are assumed to be strictly convex on $\Omega$ in order to prove that $\bar{x} \in \Omega$ is a Pareto solution in (MPVC). However, as follows from the proof of the aforesaid theorem, it is sufficient to assume in Theorem 3.21 that at least one of the objective functions $f_i$, $i \in I$, is strictly convex on $\Omega$, provided that the Lagrange multiplier $\bar{\lambda}_i$ associated with this objective function $f_i$ is greater than 0.

Remark 3.23 If $\bar{x}$ is a feasible solution at which the Karush–Kuhn–Tucker type necessary optimality conditions (87)-(93) are satisfied in place of (59)-(65), then also the functions $G_t$, $t \in T_{00}(\bar{x})$, should be assumed to be convex on $\Omega$ in the sufficient optimality conditions.

Now, we illustrate the results established in the paper by an example of a convex directionally differentiable multiobjective programming problem with vanishing constraints.
Example 3.24 Consider the directionally differentiable multiobjective programming problem with vanishing constraints defined by

V-minimize $f(x) = (f_1(x), f_2(x)) = (|x_1| - x_2, \ x_1 + |x_2|)$
subject to $H_1(x) = x_2 \geq 0$,   (MPVC1)
$H_1(x) G_1(x) = x_2(-x_1 - x_2) \leq 0$.

Note that $\Omega = \{(x_1, x_2) \in \mathbb{R}^2 : x_2 \geq 0, \ x_2(-x_1 - x_2) \leq 0\}$, $\bar{x} = (0, 0)$ is a feasible solution in (MPVC1) and $T_{00}(\bar{x}) = \{1\}$. Now, we define the sets $Q^1(\bar{x})$, $Q^2(\bar{x})$, $Q(\bar{x})$, $\widetilde{Q}(\bar{x})$. Then, by definition, we have
$$Q^1(\bar{x}) = \{(x_1, x_2) \in \mathbb{R}^2 : x_1 + |x_2| \leq 0, \ x_2 \geq 0, \ x_2(-x_1 - x_2) \leq 0\},$$
$$Q^2(\bar{x}) = \{(x_1, x_2) \in \mathbb{R}^2 : |x_1| - x_2 \leq 0, \ x_2 \geq 0, \ x_2(-x_1 - x_2) \leq 0\},$$
$$Q(\bar{x}) = \{(x_1, x_2) \in \mathbb{R}^2 : x_1 + |x_2| \leq 0, \ |x_1| - x_2 \leq 0, \ x_2 \geq 0, \ x_2(-x_1 - x_2) \leq 0\},$$
$$\widetilde{Q}(\bar{x}) = \{(x_1, x_2) \in \mathbb{R}^2 : x_1 + |x_2| \leq 0, \ |x_1| - x_2 \leq 0, \ x_2 \geq 0, \ -x_1 - x_2 \leq 0\}.$$
Further, by Definition 2.9 and the definition of the almost linearizing cone (see (5), (10)), we have, respectively,
$$Z(Q^1(\bar{x}); \bar{x}) = \{(v_1, v_2) \in \mathbb{R}^2 : v_1 + |v_2| \leq 0, \ v_2 \geq 0, \ v_2(-v_1 - v_2) \leq 0\},$$
$$Z(Q^2(\bar{x}); \bar{x}) = \{(v_1, v_2) \in \mathbb{R}^2 : |v_1| - v_2 \leq 0, \ v_2 \geq 0, \ v_2(-v_1 - v_2) \leq 0\},$$
$$L(Q(\bar{x}); \bar{x}) = \{(v_1, v_2) \in \mathbb{R}^2 : v_1 + |v_2| \leq 0, \ |v_1| - v_2 \leq 0, \ v_2 \geq 0\},$$
$$L(\widetilde{Q}(\bar{x}); \bar{x}) = \{(v_1, v_2) \in \mathbb{R}^2 : v_1 + |v_2| \leq 0, \ |v_1| - v_2 \leq 0, \ v_2 \geq 0, \ -v_1 - v_2 \leq 0\}.$$
Note that the Abadie constraint qualification (ACQ) is not satisfied at $\bar{x} = (0, 0)$ for (MPVC1), since the relation $L(Q(\bar{x}); \bar{x}) \subset \bigcap_{l=1}^{2} Z(Q^l(\bar{x}); \bar{x})$ is not satisfied. But the VC-Abadie constraint qualification (VC-ACQ) holds at $\bar{x} = (0, 0)$ for (MPVC1), since the relation $L(\widetilde{Q}(\bar{x}); \bar{x}) \subset \bigcap_{l=1}^{2} Z(\widetilde{Q}^l(\bar{x}); \bar{x})$ is satisfied. As follows even from this example, the VC-Abadie constraint qualification (VC-ACQ) is weaker than the Abadie constraint qualification (ACQ). Moreover, the Karush–Kuhn–Tucker type necessary optimality conditions (87)-(93) are fulfilled at $\bar{x}$ with Lagrange multipliers $\bar{\lambda}_1 = \frac{1}{2}$, $\bar{\lambda}_2 = \frac{1}{4}$, $\bar{\vartheta}_1^H = \frac{1}{4}$, $\bar{\vartheta}_1^G = \frac{1}{4}$. Further, note that the functions constituting (MPVC1) are convex on $\Omega$ and the objective function $f_1$ is strictly convex on $\Omega$. Hence, by Theorem 3.21, $\bar{x} = (0, 0)$ is a Pareto solution in (MPVC1).

Note that the optimality conditions established in the literature (see, for example, Achtziger et al. (2013), Dorsch et al. (2012), Dussault et al. (2019), Hoheisel and Kanzow (2007, 2008, 2009), Hoheisel et al. (2012), Izmailov and Solodov (2009)) are not applicable to the considered multiobjective programming problem (MPVC1) with vanishing constraints, since the results established in the above-mentioned works have been proved for scalar optimization problems with vanishing constraints. Moreover, the results presented in Guu et al. (2017) and Mishra et al. (2015) have been established for differentiable multiobjective programming problems with vanishing constraints only and, therefore, they are not useful for finding (weak) Pareto solutions in such nondifferentiable vector optimization problems as the directionally differentiable multiobjective programming problem (MPVC1) with vanishing constraints.

4 Wolfe duality

In this section, for the considered vector optimization problem (MPVC) with vanishing constraints, we define its vector Wolfe dual problem. Then we prove several duality results between problems (MPVC) and (WDVC) under convexity assumptions imposed on the functions constituting them.

We now define the vector-valued Lagrange function $L$ for (MPVC) as follows:
$$L(y, \mu, \xi, \vartheta^H, \vartheta^G) := (f_1(y), \dots, f_p(y)) + \Big(\sum_{j=1}^{m} \mu_j g_j(y) + \sum_{s=1}^{q} \xi_s h_s(y) - \sum_{t=1}^{r} \vartheta_t^H H_t(y) + \sum_{t=1}^{r} \vartheta_t^G G_t(y)\Big) e,$$
where $e = [1, \dots, 1]^T \in \mathbb{R}^p$. Then, we re-write the above definition of the vector-valued Lagrange function $L$ componentwise as follows:
$$L(y, \mu, \xi, \vartheta^H, \vartheta^G) := (L_1(y, \mu, \xi, \vartheta^H, \vartheta^G), \dots, L_p(y, \mu, \xi, \vartheta^H, \vartheta^G)),$$
where
$$L_i(y, \mu, \xi, \vartheta^H, \vartheta^G) := f_i(y) + \sum_{j=1}^{m} \mu_j g_j(y) + \sum_{s=1}^{q} \xi_s h_s(y) - \sum_{t=1}^{r} \vartheta_t^H H_t(y) + \sum_{t=1}^{r} \vartheta_t^G G_t(y), \quad i = 1, \dots, p.$$
For $x \in \Omega$, we define the following vector Wolfe dual problem related to the considered multiobjective programming problem (MPVC) with vanishing constraints:

$L(y, \mu, \xi, \vartheta^H, \vartheta^G) \to$ V-max
subject to $\displaystyle\sum_{i=1}^{p} \lambda_i f_i^+(y; x - y) + \sum_{j=1}^{m} \mu_j g_j^+(y; x - y) + \sum_{s=1}^{q} \xi_s h_s^+(y; x - y) - \sum_{t=1}^{r} \vartheta_t^H H_t^+(y; x - y) + \sum_{t=1}^{r} \vartheta_t^G G_t^+(y; x - y) \geq 0$,
$\lambda \geq 0$, $\sum_{i=1}^{p} \lambda_i = 1$, $\mu \geq 0$,   (WDVC($x$))
$\vartheta_t^G = w_t H_t(x)$, $w_t \geq 0$, $t = 1, \dots, r$,
$\vartheta_t^H = \theta_t - w_t G_t(x)$, $\theta_t \geq 0$, $t = 1, \dots, r$.

Let
$$\Gamma(x) = \{(y, \lambda, \mu, \xi, \vartheta^H, \vartheta^G, w, \theta) : \text{verifying the constraints of (WDVC}(x))\}$$
be the set of all feasible solutions in (WDVC($x$)). Further, we define the set $Y(x)$ as follows: $Y(x) = \{y \in X : (y, \lambda, \mu, \xi, \vartheta^H, \vartheta^G, w, \theta) \in \Gamma(x)\}$, and $J^+(x) := \{j \in J : \mu_j > 0\}$.

Remark 4.1 In the Wolfe dual problem (WDVC($x$)) given above, the significance of $w_t$ and $\theta_t$ is the same as that of $v_t$ and $\beta_t$ in Theorem 1 of Achtziger and Kanzow (2008).

Now, in the line of Hu et al. (2020), we define the following vector dual problem in the sense of Wolfe related to the considered multicriteria optimization problem (MPVC) with vanishing constraints:

$L(y, \mu, \xi, \vartheta^H, \vartheta^G) \to$ V-max   (WDVC)
subject to $(y, \lambda, \mu, \xi, \vartheta^H, \vartheta^G, w, \theta) \in \Gamma$,

where the set of all feasible solutions in (WDVC) is defined by $\Gamma = \bigcap_{x \in \Omega} \Gamma(x)$. Further, let us define the set $Y$ by $Y = \{y \in X : (y, \lambda, \mu, \xi, \vartheta^H, \vartheta^G, w, \theta) \in \Gamma\}$.

Theorem 4.2 (Weak duality) Let $x$ and $(y, \lambda, \mu, \xi, \vartheta^H, \vartheta^G, w, \theta)$ be any feasible solutions in (MPVC) and (WDVC), respectively. Further, we assume that one of the following hypotheses is fulfilled:
A) $f_i$, $i = 1, \dots, p$, $g_j$, $j \in J(x)$, $h_s$, $s \in S^+(x)$, $-h_s$, $s \in S^-(x)$, $-H_t$, $t \in T_{00}(x) \cup T_{0-}(x) \cup T_{0+}^+(x)$, where $T_{0+}^+(x) := \{t \in T_{0+}(x) : \vartheta_t^H > 0\}$, $H_t$, $t \in T_{0+}^-(x) := \{t \in T_{0+}(x) : \vartheta_t^H < 0\}$, $G_t$, $t \in T_{+0}(x)$, are convex on $\Omega \cup Y$;
B) the vector-valued Lagrange function $L(\cdot, \mu, \xi, \vartheta^H, \vartheta^G)$ is convex on $\Omega \cup Y$.
Then $f(x) \nless L(y, \mu, \xi, \vartheta^H, \vartheta^G)$.

Proof We proceed by contradiction.
Suppose, contrary to the result, that
$$f(x) < L(y, \mu, \xi, \vartheta^H, \vartheta^G).$$
Hence, by the definition of the Lagrange function $L$, the aforesaid inequality gives
$$f_i(x) < f_i(y) + \sum_{j=1}^{m} \mu_j g_j(y) + \sum_{s=1}^{q} \xi_s h_s(y) - \sum_{t=1}^{r} \vartheta_t^H H_t(y) + \sum_{t=1}^{r} \vartheta_t^G G_t(y), \quad i = 1, \dots, p. \tag{110}$$
Thus, by $(y, \lambda, \mu, \xi, \vartheta^H, \vartheta^G, w, \theta) \in \Gamma$, it follows that
$$\sum_{i=1}^{p} \lambda_i f_i(x) < \sum_{i=1}^{p} \lambda_i f_i(y) + \sum_{j=1}^{m} \mu_j g_j(y) + \sum_{s=1}^{q} \xi_s h_s(y) - \sum_{t=1}^{r} \vartheta_t^H H_t(y) + \sum_{t=1}^{r} \vartheta_t^G G_t(y). \tag{111}$$
A) Now, we prove this theorem under hypothesis A). From the convexity assumptions, by Proposition 2.6, the inequalities
$$f_i(x) - f_i(y) \geq f_i^+(y; x - y), \quad i = 1, \dots, p, \tag{112}$$
$$0 \geq g_j(x) \geq g_j(y) + g_j^+(y; x - y), \quad j \in J(x), \tag{113}$$
$$0 = h_s(x) \geq h_s(y) + h_s^+(y; x - y), \quad s \in S^+(x), \tag{114}$$
$$0 = -h_s(x) \geq -h_s(y) - h_s^+(y; x - y), \quad s \in S^-(x), \tag{115}$$
$$0 = -H_t(x) \geq -H_t(y) - H_t^+(y; x - y), \quad t \in T_{00}(x) \cup T_{0-}(x) \cup T_{0+}^+(x), \tag{116}$$
$$0 = H_t(x) \geq H_t(y) + H_t^+(y; x - y), \quad t \in T_{0+}^-(x), \tag{117}$$
$$0 = G_t(x) \geq G_t(y) + G_t^+(y; x - y), \quad t \in T_{+0}(x) \tag{118}$$
hold. Multiplying (112)–(118) by the corresponding Lagrange multipliers and adding both sides of the resulting inequalities, we have, respectively,
$$\sum_{i=1}^{p} \lambda_i f_i(x) \geq \sum_{i=1}^{p} \lambda_i f_i(y) + \sum_{i=1}^{p} \lambda_i f_i^+(y; x - y), \tag{119}$$
$$0 \geq \sum_{j \in J^+(x)} \mu_j g_j(y) + \sum_{j \in J^+(x)} \mu_j g_j^+(y; x - y), \tag{120}$$
$$0 \geq \sum_{s \in S^+(x)} \xi_s h_s(y) + \sum_{s \in S^+(x)} \xi_s h_s^+(y; x - y), \tag{121}$$
$$0 \geq -\sum_{s \in S^-(x)} (-\xi_s) h_s(y) - \sum_{s \in S^-(x)} (-\xi_s) h_s^+(y; x - y), \tag{122}$$
$$0 \geq -\sum_{t \in T_{00}(x) \cup T_{0-}(x) \cup T_{0+}^+(x)} \vartheta_t^H H_t(y) - \sum_{t \in T_{00}(x) \cup T_{0-}(x) \cup T_{0+}^+(x)} \vartheta_t^H H_t^+(y; x - y), \tag{123}$$
$$0 \geq -\sum_{t \in T_{0+}^-(x)} \vartheta_t^H H_t(y) + \sum_{t \in T_{0+}^-(x)} (-\vartheta_t^H) H_t^+(y; x - y), \tag{124}$$
$$0 \geq \sum_{t \in T_{+0}(x)} \vartheta_t^G G_t(y) + \sum_{t \in T_{+0}(x)} \vartheta_t^G G_t^+(y; x - y). \tag{125}$$
Taking into account the Lagrange multipliers equal to 0, by (119)-(125), we obtain that the inequality
$$\sum_{i=1}^{p} \lambda_i f_i(x) \geq \sum_{i=1}^{p} \lambda_i f_i(y) + \sum_{j=1}^{m} \mu_j g_j(y) + \sum_{s=1}^{q} \xi_s h_s(y) - \sum_{t=1}^{r} \vartheta_t^H H_t(y) + \sum_{t=1}^{r} \vartheta_t^G G_t(y) + \sum_{i=1}^{p} \lambda_i f_i^+(y; x - y) + \sum_{j=1}^{m} \mu_j g_j^+(y; x - y) + \sum_{s=1}^{q} \xi_s h_s^+(y; x - y) - \sum_{t=1}^{r} \vartheta_t^H H_t^+(y; x - y) + \sum_{t=1}^{r} \vartheta_t^G G_t^+(y; x - y) \tag{126}$$
holds. By (111) and (126), we get that the inequality
$$\sum_{i=1}^{p} \lambda_i f_i^+(y; x - y) + \sum_{j=1}^{m} \mu_j g_j^+(y; x - y) + \sum_{s=1}^{q} \xi_s h_s^+(y; x - y) - \sum_{t=1}^{r} \vartheta_t^H H_t^+(y; x - y) + \sum_{t=1}^{r} \vartheta_t^G G_t^+(y; x - y) < 0$$
holds, which contradicts the first constraint of (WDVC).
B) Now, we prove this theorem under hypothesis B). From $x \in \Omega$ and $(y, \lambda, \mu, \xi, \vartheta^H, \vartheta^G, w, \theta) \in \Gamma$, it follows that
$$g_j(x) = 0, \ \mu_j \geq 0, \quad j \in J(x), \tag{127}$$
$$g_j(x) < 0, \ \mu_j \geq 0, \quad j \notin J(x), \tag{128}$$
$$h_s(x) = 0, \ \xi_s \in \mathbb{R}, \quad s \in S, \tag{129}$$
$$-H_t(x) = 0, \ \vartheta_t^H \in \mathbb{R}, \quad t \in T_0(x), \tag{130}$$
$$-H_t(x) < 0, \ \vartheta_t^H \geq 0, \quad t \in T_+(x), \tag{131}$$
$$G_t(x) = 0, \ \vartheta_t^G \geq 0, \quad t \in T_{00}(x) \cup T_{+0}(x), \tag{132}$$
$$G_t(x) \neq 0, \ \vartheta_t^G G_t(x) \leq 0, \quad t \in T_{0+}(x) \cup T_{0-}(x) \cup T_{+-}(x). \tag{133}$$
By (127)-(133), we obtain
$$\sum_{j=1}^{m} \mu_j g_j(x) + \sum_{s=1}^{q} \xi_s h_s(x) - \sum_{t=1}^{r} \vartheta_t^H H_t(x) + \sum_{t=1}^{r} \vartheta_t^G G_t(x) \leq 0. \tag{134}$$
Since (110) is fulfilled, by (134), we get
$$f_i(x) + \sum_{j=1}^{m} \mu_j g_j(x) + \sum_{s=1}^{q} \xi_s h_s(x) - \sum_{t=1}^{r} \vartheta_t^H H_t(x) + \sum_{t=1}^{r} \vartheta_t^G G_t(x) < L_i(y, \mu, \xi, \vartheta^H, \vartheta^G), \quad i = 1, \dots, p.$$
Then, by the definition of the vector-valued Lagrange function $L$, it follows that
$$L_i(x, \mu, \xi, \vartheta^H, \vartheta^G) < L_i(y, \mu, \xi, \vartheta^H, \vartheta^G), \quad i = 1, \dots, p. \tag{135}$$
By hypothesis B), the vector-valued Lagrange function $L(\cdot, \mu, \xi, \vartheta^H, \vartheta^G)$ is directionally differentiable and convex on $\Omega \cup Y$. Then, by Proposition 2.6, the inequalities
$$L_i(x, \mu, \xi, \vartheta^H, \vartheta^G) - L_i(y, \mu, \xi, \vartheta^H, \vartheta^G) \geq L_i^+(y, \mu, \xi, \vartheta^H, \vartheta^G; x - y), \quad i = 1, \dots, p \tag{136}$$
are satisfied. Combining (135) and (136), we obtain
$$L_i^+(y, \mu, \xi, \vartheta^H, \vartheta^G; x - y) < 0, \quad i = 1, \dots, p. \tag{137}$$
Multiplying the inequalities (137) by the corresponding Lagrange multipliers $\lambda_i$, $i = 1, \dots, p$, and adding, we have
$$\sum_{i=1}^{p} \lambda_i L_i^+(y, \mu, \xi, \vartheta^H, \vartheta^G; x - y) < 0.$$
Then, by the definition of the vector-valued Lagrange function $L$, one has
$$\sum_{i=1}^{p} \lambda_i f_i^+(y; x - y) + \sum_{i=1}^{p} \lambda_i \Big(\sum_{j=1}^{m} \mu_j g_j^+(y; x - y) + \sum_{s=1}^{q} \xi_s h_s^+(y; x - y) - \sum_{t=1}^{r} \vartheta_t^H H_t^+(y; x - y) + \sum_{t=1}^{r} \vartheta_t^G G_t^+(y; x - y)\Big) < 0. \tag{138}$$
By $(y, \lambda, \mu, \xi, \vartheta^H, \vartheta^G, w, \theta) \in \Gamma$, it follows that $\sum_{i=1}^{p} \lambda_i = 1$. Thus, (138) implies that the inequality
$$\sum_{i=1}^{p} \lambda_i f_i^+(y; x - y) + \sum_{j=1}^{m} \mu_j g_j^+(y; x - y) + \sum_{s=1}^{q} \xi_s h_s^+(y; x - y) - \sum_{t=1}^{r} \vartheta_t^H H_t^+(y; x - y) + \sum_{t=1}^{r} \vartheta_t^G G_t^+(y; x - y) < 0$$
holds, which contradicts the first constraint of (WDVC). This completes the proof of this theorem under both hypothesis A) and hypothesis B).

If stronger assumptions are imposed on the functions constituting (MPVC), then the following result is true:

Theorem 4.3 (Weak duality) Let $x$ and $(y, \lambda, \mu, \xi, \vartheta^H, \vartheta^G, w, \theta)$ be any feasible solutions in (MPVC) and (WDVC), respectively. Further, we assume that one of the following hypotheses is fulfilled:
A) $f_i$, $i = 1, \dots, p$, are strictly convex on $\Omega \cup Y$, and $g_j$, $j \in J(x)$, $h_s$, $s \in S^+(x)$, $-h_s$, $s \in S^-(x)$, $-H_t$, $t \in T_{00}(x) \cup T_{0-}(x) \cup T_{0+}^+(x)$, $H_t$, $t \in T_{0+}^-(x)$, $G_t$, $t \in T_{+0}(x)$, are convex on $\Omega \cup Y$;
B) the vector-valued Lagrange function $L(\cdot, \mu, \xi, \vartheta^H, \vartheta^G)$ is strictly convex on $\Omega \cup Y$.
Then $f(x) \nleq L(y, \mu, \xi, \vartheta^H, \vartheta^G)$.

Theorem 4.4 (Strong duality) Let $\bar{x} \in \Omega$ be a Pareto solution (a weak Pareto solution) in (MPVC) and let the Abadie constraint qualification be satisfied at $\bar{x}$. Then, there exist Lagrange multipliers $\bar{\lambda} \in \mathbb{R}^p$, $\bar{\mu} \in \mathbb{R}^m$, $\bar{\xi} \in \mathbb{R}^q$, $\bar{\vartheta}^H \in \mathbb{R}^r$, $\bar{\vartheta}^G \in \mathbb{R}^r$ and $\bar{w} \in \mathbb{R}^r$, $\bar{\theta} \in \mathbb{R}^r$ such that $(\bar{x}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ is feasible in (WDVC). If, moreover, all hypotheses of the weak duality theorem — Theorem 4.3 (Theorem 4.2) — are satisfied, then $(\bar{x}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ is an efficient solution (a weakly efficient solution) of a maximum type in (WDVC).
Proof By assumption, $\bar{x}$ is a Pareto solution in (MPVC) and the Abadie constraint qualification is satisfied at $\bar{x}$. Then, there exist Lagrange multipliers $\bar{\lambda} \in \mathbb{R}^p$, $\bar{\mu} \in \mathbb{R}^m$, $\bar{\xi} \in \mathbb{R}^q$, $\bar{\vartheta}^H \in \mathbb{R}^r$, $\bar{\vartheta}^G \in \mathbb{R}^r$ such that the Karush–Kuhn–Tucker necessary optimality conditions are fulfilled. Hence, we conclude that $(\bar{x}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$, where $\bar{w}_t$ and $\bar{\theta}_t$ satisfy the conditions
$$\bar{\vartheta}_t^G = \bar{w}_t H_t(\bar{x}), \ \bar{w}_t \geq 0, \quad t = 1, \dots, r,$$
$$\bar{\vartheta}_t^H = \bar{\theta}_t - \bar{w}_t G_t(\bar{x}), \ \bar{\theta}_t \geq 0, \quad t = 1, \dots, r,$$
is feasible in (WDVC).
Now, we prove that $(\bar{x}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ is an efficient solution of a maximum type in (WDVC). We proceed by contradiction. Suppose, contrary to the result, that it is not an efficient solution of a maximum type in (WDVC). Then, by definition, there exists $(\widetilde{y}, \widetilde{\lambda}, \widetilde{\mu}, \widetilde{\xi}, \widetilde{\vartheta}^H, \widetilde{\vartheta}^G, \widetilde{w}, \widetilde{\theta}) \in \Gamma$ such that the inequality
$$L(\widetilde{y}, \widetilde{\mu}, \widetilde{\xi}, \widetilde{\vartheta}^H, \widetilde{\vartheta}^G) \geq L(\bar{x}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G)$$
holds. Then, by the Karush–Kuhn–Tucker necessary optimality conditions, we conclude that
$$L(\widetilde{y}, \widetilde{\mu}, \widetilde{\xi}, \widetilde{\vartheta}^H, \widetilde{\vartheta}^G) \geq f(\bar{x})$$
holds, which is a contradiction to the weak duality theorem (Theorem 4.3). Hence, we conclude that $(\bar{x}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ is an efficient solution of a maximum type in (WDVC).

The next two theorems give sufficient conditions for $\bar{y}$, where $(\bar{y}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ is a feasible solution in (WDVC), to be a Pareto solution in (MPVC).

Theorem 4.5 (Converse duality) Let $x$ be any feasible solution in (MPVC) and $(\bar{y}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ be an efficient solution of a maximum type (a weakly efficient solution of a maximum type) in the Wolfe dual problem (WDVC) such that $\bar{y} \in \Omega$. Further, we assume that $f_i$, $i = 1, \dots, p$, are strictly convex (convex) on $\Omega \cup Y$, and that $g_j$, $j \in J(x)$, $h_s$, $s \in S^+(x)$, $-h_s$, $s \in S^-(x)$, $-H_t$, $t \in T_{00}(x) \cup T_{0-}(x) \cup T_{0+}^+(x)$, $H_t$, $t \in T_{0+}^-(x)$, $G_t$, $t \in T_{+0}(x)$, are convex on $\Omega \cup Y$. Then $\bar{y}$ is a Pareto solution (a weak Pareto solution) in (MPVC).

Proof We proceed by contradiction.
Suppose, contrary to the result, that $\bar{y} \in \Omega$ is not an efficient solution in (MPVC). Hence, by Definition 3.1, there exists $\widehat{x} \in \Omega$ such that
$$f(\widehat{x}) \leq f(\bar{y}). \tag{139}$$
From the convexity hypotheses, by Proposition 2.6, the inequalities
$$f_i(\widehat{x}) - f_i(\bar{y}) > f_i^+(\bar{y}; \widehat{x} - \bar{y}), \quad i = 1, \dots, p, \tag{140}$$
$$g_j(\widehat{x}) \geq g_j(\bar{y}) + g_j^+(\bar{y}; \widehat{x} - \bar{y}), \quad j \in J(\widehat{x}), \tag{141}$$
$$h_s(\widehat{x}) \geq h_s(\bar{y}) + h_s^+(\bar{y}; \widehat{x} - \bar{y}), \quad s \in S^+(\widehat{x}), \tag{142}$$
$$-h_s(\widehat{x}) \geq -h_s(\bar{y}) - h_s^+(\bar{y}; \widehat{x} - \bar{y}), \quad s \in S^-(\widehat{x}), \tag{143}$$
$$-H_t(\widehat{x}) \geq -H_t(\bar{y}) - H_t^+(\bar{y}; \widehat{x} - \bar{y}), \quad t \in T_{00}(\widehat{x}) \cup T_{0-}(\widehat{x}) \cup T_{0+}^+(\widehat{x}), \tag{144}$$
$$H_t(\widehat{x}) \geq H_t(\bar{y}) + H_t^+(\bar{y}; \widehat{x} - \bar{y}), \quad t \in T_{0+}^-(\widehat{x}), \tag{145}$$
$$G_t(\widehat{x}) \geq G_t(\bar{y}) + G_t^+(\bar{y}; \widehat{x} - \bar{y}), \quad t \in T_{+0}(\widehat{x}) \tag{146}$$
hold. Multiplying (140)-(146) by the corresponding Lagrange multipliers and adding both sides of the resulting inequalities, we have, respectively,
$$\sum_{i=1}^{p} \bar{\lambda}_i f_i(\widehat{x}) > \sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{y}) + \sum_{i=1}^{p} \bar{\lambda}_i f_i^+(\bar{y}; \widehat{x} - \bar{y}), \tag{147}$$
$$\sum_{j \in J^+(\widehat{x})} \bar{\mu}_j g_j(\widehat{x}) \geq \sum_{j \in J^+(\widehat{x})} \bar{\mu}_j g_j(\bar{y}) + \sum_{j \in J^+(\widehat{x})} \bar{\mu}_j g_j^+(\bar{y}; \widehat{x} - \bar{y}), \tag{148}$$
$$\sum_{s \in S^+(\widehat{x})} \bar{\xi}_s h_s(\widehat{x}) \geq \sum_{s \in S^+(\widehat{x})} \bar{\xi}_s h_s(\bar{y}) + \sum_{s \in S^+(\widehat{x})} \bar{\xi}_s h_s^+(\bar{y}; \widehat{x} - \bar{y}), \tag{149}$$
$$-\sum_{s \in S^-(\widehat{x})} (-\bar{\xi}_s) h_s(\widehat{x}) \geq -\sum_{s \in S^-(\widehat{x})} (-\bar{\xi}_s) h_s(\bar{y}) - \sum_{s \in S^-(\widehat{x})} (-\bar{\xi}_s) h_s^+(\bar{y}; \widehat{x} - \bar{y}), \tag{150}$$
$$-\sum_{t \in T_{00}(\widehat{x}) \cup T_{0-}(\widehat{x}) \cup T_{0+}^+(\widehat{x})} \bar{\vartheta}_t^H H_t(\widehat{x}) \geq -\sum_{t \in T_{00}(\widehat{x}) \cup T_{0-}(\widehat{x}) \cup T_{0+}^+(\widehat{x})} \bar{\vartheta}_t^H H_t(\bar{y}) - \sum_{t \in T_{00}(\widehat{x}) \cup T_{0-}(\widehat{x}) \cup T_{0+}^+(\widehat{x})} \bar{\vartheta}_t^H H_t^+(\bar{y}; \widehat{x} - \bar{y}), \tag{151}$$
$$-\sum_{t \in T_{0+}^-(\widehat{x})} \bar{\vartheta}_t^H H_t(\widehat{x}) \geq -\sum_{t \in T_{0+}^-(\widehat{x})} \bar{\vartheta}_t^H H_t(\bar{y}) + \sum_{t \in T_{0+}^-(\widehat{x})} (-\bar{\vartheta}_t^H) H_t^+(\bar{y}; \widehat{x} - \bar{y}), \tag{152}$$
$$\sum_{t \in T_{+0}(\widehat{x})} \bar{\vartheta}_t^G G_t(\widehat{x}) \geq \sum_{t \in T_{+0}(\widehat{x})} \bar{\vartheta}_t^G G_t(\bar{y}) + \sum_{t \in T_{+0}(\widehat{x})} \bar{\vartheta}_t^G G_t^+(\bar{y}; \widehat{x} - \bar{y}). \tag{153}$$
By $\widehat{x}, \bar{y} \in \Omega$, we have, respectively,
$$g_j(\widehat{x}) \leq 0, \ g_j(\bar{y}) \leq 0, \quad j \in J, \tag{154}$$
$$h_s(\widehat{x}) = h_s(\bar{y}), \quad s \in S^+(\widehat{x}) \cup S^-(\widehat{x}), \tag{155}$$
$$\begin{cases} H_t(\widehat{x}) > 0, \ \bar{\vartheta}_t^H = \bar{\theta}_t - \bar{w}_t G_t(\widehat{x}) \geq 0, & t \in T_+(\widehat{x}), \\ H_t(\widehat{x}) = 0, \ \bar{\vartheta}_t^H = \bar{\theta}_t - \bar{w}_t G_t(\widehat{x}) \in \mathbb{R}, & t \in T_0(\widehat{x}) \end{cases} \ \Longrightarrow \ \sum_{t=1}^{r} \bar{\vartheta}_t^H H_t(\widehat{x}) \geq 0, \tag{156}$$
$$\begin{cases} G_t(\widehat{x}) > 0, \ \bar{\vartheta}_t^G = \bar{w}_t H_t(\widehat{x}) = 0, & t \in T_{0+}(\widehat{x}), \\ G_t(\widehat{x}) = 0, \ \bar{\vartheta}_t^G = \bar{w}_t H_t(\widehat{x}) = 0, & t \in T_{00}(\widehat{x}), \\ G_t(\widehat{x}) < 0, \ \bar{\vartheta}_t^G = \bar{w}_t H_t(\widehat{x}) = 0, & t \in T_{0-}(\widehat{x}), \\ G_t(\widehat{x}) = 0, \ \bar{\vartheta}_t^G = \bar{w}_t H_t(\widehat{x}) \geq 0, & t \in T_{+0}(\widehat{x}), \\ G_t(\widehat{x}) < 0, \ \bar{\vartheta}_t^G = \bar{w}_t H_t(\widehat{x}) \geq 0, & t \in T_{+-}(\widehat{x}) \end{cases} \ \Longrightarrow \ \sum_{t=1}^{r} \bar{\vartheta}_t^G G_t(\widehat{x}) \leq 0. \tag{157}$$
Hence, using (156) and (157) together with $(\bar{y}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta}) \in \Gamma$, we obtain
$$-\sum_{t=1}^{r} \bar{\vartheta}_t^H H_t(\widehat{x}) \leq -\sum_{t=1}^{r} \bar{\vartheta}_t^H H_t(\bar{y}), \tag{158}$$
$$\sum_{t=1}^{r} \bar{\vartheta}_t^G G_t(\widehat{x}) \leq \sum_{t=1}^{r} \bar{\vartheta}_t^G G_t(\bar{y}). \tag{159}$$
Combining (147)–(159), multiplying by the corresponding Lagrange multipliers and adding both sides of the resulting inequalities, we obtain, respectively,
$$\sum_{i=1}^{p} \bar{\lambda}_i f_i^+(\bar{y}; \widehat{x} - \bar{y}) < 0, \tag{160}$$
$$\sum_{j \in J^+(\widehat{x})} \bar{\mu}_j g_j^+(\bar{y}; \widehat{x} - \bar{y}) \leq 0, \tag{161}$$
$$\sum_{s \in S^+(\widehat{x}) \cup S^-(\widehat{x})} \bar{\xi}_s h_s^+(\bar{y}; \widehat{x} - \bar{y}) \leq 0, \tag{162}$$
$$-\sum_{t \in T_{00}(\widehat{x}) \cup T_{0-}(\widehat{x}) \cup T_{0+}^+(\widehat{x}) \cup T_{0+}^-(\widehat{x})} \bar{\vartheta}_t^H H_t^+(\bar{y}; \widehat{x} - \bar{y}) \leq 0, \tag{163}$$
$$\sum_{t \in T_{+0}(\widehat{x})} \bar{\vartheta}_t^G G_t^+(\bar{y}; \widehat{x} - \bar{y}) \leq 0. \tag{164}$$
Taking into account the Lagrange multipliers equal to 0 and combining (160)–(164), we get that the inequality
$$\sum_{i=1}^{p} \bar{\lambda}_i f_i^+(\bar{y}; \widehat{x} - \bar{y}) + \sum_{j=1}^{m} \bar{\mu}_j g_j^+(\bar{y}; \widehat{x} - \bar{y}) + \sum_{s=1}^{q} \bar{\xi}_s h_s^+(\bar{y}; \widehat{x} - \bar{y}) - \sum_{t=1}^{r} \bar{\vartheta}_t^H H_t^+(\bar{y}; \widehat{x} - \bar{y}) + \sum_{t=1}^{r} \bar{\vartheta}_t^G G_t^+(\bar{y}; \widehat{x} - \bar{y}) < 0$$
holds, which is a contradiction to the first constraint of (WDVC). This completes the proof of this theorem.

Theorem 4.6 (Strict converse duality) Let $x$ be a feasible solution in (MPVC) and $(\bar{y}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ be a feasible solution in (WDVC) such that $f(x) = L(\bar{y}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G)$. Further, we assume that one of the following hypotheses is fulfilled:
A) $f_i$, $i = 1, \dots, p$, are strictly convex on $\Omega \cup Y$, and $g_j$, $j \in J(x)$, $h_s$, $s \in S^+(x)$, $-h_s$, $s \in S^-(x)$, $-H_t$, $t \in T_{00}(x) \cup T_{0-}(x) \cup T_{0+}^+(x)$, $H_t$, $t \in T_{0+}^-(x)$, $G_t$, $t \in T_{+0}(x)$, are convex on $\Omega \cup Y$;
B) the vector-valued Lagrange function $L(\cdot, \mu, \xi, \vartheta^H, \vartheta^G)$ is strictly convex on $\Omega \cup Y$.
Then $x$ is a Pareto solution in (MPVC) and $(\bar{y}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ is an efficient solution of a maximum type in (WDVC).

Proof We proceed by contradiction. Suppose, contrary to the result, that $x \in \Omega$ is not a Pareto solution in (MPVC).
Hence, by Definition 3.1, there exists $\widehat{x} \in \Omega$ such that
$$f(\widehat{x}) \leq f(x). \tag{165}$$
Using (165) together with the assumption $f(x) = L(\bar{y}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G)$, we obtain
$$f(\widehat{x}) \leq L(\bar{y}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G).$$
Since all hypotheses of Theorem 4.3 are fulfilled, the above relation contradicts weak duality. This means that $x$ is a Pareto solution in (MPVC). Further, the efficiency of a maximum type of $(\bar{y}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ in (WDVC) follows from the weak duality theorem (Theorem 4.3).

A restricted version of converse duality for the problems (MPVC) and (WDVC) gives a sufficient condition for the uniqueness of an efficient solution in (MPVC) and of an efficient solution of a maximum type in (WDVC).

Theorem 4.7 (Restricted converse duality) Let $\bar{x}$ be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints, let $(\bar{y}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ be an efficient solution of a maximum type in its vector Wolfe dual problem (WDVC), and let the VC-Abadie constraint qualification (VC-ACQ) be satisfied at $\bar{x}$ for (MPVC). Further, we assume that one of the following hypotheses is fulfilled:
A) $f_i$, $i = 1, \dots, p$, are strictly convex on $\Omega \cup Y$, and $g_j$, $j \in J(\bar{x})$, $h_s$, $s \in S^+(\bar{x})$, $-h_s$, $s \in S^-(\bar{x})$, $-H_t$, $t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}^+(\bar{x})$, $H_t$, $t \in T_{0+}^-(\bar{x})$, $G_t$, $t \in T_{+0}(\bar{x})$, are convex on $\Omega \cup Y$;
B) the vector-valued Lagrange function $L(\cdot, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G)$ is strictly convex on $\Omega \cup Y$.
Then $\bar{x} = \bar{y}$.

Proof By means of contradiction, suppose that $\bar{x} \neq \bar{y}$. Since $\bar{x}$ is an efficient solution in (MPVC) and (VC-ACQ) is satisfied at $\bar{x}$, by Theorem 3.19, there exist Lagrange multipliers $\bar{\lambda} \in \mathbb{R}^p$, $\bar{\mu} \in \mathbb{R}^m$, $\bar{\xi} \in \mathbb{R}^q$, $\bar{\vartheta}^H \in \mathbb{R}^r$ and $\bar{\vartheta}^G \in \mathbb{R}^r$, not all equal to 0, such that the conditions (87)-(93) are fulfilled. Thus, by (87)-(93), it follows that
$$f(\bar{x}) = L(\bar{x}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G).$$
By assumption, $(\bar{y}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ is an efficient solution of a maximum type in the vector Wolfe dual problem (WDVC). Thus, one has
$$L(\bar{x}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G) = L(\bar{y}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G).$$
Combining the two above relations, we get

$$f(\bar{x}) = L(\bar{y}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G). \tag{166}$$

Thus, (166) gives

$$\bar{\lambda}_i f_i(\bar{x}) = \bar{\lambda}_i L_i(\bar{y}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G), \quad i = 1, \ldots, p. \tag{167}$$

Adding both sides of (167) and using the definition of the vectorial Lagrange function $L$ (componentwise, $L_i(y, \mu, \xi, \vartheta^H, \vartheta^G) = f_i(y) + \sum_{j=1}^{m} \mu_j g_j(y) + \sum_{s=1}^{q} \xi_s h_s(y) - \sum_{t=1}^{r} \vartheta^H_t H_t(y) + \sum_{t=1}^{r} \vartheta^G_t G_t(y)$), we get

$$\sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{x}) = \sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{y}) + \sum_{i=1}^{p} \bar{\lambda}_i \left[ \sum_{j=1}^{m} \bar{\mu}_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar{\xi}_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar{\vartheta}^H_t H_t(\bar{y}) + \sum_{t=1}^{r} \bar{\vartheta}^G_t G_t(\bar{y}) \right]. \tag{168}$$

Hence, by the feasibility of $(\bar{y}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ in (WDVC), one has $\sum_{i=1}^{p} \bar{\lambda}_i = 1$. Thus, (168) implies

$$\sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{x}) = \sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{y}) + \sum_{j=1}^{m} \bar{\mu}_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar{\xi}_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar{\vartheta}^H_t H_t(\bar{y}) + \sum_{t=1}^{r} \bar{\vartheta}^G_t G_t(\bar{y}). \tag{169}$$

Proof under hypothesis A). Using hypothesis A), by Proposition 2.6, the inequalities

$$f_i(\bar{x}) - f_i(\bar{y}) > f_i^{+}(\bar{y}; \bar{x} - \bar{y}), \quad i = 1, \ldots, p, \tag{170}$$
$$0 \geq g_j(\bar{x}) \geq g_j(\bar{y}) + g_j^{+}(\bar{y}; \bar{x} - \bar{y}), \quad j \in J(\bar{x}), \tag{171}$$
$$0 = h_s(\bar{x}) \geq h_s(\bar{y}) + h_s^{+}(\bar{y}; \bar{x} - \bar{y}), \quad s \in S^{+}(\bar{x}), \tag{172}$$
$$0 = -h_s(\bar{x}) \geq -h_s(\bar{y}) - h_s^{+}(\bar{y}; \bar{x} - \bar{y}), \quad s \in S^{-}(\bar{x}), \tag{173}$$
$$0 = -H_t(\bar{x}) \geq -H_t(\bar{y}) - H_t^{+}(\bar{y}; \bar{x} - \bar{y}), \quad t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}^{+}(\bar{x}), \tag{174}$$
$$0 = H_t(\bar{x}) \geq H_t(\bar{y}) + H_t^{+}(\bar{y}; \bar{x} - \bar{y}), \quad t \in T_{0+}^{-}(\bar{x}), \tag{175}$$
$$0 = G_t(\bar{x}) \geq G_t(\bar{y}) + G_t^{+}(\bar{y}; \bar{x} - \bar{y}), \quad t \in T_{+0}(\bar{x}) \tag{176}$$

hold. By the feasibility of $(\bar{y}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ in (WDVC), (170)–(176) yield, respectively,

$$\sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{x}) > \sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{y}) + \sum_{i=1}^{p} \bar{\lambda}_i f_i^{+}(\bar{y}; \bar{x} - \bar{y}), \tag{177}$$
$$\sum_{j \in J(\bar{x})} \bar{\mu}_j g_j(\bar{y}) + \sum_{j \in J(\bar{x})} \bar{\mu}_j g_j^{+}(\bar{y}; \bar{x} - \bar{y}) \leq 0, \tag{178}$$
$$\sum_{s \in S^{+}(\bar{x})} \bar{\xi}_s h_s(\bar{y}) + \sum_{s \in S^{+}(\bar{x})} \bar{\xi}_s h_s^{+}(\bar{y}; \bar{x} - \bar{y}) \leq 0, \tag{179}$$
$$-\sum_{s \in S^{-}(\bar{x})} (-\bar{\xi}_s) h_s(\bar{y}) - \sum_{s \in S^{-}(\bar{x})} (-\bar{\xi}_s) h_s^{+}(\bar{y}; \bar{x} - \bar{y}) \leq 0, \tag{180}$$
$$\sum_{t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}^{+}(\bar{x})} \bar{\vartheta}^H_t \bigl(-H_t(\bar{y})\bigr) + \sum_{t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}^{+}(\bar{x})} \bar{\vartheta}^H_t \bigl(-H_t^{+}(\bar{y}; \bar{x} - \bar{y})\bigr) \leq 0, \tag{181}$$
$$\sum_{t \in T_{0+}^{-}(\bar{x})} \bigl(-\bar{\vartheta}^H_t\bigr) H_t(\bar{y}) + \sum_{t \in T_{0+}^{-}(\bar{x})} \bigl(-\bar{\vartheta}^H_t\bigr) H_t^{+}(\bar{y}; \bar{x} - \bar{y}) \leq 0, \tag{182}$$
$$\sum_{t \in T_{+0}(\bar{x})} \bar{\vartheta}^G_t G_t(\bar{y}) + \sum_{t \in T_{+0}(\bar{x})} \bar{\vartheta}^G_t G_t^{+}(\bar{y}; \bar{x} - \bar{y}) \leq 0. \tag{183}$$

Thus, the above inequalities yield

$$\begin{aligned}
\sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{x}) > {} & \sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{y}) + \sum_{j \in J(\bar{x})} \bar{\mu}_j g_j(\bar{y}) + \sum_{s \in S^{+}(\bar{x}) \cup S^{-}(\bar{x})} \bar{\xi}_s h_s(\bar{y}) \\
& - \sum_{t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}^{+}(\bar{x}) \cup T_{0+}^{-}(\bar{x})} \bar{\vartheta}^H_t H_t(\bar{y}) + \sum_{t \in T_{+0}(\bar{x})} \bar{\vartheta}^G_t G_t(\bar{y}) \\
& + \sum_{i=1}^{p} \bar{\lambda}_i f_i^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{j \in J(\bar{x})} \bar{\mu}_j g_j^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{s \in S^{+}(\bar{x}) \cup S^{-}(\bar{x})} \bar{\xi}_s h_s^{+}(\bar{y}; \bar{x} - \bar{y}) \\
& - \sum_{t \in T_{00}(\bar{x}) \cup T_{0-}(\bar{x}) \cup T_{0+}^{+}(\bar{x}) \cup T_{0+}^{-}(\bar{x})} \bar{\vartheta}^H_t H_t^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{t \in T_{+0}(\bar{x})} \bar{\vartheta}^G_t G_t^{+}(\bar{y}; \bar{x} - \bar{y}).
\end{aligned}$$

Taking into account the Lagrange multipliers equal to 0, we have

$$\begin{aligned}
\sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{x}) > {} & \sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{y}) + \sum_{j=1}^{m} \bar{\mu}_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar{\xi}_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar{\vartheta}^H_t H_t(\bar{y}) + \sum_{t=1}^{r} \bar{\vartheta}^G_t G_t(\bar{y}) \\
& + \sum_{i=1}^{p} \bar{\lambda}_i f_i^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{j=1}^{m} \bar{\mu}_j g_j^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{s=1}^{q} \bar{\xi}_s h_s^{+}(\bar{y}; \bar{x} - \bar{y}) \\
& - \sum_{t=1}^{r} \bar{\vartheta}^H_t H_t^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{t=1}^{r} \bar{\vartheta}^G_t G_t^{+}(\bar{y}; \bar{x} - \bar{y}).
\end{aligned} \tag{184}$$

Hence, by the first constraint of (WDVC), (184) yields that the inequality

$$\sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{x}) > \sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{y}) + \sum_{j=1}^{m} \bar{\mu}_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar{\xi}_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar{\vartheta}^H_t H_t(\bar{y}) + \sum_{t=1}^{r} \bar{\vartheta}^G_t G_t(\bar{y})$$

holds, contradicting (169). This completes the proof of this theorem under hypothesis A).

Proof under hypothesis B). Now, we assume that the vector-valued Lagrange function $L(\cdot, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G)$ is strictly convex on $\Omega \cup Y$. Hence, we get

$$L_i(\bar{x}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G) - L_i(\bar{y}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G) > L_i^{+}(\bar{y}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G; \bar{x} - \bar{y}), \quad i = 1, \ldots, p.$$

Then, by the definition of $L$, one has

$$\begin{aligned}
f_i(\bar{x}) > {} & f_i(\bar{y}) + \sum_{j=1}^{m} \bar{\mu}_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar{\xi}_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar{\vartheta}^H_t H_t(\bar{y}) + \sum_{t=1}^{r} \bar{\vartheta}^G_t G_t(\bar{y}) \\
& + f_i^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{j=1}^{m} \bar{\mu}_j g_j^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{s=1}^{q} \bar{\xi}_s h_s^{+}(\bar{y}; \bar{x} - \bar{y}) \\
& - \sum_{t=1}^{r} \bar{\vartheta}^H_t H_t^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{t=1}^{r} \bar{\vartheta}^G_t G_t^{+}(\bar{y}; \bar{x} - \bar{y}), \quad i = 1, \ldots, p.
\end{aligned}$$

Multiplying each of the above inequalities by the corresponding Lagrange multiplier $\bar{\lambda}_i$, $i = 1, \ldots, p$, respectively, and then summing the resulting inequalities, we obtain

$$\begin{aligned}
\sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{x}) > {} & \sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{y}) + \sum_{i=1}^{p} \bar{\lambda}_i \left[ \sum_{j=1}^{m} \bar{\mu}_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar{\xi}_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar{\vartheta}^H_t H_t(\bar{y}) + \sum_{t=1}^{r} \bar{\vartheta}^G_t G_t(\bar{y}) \right] \\
& + \sum_{i=1}^{p} \bar{\lambda}_i f_i^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{i=1}^{p} \bar{\lambda}_i \left[ \sum_{j=1}^{m} \bar{\mu}_j g_j^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{s=1}^{q} \bar{\xi}_s h_s^{+}(\bar{y}; \bar{x} - \bar{y}) - \sum_{t=1}^{r} \bar{\vartheta}^H_t H_t^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{t=1}^{r} \bar{\vartheta}^G_t G_t^{+}(\bar{y}; \bar{x} - \bar{y}) \right].
\end{aligned}$$

By the feasibility of $(\bar{y}, \bar{\lambda}, \bar{\mu}, \bar{\xi}, \bar{\vartheta}^H, \bar{\vartheta}^G, \bar{w}, \bar{\theta})$ in (WDVC), one has $\sum_{i=1}^{p} \bar{\lambda}_i = 1$. Then, the aforesaid inequality gives

$$\begin{aligned}
\sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{x}) > {} & \sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{y}) + \sum_{j=1}^{m} \bar{\mu}_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar{\xi}_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar{\vartheta}^H_t H_t(\bar{y}) + \sum_{t=1}^{r} \bar{\vartheta}^G_t G_t(\bar{y}) \\
& + \sum_{i=1}^{p} \bar{\lambda}_i f_i^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{j=1}^{m} \bar{\mu}_j g_j^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{s=1}^{q} \bar{\xi}_s h_s^{+}(\bar{y}; \bar{x} - \bar{y}) \\
& - \sum_{t=1}^{r} \bar{\vartheta}^H_t H_t^{+}(\bar{y}; \bar{x} - \bar{y}) + \sum_{t=1}^{r} \bar{\vartheta}^G_t G_t^{+}(\bar{y}; \bar{x} - \bar{y}).
\end{aligned}$$

Using the first constraint of (WDVC), we get that the following inequality

$$\sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{x}) > \sum_{i=1}^{p} \bar{\lambda}_i f_i(\bar{y}) + \sum_{j=1}^{m} \bar{\mu}_j g_j(\bar{y}) + \sum_{s=1}^{q} \bar{\xi}_s h_s(\bar{y}) - \sum_{t=1}^{r} \bar{\vartheta}^H_t H_t(\bar{y}) + \sum_{t=1}^{r} \bar{\vartheta}^G_t G_t(\bar{y})$$

holds, contradicting (169). This completes the proof of this theorem under hypothesis B).

5 Conclusions

This paper presents a study of a new class of nonsmooth vector optimization problems, namely directionally differentiable multiobjective programming problems with vanishing constraints. Under the Abadie constraint qualification, Karush–Kuhn–Tucker type necessary optimality conditions have been established for such nondifferentiable vector optimization problems in terms of the right directional derivatives of the involved functions.
The nonlinear version of the Gordan theorem of the alternative has been used in proving these necessary optimality conditions. However, the Abadie constraint qualification may fail to hold for such multicriteria optimization problems, and then the aforesaid necessary optimality conditions may not be applicable. Therefore, we have introduced the modified Abadie constraint qualification for the considered multiobjective programming problem with vanishing constraints. Then, under the modified Abadie constraint qualification, which is weaker than the standard Abadie constraint qualification, we have proved weaker necessary optimality conditions of the Karush–Kuhn–Tucker type for such nondifferentiable vector optimization problems with vanishing constraints. The sufficiency of the Karush–Kuhn–Tucker necessary optimality conditions has also been proved for the considered directionally differentiable multiobjective programming problem with vanishing constraints under appropriate convexity hypotheses. Furthermore, for the considered directionally differentiable multiobjective programming problem with vanishing constraints, its vector Wolfe dual problem has been defined along the lines of Hu et al. (2020). Then several duality theorems have been established between the primal directionally differentiable multiobjective programming problem with vanishing constraints and its vector Wolfe dual problem under convexity hypotheses. Thus, the above-mentioned optimality conditions and duality results have been derived for a completely new class of directionally differentiable vector optimization problems in comparison to the results existing in the literature, namely for directionally differentiable multiobjective programming problems with vanishing constraints.
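As a lightweight illustration of the vanishing-constraint structure discussed above, the index sets used throughout the analysis can be computed directly from the signs of $H_t$ and $G_t$ at a given feasible point. The following Python sketch is illustrative only and is not part of the paper; it assumes the standard index-set definitions of Achtziger and Kanzow (2008), and the function name and tolerance are chosen here purely for demonstration.

```python
# Illustrative sketch (hypothetical helper, not from the paper): partition the
# vanishing-constraint indices t by the signs of H_t(x) and G_t(x) at a point x,
# following the standard MPVC definitions of Achtziger and Kanzow (2008):
#   T+0 = {t : H_t(x) > 0, G_t(x) = 0},  T+- = {t : H_t(x) > 0, G_t(x) < 0},
#   T0+ = {t : H_t(x) = 0, G_t(x) > 0},  T00 = {t : H_t(x) = 0, G_t(x) = 0},
#   T0- = {t : H_t(x) = 0, G_t(x) < 0}.

def mpvc_index_sets(H_vals, G_vals, tol=1e-10):
    """Classify each constraint index t (1-based) by the signs of H_t and G_t."""
    sets = {"T+0": [], "T+-": [], "T0+": [], "T00": [], "T0-": []}
    for t, (Ht, Gt) in enumerate(zip(H_vals, G_vals), start=1):
        if Ht > tol:
            # At a feasible point, H_t(x) > 0 forces G_t(x) <= 0.
            sets["T+0" if abs(Gt) <= tol else "T+-"].append(t)
        else:
            # H_t(x) = 0: the product constraint G_t(x)H_t(x) <= 0 holds trivially.
            if Gt > tol:
                sets["T0+"].append(t)
            elif Gt < -tol:
                sets["T0-"].append(t)
            else:
                sets["T00"].append(t)
    return sets

# Toy point with three vanishing constraints: H(x) = (2, 0, 0), G(x) = (0, 1, -3).
print(mpvc_index_sets([2.0, 0.0, 0.0], [0.0, 1.0, -3.0]))
# -> {'T+0': [1], 'T+-': [], 'T0+': [2], 'T00': [], 'T0-': [3]}
```

A routine of this kind is convenient, for instance, when checking which convexity assumptions of a hypothesis such as A) in the duality theorems apply at a candidate point.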
Hence, the results established in the literature, generally for scalar differentiable extremum problems with vanishing constraints, have been generalized and extended to directionally differentiable multiobjective programming problems with vanishing constraints. It seems that the techniques employed in this paper can be used in proving similar results for other classes of nonsmooth mathematical programming problems with vanishing constraints. We shall investigate these problems in subsequent papers.

Declarations

Conflict of interest No potential conflict of interest was reported by the author.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

Achtziger, W., Hoheisel, T., & Kanzow, C. (2013). A smoothing-regularization approach to mathematical programs with vanishing constraints. Computational Optimization and Applications, 55, 733–767.

Achtziger, W., & Kanzow, C. (2008). Mathematical programs with vanishing constraints: Optimality conditions and constraint qualifications. Mathematical Programming, 114, 69–99.

Ahmad, I. (2011). Efficiency and duality in nondifferentiable multiobjective programming involving directional derivative.
Applied Mathematics, 2, 452–460.

Antczak, T. (2002). Multiobjective programming under d-invexity. European Journal of Operational Research, 137, 28–36.

Antczak, T. (2009). Optimality conditions and duality for nondifferentiable multiobjective programming problems involving d-r-type I functions. Journal of Computational and Applied Mathematics, 225, 236–250.

Antczak, T. (2022). Optimality conditions and Mond–Weir duality for a class of differentiable semi-infinite multiobjective programming problems with vanishing constraints. 4OR, 20(3), 417–442.

Arana-Jiménez, M., Ruiz-Garzón, G., Osuna-Gómez, R., & Hernández-Jiménez, B. (2013). Duality and a characterization of pseudoinvexity for Pareto and weak Pareto solutions in nondifferentiable multiobjective programming. Journal of Optimization Theory and Applications, 156, 266–277.

Dinh, N., Lee, G. M., & Tuan, L. A. (2005). Generalized Lagrange multipliers for nonconvex directionally differentiable programs. In V. Jeyakumar & A. Rubinov (Eds.), Continuous optimization. Springer.

Dorsch, D., Shikhman, V., & Stein, O. (2012). Mathematical programs with vanishing constraints: Critical point theory. Journal of Global Optimization, 52, 591–605.

Dussault, J. P., Haddou, M., & Migot, T. (2019). Mathematical programs with vanishing constraints: Constraint qualifications, their applications and a new regularization method. Optimization, 68, 509–538.

Florenzano, M., & Le Van, C. (2001). Finite dimensional convexity and optimization. Studies in Economic Theory. Springer.

Giorgi, G., et al. (2002). Osservazioni sui teoremi dell'alternativa non lineari implicanti relazioni di uguaglianza e vincolo insiemistico [Remarks on nonlinear theorems of the alternative involving equality relations and a set constraint]. In G. Crespi (Ed.), Optimality in economics, finance and industry (pp. 171–183). Milan: Datanova.

Guu, S.-M., Singh, Y., & Mishra, S. K. (2017). On strong KKT type sufficient optimality conditions for multiobjective semi-infinite programming problems with vanishing constraints.
Journal of Inequalities and Applications, 2017, 282.

Hiriart-Urruty, J.-B., & Lemaréchal, C. (1993). Convex analysis and minimization algorithms I. Grundlehren der mathematischen Wissenschaften. Springer.

Hoheisel, T., & Kanzow, C. (2007). First- and second-order optimality conditions for mathematical programs with vanishing constraints. Applications of Mathematics, 52, 495–514.

Hoheisel, T., & Kanzow, C. (2008). Stationary conditions for mathematical programs with vanishing constraints using weak constraint qualifications. Journal of Mathematical Analysis and Applications, 337, 292–310.

Hoheisel, T., & Kanzow, C. (2009). On the Abadie and Guignard constraint qualifications for mathematical programmes with vanishing constraints. Optimization, 58, 431–448.

Hoheisel, T., Kanzow, C., & Schwartz, A. (2012). Mathematical programs with vanishing constraints: A new regularization approach with strong convergence properties. Optimization, 61, 619–636.

Hu, Q. J., Chen, Y., Zhu, Z. B., & Zhang, B. S. (2014). Notes on some convergence properties for a smoothing-regularization approach to mathematical programs with vanishing constraints. Abstract and Applied Analysis, 2014, 1–7.

Hu, Q., Wang, J., & Chen, Y. (2020). New dualities for mathematical programs with vanishing constraints. Annals of Operations Research, 287, 233–255.

Ishizuka, Y. (1992). Optimality conditions for directionally differentiable multi-objective programming problems. Journal of Optimization Theory and Applications, 72, 91–111.

Izmailov, A. F., & Solodov, M. V. (2009). Mathematical programs with vanishing constraints: Optimality conditions, sensitivity, and relaxation method. Journal of Optimization Theory and Applications, 142, 501–532.

Jahn, J. (2004). Vector optimization: Theory, applications and extensions. Springer.

Jiménez, B., & Novo, V. (2002). Alternative theorems and necessary optimality conditions for directionally differentiable multiobjective programs.
Journal of Convex Analysis, 9, 97–116.

Kharbanda, P., Agarwal, D., & Sinha, D. (2015). Multiobjective programming under (ϕ, d)-V-type I univexity. Opsearch, 52, 168–185.

Khare, A., & Nath, T. (2019). Enhanced Fritz John stationarity, new constraint qualifications and local error bound for mathematical programs with vanishing constraints. Journal of Mathematical Analysis and Applications, 472, 1042–1077.

Mangasarian, O. L. (1969). Nonlinear programming. McGraw-Hill.

Mishra, S. K., & Noor, M. A. (2006). Some nondifferentiable multiobjective programming problems. Journal of Mathematical Analysis and Applications, 316, 472–482.

Mishra, S. K., Rautela, J. S., & Pant, R. P. (2008). On nondifferentiable multiobjective programming involving type-I α-invex functions. Applied Mathematics & Information Sciences, 2, 317–331.

Mishra, S. K., Singh, V., & Laha, V. (2016). On duality for mathematical programs with vanishing constraints. Annals of Operations Research, 243, 249–272.

Mishra, S. K., Singh, V., Laha, V., & Mohapatra, R. N. (2015). On constraint qualifications for multiobjective optimization problems with vanishing constraints. In H. Xu, S. Wang, & S.-Y. Wu (Eds.), Optimization methods (pp. 95–135). Springer.

Mishra, S. K., Wang, S. Y., & Lai, K. K. (2004). Optimality and duality in nondifferentiable and multiobjective programming under generalized d-invexity. Journal of Global Optimization, 29, 425–438.

Preda, V., & Chitescu, I. (1999). On constraint qualification in multiobjective optimization problems: Semidifferentiable case. Journal of Optimization Theory and Applications, 100, 417–433.

Rockafellar, R. T. (1970). Convex analysis. Princeton University Press.

Slimani, H., & Radjef, M. S. (2010). Nondifferentiable multiobjective programming under generalized d-invexity. European Journal of Operational Research, 202, 32–41.

Tung, L. T. (2022).
Karush–Kuhn–Tucker optimality conditions and duality for multiobjective semi-infinite programming with vanishing constraints. Annals of Operations Research, 311, 1307–1334.

Ye, Y. L. (1991). d-invexity and optimality conditions. Journal of Mathematical Analysis and Applications, 162, 242–249.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Annals of Operations Research, Springer Journals. Published: Sep 1, 2023.