Abstract. In this article, we propose a method for solving constrained multi-objective optimization problems using an extension of the classical Augmented Lagrangian method. We demonstrate that any sequence generated by the algorithm is feasible and that any limit point is a weak Pareto optimal solution. A second algorithm is introduced to solve the subproblem within the main algorithm, using the steepest descent method and a nonmonotone max-type line search technique. The theoretical and numerical results validate the performance and efficiency of the proposed method.
Mathematics Subject Classification (2010): 34M60, 49M37, 65K05, 90C26, 90C29.
Key words: Multiobjective steepest descent, penalty function, Pareto stationary point, Pareto
front, max-type nonmonotone line search.
Article history:
Received: October 18, 2023
Received in revised form: March 16, 2025
Accepted: March 24, 2025
1. Introduction
Multiobjective optimization is a critical branch of the optimization field that aims to find
solutions that optimize multiple criteria simultaneously. In many real-world problems, decision-
making is often confronted with complex and diverse constraints, which necessitates the use of
constrained optimization methods to obtain viable solutions [12]. These constraints play a crucial
role in ensuring the practical applicability and feasibility of the obtained solutions, making con-
strained multiobjective optimization an essential tool for addressing real-world decision-making
challenges.
Amongst these methods, augmented Lagrangian has emerged as a promising approach for solv-
ing constrained multiobjective optimization problems. It effectively handles constraints through
an iterative process, enhancing the accuracy of the obtained solutions. The augmented La-
grangian method provides a robust and reliable solution for tackling constrained multi-objective
optimization problems.
In the literature, extensive studies have been conducted on the convergence analysis of the
augmented Lagrangian method in this context. It all started with the groundbreaking article by
Powell (1969) [22], which introduced the concept of augmented Lagrangian to address nonlinear
optimization problems with equality constraints. This seminal work paved the way for numerous
subsequent research endeavors. Subsequently, Hestenes (1969) [16] and Rockafellar (1973) [23]
laid down solid theoretical foundations for the augmented Lagrangian method and established its
global convergence properties for convex problems. Building upon this foundation, adaptations
of the augmented Lagrangian have been developed to handle non-convex and nondifferentiable
optimization problems. Notable contributions, such as those by Birgin and Martı́nez (2000) [6],
proposed variants tailored to address these specific types of problems.
Recently, there has been research on extending the augmented Lagrangian method to solve multi-objective problems. Cocchi et al. [10, 11] introduced a method based on a non-scalar augmented Lagrangian, while Upadhayay et al. [24] proposed an augmented Lagrangian method based on the cone method, which transforms the multiobjective problem into a scalar single-objective problem.
Augmented Lagrangian methods that employ multiplier safeguarding techniques have gained
significant interest in recent years. Notable contributions in this area include the works of
Andreani et al. [1], Birgin et al. [7, 8], and Galván et al. [15]. These methods offer advantages
over classical approaches, such as penalty and multiplier methods [4, 12, 23], as summarized in
[19].
The presence of abstract constraints and the complexity associated with handling approximate
stationary points contribute to the intriguing nature of this problem. Augmented Lagrangian
methods with multiplier safeguarding techniques provide effective tools for addressing these
challenges, enabling efficient and reliable solutions for constrained multi-objective optimization
problems.
Computing the global minimum of each Lagrangian subproblem would simplify the convergence analysis without requiring additional assumptions on the admissible domain, apart from it being closed [3]. However, computing the global minimum is often challenging in the presence
of non-convex objective functions. Therefore, our approach focuses on stationary points. Ad-
ditionally, it is often practical to start with approximate stationary points at the beginning of
the algorithm and gradually demand more accurate solutions as the algorithm progresses, which
simplifies the computations.
In this context, we propose in this study a method based on Augmented Lagrangian to solve
global constrained multi-objective problems. Our objective is to develop an approach that
minimizes the number of additional conditions, such as the requirement of convexity.
The rest of this work is structured as follows. In Section 2, we provide an overview of the pre-
liminary concepts related to multi-objective optimization. Section 3 focuses on the augmented
Lagrangian method for multi-objective optimization, providing a detailed description of the al-
gorithm used and presenting the results regarding the feasibility and optimality of the generated
sequences. In Section 4, we examine the practical application of the algorithm by presenting
results that demonstrate its validity. Section 5 is dedicated to the application of the proposed
algorithm through test problems. Finally, Section 6 summarizes our conclusions, offers future
perspectives, and suggests research directions to explore in this field.
2. Preliminaries
We will consider the multiobjective programming problem defined as follows:
$$\text{(MOP)}\qquad \begin{array}{rl} \min & F(x) = (f_1(x), f_2(x), \cdots, f_q(x)) \\ \text{s.t.} & h_i(x) = 0 \quad \forall i \in I = \{1, 2, 3, \cdots, p\}, \\ & g_l(x) \le 0 \quad \forall l \in L = \{1, 2, 3, \cdots, m\}, \\ & x \in \mathbb{R}^n. \end{array}$$
In this formulation, F : Rn → Rq is a vector function with components f1 (x), f2 (x), . . . , fq (x).
The constraints include equality constraints hi (x) = 0 for all i in the index set I = {1, 2, 3, · · · , p},
and inequality constraints g_l(x) ≤ 0 for all l in the index set L = {1, 2, 3, · · · , m}. In the following, we assume that the functions f_j, h_i and g_l are continuous and differentiable. Let X
denote the feasible space of problem (MOP), defined as X = {x ∈ Rn : h(x) = 0 and g(x) ≤ 0}.
In the rest of this work, we assume that the admissible space X is non-empty.
Throughout our study, we will adhere to the following conventions: the set of positive real
numbers is denoted as R++ . Rn represents the set of column vectors with dimension n. The
image space of a matrix A ∈ R^{m×n} is referred to as Im(A). The vector of dimension q with all components equal to 1 is denoted by e. For any vectors u = (u_1, u_2, · · · , u_n)^T and v = (v_1, v_2, · · · , v_n)^T, we adopt the following conventions regarding equality and inequality: u = v if and only if u_j = v_j for all j; u ≦ v if and only if u_j ≤ v_j for all j; u ≤ v if and only if u ≦ v and u ≠ v; and u < v if and only if u_j < v_j for all j.
A point x^* ∈ X is said to be:
(a): a Pareto optimum for (MOP) if and only if there is no y ∈ X such that F(y) ≤ F(x^*);
(b): a weak Pareto optimum for (MOP) if and only if for all y ∈ X we have
$$\max_{j=1,2,\dots,q} \left\{ f_j(y) - f_j(x^*) \right\} \ge 0.$$
We also define x^* ∈ R^n as a local Pareto optimal point (or local weak Pareto optimal point) if there exists a neighborhood V(x^*) ⊆ R^n such that x^* is a Pareto optimal point (or weak Pareto optimal point, respectively) for F restricted to V(x^*).
A necessary but generally not sufficient condition for weak Pareto optimality can be expressed
by the following relation:
$$\text{(2.1)}\qquad -(\mathbb{R}_{++})^q \cap \operatorname{Im}\big(JF(x^*)\big) = \emptyset,$$
where JF (.) denotes the Jacobian matrix of F . A point x∗ ∈ X is considered stationary for F
if it satisfies relation (2.1). Now, a necessary condition for Pareto optimality is given by the
following definition.
Definition 2.4. ([13]) A point x^* ∈ X is considered a Pareto stationary point for the problem (MOP) if, for any d ∈ R^n, the following inequality holds:
$$\max_{j=1,\dots,q} \nabla f_j(x^*)^\top d \ge 0.$$
Note that if x^* is not a Pareto stationary point, there exists an admissible direction d such that $\max_{j=1,\dots,q} \nabla f_j(x^*)^\top d < 0$.
We can define an ε-Pareto-stationary solution as follows:
Definition 2.5 ([10]). Let ε ≥ 0. A point x^* ∈ R^n is ε-Pareto-stationary for problem (MOP) if
$$\max_{j=1,\dots,q} \nabla f_j(x^*)^\top d \ge -\varepsilon, \quad \forall d \in \{\xi \in \mathbb{R}^n \mid \|\xi\| \le 1\}.$$
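In practice, ε-Pareto-stationarity can be tested numerically through the standard minimax identity $\min_{\|d\|\le 1}\max_{j=1,\dots,q}\nabla f_j(x^*)^\top d = -\min\{\|\sum_{j=1}^{q} w_j\nabla f_j(x^*)\| : w \ge 0,\ \sum_{j=1}^{q} w_j = 1\}$. The following Python sketch assumes that the objective gradients are supplied as the rows of a NumPy array and uses SciPy's SLSQP routine for the simplex-constrained subproblem; it is purely illustrative and is not part of the algorithms proposed below.

```python
import numpy as np
from scipy.optimize import minimize

def stationarity_measure(grads):
    """min over the unit simplex of || sum_j w_j * grad_j ||; its negative equals
    min_{||d|| <= 1} max_j grad_j^T d (the quantity of Definition 2.5)."""
    q = grads.shape[0]
    objective = lambda w: np.linalg.norm(grads.T @ w)
    simplex = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    res = minimize(objective, np.full(q, 1.0 / q), method="SLSQP",
                   bounds=[(0.0, 1.0)] * q, constraints=simplex)
    return np.linalg.norm(grads.T @ res.x)

def is_eps_pareto_stationary(grads, eps):
    # x* is eps-Pareto-stationary iff the stationarity measure does not exceed eps.
    return stationarity_measure(np.asarray(grads, dtype=float)) <= eps
```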
Definition 2.6 presents the concepts of ε-Pareto optimal solution and weakly ε-Pareto optimal solution.
Definition 2.6. Let ε ≥ 0. A point x^* ∈ X is:
(a): ε-Pareto optimal for (MOP) if for every y ∈ X at least one of the following conditions is satisfied:
(i): $\max_{j=1,2,\dots,q} \{f_j(y) - f_j(x^*)\} > -\varepsilon$, or
(ii): $\min_{j=1,2,\dots,q} \{f_j(y) - f_j(x^*)\} \ge -\varepsilon$.
(b): weakly ε-Pareto optimal for (MOP) if $\max_{j=1,2,\dots,q} \{f_j(y) - f_j(x^*)\} \ge -\varepsilon$ for all y ∈ X.
3. Augmented Lagrangian Method for Multiobjective Optimization

3.1. Principle. Let us consider the problem (MOP). Note that (MOP) is equivalent to
$$\begin{array}{rl} \min & F(x) = (f_1(x), f_2(x), \cdots, f_q(x)) \\ \text{s.t.} & h_i(x) = 0 \quad \forall i \in I = \{1, 2, 3, \dots, p\}, \\ & g_l(x) + r_l = 0, \;\; r_l \ge 0 \quad \forall l \in L = \{1, 2, 3, \dots, m\}, \\ & x \in \mathbb{R}^n, \end{array}$$
where r = (r_1, r_2, . . . , r_m)^⊤. Applying the augmented Lagrangian method to this reformulated problem leads to solving the following problem [3, 2]:
$$\text{(3.1)}\qquad \min_{x\in\mathbb{R}^n,\; r\geqq 0} L_\tau(x, r, \mu, \lambda),$$
where
$$L_\tau(x, r, \mu, \lambda) = F(x) + \left\{\sum_{l=1}^{m}\left[\mu_l\big(g_l(x)+r_l\big)+\frac{\tau}{2}\big(g_l(x)+r_l\big)^2\right]+\sum_{i=1}^{p}\left[\lambda_i h_i(x)+\frac{\tau}{2}\big(h_i(x)\big)^2\right]\right\}\cdot e,$$
and τ > 0, µ = (µ_1, µ_2, . . . , µ_m) ∈ R^m_+ and λ = (λ_1, λ_2, . . . , λ_p) ∈ R^p_+. The minimum in (3.1) can be obtained [2] by first minimizing L_τ(x, r, µ, λ) over the slack variables r ≧ 0, which gives
$$L_\tau(x, \mu, \lambda) = \min_{r\geqq 0} L_\tau(x, r, \mu, \lambda),$$
and then minimizing L_τ(x, µ, λ) over x ∈ R^n. Note that $\mu_l\big(g_l(x)+r_l\big)+\frac{\tau}{2}\big(g_l(x)+r_l\big)^2$ is quadratic in r_l. Thus, it is easy to obtain a closed-form expression for L_τ(x, µ, λ) for each fixed x. For a given x, we have
$$\text{(3.2)}\qquad \min_{r\geqq 0} L_\tau(x, r, \mu, \lambda) = F(x) + \left\{\sum_{l=1}^{m}\min_{r_l\ge 0}\left[\mu_l\big(g_l(x)+r_l\big)+\frac{\tau}{2}\big(g_l(x)+r_l\big)^2\right]+\sum_{i=1}^{p}\left[\lambda_i h_i(x)+\frac{\tau}{2}\big(h_i(x)\big)^2\right]\right\}\cdot e.$$
For each l, the minimum of $\mu_l\big(g_l(x)+r_l\big)+\frac{\tau}{2}\big(g_l(x)+r_l\big)^2$ over r_l ≥ 0 is attained at
$$r_l^* = \max\left\{0,\ -\left(\frac{\mu_l}{\tau}+g_l(x)\right)\right\}.$$
This implies
$$\text{(3.3)}\qquad \min_{r_l\ge 0}\left[\mu_l\big(g_l(x)+r_l\big)+\frac{\tau}{2}\big(g_l(x)+r_l\big)^2\right] = \mu_l\max\left\{g_l(x),\,-\frac{\mu_l}{\tau}\right\}+\frac{\tau}{2}\max\left\{g_l(x),\,-\frac{\mu_l}{\tau}\right\}^2.$$
Substituting into (3.2), we obtain
$$\text{(3.4)}\qquad L_\tau(x, \mu, \lambda) = F(x) + \left\{\sum_{l=1}^{m}\left[\mu_l\max\left\{g_l(x),\,-\frac{\mu_l}{\tau}\right\}+\frac{\tau}{2}\max\left\{g_l(x),\,-\frac{\mu_l}{\tau}\right\}^2\right]+\sum_{i=1}^{p}\left[\lambda_i h_i(x)+\frac{\tau}{2}\big(h_i(x)\big)^2\right]\right\}\cdot e,$$
which represents the augmented Lagrangian function of (MOP), with λ and µ as vectors of Lagrange multipliers and τ as the penalty parameter [5, 7, 8, 15, 17]. These parameters are updated at each iteration according to the following equations:
$$\lambda_i^{k+1} = P_C\left[\lambda_i^k + \tau_k\, h_i(x^{k+1})\right] \quad\text{with } C = [0, \lambda_{\max}],\ \text{for all } i = 1, \dots, p,$$
$$\mu_l^{k+1} = P_\Omega\left[\mu_l^k + \tau_k\, g_l(x^{k+1})\right] \quad\text{with } \Omega = [0, \mu_{\max}],\ \text{for all } l = 1, \dots, m,$$
with P_K(·) being the projection operator onto the convex set K.
Now, setting $g_{l,+}(x, \mu_l, \tau) = \max\left\{g_l(x),\,-\frac{\mu_l}{\tau}\right\}$ and substituting in (3.4), we obtain the following simplified expression (3.5), which we will use throughout the rest of the paper:
$$\text{(3.5)}\qquad L_{\tau_k}(x, \lambda^k, \mu^k) = F(x) + \left[\sum_{i=1}^{p}\left\{\lambda_i^k h_i(x) + \frac{\tau_k}{2}\big(h_i(x)\big)^2\right\} + \sum_{l=1}^{m}\left\{\mu_l^k\, g_{l,+}(x,\mu_l^k,\tau_k) + \frac{\tau_k}{2}\, g_{l,+}(x,\mu_l^k,\tau_k)^2\right\}\right]\cdot e.$$
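To fix ideas, the componentwise evaluation of (3.5) can be sketched in Python as follows; the function name and the convention that F, h and g are callables returning NumPy arrays are illustrative choices, not part of the formal development.

```python
import numpy as np

def augmented_lagrangian(F, h, g, x, lam, mu, tau):
    """Evaluate the vector-valued augmented Lagrangian (3.5) at x.

    F(x) -> array of shape (q,); h(x) -> shape (p,); g(x) -> shape (m,);
    lam and mu are the current multiplier estimates, tau > 0 the penalty parameter.
    """
    hx = np.asarray(h(x), dtype=float)
    gx = np.asarray(g(x), dtype=float)
    # g_{l,+}(x, mu_l, tau) = max{ g_l(x), -mu_l / tau }  (slack variables eliminated)
    g_plus = np.maximum(gx, -mu / tau)
    penalty = np.sum(lam * hx + 0.5 * tau * hx ** 2) \
        + np.sum(mu * g_plus + 0.5 * tau * g_plus ** 2)
    # The same scalar penalty is added to every objective (multiplication by e).
    return np.asarray(F(x), dtype=float) + penalty
```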
3.2. Algorithm. In this subsection, we introduce a new algorithm for solving multiobjective
optimization problems using the Augmented Lagrangian method. This algorithm is based on
the information presented in the preceding sections. The algorithm is outlined as follows:
5: for l = 1, 2, 3, · · · , m do
6:     µ_l^{k+1} = P_{[0, µ_max]}[ τ_k g_l(x^{k+1}) + µ_l^k ]
7:     β_l^{k+1} = max{ g_l(x^{k+1}), −µ_l^k / τ_k }
8: if max{ ‖h(x^{k+1})‖, ‖β^{k+1}‖ } ≤ σ max{ ‖h(x^k)‖, ‖β^k‖ } then
9:     τ_{k+1} = τ_k
10: else
11:     τ_{k+1} = α τ_k
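For readability, the outer loop of Algorithm 1 can be summarized by the following Python sketch. It combines the ε_k-approximate solution of the subproblem (Assumption 3.3), the safeguarded multiplier updates, and the penalty update of lines 8-11. The subproblem solver `solve_subproblem` is a placeholder (Algorithm 2 of Section 4 plays this role), the default parameter values mirror those reported in Section 5, and the initial penalty parameter `tau0` is an illustrative choice; the sketch is therefore indicative rather than a verbatim transcription of Algorithm 1.

```python
import numpy as np

def almo_outer_loop(h, g, solve_subproblem, x0, eps_schedule,
                    lam_max=1.0, mu_max=1e5, tau0=1.0, alpha=10.0, sigma=0.9):
    """Sketch of the safeguarded augmented Lagrangian outer loop (Algorithm 1).

    h(x) and g(x) return the equality/inequality constraint values as arrays;
    solve_subproblem(x, lam, mu, tau, eps) returns an eps-Pareto point of
    L_tau(., lam, mu) (Assumption 3.3); eps_schedule is the sequence {eps_k}.
    """
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(np.asarray(h(x), dtype=float))    # lambda^0 = 0
    mu = np.zeros_like(np.asarray(g(x), dtype=float))     # mu^0 = 0
    tau = tau0
    hx = np.asarray(h(x), dtype=float)
    beta = np.maximum(np.asarray(g(x), dtype=float), -mu / tau)
    for eps in eps_schedule:
        # eps_k-Pareto point of the subproblem L_{tau_k}(., lam^k, mu^k).
        x = solve_subproblem(x, lam, mu, tau, eps)
        hx_new = np.asarray(h(x), dtype=float)
        gx_new = np.asarray(g(x), dtype=float)
        beta_new = np.maximum(gx_new, -mu / tau)           # line 7 (uses mu^k)
        # Safeguarded (projected) multiplier updates.
        lam = np.clip(lam + tau * hx_new, 0.0, lam_max)
        mu = np.clip(mu + tau * gx_new, 0.0, mu_max)       # line 6
        # Penalty update (lines 8-11): increase tau unless infeasibility decreased enough.
        if max(np.linalg.norm(hx_new), np.linalg.norm(beta_new)) > \
                sigma * max(np.linalg.norm(hx), np.linalg.norm(beta)):
            tau *= alpha
        hx, beta = hx_new, beta_new
    return x, lam, mu
```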
Assumption 3.1. The objective function F has bounded level sets in the multiobjective sense, meaning that the set {x ∈ R^n : F(x) ≦ F(x^0)} is compact.
Assumption 3.2. The sequence {ε_k} satisfies lim_{k→∞} ε_k = 0.
Assumption 3.3. For any k = 0, 1, . . . , there exists x^{k+1} ∈ R^n, an ε_k-Pareto point for L_{τ_k}(x, λ^k, µ^k), that is: for any x ∈ R^n, there is a j ∈ {1, . . . , q} such that
$$\left(L_{\tau_k}(x^{k+1}, \lambda^k, \mu^k)\right)_j < \left(L_{\tau_k}(x, \lambda^k, \mu^k)\right)_j + \epsilon_k,$$
where {ε_k} is a given bounded sequence.
The following theorem presents the feasibility result for the sequence {x^k} generated by Algorithm 1 at each iteration.
Theorem 3.4. (Algorithm 1 Feasibility) Assume that Assumption 3.3 holds. Let {x^k} be the sequence generated by Algorithm 1. Suppose that there exists an infinite subset K ⊆ N such that lim_{k∈K, k→∞} x^k = x^*. Then, for all x ∈ R^n, we have
$$\text{(3.6)}\qquad \|h(x^*)\|^2 + \|g_+(x^*)\|^2 \le \|h(x)\|^2 + \|g_+(x)\|^2, \quad\text{where } g_+(x) = \max\{g(x), 0\}.$$
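The quantity appearing on both sides of (3.6) is the squared Euclidean infeasibility measure of a point. The following small Python helper (the function name is our own, purely for illustration) makes the componentwise meaning of g_+ explicit:

```python
import numpy as np

def infeasibility(h_val, g_val):
    """||h(x)||^2 + ||g_+(x)||^2 with g_+(x) = max{g(x), 0} taken componentwise."""
    h_val = np.asarray(h_val, dtype=float)
    g_val = np.asarray(g_val, dtype=float)
    return np.sum(h_val ** 2) + np.sum(np.maximum(g_val, 0.0) ** 2)
```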
Proof. We consider two cases on the sequence of penalty parameters:
(a): the sequence {τk } is bounded,
(b): the sequence {τk } is unbounded.
Case (a): From the definition of the sequence of penalty parameters in lines 10 to 12 of Algorithm 1, it can be observed that the terms of the sequence {τ_k} satisfy either τ_{k+1} = τ_k or τ_{k+1} = ατ_k with α > 1. This indicates that {τ_k} is a non-decreasing sequence. In order for {τ_k} to be bounded, the number of times the update τ_{k+1} = ατ_k occurs must be finite. Therefore, there exists a positive integer k_0 such that for any k > k_0, we have τ_k = τ_{k_0}. Hence, for every m ≥ 1,
$$\max\left\{\|h(x^{k_0+1})\|, \|\beta^{k_0+1}\|\right\} \le \sigma \max\left\{\|h(x^{k_0})\|, \|\beta^{k_0}\|\right\},$$
$$\max\left\{\|h(x^{k_0+2})\|, \|\beta^{k_0+2}\|\right\} \le \sigma \max\left\{\|h(x^{k_0+1})\|, \|\beta^{k_0+1}\|\right\} \le \sigma^2 \max\left\{\|h(x^{k_0})\|, \|\beta^{k_0}\|\right\},$$
$$\vdots$$
$$\max\left\{\|h(x^{k_0+m})\|, \|\beta^{k_0+m}\|\right\} \le \sigma^m \max\left\{\|h(x^{k_0})\|, \|\beta^{k_0}\|\right\}.$$
Hence, h(x^{k+1}) → 0 and $\max\left\{g_l(x^{k+1}),\,-\frac{\mu_l^k}{\tau_k}\right\} \to 0$ for all l. Since h and g are continuous, it follows that h(x^*) = 0 and g(x^*) ≤ 0. Thus, the limit point x^* of {x^k} is feasible; in particular ‖h(x^*)‖² + ‖g_+(x^*)‖² = 0, so (3.6) holds.
Case (b): assume that τ_k → ∞. Let K ⊆ N be an infinite subset such that x^k → x^* for k ∈ K. Assume by contradiction that there exists x ∈ R^n such that
$$\text{(3.7)}\qquad \|h(x^*)\|^2 + \|g_+(x^*)\|^2 > \|h(x)\|^2 + \|g_+(x)\|^2.$$
By the continuity of h and g, the boundedness of µ^k and λ^k and the fact that τ_k → ∞, there exist ζ > 0 and k_0 ∈ N such that for all k ∈ K, k ≥ k_0,
$$\text{(3.8)}\qquad \left\|h(x^{k+1})+\frac{\lambda^k}{\tau_k}\right\|^2+\left\|\left(g(x^{k+1})+\frac{\mu^k}{\tau_k}\right)_+\right\|^2-\left\|\frac{\mu^k}{\tau_k}\right\|^2-\left\|\frac{\lambda^k}{\tau_k}\right\|^2 >$$
$$\left\|h(x)+\frac{\lambda^k}{\tau_k}\right\|^2+\left\|\left(g(x)+\frac{\mu^k}{\tau_k}\right)_+\right\|^2-\left\|\frac{\lambda^k}{\tau_k}\right\|^2-\left\|\frac{\mu^k}{\tau_k}\right\|^2+\zeta.$$
Therefore, for all k ∈ K, k ≥ k_0, we have for all j = 1, . . . , q,
$$\text{(3.9)}\qquad f_j(x^{k+1})+\sum_{i=1}^{p}\left[\lambda_i^k h_i(x^{k+1})+\frac{\tau_k}{2}\big(h_i(x^{k+1})\big)^2\right]+\sum_{l=1}^{m}\left[\mu_l^k\, g_{l,+}(x^{k+1},\mu_l^k,\tau_k)+\frac{\tau_k}{2}\, g_{l,+}(x^{k+1},\mu_l^k,\tau_k)^2\right]$$
$$> f_j(x)+\sum_{i=1}^{p}\left[\lambda_i^k h_i(x)+\frac{\tau_k}{2}\big(h_i(x)\big)^2\right]+\sum_{l=1}^{m}\left[\mu_l^k\, g_{l,+}(x,\mu_l^k,\tau_k)+\frac{\tau_k}{2}\, g_{l,+}(x,\mu_l^k,\tau_k)^2\right]+\frac{\tau_k\,\zeta}{2}+f_j(x^{k+1})-f_j(x).$$
Since x^k → x^* for k ∈ K and {ε_k} is bounded, there exists k_0' ≥ k_0 such that for all k ≥ k_0' and j = 1, 2, . . . , q,
$$\frac{\tau_k\,\zeta}{2}+f_j(x^{k+1})-f_j(x) > \epsilon_k.$$
Therefore, for k > k_0', equation (3.9) yields
$$\left(L_{\tau_k}(x^{k+1},\lambda^k,\mu^k)\right)_j > \left(L_{\tau_k}(x,\lambda^k,\mu^k)\right)_j + \epsilon_k \quad\text{for all } j,$$
which contradicts Assumption 3.3. Hence (3.7) cannot hold, and (3.6) follows. □
Theorem 3.5. (Optimality of the solutions generated by Algorithm 1) Let x^* be a cluster point of the sequence {x^{k+1}} generated by Algorithm 1, under Assumptions 3.1 and 3.2. Then x^* is a weak Pareto point for problem (MOP).
Proof. Since the problem (MOP) is feasible, Theorem 3.4 implies that the point x^* is feasible, i.e., h(x^*) = 0 and g(x^*) ≤ 0.
Assume now, by contradiction, that x^* is not a weak Pareto point for problem (MOP). Then there exists x ∈ X such that, for all j = 1, . . . , q,
$$\text{(3.10)}\qquad f_j(x) < f_j(x^*).$$
From the instructions of the algorithm and from Lemma 2.3 we know that
$$\text{(3.11)}\qquad \min_{j=1,\dots,q}\left\{f_j(x^{k+1})+\sum_{i=1}^{p}\left[\lambda_i^k h_i(x^{k+1})+\frac{\tau_k}{2}\big(h_i(x^{k+1})\big)^2\right]+\sum_{l=1}^{m}\left[\mu_l^k\, g_{l,+}(x^{k+1},\mu_l^k,\tau_k)+\frac{\tau_k}{2}\, g_{l,+}(x^{k+1},\mu_l^k,\tau_k)^2\right]\right.$$
$$\left.-\,f_j(x)-\sum_{l=1}^{m}\left[\mu_l^k\, g_{l,+}(x,\mu_l^k,\tau_k)+\frac{\tau_k}{2}\, g_{l,+}(x,\mu_l^k,\tau_k)^2\right]\right\}\le\epsilon_k,$$
which means that
$$\text{(3.12)}\qquad \min_{j=1,\dots,q}\left\{f_j(x^{k+1})-f_j(x)\right\}\le-\sum_{i=1}^{p}\left[\lambda_i^k h_i(x^{k+1})+\frac{\tau_k}{2}\big(h_i(x^{k+1})\big)^2\right]-\sum_{l=1}^{m}\left[\mu_l^k\, g_{l,+}(x^{k+1},\mu_l^k,\tau_k)+\frac{\tau_k}{2}\, g_{l,+}(x^{k+1},\mu_l^k,\tau_k)^2\right]$$
$$+\sum_{l=1}^{m}\left[\mu_l^k\, g_{l,+}(x,\mu_l^k,\tau_k)+\frac{\tau_k}{2}\, g_{l,+}(x,\mu_l^k,\tau_k)^2\right]+\epsilon_k.$$
By rearranging, we obtain
$$\text{(3.13)}\qquad \min_{j=1,\dots,q}\left\{f_j(x^{k+1})-f_j(x)\right\}\le-\frac{\tau_k}{2}\sum_{i=1}^{p}\left(h_i(x^{k+1})+\frac{\lambda_i^k}{\tau_k}\right)^2-\frac{\tau_k}{2}\sum_{l=1}^{m}\max\left\{0,\,g_l(x^{k+1})+\frac{\mu_l^k}{\tau_k}\right\}^2$$
$$+\frac{\tau_k}{2}\sum_{i=1}^{p}\left(\frac{\lambda_i^k}{\tau_k}\right)^2+\frac{\tau_k}{2}\sum_{l=1}^{m}\max\left\{0,\,g_l(x)+\frac{\mu_l^k}{\tau_k}\right\}^2+\epsilon_k.$$
Case (i): assume that τ_k → ∞. Since x ∈ X, we have h(x) = 0 and g(x) ≤ 0, so that max{0, g_l(x) + µ_l^k/τ_k} ≤ µ_l^k/τ_k; dropping the nonpositive terms in (3.13) therefore gives
$$\text{(3.16)}\qquad \min_{j=1,\dots,q}\left\{f_j(x^{k+1})-f_j(x)\right\}\le\sum_{i=1}^{p}\frac{\big(\lambda_i^k\big)^2}{\tau_k}+\sum_{l=1}^{m}\frac{\big(\mu_l^k\big)^2}{\tau_k}+\epsilon_k.$$
Taking limits for k ∈ K, using that $\lim_{k\to\infty}\frac{(\lambda_i^k)^2}{\tau_k}=\lim_{k\to\infty}\frac{(\mu_l^k)^2}{\tau_k}=0$ and ε_k → 0, and recalling again that λ^k and µ^k are bounded, we get
$$\min_{j=1,\dots,q}\left\{f_j(x^*)-f_j(x)\right\}\le 0,$$
which contradicts (3.10).
Case (ii): in this case, there exists k_0 ∈ N such that τ_k = τ_{k_0} for all k ≥ k_0. From the instructions of the algorithm, max{‖h(x^{k+1})‖, ‖β^{k+1}‖} → 0. This implies that h_i(x^{k+1}) → 0 for all i and β^{k+1} → 0, so that µ_l^k/τ_k → 0 as k → ∞, k ∈ K, for all l such that g_l(x^*) < 0. In particular, for such indices l it holds that µ_l^k → 0 as k → ∞, k ∈ K. We therefore have, from Lemma 2.3,
$$\text{(3.17)}\qquad \min_{j=1,\dots,q}\left\{f_j(x^{k+1})-f_j(x)\right\}\le-\frac{\tau_{k_0}}{2}\sum_{i=1}^{p}\left(h_i(x^{k+1})+\frac{\lambda_i^k}{\tau_{k_0}}\right)^2-\frac{\tau_{k_0}}{2}\sum_{l=1}^{m}\max\left\{0,\,g_l(x^{k+1})+\frac{\mu_l^k}{\tau_{k_0}}\right\}^2$$
$$+\frac{\tau_{k_0}}{2}\sum_{i=1}^{p}\left(\frac{\lambda_i^k}{\tau_{k_0}}\right)^2+\frac{\tau_{k_0}}{2}\sum_{l=1}^{m}\max\left\{0,\,g_l(x)+\frac{\mu_l^k}{\tau_{k_0}}\right\}^2+\epsilon_k.$$
Given that h(x) = 0 and g(x) ≤ 0, it follows from equation (3.17) that
$$\text{(3.18)}\qquad \min_{j=1,\dots,q}\left\{f_j(x^{k+1})-f_j(x)\right\}\le-\frac{\tau_{k_0}}{2}\sum_{i=1}^{p}\left(h_i(x^{k+1})+\frac{\lambda_i^k}{\tau_{k_0}}\right)^2-\frac{\tau_{k_0}}{2}\sum_{l=1}^{m}\max\left\{0,\,g_l(x^{k+1})+\frac{\mu_l^k}{\tau_{k_0}}\right\}^2$$
$$+\frac{\tau_{k_0}}{2}\sum_{i=1}^{p}\left(\frac{\lambda_i^k}{\tau_{k_0}}\right)^2+\frac{\tau_{k_0}}{2}\sum_{l=1}^{m}\left(\frac{\mu_l^k}{\tau_{k_0}}\right)^2+\epsilon_k \quad\text{for any } k \in \mathbb{N}.$$
Since x^k → x^* and h(x^*) = 0, and because the functions h_i are continuous for all i and the sequences {λ_i^k} are bounded, we deduce that
$$-\frac{\tau_{k_0}}{2}\sum_{i=1}^{p}\left(h_i(x^{k+1})+\frac{\lambda_i^k}{\tau_{k_0}}\right)^2+\frac{\tau_{k_0}}{2}\sum_{i=1}^{p}\left(\frac{\lambda_i^k}{\tau_{k_0}}\right)^2\longrightarrow 0.$$
Similarly, for the terms involving the inequality constraints, we have
$$\sum_{l=1}^{m}\left(\frac{\mu_l^k}{\tau_{k_0}}\right)^2-\sum_{l=1}^{m}\max\left\{0,\,g_l(x^{k+1})+\frac{\mu_l^k}{\tau_{k_0}}\right\}^2\le\sum_{l:\,g_l(x^*)<0}\left(\frac{\mu_l^k}{\tau_{k_0}}\right)^2+\sum_{l:\,g_l(x^*)=0}\left[\left(\frac{\mu_l^k}{\tau_{k_0}}\right)^2-\max\left\{0,\,g_l(x^{k+1})+\frac{\mu_l^k}{\tau_{k_0}}\right\}^2\right].$$
Since µ_l^k → 0 for all l such that g_l(x^*) < 0, the first sum tends to 0. The second sum also goes to 0 because x^k → x^*, g_l(x^*) = 0, the functions g_l are continuous, and the sequences {µ_l^k} are bounded.
Thus, taking limits in (3.18) and recalling that ε_k → 0, it follows that
$$\min_{j=1,\dots,q}\left\{f_j(x^*)-f_j(x)\right\}\le 0,$$
which contradicts (3.10). Hence x^* is a weak Pareto point for problem (MOP). □
4. Practical Algorithm
In this section, we present an algorithm specifically designed to solve the sub-problem of
Algorithm 1, aiming for an efficient resolution. This algorithm utilizes the Steepest Descent
method. In the rest of this paper, we denote
$$\text{(4.1)}\qquad T_j = \left(L_\tau(x, \lambda, \mu)\right)_j = f_j(x) + \sum_{i=1}^{p}\left[\lambda_i h_i(x) + \frac{\tau}{2}\big(h_i(x)\big)^2\right] + \sum_{l=1}^{m}\left[\mu_l\, g_{l,+}(x, \mu_l, \tau) + \frac{\tau}{2}\, g_{l,+}(x, \mu_l, \tau)^2\right] \quad\text{for all } j,$$
and
$$\nu_q = \left\{w \in \mathbb{R}^q : \sum_{j=1}^{q} w_j = 1,\; w_j \ge 0,\; j = 1, 2, \dots, q\right\}.$$
The description of Algorithm 2 is as follows: first, we evaluate the function (L_τ(x, λ, µ))_j using the parameters λ and µ for each j computed in Algorithm 1, while keeping x as a variable. Then, we initialize the vector C^0 by evaluating the function T_j(x) at the value x^0. In the main loop, we start by solving the problem presented at line 5 to determine the weighting factors. Then, at line 6, these factors are used to determine the descent direction. The stopping condition is checked at line 8, where we compare the value θ_k computed at line 9 with a tolerance threshold ε_k defined previously in Algorithm 1. It is important to note that the threshold ε_k varies at each iteration of Algorithm 1 and tends to 0. Finally, if the point x^k is not ε_k-Pareto-stationary, we compute the descent step in lines 11 to 16. It should be emphasized that the proper definition of Algorithm 2 relies on the use of the line search technique to determine the step size φ in these lines. Therefore, we start by presenting Assumption 4.1 on the search direction d^k proposed in [21], and then we present a technical result stated in Lemma 4.2.
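The overall structure of the inner solver can be illustrated by the following Python sketch of a multiobjective steepest descent iteration with a max-type nonmonotone backtracking line search in the spirit of [21]. The weighting subproblem is solved here with SciPy's SLSQP routine, the stationarity measure is taken as θ_k = −‖d^k‖, and the memory length of the nonmonotone reference value C^k is a free parameter; these are illustrative choices (with ρ, δ and φ_0 as in Section 5) rather than a verbatim transcription of Algorithm 2.

```python
import numpy as np
from scipy.optimize import minimize

def steepest_direction(grads):
    """w^k = argmin_{w in simplex} 0.5*||sum_j w_j grad T_j||^2 and d^k = -J^T w^k."""
    q = grads.shape[0]
    objective = lambda w: 0.5 * np.dot(grads.T @ w, grads.T @ w)
    simplex = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    res = minimize(objective, np.full(q, 1.0 / q), method="SLSQP",
                   bounds=[(0.0, 1.0)] * q, constraints=simplex)
    return -(grads.T @ res.x)

def steepest_descent_subsolver(T, JT, x0, eps, rho=1e-4, delta=0.5, phi0=1.0,
                               memory=5, max_iter=500):
    """Sketch of Algorithm 2: steepest descent on the vector function T with a
    max-type nonmonotone line search.  T(x) -> shape (q,), JT(x) -> shape (q, n)."""
    x = np.asarray(x0, dtype=float)
    history = [np.asarray(T(x), dtype=float)]       # values defining C^k
    for _ in range(max_iter):
        grads = np.asarray(JT(x), dtype=float)
        d = steepest_direction(grads)
        theta = -np.linalg.norm(d)                  # stationarity measure (assumed form)
        if abs(theta) <= eps:                       # x is eps-Pareto-stationary: stop
            break
        C = np.max(np.vstack(history[-memory:]), axis=0)   # max-type reference value C^k
        slope = grads @ d                           # componentwise directional derivatives
        phi = phi0
        for _ in range(60):                         # backtracking with a safety cap
            if np.all(T(x + phi * d) <= C + rho * phi * slope):
                break
            phi *= delta
        x = x + phi * d
        history.append(np.asarray(T(x), dtype=float))
    return x
```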
Assumption 4.1. For a sequence of iterates {x^k} and search directions {d^k}, there exist positive constants ϱ_1 and ϱ_2 such that
$$\max_{j=1,\dots,q}\left\{\nabla F_j(x^k)^\top d^k\right\} \le -\varrho_1\,|\theta_k|, \qquad \|d^k\| \le \varrho_2\,|\theta_k|.$$
Lemma 4.2. For every iteration k of Algorithm 2, the following inequality holds:
$$T(x^k) \le C^k.$$
The following theorem demonstrates that the line search technique presented in Algorithm 2 is well defined, meaning that the step size is determined in a finite number of iterations.
Theorem 4.3. Let x^k be an iterate of Algorithm 2. If JT(x^k)d^k < 0, indicating that d^k is a descent direction, then for any ρ ∈ (0, 1) there exists a φ̄ > 0 such that
$$T(x^k + \varphi d^k) \le C^k + \rho\,\varphi\, JT(x^k) d^k \quad\text{for all } \varphi \in (0, \bar{\varphi}].$$
Proof. Assume that JT(x^k)d^k < 0. Since, by definition, T is differentiable, we have
$$T(x^k + \varphi d^k) = T(x^k) + \varphi\left[JT(x^k)d^k + \Theta(\varphi)\right],$$
where Θ(φ) → 0 as φ → 0^+. Since JT(x^k)d^k < 0 and ρ ∈ (0, 1), for all sufficiently small φ > 0 we have JT(x^k)d^k + Θ(φ) ≤ ρ JT(x^k)d^k, and therefore, using Lemma 4.2,
$$T(x^k + \varphi d^k) \le T(x^k) + \rho\,\varphi\, JT(x^k)d^k \le C^k + \rho\,\varphi\, JT(x^k)d^k. \qquad\Box$$
Theorem 4.4. Assume that Assumptions 3.1 and 4.1 hold. Let x^* be a cluster point of the sequence {x^k} generated by Algorithm 2. Then x^* is a Pareto critical point.
Proof. Since the steepest descent direction $d^k(x^k) = -\sum_{j=1}^{q} w_j^k \nabla T_j(x^k)$ with
$$w^k = \arg\min_{w \in \nu_q} \frac{1}{2}\left\|\sum_{j=1}^{q} w_j \nabla T_j(x^k)\right\|^2$$
clearly satisfies Assumption 4.1, the result holds for Algorithm 2 (see Theorem 6 of [21]). □
5. Numerical Experiments
In this section, we present the application of the proposed method to test problems to demon-
strate its ability to generate Pareto optimal solutions. The set of test problems used is sum-
marized in Table 1, encompassing both convex and non-convex problems. To evaluate the
performance of the method, we utilize specific parameter settings. The values used for the parameters are as follows: ρ = 10^{-4}, α = 10, σ = 0.9, λ^0 = 0 ∈ R^p, µ^0 = 0 ∈ R^m, the safeguarding bounds µ_max = 10^5 and λ_max = 1, φ_0 = 1, δ = 0.5. The algorithm is run as long as ε_k ≥ 10^{-6}; at each iteration, ε_k is updated according to the formula
$$\epsilon_{k+1} = \begin{cases} 0.75\,\epsilon_k & \text{if } k = 1, \\ 0.9\,\epsilon_k & \text{if } k > 1. \end{cases}$$
Regarding the problems that have constraints of the form lb ≤ x ≤ ub, we transform them into constraints of the form lb − x ≤ 0 and x − ub ≤ 0. This transformation is carried out to adapt the constraints to the formalism used by the method. The total number of constraints generated is equal to 2n, where n represents the number of variables in the problem. This transformation ensures the feasibility of the obtained solutions and guarantees that they adhere to the specified bounds. We used an HP EliteBook laptop equipped with an Intel Core i7-3687U processor with a base frequency range of 2.10 GHz to 2.60 GHz and 4 GB of RAM to test our algorithms.
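For concreteness, the tolerance schedule and the transformation of box constraints into the inequality format g(x) ≤ 0 can be sketched as follows; the starting tolerance ε_0 and the function names are illustrative choices.

```python
import numpy as np

def tolerance_schedule(eps0=1.0, eps_min=1e-6):
    """Yield eps_1, eps_2, ... with eps_{k+1} = 0.75*eps_k if k = 1 and 0.9*eps_k otherwise,
    stopping once eps_k drops below eps_min (the termination criterion of Section 5)."""
    eps, k = eps0, 1
    while eps >= eps_min:
        yield eps
        eps *= 0.75 if k == 1 else 0.9
        k += 1

def box_to_inequalities(lb, ub):
    """Turn lb <= x <= ub into g(x) <= 0 with g(x) = (lb - x, x - ub), i.e. 2n constraints."""
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    return lambda x: np.concatenate((lb - np.asarray(x, dtype=float),
                                     np.asarray(x, dtype=float) - ub))
```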
Table 2 shows the mathematical formulation of some test problems taken from Table 1.
DD1 [21]:
$$\begin{array}{rl}\min & f_1(x) = x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2,\\ \min & f_2(x) = 3x_1 + 2x_2 - \dfrac{x_3}{3} + 0.01\,(x_4 - x_5)^3,\\ \text{s.t.} & g_{1\text{-}5}(x) = -x_i - 20 \le 0,\quad i = 1, 2, \dots, 5,\\ & g_{6\text{-}10}(x) = x_i - 20 \le 0,\quad i = 1, 2, \dots, 5.\end{array}$$

Deb [9]:
$$\begin{array}{rl}\min & f_1(x) = x_1,\\ \min & f_2(x) = \dfrac{h(x)}{x_1},\qquad h(x) = 2 - \exp\left\{-\left(\dfrac{x_2 - 0.2}{0.004}\right)^2\right\} - 0.8\exp\left\{-\left(\dfrac{x_2 - 0.6}{0.4}\right)^2\right\},\\ \text{s.t.} & g_1(x) = -x_1 - 0.1 \le 0,\quad g_2(x) = x_1 - 1 \le 0,\\ & g_3(x) = -x_2 \le 0,\quad g_4(x) = x_2 - 1 \le 0.\end{array}$$

FDS [21] (n = 10):
$$\begin{array}{rl}\min & f_1(x) = \dfrac{1}{n^2}\sum_{i=1}^{n} i\,(x_i - i)^4,\\ \min & f_2(x) = \exp\left(\sum_{i=1}^{n}\dfrac{x_i}{n}\right) + \|x\|_2^2,\\ \min & f_3(x) = \dfrac{1}{n(n+1)}\sum_{i=1}^{n} i\,(n - i + 1)\,e^{-x_i},\\ \text{s.t.} & g_{1\text{-}10}(x) = -x_i - 2 \le 0,\quad i = 1, \dots, 10,\\ & g_{11\text{-}20}(x) = x_i - 2 \le 0,\quad i = 1, \dots, 10.\end{array}$$
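As an illustration of how these test problems are supplied to the method, the Deb problem above can be coded as follows (the function names and the array-based constraint format are conventions of this sketch):

```python
import numpy as np

def deb_objectives(x):
    """Objective vector F(x) = (f1(x), f2(x)) of the Deb problem."""
    x1, x2 = x
    h = 2.0 - np.exp(-((x2 - 0.2) / 0.004) ** 2) - 0.8 * np.exp(-((x2 - 0.6) / 0.4) ** 2)
    return np.array([x1, h / x1])

def deb_inequalities(x):
    """Inequality constraints g(x) <= 0 of the Deb problem (no equality constraints)."""
    x1, x2 = x
    return np.array([-x1 - 0.1, x1 - 1.0, -x2, x2 - 1.0])
```

These two callables, together with a starting point, are all that an implementation of Algorithms 1 and 2 requires for this problem.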
The figures below display the Pareto fronts of the problems mentioned in Table 1. These Pareto fronts illustrate the solutions that offer the best possible compromise between the objectives of the problem, where no objective can be improved without sacrificing another.
Figure 11. Pareto front of ZDT1
Figure 12. Pareto front of ZDT2
Figure 13. Pareto front of MLF1
Figure 14. Pareto front of Deb
6. Conclusion
In this paper, we have presented a method based on Augmented Lagrangian for solving
global constrained multi-objective optimization problems. We have demonstrated the feasibility
and optimality of the sequences generated by the proposed algorithm. Additionally, we have
introduced a second algorithm for its practical application, highlighting the results regarding its
validity and emphasizing the numerical outcomes obtained in various test problems.
Future research directions in this field include exploring variants of the proposed algorithm,
adapting it to specific problem classes, and extending the method to handle more complex
constraints.
References
[1] R. Andreani, E. Birgin, J. Martı́nez and M. Schuverdt, On augmented Lagrangian methods with general
lower-level constraints, SIAM Journal on Optimization 18(4) (2008), 1286-1309.
[2] D. Bertsekas, Convex Optimization Algorithms, Athena Scientific, 2015.
[3] D. Bertsekas, Nonlinear programming, Journal of the Operational Research Society 48(3) (1997), 334-334.
[4] D. Bertsekas, On penalty and multiplier methods for constrained minimization, SIAM Journal on Control
and Optimization 14(2) (1976), 216-235.
[5] E. Birgin and J. Martı́nez, On the application of an augmented Lagrangian algorithm to some portfolio
problems, EURO Journal on Computational Optimization 4(1) (2016), 79-92.
[6] E. Birgin, J. Martı́nez and M. Raydan, Nonmonotone spectral projected gradient methods on convex sets,
SIAM Journal on Optimization 10(4) (2000), 1196-1211.
[7] E. Birgin, J. Martı́nez and L. Prudente, Optimality properties of an augmented Lagrangian method on infea-
sible problems, Computational Optimization and Applications 60(3) (2015), 609-631.
[8] E. Birgin and J. Martı́nez, Practical Augmented Lagrangian Methods for Constrained Optimization, Society
for Industrial and Applied Mathematics, 2014.
[9] J. Chen, L. Tang and X. Yang, A Barzilai-Borwein descent method for multiobjective optimization problems,
European Journal of Operational Research 311(1) (2023), 196-209.
[10] G. Cocchi and M. Lapucci, An augmented Lagrangian algorithm for multi-objective optimization, Computa-
tional Optimization and Applications 77(1) (2020), 29-56.
[11] G. Cocchi, M. Lapucci and P. Mansueto, Pareto front approximation through a multi-objective augmented
Lagrangian method, EURO Journal on Computational Optimization 9 (2021), 100008.
[12] I. Das and J. Dennis, Normal-boundary intersection: A new method for generating the Pareto surface in
nonlinear multicriteria optimization problems, SIAM Journal on Optimization 8(3) (1998), 631-657.
[13] J. Fliege and B. Svaiter, Steepest descent methods for multicriteria optimization, Mathematical Methods of
Operations Research (ZOR) 51(8) (2000), 479-494.
[14] J. Fliege, L. Drummond and B. Svaiter, Newton’s method for multiobjective optimization, SIAM Journal on
Optimization 20(2) (2009), 602-626.
[15] G. Galvan and M. Lapucci, On the convergence of inexact augmented Lagrangian methods for problems with
convex constraints, Operations Research Letters 47(3) (2019), 185-189.
[16] M. Hestenes, Multiplier and gradient methods, Journal of Optimization Theory and Applications 4(5) (1969),
303-320.
[17] R. Hug, E. Maitre and N. Papadakis, On the convergence of augmented Lagrangian method for optimal
transport between nonnegative densities, Journal of Mathematical Analysis and Applications 485(2) (2020),
123811.
[18] S. Huband, P. Hingston, L. Barone and L. While, A review of multiobjective test problems and a scalable test
problem toolkit, IEEE Transactions on Evolutionary Computation 10(10) (2006), 477-506.
[19] C. Kanzow and D. Steck, An example comparing the standard and safeguarded augmented Lagrangian methods,
Operations Research Letters 45(6) (2017), 598-603.
[20] K. Miettinen, Nonlinear Multiobjective Optimization, Springer Science and Business Media, 1999.
[21] K. Mita, E. Fukuda and N. Yamashita, Nonmonotone line searches for unconstrained multiobjective opti-
mization problems, Journal of Global Optimization 75(1) (2019), 63-90.
[22] M. J. D. Powell, A Method for Nonlinear Constraints in Minimization Problems, in: R. Fletcher (Ed.), Optimization, Academic Press, London, 1969.
[23] R. Rockafellar, A dual approach to solving nonlinear programming problems by unconstrained optimization,
Mathematical Programming 5(1) (1973), 354-373.
[24] A. Upadhayay, D. Ghosh, Q. Ansari and Jauny, Augmented Lagrangian cone method for multiobjective opti-
mization problems with an application to an optimal control problem, Optimization and Engineering 24(3)
(2022), 1633–1665.