
Robust combinatorial optimization with polyhedral uncertainty

Michael Poss
based on joint work with Marin Bougeret, Jérémy Omer, and Artur Pessoa

October 2, 2024
Table of Contents

General results on polyhedral robust combinatorial optimization

Extension III: Application to decision-dependent information discovery

A crash course in open science

Given:
▶ Y ⊆ {0,1}^n: feasibility set of an optimization problem
▶ U = {u ∈ R^n | Au ≤ b, 0 ≤ u ≤ d}: uncertainty polytope
▶ c: nominal cost vector

Solve:

    min_{y∈Y} max_{u∈U} (c + u)^⊤ y        (Min-Max)

Example of U:
[Figure: a 2D example of U centered at the nominal vector c, shown together with its translated copies (deviation spaces u′ and u″) so that the translated polytope contains the origin.]
A 3D example

Translated description:

    U″ = { u″ ∈ R^3 | u″_1 + u″_2 + u″_3 ≤ 2, 0 ≤ u″ ≤ 1 }

[Figure: the polytope U″ drawn in the (u″_1, u″_2, u″_3) space.]

"Natural" description: 8 symmetric copies of the polytope

    U = { u ∈ R^3 | c_i − 1 ≤ u_i ≤ c_i + 1, ∀i ∈ [3],  Σ_{i=1}^{3} |u_i − c_i| ≤ 2 }
      = { u ∈ R^3 | c_i − 1 ≤ u_i ≤ c_i + 1, ∀i ∈ [3],
                    Σ_{i=1}^{3} (u_i − c_i) ≤ 2,   Σ_{i=1}^{3} (−u_i + c_i) ≤ 2,
                    Σ_{i∈I} (u_i − c_i) − u_j + c_j ≤ 2,   ∀ I ∪ {j} = [3],
                    Σ_{i∈I} (−u_i + c_i) + u_j − c_j ≤ 2,   ∀ I ∪ {j} = [3] }
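The relation between the two descriptions can be sanity-checked numerically; a small sampling sketch (the center c is made up), verifying that u lies in the natural set exactly when its absolute deviation |u − c| lies in the translated polytope:

    # Sampling check: u is in the "natural" description iff |u - c| is in U''.
    import numpy as np

    rng = np.random.default_rng(0)
    c = np.array([5.0, 2.0, 3.0])                    # made-up center

    def in_translated(u2):                           # U'': 0 <= u'' <= 1, sum u'' <= 2
        return bool(np.all(u2 >= 0) and np.all(u2 <= 1) and u2.sum() <= 2)

    def in_natural(u):                               # box c +- 1 and budget sum |u_i - c_i| <= 2
        return bool(np.all(np.abs(u - c) <= 1) and np.abs(u - c).sum() <= 2)

    for _ in range(10000):
        u = c + rng.uniform(-1.5, 1.5, size=3)       # points inside and outside U
        assert in_natural(u) == in_translated(np.abs(u - c))
    print("the two descriptions agree on all sampled points")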
Hardness I: the 2-scenario case is hard

Theorem
The robust selection problem is NP-hard even when |U| = 2.

Proof.
1. SELECTION PROBLEM:          min_{S⊆N, |S|=p} Σ_{i∈S} u_i
2. ROBUST SELECTION PROBLEM:   min_{S⊆N, |S|=p} max_{u∈U} Σ_{i∈S} u_i
3. PARTITION PROBLEM:          min_{S⊆N, |S|=|N|/2} max( Σ_{i∈S} a_i , Σ_{i∈N\S} a_i )
4. Reduction: p = |N|/2 and U = {u^1, u^2} such that

       u^1_i = a_i   and   u^2_i = (2/|N|) Σ_{k∈N} a_k − a_i

   ⇒ max_{u∈U} Σ_{i∈S} u_i = max( Σ_{i∈S} a_i , Σ_{i∈N\S} a_i )
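To make the reduction concrete, here is a small Python sketch (illustrative only: the helper names and the toy instance are made up, and brute force is used merely to check the equivalence on a tiny instance):

    # Sketch of the reduction from PARTITION to robust selection with |U| = 2.
    from itertools import combinations

    def reduction_scenarios(a):
        """Build the two scenarios u1, u2 for a PARTITION instance a."""
        n, total = len(a), sum(a)
        u1 = list(a)                               # u1_i = a_i
        u2 = [2 * total / n - ai for ai in a]      # u2_i = (2/|N|) sum_k a_k - a_i
        return u1, u2

    def robust_value(S, u1, u2):
        """Worst-case cost of the selection S over U = {u1, u2}."""
        return max(sum(u1[i] for i in S), sum(u2[i] for i in S))

    a = [3, 1, 2, 2]                               # toy PARTITION instance (yes-instance)
    u1, u2 = reduction_scenarios(a)
    p = len(a) // 2
    best = min(robust_value(S, u1, u2) for S in combinations(range(len(a)), p))
    # PARTITION is a yes-instance iff the optimal robust value equals sum(a)/2.
    print(best, sum(a) / 2)                        # here both equal 4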
Hardness II: polytope is more general than 2 scenarios

Corollary
The robust selection problem is NP-hard when the number of rows of A is part of the input.

Proof.
1. y* ∈ argmin_{y∈Y} max_{u∈U} (c + u)^⊤ y  ⇔  y* ∈ argmin_{y∈Y} max_{u∈ext(U)} (c + u)^⊤ y
2. {u^1, u^2} is hard  ⟹  conv({u^1, u^2}) is hard.
3. conv({u^1, u^2}) can be described by 2n + 2 inequalities.
Constant number of rows
U = {u ∈ R^n | Au ≤ b, 0 ≤ u ≤ d}, and s denotes the number of rows of A.

Theorem
Solving (Min-Max) amounts to solving O(n^s) problems of the form min_{y∈Y} c̄^⊤ y.

Proof.
1. dualize the maximization over u  ⟹  dual variables (α, π) ∈ D(y)
2. we can construct a set D* with |D*| ∈ O(n^s) such that

       ∪_{y∈{0,1}^n} Proj_α[ ext(D(y)) ] ⊆ D*

   ⟹  enumerate α ∈ D*
3. variables π_k can be substituted by max(y_k − f(α), 0)
4. max(y_k − f(α), 0) = y_k max(1 − f(α), 0) + (1 − y_k) max(−f(α), 0)
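As an illustration of step 1, for a fixed y the inner problem max_{u∈U} (c + u)^⊤ y is simply a linear program over U; below is a minimal SciPy sketch (the data are made up, and in the proof this LP is dualized rather than solved directly):

    # Evaluate max_{u in U} (c + u)^T y for a fixed y, where
    # U = {u : A u <= b, 0 <= u <= d}.  Illustrative data only.
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0, 1.0]])          # s = 1 row
    b = np.array([2.0])
    d = np.array([1.0, 1.0, 1.0])
    c = np.array([3.0, 2.0, 4.0])
    y = np.array([1.0, 0.0, 1.0])            # some fixed feasible y

    # linprog minimizes, so maximize u^T y by minimizing -y^T u.
    res = linprog(-y, A_ub=A, b_ub=b, bounds=list(zip(np.zeros(3), d)))
    worst_case_cost = c @ y - res.fun        # c^T y + max_u u^T y
    print(worst_case_cost)                   # 9.0 on this toy instance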
Extension I: non-linear objective
The previous results extend to:

    min_{y∈Y} max_{u∈U} (c + u)^⊤ f(y)

where f_k(y) ∈ {0, F_k} for each y ∈ Y.

Examples
▶ max-cut with uncertain weights:

    max_{y∈Y} min_{w∈U} Σ_{{i,j}∈E} w_ij y_i (1 − y_j),

  with F_ij = 1.
▶ minimizing the weight of failing jobs:

    min_{y∈Y} max_{w∈U} Σ_{i∈J} w_i U_i(y),

  with U_i(y) = 1 iff job i fails in schedule y.
Extension II: constraint uncertainty
Define:

    min  c^⊤ y
    s.t. (a + u)^⊤ y ≤ h,  ∀u ∈ U        (1)
         y ∈ Y

    Ỹ = Y ∩ { y | (a + u)^⊤ y ≤ h, ∀u ∈ U }
    Ỹ(α) = Y ∩ { y | a(α)^⊤ y ≤ h }

Theorem

    Ỹ = ∪_{α∈D*} Ỹ(α)

Corollary
Solving (1) amounts to solving O(n^s) problems of the form

    min  c^⊤ y
    s.t. ā^⊤ y ≤ h
         y ∈ Y
Extension II: constraint uncertainty – D.-W. reformulation

    Y = Y^M ∩ Y^P        (Y^M: master constraints, Y^P: pricing constraints)

Robust pricing set:   Ỹ^P = Y^P ∩ { y | (a + u)^⊤ y ≤ h, ∀u ∈ U }

Master:
    min  c^⊤ ( Σ_s λ_s x_s )
    s.t. Σ_s λ_s x_s ∈ Y^M
         λ ∈ Λ
Pricing:  min { c^⊤ y | y ∈ Ỹ^P }

Enumerating α ∈ D*, the pricing set becomes  Ỹ^P(α) = Y^P ∩ { y | a(α)^⊤ y ≤ h }.

Master:
    min  c^⊤ ( Σ_{α∈D*} Σ_s λ_s(α) x_s )
    s.t. Σ_{α∈D*} Σ_s λ_s(α) x_s ∈ Y^M
         Σ_{α∈D*} λ(α) ∈ Λ
Pricing:  min { c^⊤ y | y ∈ Ỹ^P(α) }

See Pessoa et al. [2021] for pre-processing techniques reducing the number of α ∈ D*.
Table of Contents

General results on polyhedral robust combinatorial optimization

Extension III: Application to decision-dependent information discovery

A crash course in open science

Step-by-step definition of the problem
Note: for the sake of simplicity, we consider

    ∆ = { δ ∈ R^n | Σ_{i∈[n]} δ_i ≤ 1, 0 ≤ δ ≤ 1 }

See Omer et al. [2024] for the generalization to U.

Decision-dependent information discovery (DDID)
Before choosing y, one may ask for q deviations to be revealed:
▶ the worst-case realization of those q deviations is considered,
▶ y is chosen after those q observations.
⇒ ddid seeks the optimal choice of the q observations:

    z^DDID = min_{w∈W} max_{δ̄∈∆} min_{y∈Y} max_{δ∈∆(w,δ̄)} Σ_{i∈[n]} (c_i + d_i δ_i) y_i        (ddid)

▶ W = { w ∈ {0,1}^n | Σ_i w_i = q },
▶ ∆(w, δ̄) = { δ ∈ ∆ | w ∘ δ = w ∘ δ̄ }: deviations cannot change once observed.
Applications
Any planning process where one may investigate the parameters before taking the actual decision:

▶ organ transplant (e.g. kidney exchange): Y = {short cycles, short paths}, Carvalho et al. [2021]
▶ underground works: Y = {trees, . . .}, Focke et al. [2020]

Many more applications in Vayanos et al. [2020].
Robust combinatorial optimization
Let us decompose

    min_{w∈W} max_{δ̄∈∆} min_{y∈Y} max_{δ∈∆(w,δ̄)} Σ_{i∈[n]} (c_i + d_i δ_i) y_i.

Remember: ∆ = { δ ∈ [0,1]^n | Σ_{i∈[n]} δ_i ≤ Γ }.

▶ How to solve the following?

    max_{δ∈∆(w,δ̄)} Σ_{i∈[n]} (c_i + d_i δ_i) y_i

▶ And the following?

    min_{y∈Y} max_{δ∈∆(w,δ̄)} Σ_{i∈[n]} (c_i + d_i δ_i) y_i

Theorem ([Bertsimas and Sim, 2003])

    min_{y∈Y} max_{δ∈∆} Σ_{i∈[n]} (c_i + d_i δ_i) y_i = min_{ℓ∈[n]_0} [ Γ d^ℓ + min_{y∈Y} Σ_{i∈[n]} c_i^ℓ y_i ],

where d^ℓ and c^ℓ, ℓ ∈ [n]_0 := {0, 1, . . . , n}, follow simple formulas.
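To make the theorem concrete, here is a minimal sketch of the resulting decomposition for the selection problem (choose p items), using the usual Bertsimas–Sim formulas d^ℓ ∈ {d_1, ..., d_n, 0} and c_i^ℓ = c_i + max(d_i − d^ℓ, 0); the instance data and helper names below are made up:

    # Bertsimas-Sim decomposition: solve n + 1 nominal problems with modified costs.
    # Sketch for the selection problem Y = {y in {0,1}^n : sum_i y_i = p}.

    def nominal_selection(costs, p):
        """min_{y in Y} cost^T y: take the p cheapest items."""
        return sum(sorted(costs)[:p])

    def robust_selection(c, d, p, Gamma):
        best = float("inf")
        for d_l in sorted(d, reverse=True) + [0.0]:           # candidate thresholds d^l
            modified = [ci + max(di - d_l, 0.0) for ci, di in zip(c, d)]
            best = min(best, Gamma * d_l + nominal_selection(modified, p))
        return best

    c = [4.0, 3.0, 5.0, 2.0, 6.0]
    d = [2.0, 1.0, 3.0, 4.0, 1.0]
    print(robust_selection(c, d, p=2, Gamma=1))               # 9.0 on this toy instance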
▶ So the following is easy:

    min_{y∈Y} max_{δ∈∆(w,δ̄)} Σ_{i∈[n]} (c_i + d_i δ_i) y_i.

▶ What about the following?

    max_{δ̄∈∆} min_{y∈Y} max_{δ∈∆(w,δ̄)} Σ_{i∈[n]} (c_i + d_i δ_i) y_i.

⇒ This is our main contribution. We study

    Φ(w) = max_{δ̄∈∆} min_{y∈Y} max_{δ∈∆(w,δ̄)} Σ_{i∈[n]} (c_i + d_i δ_i) y_i,

under two assumptions: Y ⊆ {0,1}^n and conv(Y) = P.
Linear formulation of the adversary problem I
For a given choice of observations w ∈ W, the (outer) adversary problem chooses the revealed deviations δ̄:

    Φ(w) = max_{δ̄∈∆} min_{y∈Y} max_{δ∈∆(w,δ̄)} Σ_{i∈[n]} (c_i + d_i δ_i) y_i.

Let us "simplify" ∆(w, δ̄) to ∆̄:

    Φ(w) = max_{δ̄∈∆} min_{y∈Y} max_{δ∈∆̄} Σ_{i∈[n]} (c̄_i + d̄_i δ_i) y_i,

where:
▶ c̄_i and Γ̄ are affine functions of δ̄
▶ ∆̄ = { δ ∈ [0,1]^n | Σ_i δ_i ≤ Γ̄ }
▶ d̄ is independent of δ̄

1. Epigraphic formulation:

    Φ(w) = max_{δ̄∈∆} ω
           s.t. ω ≤ min_{y∈Y} max_{δ∈∆̄} Σ_{i∈[n]} (c̄_i + d̄_i δ_i) y_i
Linear formulation of the adversary problem II
1. Epigraphic formulation:

    Φ(w) = max_{δ̄∈∆} ω
           s.t. ω ≤ min_{y∈Y} max_{δ∈∆̄} Σ_{i∈[n]} (c̄_i + d̄_i δ_i) y_i

2. B&S theorem: the right-hand side is equivalent to solving n + 1 independent problems.

    max_{δ̄∈∆} ω
    s.t. ω ≤ min_{ℓ∈[n]_0} [ Γ̄ d̄^ℓ + min_{y∈Y} Σ_i c̄_i^ℓ y_i ]

    ⟺  max_{δ̄∈∆} ω
       s.t. ω ≤ Γ̄ d̄^ℓ + min_{y∈Y} Σ_i c̄_i^ℓ y_i,   ℓ ∈ [n]_0
Linear formulation of the adversary problem III
2. B&S theorem: the right-hand side is equivalent to solving n + 1 independent problems.

    max_{δ̄∈∆} ω
    s.t. ω ≤ Γ̄ d̄^ℓ + min_{y∈Y} Σ_i c̄_i^ℓ y_i,   ℓ ∈ [n]_0

3. Dualization: use that conv(Y) = P to dualize the minimization.

    Φ(w) = max_{δ̄∈∆} ω
           s.t. ω ≤ Γ̄ d̄^ℓ + max_{λ,π≥0} { b^⊤ λ_ℓ − Σ_{i∈[n]} π_{ℓ,i} : (B_{·,i})^⊤ λ_ℓ − π_{ℓ,i} ≤ c̄_i^ℓ, ∀i },   ℓ ∈ [n]_0

    ⟺  Φ(w) = max_{δ̄∈∆, λ≥0, π≥0} ω
              s.t. ω ≤ Γ̄ d̄^ℓ + b^⊤ λ_ℓ − Σ_{i∈[n]} π_{ℓ,i},   ℓ ∈ [n]_0
                   (B_{·,i})^⊤ λ_ℓ − π_{ℓ,i} ≤ c̄_i^ℓ,   ∀ℓ ∈ [n]_0, ∀i
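For reference, step 3 is plain LP duality; a hedged reconstruction of the primal–dual pair, assuming the description P = conv(Y) = { y ∈ [0,1]^n | By ≥ b } (the matrix B and right-hand side b are only implicit in the slides):

    % LP dual used in step 3, assuming P = { y in [0,1]^n : By >= b };
    % lambda are the duals of By >= b and pi those of y <= 1.
    \min_{y \in P} \sum_{i \in [n]} \bar c_i^{\,\ell} y_i
      \;=\; \max_{\lambda, \pi \ge 0} \Big\{ b^\top \lambda - \sum_{i \in [n]} \pi_i
      \;:\; (B_{\cdot,i})^\top \lambda - \pi_i \le \bar c_i^{\,\ell}, \ \forall i \in [n] \Big\}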
Linear formulation of the adversary problem IV
We have reformulated min_{w∈W} Φ(w) as:

    min_{w∈W} max_{δ̄∈∆} ω
              s.t. ω ≤ Γ̄ d̄^ℓ + b^⊤ λ_ℓ − Σ_{i∈[n]} π_{ℓ,i},   ℓ ∈ [n]_0
                   (B_{·,i})^⊤ λ_ℓ − π_{ℓ,i} ≤ c̄_i^ℓ,   ∀ℓ ∈ [n]_0, ∀i
                   λ_ℓ, π_ℓ ≥ 0,   ∀ℓ ∈ [n]_0

The formulation of ddid is finally obtained by
▶ dualization of the inner maximization problem,
▶ linearization of all the bilinear products involving w.
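Regarding the second bullet, each product between the binary observation variable w_i and a bounded continuous variable can be linearized exactly; a generic sketch (z stands for the product and M for an upper bound on x — both are illustrative symbols, not notation from the slides):

    % Exact linearization of z = w * x, with w binary and 0 <= x <= M:
    z \le M w, \qquad z \le x, \qquad z \ge x - M (1 - w), \qquad z \ge 0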
Main result: compact MILP formulation for ddid

    min   Σ_{ℓ∈[n]_0} r( α_ℓ u_ℓ + Σ_{i∈[n]} c_i y_{ℓ,i} + Σ_{i∈[n]} β_{ℓ,i} y⁰_{ℓ,i} ) + Σ_{i∈[n]} d_i σ_i
    s.t.  Σ_{ℓ∈[n]_0} u_ℓ = 1
          a_i σ_i ≥ −a_i Σ_{ℓ∈[n]_0} α_ℓ u_ℓ + Σ_{ℓ∈[n]_0} y_{ℓ,i} − (1 − w_i),   ∀i ∈ [n]
          B y_ℓ ≥ u_ℓ b,                 ∀ℓ ∈ [n]_0
          y_{ℓ,i} ≤ u_ℓ,                 ∀ℓ ∈ [n]_0, i ∈ [n]
          y⁰_{ℓ,i} ≥ y_{ℓ,i} − w_i,      ∀ℓ ∈ [n]_0, i ∈ [n]
          w ∈ W,  u, y, y⁰, σ ≥ 0.
Selection problem
Y = {select p = n/10 items among n to minimize total cost}

Vayanos et al. [2020]: K-adaptability heuristic: the decision-maker must choose K strategies y^1, . . . , y^K, among which one is chosen after revealing δ̄.
▶ our own implementation
▶ budget uncertainty (L = 1)
▶ K-adaptability reformulations grow linearly in L and K

(a) K-adaptability MILP reformulation:

    n    K    T      gap
    10   2    7      10
    10   3    24     11
    15   2    46     7
    15   3    3275   9

(b) Our exact MILP reformulation:

    n    T    gap
    10   6    0.14
    20   10   0.26
    30   26   0.39
    40   49   0.16
    50   160  0.22

Note: times T are in centiseconds.
Orienteering problem

Y = {elementary paths with maximum time constraint}

▶ CB: exact algorithm: column-and-constraint generation to solve the adversary problem, embedded in a combinatorial Benders' cut algorithm [Paradiso et al., 2022].
▶ BP: our branch-and-price algorithm

    instance   Opt (CB)   Opt (BP)   Solved at root (BP)
    TS2N10     100%       100%       89%
    TS1N15     100%       100%       90%
    TS3N16     100%       100%       83%
    TS2N19     76%        100%       73%
    TS1N30     41%        96%        61%
    TS3N31     37%        90%        80%
Conclusion

U = {u ∈ R^n | Au ≤ b, 0 ≤ u ≤ d}, and L is the number of rows of A.

Take-away messages
▶ min-max: not harder than the nominal problem if L is constant
▶ ddid: if conv(Y) = P is a compact polyhedron and L is constant,
  ▶ computing Φ is a compact LP
  ▶ solving ddid is a compact MILP

Perspectives
▶ Complexity: it is not yet known whether ddid can be NP-hard when the nominal problem is polynomial ⇒ thesis of Xiaoyu Chen
▶ Numerically: decomposition is promising when conv(Y) ≠ P
Table of Contents

General results on polyhedral robust combinatorial optimization

Extension III: Application to decision-dependent information discovery

A crash course in open science

What is open science?
Assuming our research is useful:
▶ Make the results of science available to everybody
▶ Help disseminate science to society

Open data (+)
Benchmarks (MIPLIB, QPLIB, Netlib, . . .), real data from industrial applications

Open code (−)
(−−) Most published algorithms are not reproducible!
(+) Some journals enforce reproducibility (IJOC, MPC, OJMO, OR)

Open publications (−)
(+) Optimization Online, arXiv
(−) Many papers are not available online
(−−) Ever-increasing publication fees (cost ∼ 100M€ annually in France)
The beginnings . . .
Back in the day, publishing was expensive!
Internet and LaTeX
▶ The advent of electronic publishing, dissemination via the Web, and the use of LaTeX significantly reduced publishers' operating costs
▶ Did this lead to nearly open and free publications?
▶ In fact, no:

Note: Publishers ask at least 2000€ for Open Access. The real cost varies between 3€ and 800€ (when heavy typesetting is needed).
The uprising

[Figure taken from the presentation of Marie Farge.]
Examples of fair journals/conferences
Machine learning, artificial intelligence
▶ Journal of Artificial Intelligence Research (JAIR)
▶ Journal of Machine Learning Research (JMLR)
▶ Transactions on Machine Learning Research (TMLR)
Graph theory, algorithmics
▶ Advances in Combinatorics
▶ TheoretiCS
▶ Theory of Computing
▶ Innovations in Graph Theory
And many conferences: LIPIcs (mainly theoretical CS), and more . . .

Mathematical optimization
▶ Open Journal of Mathematical Optimization (OJMO)
  ▶ Area editors: G. Bayraksan, R. Luke, J. Malick, S. Pokutta
  ▶ indexed in most databases, Q2 on SCImago
  ▶ fast track for short papers
  ▶ enforces reproducibility!
References I
Dimitris Bertsimas and Melvyn Sim. Robust discrete optimization and network flows.
Math. Program., 98(1-3):49–71, 2003. doi: 10.1007/s10107-003-0396-4. URL
https://doi.org/10.1007/s10107-003-0396-4.
Margarida Carvalho, Xenia Klimentova, Kristiaan Glorie, Ana Viana, and Miguel
Constantino. Robust models for the kidney exchange problem. INFORMS J.
Comput., 33(3):861–881, 2021. doi: 10.1287/IJOC.2020.0986. URL
https://doi.org/10.1287/ijoc.2020.0986.
Jacob Focke, Nicole Megow, and Julie Meißner. Minimum spanning tree under
explorable uncertainty in theory and experiments. ACM J. Exp. Algorithmics, 25:
1–20, 2020. doi: 10.1145/3422371. URL https://doi.org/10.1145/3422371.
Jérémy Omer, Michael Poss, and Maxime Rougier. Combinatorial Robust Optimization
with Decision-Dependent Information Discovery and Polyhedral Uncertainty. Open
Journal of Mathematical Optimization, 5:5, 2024. doi: 10.5802/ojmo.33. URL
https://ojmo.centre-mersenne.org/articles/10.5802/ojmo.33/.
Rosario Paradiso, Angelos Georghiou, Said Dabia, and Denise Tönissen. Exact and
approximate schemes for robust optimization problems with decision dependent
information discovery. arXiv preprint arXiv:2208.04115, 2022.
Artur Alves Pessoa, Michael Poss, Ruslan Sadykov, and François Vanderbeck.
Branch-cut-and-price for the robust capacitated vehicle routing problem with
knapsack uncertainty. Oper. Res., 69(3):739–754, 2021. doi:
10.1287/opre.2020.2035. URL https://doi.org/10.1287/opre.2020.2035.
Phebe Vayanos, Angelos Georghiou, and Han Yu. Robust optimization with
decision-dependent information discovery. arXiv preprint arXiv:2004.08490, 2020.
