Mathematical Optimization
“Optimization” and “Optimum” redirect here. For other uses, see Optimization (disambiguation) and Optimum (disambiguation).
“Mathematical programming” redirects here. For the peer-reviewed journal, see Mathematical Programming.

In mathematics, computer science and operations research, mathematical optimization (alternatively, optimization or mathematical programming) is the selection of a best element (with regard to some criteria) from some set of available alternatives.[1]

In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics. More generally, optimization includes finding “best available” values of some objective function given a defined domain (or a set of constraints), including a variety of different types of objective functions and different types of domains.

[Figure: Graph of a paraboloid given by f(x, y) = −(x² + y²) + 4. The global maximum at (0, 0, 4) is indicated by a blue dot.]

1 Optimization problems

Main article: Optimization problem

An optimization problem can be represented in the following way:

Given: a function f : A → R from some set A to the real numbers
Sought: an element x0 in A such that f(x0) ≤ f(x) for all x in A (“minimization”) or such that f(x0) ≥ f(x) for all x in A (“maximization”).

Typically, A is some subset of the Euclidean space Rⁿ, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set, while the elements of A are called candidate solutions or feasible solutions.

The function f is called, variously, an objective function, a loss function or cost function (minimization),[2] a utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional. A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an optimal solution.

In mathematics, conventional optimization problems are usually stated in terms of minimization. Generally, unless both the objective function and the feasible region are convex in a minimization problem, there may be several local minima. A local minimum x* is defined as a point for which there exists some δ > 0 so that for all x such that

∥x − x*∥ ≤ δ,

the expression

f(x*) ≤ f(x)

holds; that is to say, on some region around x* all of the function values are greater than or equal to the value at that point. Local maxima are defined similarly.
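As a minimal illustration of these definitions, the paraboloid from the figure above, f(x, y) = −(x² + y²) + 4, can be maximized numerically by minimizing its negation. The following is only a sketch, assuming SciPy is available; the starting point is arbitrary.

```python
# Minimal sketch (assumes SciPy): maximize the paraboloid from the figure,
# f(x, y) = -(x^2 + y^2) + 4, by minimizing its negation.
import numpy as np
from scipy.optimize import minimize

def f(v):
    x, y = v
    return -(x**2 + y**2) + 4.0

# Minimizing -f is the standard way to express a maximization problem.
result = minimize(lambda v: -f(v), x0=np.array([3.0, -2.0]))

print(result.x)     # approximately [0, 0], the maximizer
print(f(result.x))  # approximately 4, the maximum value
```

Because this objective is concave (its negation is convex), the local maximum found here is also the global one; the next paragraph discusses what happens when that convexity is absent.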
A large number of algorithms proposed for solving nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and rigorously optimal solutions, and will treat the former as actual solutions to the original problem. The branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem is called global optimization.

2 Notation

Optimization problems are often expressed with special notation. Here are some examples.

2.1 Minimum and maximum value of a function

Consider the following notation:

min_{x∈R} (x² + 1)

This denotes the minimum value of the objective function x² + 1, when choosing x from the set of real numbers R. The minimum value in this case is 1, occurring at x = 0.

Similarly, the notation

max_{x∈R} 2x

asks for the maximum value of the objective function 2x, where x may be any real number. In this case, there is no such maximum as the objective function is unbounded, so the answer is “infinity” or “undefined”.

2.2 Optimal input arguments

Main article: Arg max

Consider the following notation:

arg min_{x∈(−∞,−1]} (x² + 1)

This represents the value (or values) of the argument x in the interval (−∞, −1] that minimizes (or minimize) the objective function x² + 1 (the actual minimum value of that function is not what the problem asks for). In this case, the answer is x = −1, since x = 0 is infeasible, i.e. it does not belong to the feasible set.

Similarly,

arg max_{x∈[−5,5], y∈R} x cos(y),

or equivalently

arg max_{x, y} x cos(y), subject to: x ∈ [−5, 5], y ∈ R,

represents the (x, y) pair (or pairs) that maximizes (or maximize) the value of the objective function x cos(y), with the added constraint that x lie in the interval [−5, 5] (again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form (5, 2kπ) and (−5, (2k+1)π), where k ranges over all integers.

arg min and arg max are sometimes also written argmin and argmax, and stand for argument of the minimum and argument of the maximum.

3 History

Fermat and Lagrange found calculus-based formulas for identifying optima, while Newton and Gauss proposed iterative methods for moving towards an optimum.

The term “linear programming” for certain optimization cases was due to George B. Dantzig, although much of the theory had been introduced by Leonid Kantorovich in 1939. (Programming in this context does not refer to computer programming, but comes from the use of program by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig studied at that time.) Dantzig published the Simplex algorithm in 1947, and John von Neumann developed the theory of duality in the same year.

Other major researchers in mathematical optimization include the following:

• Aharon Ben-Tal
4 Major subfields

• Linear programming (LP), a type of convex programming, studies the case in which the objective function f is linear and the constraints are specified using only linear equalities and inequalities. Such a set is called a polyhedron or a polytope if it is bounded. (A small numerical example follows this list.)
• Second order cone programming (SOCP) is a convex program, and includes certain types of quadratic programs.
• Semidefinite programming (SDP) is a subfield of convex optimization where the underlying variables are semidefinite matrices. It is a generalization of linear and convex quadratic programming.
• Conic programming is a general form of convex programming. LP, SOCP and SDP can all be viewed as conic programs with the appropriate type of cone.
• Geometric programming is a technique whereby objective and inequality constraints expressed as posynomials and equality constraints as monomials can be transformed into a convex program.
• Stochastic optimization for use with random (noisy) function measurements or random inputs in the search process.
• Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.
• Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Usually, heuristics do not guarantee that any optimal solution need be found. On the other hand, heuristics are used to find approximate solutions for many complicated optimization problems.
• Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning).
• Constraint programming.
• Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of particular use in scheduling.
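As a concrete illustration of the linear programming case mentioned above, the sketch below solves a small, made-up LP with SciPy's linprog (assuming SciPy is installed); the data are illustrative only.

```python
# Illustrative only: a small made-up linear program solved with SciPy's linprog.
# linprog minimizes c @ x subject to A_ub @ x <= b_ub and variable bounds, so a
# maximization is written as the minimization of the negated objective.
import numpy as np
from scipy.optimize import linprog

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0
c = np.array([-3.0, -2.0])          # negate to turn the max into a min
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x)     # optimal vertex of the feasible polyhedron, here (4, 0)
print(-res.fun)  # maximum objective value, here 12
```

Because the feasible region is a polytope and the objective is linear, an optimum is attained at a vertex; here it is (4, 0) with value 12.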
In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time):

• Calculus of variations seeks to optimize an action integral over some space to an extremum by varying a function of the coordinates.
• Optimal control theory is a generalization of the calculus of variations which introduces control policies.
• Dynamic programming studies the case in which the optimization strategy is based on splitting the problem into smaller subproblems. The equation that describes the relationship between these subproblems is called the Bellman equation.

4.2 Multi-modal optimization

Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer.

Classical optimization techniques, due to their iterative approach, do not perform satisfactorily when they are used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm. Evolutionary algorithms, however, are a very popular approach to obtaining multiple solutions in a multi-modal optimization task.
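One simple (if crude) way to look for several solutions, short of a full evolutionary algorithm, is to restart a local optimizer from many starting points and keep the distinct minimizers. The sketch below does this for a made-up objective with two global minima, assuming SciPy is available.

```python
# Hedged illustration of multi-modality: f(x) = (x^2 - 4)^2 has two global
# minima, at x = -2 and x = +2.  A single local run finds only one of them,
# so this sketch restarts scipy.optimize.minimize from several starting points
# and collects the distinct minimizers it reaches.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0]**2 - 4.0)**2

solutions = set()
for start in np.linspace(-5.0, 5.0, 10):   # 10 points, none of them exactly 0
    res = minimize(f, x0=[start])
    if res.success:
        solutions.add(round(float(res.x[0]), 4))

print(sorted(solutions))   # approximately [-2.0, 2.0]
```

As the text notes, such restarts give no guarantee of finding every solution; population-based evolutionary methods maintain many candidate solutions at once precisely to improve those odds.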
An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions.

Optima of equality-constrained problems can be found by the Lagrange multiplier method. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'.

5.4 Sufficient conditions for optimality

While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints (called the bordered Hessian) in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If a candidate solution satisfies the first-order conditions, then satisfaction of the second-order conditions as well is sufficient to establish at least local optimality.

5.5 Sensitivity and continuity of optima

The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics.

The maximum theorem of Claude Berge (1963) describes the continuity of an optimal solution as a function of underlying parameters.

If the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian is negative definite, then the point is a local maximum; finally, if it is indefinite, then the point is some kind of saddle point.

Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems.

When the objective function is convex, then any local minimum will also be a global minimum. There exist efficient numerical techniques for minimizing convex functions, such as interior-point methods.

6 Computational optimization techniques

To solve problems, researchers may use algorithms that terminate in a finite number of steps, or iterative methods that converge to a solution (on some specified class of problems), or heuristics that may provide approximate solutions to some problems (although their iterates need not converge).

6.1 Optimization algorithms

• Simplex algorithm of George Dantzig, designed for linear programming.
• Extensions of the simplex algorithm, designed for quadratic programming and for linear-fractional programming.
• Variants of the simplex algorithm that are especially suited for network optimization.
• Combinatorial algorithms
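The second-order test described above can also be checked numerically. The sketch below (NumPy only; the function and the point are made up for illustration) approximates the Hessian by finite differences and classifies a stationary point from the signs of its eigenvalues.

```python
# Sketch of the second-order test: at a stationary point, the eigenvalues of
# the Hessian decide whether it is a local minimum (all positive), a local
# maximum (all negative), or a saddle point (mixed signs).
import numpy as np

def hessian_fd(f, x, h=1e-5):
    """Approximate the Hessian of f at x with central finite differences."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

def f(v):                       # f(x, y) = x^2 - y^2 has a saddle point at (0, 0)
    return v[0]**2 - v[1]**2

eigenvalues = np.linalg.eigvalsh(hessian_fd(f, np.array([0.0, 0.0])))
if np.all(eigenvalues > 0):
    print("local minimum")
elif np.all(eigenvalues < 0):
    print("local maximum")
else:
    print("saddle point")       # printed here: eigenvalues are approximately ±2
```

The iterative methods listed next obtain this curvature information either exactly, by finite differences as here, or through quasi-Newton updates.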
6.2 Iterative methods

One major criterion for optimizers is just the number of required function evaluations, as this often is already a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the N variables. The derivatives provide detailed information for such optimizers, but are even harder to calculate; e.g., approximating the gradient takes at least N+1 function evaluations. For approximations of the second derivatives (collected in the Hessian matrix), the number of function evaluations is in the order of N². Newton's method requires the second-order derivatives, so for each iteration the number of function calls is in the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers usually need more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself.

• Methods that evaluate Hessians (or approximate Hessians, using finite differences):
  • Newton's method
  • Sequential quadratic programming: A Newton-based method for small-medium scale constrained problems. Some versions can handle large-dimensional problems.
• Methods that evaluate gradients or approximate gradients using finite differences (or even subgradients):
  • Quasi-Newton methods: Iterative methods for medium-large problems (e.g. N<1000).
  • Conjugate gradient methods: Iterative methods for large problems. (In theory, these methods terminate in a finite number of steps with quadratic objective functions, but this finite termination is not observed in practice on finite-precision computers.)
  • Interior point methods: This is a large class of methods for constrained optimization. Some interior-point methods use only (sub)gradient information, and others require the evaluation of Hessians.
  • Gradient descent (alternatively, “steepest descent” or “steepest ascent”): A (slow) method of historical and theoretical interest, which has had renewed interest for finding approximate solutions of enormous problems.
  • Subgradient methods: An iterative method for large locally Lipschitz functions using generalized gradients. Following Boris T. Polyak, subgradient-projection methods are similar to conjugate-gradient methods.
  • Bundle method of descent: An iterative method for small-medium-sized problems with locally Lipschitz functions, particularly for convex minimization problems. (Similar to conjugate gradient methods.)
  • Ellipsoid method: An iterative method for small problems with quasiconvex objective functions and of great theoretical interest, particularly in establishing the polynomial-time complexity of some combinatorial optimization problems. It has similarities with Quasi-Newton methods.
  • Reduced gradient method (Frank–Wolfe) for approximate minimization of specially structured problems with linear constraints, especially with traffic networks. For general unconstrained problems, this method reduces to the gradient method, which is regarded as obsolete (for almost all problems).
  • Simultaneous perturbation stochastic approximation (SPSA) method for stochastic optimization; uses random (efficient) gradient approximation.
• Methods that evaluate only function values: If a problem is continuously differentiable, then gradients can be approximated using finite differences, in which case a gradient-based method can be used.
  • Interpolation methods
  • Pattern search methods, which have better convergence properties than the Nelder–Mead heuristic (with simplices), which is listed below.

6.3 Global convergence

More generally, if the objective function is not a quadratic function, then many optimization methods use other methods to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular method for ensuring convergence relies on line searches, which optimize a function along one dimension. A second and increasingly popular method for ensuring convergence uses trust regions. Both line searches and trust regions are used in modern methods of non-differentiable optimization. Usually a global optimizer is much slower than advanced local optimizers (such as BFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points.
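A line search of the kind mentioned above can be sketched in a few lines. The code below is illustrative only (the function, constants, and iteration limit are arbitrary choices): it runs gradient descent on the Rosenbrock function and picks each step length by backtracking until the Armijo sufficient-decrease condition holds.

```python
# Illustrative sketch: gradient descent with a backtracking (Armijo) line search
# on the Rosenbrock function.  Constants and iteration limits are arbitrary.
import numpy as np

def rosenbrock(v):
    x, y = v
    return (1 - x)**2 + 100 * (y - x**2)**2

def grad_rosenbrock(v):
    x, y = v
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2),
                     200 * (y - x**2)])

x = np.array([-1.2, 1.0])
for _ in range(5000):
    g = grad_rosenbrock(x)
    if np.linalg.norm(g) < 1e-6:          # stationary point reached
        break
    t = 1.0
    # Backtracking line search: shrink t until f decreases sufficiently.
    while rosenbrock(x - t * g) > rosenbrock(x) - 1e-4 * t * (g @ g):
        t *= 0.5
    x = x - t * g

print(x)   # slowly approaches the global minimum at (1, 1)
```

Trust-region methods play the same safeguarding role but instead bound each step inside a region where a local model of the objective is trusted.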
6.4 Heuristics

Main article: Heuristic algorithm

Besides (finitely terminating) algorithms and (convergent) iterative methods, there are heuristics that can provide approximate solutions to some optimization problems:

• Memetic algorithm
• Differential evolution
• Evolutionary algorithms
• Dynamic relaxation
• Genetic algorithms
• Hill climbing with random restart
• Nelder–Mead simplicial heuristic: A popular heuristic for approximate minimization (without calling gradients)
• Particle swarm optimization
• Artificial bee colony optimization
• Simulated annealing
• Tabu search
• Reactive Search Optimization (RSO)[3] implemented in LIONsolver

7 Applications

7.1 Mechanics

Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since rigid body dynamics can be viewed as attempting to solve an ordinary differential equation on a constraint manifold; the constraints are various nonlinear geometric constraints such as “these two points must always coincide”, “this surface must not penetrate any other”, or “this point must always lie somewhere on this curve”. Also, the problem of computing contact forces can be done by solving a linear complementarity problem, which can also be viewed as a QP (quadratic programming) problem.

Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems.

7.2 Economics

Modern optimization theory includes traditional optimization theory but also overlaps with game theory and the study of economic equilibria. The Journal of Economic Literature codes classify mathematical programming, optimization techniques, and related topics under JEL:C61-C63.

In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem, are economic optimization problems. Insofar as they behave consistently, consumers are assumed to maximize their utility, while firms are usually assumed to maximize their profit. Also, agents are often modeled as being risk-averse, thereby preferring to avoid risk. Asset prices are also modeled using optimization theory, though the underlying mathematics relies on optimizing stochastic processes rather than on static optimization. Trade theory also uses optimization to explain trade patterns between nations. The optimization of market portfolios is an example of multi-objective optimization in economics.

Since the 1970s, economists have modeled dynamic decisions over time using control theory. For example, microeconomists use dynamic search models to study labor-market behavior.[5] A crucial distinction is between deterministic and stochastic models.[6] Macroeconomists build dynamic stochastic general equilibrium (DSGE) models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments.[7][8]

7.3 Electrical engineering

Some common applications of optimization techniques in electrical engineering include active filter design,[9] stray field reduction in superconducting magnetic energy storage systems, space mapping design of microwave structures,[10] handset antennas, and electromagnetic design.

7.4 Operations research

Another field that uses optimization techniques extensively is operations research.[11] Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research uses stochastic programming to model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization and stochastic optimization methods.
7.5 Control engineering

Mathematical optimization is used in modern controller design: high-level controllers repeatedly solve, online, an optimization problem including constraints and a model of the system to be controlled.

7.6 Hydrology

Nonlinear optimization methods are applied to generate computational models of subsurface reservoirs (such as water resources or oil reservoirs).[12]

7.7 Geophysics

Optimization techniques are regularly used in geophysical parameter estimation problems. Given a set of geophysical measurements, e.g. seismic recordings, it is common to solve for the physical properties and geometrical shapes of the underlying rocks and fluids.

7.8 Molecular modeling

Main article: Molecular modeling

Nonlinear optimization methods are widely used in conformational analysis.

8 Solvers

Main article: List of optimization software

[4] Lionel Robbins (1935, 2nd ed.). An Essay on the Nature and Significance of Economic Science. Macmillan, p. 16.
[5] A. K. Dixit ([1976] 1990). Optimization in Economic Theory, 2nd ed., Oxford. Description and contents preview.
[6] A. G. Malliaris (2008). “stochastic optimal control”, The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
[7] Julio Rotemberg and Michael Woodford (1997). “An Optimization-based Econometric Framework for the Evaluation of Monetary Policy”. NBER Macroeconomics Annual, 12, pp. 297–346.
[8] From The New Palgrave Dictionary of Economics (2008), 2nd Edition with Abstract links: “numerical optimization methods in economics” by Karl Schmedders; “convex programming” by Lawrence E. Blume; “Arrow–Debreu model of general equilibrium” by John Geanakoplos.
[9] Bishnu Prasad De, R. Kar, D. Mandal, S. P. Ghoshal. “Optimal selection of components value for analog active filter design using simplex particle swarm optimization”. International Journal of Machine Learning and Cybernetics, August 2015, Volume 6, Issue 4, pp. 621–636.
[10] S. Koziel and J. W. Bandler. “Space mapping with multiple coarse models for optimization of microwave components”. IEEE Microwave and Wireless Components Letters, vol. 8, no. 1, pp. 1–3, Jan. 2008.
[11] “New force on the political scene: the Seophonisten”. http://www.seophonist-wahl.de. Retrieved 14 September 2013.
[12] “History matching production data and uncertainty assessment with an efficient TSVD parameterization algorithm”. Journal of Petroleum Science and Engineering 113: 54–71. doi:10.1016/j.petrol.2013.11.025.

11 Further reading

11.1 Comprehensive

• Magnanti, Thomas L. (1989). “Twenty years of mathematical programming”. In Cornet, Bernard; Tulkens, Henry. Contributions to Operations Research and Economics: The twentieth anniversary of CORE (Papers from the symposium held in Louvain-la-Neuve, January 1987). Cambridge, MA: MIT Press. pp. 163–227. ISBN 0-262-03149-3. MR 1104662.
  • J. E. Dennis, Jr. and Robert B. Schnabel, A view of unconstrained optimization (pp. 1–72);
  • Donald Goldfarb and Michael J. Todd, Linear programming (pp. 73–170);
  • Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright, Constrained nonlinear programming (pp. 171–210);
  • Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin, Network flows (pp. 211–369);
  • W. R. Pulleyblank, Polyhedral combinatorics (pp. 371–446);
  • George L. Nemhauser and Laurence A. Wolsey, Integer programming (pp. 447–527);
  • Claude Lemaréchal, Nondifferentiable optimization (pp. 529–572);
  • Roger J-B Wets, Stochastic programming (pp. 573–629);
  • A. H. G. Rinnooy Kan and G. T. Timmer, Global optimization (pp. 631–662);
  • P. L. Yu, Multiple criteria decision making: five basic concepts (pp. 663–699).

• Shapiro, Jeremy F. (1979). Mathematical Programming: Structures and Algorithms. New York: Wiley-Interscience [John Wiley & Sons]. pp. xvi+388. ISBN 0-471-77886-9. MR 544669.

• Spall, J. C. (2003). Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. Wiley, Hoboken, NJ.

11.2 Continuous optimization

• Roger Fletcher (2000). Practical Methods of Optimization. Wiley. ISBN 978-0-471-49463-8.

• Mordecai Avriel (2003). Nonlinear Programming: Analysis and Methods. Dover Publishing. ISBN 0-486-43227-0.

• Bonnans, J. Frédéric; Shapiro, Alexander (2000). Perturbation Analysis of Optimization Problems. Springer Series in Operations Research. New York: Springer-Verlag. pp. xviii+601. ISBN 0-387-98705-3. MR 1756264.

• Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization (pdf). Cambridge University Press. ISBN 978-0-521-83378-3. Retrieved October 15, 2011.

• Jorge Nocedal and Stephen J. Wright (2006). Numerical Optimization. Springer. ISBN 0-387-30303-0.

• Ruszczyński, Andrzej (2006). Nonlinear Optimization. Princeton, NJ: Princeton University Press. pp. xii+454. ISBN 978-0-691-11915-1. MR 2199043.

• Robert J. Vanderbei (2013). Linear Programming: Foundations and Extensions, 4th Edition. Springer. ISBN 978-1-4614-7629-0.

11.3 Combinatorial optimization

• R. K. Ahuja, Thomas L. Magnanti, and James B. Orlin (1993). Network Flows: Theory, Algorithms, and Applications. Prentice-Hall, Inc. ISBN 0-13-617549-X.

• William J. Cook, William H. Cunningham, William R. Pulleyblank, Alexander Schrijver; Combinatorial Optimization; John Wiley & Sons; 1 edition (November 12, 1997); ISBN 0-471-55894-X.

• Gondran, Michel; Minoux, Michel (1984). Graphs and Algorithms. Wiley-Interscience Series in Discrete Mathematics (Translated by Steven Vajda from the second French ed., Collection de la Direction des Études et Recherches d'Électricité de France [Collection of the Department of Studies and Research of Électricité de France], v. 37. Paris: Éditions Eyrolles 1985. xxviii+545 pp. MR 868083). Chichester: John Wiley & Sons, Ltd.
• Jon Lee; A First Course in Combinatorial Optimization; Cambridge University Press; 2004; ISBN 0-521-01012-8.

• Christos H. Papadimitriou and Kenneth Steiglitz; Combinatorial Optimization: Algorithms and Complexity; Dover Publications; (paperback, unabridged edition, July 1998); ISBN 0-486-40258-4.

11.4 Relaxation (extension method)

Methods to obtain suitable (in some sense) natural extensions of optimization problems that otherwise lack existence or stability of solutions, so as to obtain problems with guaranteed existence of solutions and guaranteed stability in some sense (typically under various perturbations of data), are in general called relaxation. Solutions of such extended (= relaxed) problems in some sense characterize (at least certain features of) the original problems, e.g. as far as their optimizing sequences are concerned. Relaxed problems may also possess their own natural linear structure that may yield specific optimality conditions different from the optimality conditions for the original problems.

12 Journals

• Computational Optimization and Applications

13 External links

• Decision Tree for Optimization Software: links to optimization source codes
• Global optimization
• Mathematical Programming Glossary
• Mathematical Programming Society
• NEOS Guide (currently being replaced by the NEOS Wiki)
• Optimization Online: a repository for optimization e-prints
• Optimization Related Links
• Convex Optimization I (EE364a): course from Stanford University
• Convex Optimization – Boyd and Vandenberghe: book on convex optimization
• Book and Course on Optimization Methods for Engineering Design