


MCDM 2006, Chania, Greece, June 19-23, 2006

RANDOM-WEIGHT GENERATION
IN MULTIPLE CRITERIA DECISION MODELS

Jingguo Wang and Stanley Zionts


School of Management
State University of New York at Buffalo
Buffalo, NY 14260-4000
[email protected] ; [email protected]

Keywords: Decision Analysis; Multiple Criteria Analysis; Sensitivity Analysis; Simulation; Weight Generation

Summary: In this paper, we consider generating random uniformly-distributed weights for the simulation
of additive p-criteria decision models. First we review several approaches for generating random weights,
and show that some obvious approaches do not work well at all. Then, we consider several procedures for
generating weights that satisfy general linear constraints. Instead of solving many small linear
programming problems to find pairwise dominance of alternatives, we may solve such problems using
simulation.

1. Introduction

Multiple criteria decision making is a decision-making process used to evaluate a number of alternatives. A typical
multiple criteria decision making model generally has five main elements: alternatives; criteria; outcomes; weights;
and possibly an aggregation method (Belton and Pictet 1997). Alternatives constitute a set of actions or items from
which we choose one. Criteria are measures by which we evaluate alternatives. Outcomes are levels of performance
of alternatives on criteria. Weights are a measure of the relative importance of criteria. Not all methods employ
weights, but weights are sometimes used to aggregate criteria. Aggregation methods are approaches for computing
an overall score for each alternative and then choosing the one with the best overall score.

Weights are usually considered a convenient expression of a decision maker's (DM's) system of values. The term weight is generic, as discussed by Roy and Mousseau (1996) and Choo et al. (1999). Various plausible interpretations have been offered (Choo et al. 1999), such as marginal contribution, indifference trade-off or rate of substitution, and gradient of the multiattribute value function.

Many methods have been proposed to help decision makers make decisions. Reviews of some methods may be found in Belton and Stewart (2002), Keeney and Raiffa (1976), and Steuer (1986). Some influential works include the Zionts-Wallenius method (Zionts and Wallenius 1976) and the Analytic Hierarchy Process (Saaty 1980). In general, methods are intended to find an optimal solution or a satisficing solution (a solution that satisfies certain levels of the criteria).

There is a gap between the theoretical work and the practical needs of decision makers. This gap results from several factors:
1) Problem structure. In many situations the decision problem and/or the preferences of a decision maker are not well-enough known to allow the successful application of a method. This may be because of time pressure, lack of knowledge or data, or limited expertise with problem-solving methods.
2) Uncertainty. Neither the decision maker nor his associates know the probabilities or outcomes with any precision. In many situations, the decision maker can provide only partial information regarding his or her preferences.
Methods and frameworks (Hazen 1986, Park and Kim 1997, Weber 1987) have been proposed to help a DM reach a decision based on partial information. Among these, sensitivity analysis may be used to help the DM explore not just the problem but also his own preference structure (Antunes and Climaco 1992, Barron and Schmidt 1988, Insua and French 1991, Lahdelma et al. In Press, Triantaphyllou and Sanchez 1997, Wang and Zionts In Press). Put simply, sensitivity analysis checks the sensitivity of a decision to changes in the weights or data. Sensitivity analysis may also be used to determine how robust [1] a decision is.

While many sensitivity analysis approaches employ the techniques of linear programming and statistical methods
(Antunes and Climaco 1992, Barron and Schmidt 1988, Insua and French 1991, Triantaphyllou and Sanchez 1997),
simulation may also be used for sensitivity analysis (Barron and Barrett 1996, Butler et al. 1997, Jimenez et al. 2003,
Lahdelma et al. In Press, Wang and Zionts In Press). Generally four scenarios are considered for weight generation
in simulation:
1) Random weights. As an extreme case, weights for the measures may be generated completely at random (Butler et al. 1997, Jimenez et al. 2003, Lahdelma et al. In Press, Wang and Zionts In Press). This approach assumes no prior information about a decision maker's preferences.
2) Rank order weights. In this case, the relative importance ranking of the criteria is provided by the decision
maker (Butler et al. 1997, Jimenez et al. 2003). The simulation generates random weights while preserving
the rank order of criteria.
3) Response distribution weights. Here simulation recognizes that the weight assessment procedure is subject
to variation (Barron and Barrett 1996, Butler et al. 1997, Jimenez et al. 2003). A decision maker may be
assumed to have weights that equal the true weights plus a response error (perhaps associated with the
weight assessment process). In the simulation, weights are generated from these distributions and then used
for decision making.
4) Random generation of weights subject to constraints. In some instances, the decision maker can provide additional information, which may be expressed by constraints. Scenario 2 above can be viewed as a special case of this scenario.

In this paper, we consider scenarios 1 and 4 (the generation of random weights with and without constraints) for
additive p-criteria problem solving (where we multiply the objectives or criteria by the weights and then sum the
weighted objectives to obtain a value function which is then maximized). One might think that generating random
weights is not difficult. However, this is not the case. We consider several procedures that have been proposed for
generating weights with and without constraints, and then explore their validity.

The paper is organized as follows. In Section 2, we review three random weight generation procedures (without constraints) and develop a fool-proof method. In Section 3 we consider procedures proposed for random weight generation with general constraints. We present our conclusions in Section 4.

2. Random Weight Generation

When we think about generating random weights, we think of weights that are uniformly distributed over the weight space {λ | Σ_i λ_i = 1, λ_i ≥ 0}. In other words, we want to generate points uniformly distributed over the simplex. Harvey Greenberg [2] defines a simplex as follows:

"S^p = {(x_1, x_2, ..., x_p) | x_i ≥ 0, Σ_{i=1}^{p} x_i = 1}. For p = 1, this is a point (x = 1). For p = 2, this is a line segment, joining points (1,0) and (0,1). For p = 3, this is a triangle, joining the vertices (1,0,0), (0,1,0), and (0,0,1). The dimension of S^p is p − 1."

For two objectives, we can use one random number generated from a [0,1] uniform distribution for one objective, and one minus that number for the second objective. But what do we use for higher-dimensional problems?

2.1 Procedure 1

Steuer (1986) explored some aspects of generating random weights, and found difficulties in generating uniformly distributed weights on the simplex. He proposed a naive method: generate p uniformly-distributed variables ν_i, i = 1, 2, ..., p, and use their normalized values as weights, namely λ_i = ν_i / Σ_{j=1}^{p} ν_j. But, as Steuer acknowledged, the λ_i are not uniformly distributed; Steuer stated and demonstrated that the weights are concentrated near their mean values. Steuer in his work used observations from the Weibull distribution to supplement the weights, but acknowledged the ad-hoc nature of the procedure.

[1] A robust solution is one that is optimal for many different preference functions.
[2] http://carbon.cudenver.edu/~hgreenbe/glossary/S.html

2.2 Procedure 2

Another procedure, proposed by Fang et al. (2004), generates weights as follows. It first generates weight λ_1 as a random number over the interval [0, 1]. Let s_1 = 1 − λ_1. It then generates a second weight λ_2 as a random number over the interval [0, s_1]. Let s_2 = s_1 − λ_2. A third weight λ_3 is generated over the interval [0, s_2], and so on.

Let E(λ_i) denote the expectation of weight λ_i. We can see from the above procedure that E(λ_i | s_{i−1}) = s_{i−1}/2, so E(λ_i) = 1/2^i, i = 1, 2, ..., p − 1. To be valid, each weight λ_i must have the same marginal distribution. Because the weights have different means, this procedure does not generate uniformly distributed weights.
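The unequal means are directly observable. The following sketch (our own naming; p = 4 chosen for illustration) implements the sequential-interval procedure and estimates the mean of each weight:

```python
import random

def sequential_interval_weights(p):
    """Sequential-interval sketch: each weight is drawn uniformly from the
    interval that remains after the earlier weights are fixed."""
    weights, remaining = [], 1.0
    for _ in range(p - 1):
        w = random.uniform(0.0, remaining)
        weights.append(w)
        remaining -= w
    weights.append(remaining)  # the last weight takes what is left
    return weights

# E(lam_i) = 1/2^i, so the marginals cannot all be the same.
random.seed(2)
n = 50000
sums = [0.0] * 4
for _ in range(n):
    for i, w in enumerate(sequential_interval_weights(4)):
        sums[i] += w
means = [s / n for s in sums]
print(means)  # roughly [0.5, 0.25, 0.125, 0.125]
```

The last two means coincide only because the final weight absorbs the leftover interval; the first weight still averages half the total, so the distribution is far from uniform on the simplex.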

2.3 Procedure 3: A Fool-Proof Method

We now develop a fool-proof method for generating uniformly-distributed random weights over a simplex. For p criteria, we generate p − 1 random variables λ_i, i = 1, 2, ..., p − 1, from a uniform distribution on [0, 1]. If Σ_{i=1}^{p−1} λ_i ≤ 1, we use the λ_i and let λ_p = 1 − Σ_{i=1}^{p−1} λ_i. Certainly the first p − 1 variables are uniformly distributed over the space, as desired, and all p variables are uniformly distributed over the simplex, as desired, because the p-simplex S^p is the convex combination of the (p−1)-simplex S^{p−1} with the origin.

The above procedure is inefficient. Though for p = 2 the condition Σ_{i=1}^{p−1} λ_i ≤ 1 is always satisfied, for p = 3 the condition is satisfied only half the time. In general, we can prove the condition will be satisfied 1/(p−1)! of the time, because the ratio of the hypervolume of the simplex to that of the hypercube is 1/(p−1)!.

We can improve the efficiency of the algorithm. To do this we map weights that are generated outside of the constraint set Σ_{i=1}^{p−1} λ_i ≤ 1 to weights that are inside of a transformed constraint set. If the generated weights do not satisfy the condition Σ_{i=1}^{p−1} λ_i ≤ 1, we may be able to transform the weights to a non-intersecting set of weights that do satisfy a transformed condition. The original constraint set Σ_{i=1}^{p−1} λ_i ≤ 1 is generated about the origin (0, 0, ..., 0). But the origin is only one of the 2^{p−1} extreme points of the hypercube. Consider other extreme points so that the simplexes formed about these "origins" are non-intersecting except at extreme points and facets. For p = 3, there are a total of four extreme points, and we may use the extreme points (0, 0) (the origin) and (1, 1). The transformation we use if the constraint λ_1 + λ_2 ≤ 1 is not satisfied is λ'_1 = 1 − λ_1 and λ'_2 = 1 − λ_2. For p = 4, there are eight extreme points, and we may use as origins the points (0,0,0), (1,1,0), (1,0,1), and (0,1,1). For the corner (1,1,0) we use the constraint (1 − λ_1) + (1 − λ_2) + λ_3 ≤ 1 to form the simplex about (1,1,0); the transformation λ'_1 = 1 − λ_1, λ'_2 = 1 − λ_2, λ'_3 = λ_3 yields the desired weights. We have comparable transformations for the other "origins". We extend and prove this procedure in the appendix for p > 4. However, this procedure is still not efficient.
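The basic (rejection) version of the fool-proof method can be sketched as follows; the acceptance-rate check below uses p = 3, where the condition should hold about half the time:

```python
import random

def try_foolproof(p):
    """One trial of the basic fool-proof method: draw p-1 uniforms and accept
    only if they fit under the simplex; the last weight is the remainder."""
    lam = [random.random() for _ in range(p - 1)]
    if sum(lam) > 1.0:
        return None                      # reject; the caller redraws
    return lam + [1.0 - sum(lam)]

random.seed(3)
n = 20000
samples = [try_foolproof(3) for _ in range(n)]
accepted = [s for s in samples if s is not None]
rate = len(accepted) / n
print(rate)  # close to 1/(p-1)! = 1/2 for p = 3
```

Wrapping `try_foolproof` in a retry loop yields a valid sampler, but the acceptance rate 1/(p−1)! makes it impractical as p grows, which is what motivates the mapping refinement above.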

2.4 Procedure 4

A procedure to randomly generate appropriate weights was proposed by Rubinstein (1982), and later used by Butler et al. (1997) for decision analysis. In the procedure, p − 1 random numbers are first selected independently from a uniform distribution on (0, 1). We then rank the numbers. Suppose the ranked numbers are 1 ≥ ν_{p−1} ≥ ... ≥ ν_2 ≥ ν_1 > 0. The first differences of these ranked numbers (including the bounds 0 and 1) are

λ_p = 1 − ν_{p−1}
λ_{p−1} = ν_{p−1} − ν_{p−2}
...
λ_1 = ν_1 − 0

Then, obviously, the set of numbers (λ_1, λ_2, ..., λ_p) sums to one.

Theorem 1: (λ_1, λ_2, ..., λ_p) generated by the above procedure is uniformly distributed over the simplex S^p = {(x_1, x_2, ..., x_p) | x_i ≥ 0, Σ_{i=1}^{p} x_i = 1}.

The proof of the theorem may be found in Devroye (1986), page 207. This procedure is much more efficient than the fool-proof procedure.

3. Random Weights Generation Subject to Constraints

3.1 The General Constraints

In some instances, the decision maker may provide additional information on his preferences. The information may take the form of linear inequalities, constructed in the following manner:
1) Weak preferences: λ_i ≥ λ_j, i ≠ j.
2) Strict preferences: λ_i − λ_j ≥ a, i ≠ j.
3) Preference with ratio comparison: λ_i ≥ bλ_j, i ≠ j.
4) Interval preference with numerical ranges: c ≤ λ_i ≤ c + ε.
5) Preference differences: λ_i − λ_j ≥ λ_k − λ_l, for i ≠ j ≠ k ≠ l,
where i, j, k, l = 1, 2, ..., p, and a, b, c, ε are real numbers. These forms of constraints are well explained in Park and Kim (1997). We now consider linear constraints in the general form Aλ ≤ b, where λ = (λ_1, λ_2, ..., λ_p) is the weight vector, A is an m × p matrix, m is the number of constraints, and b is a real m-vector.
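Each preference statement contributes one row (or two, for an interval) to the system Aλ ≤ b. A sketch of the encoding for p = 3, with the specific statements and values assumed purely for illustration:

```python
# Assumed example: encode preference statements for p = 3 criteria as rows
# of A and entries of b in the system A*lam <= b.
A, b = [], []

# 1) Weak preference lam_1 >= lam_2, rewritten as -lam_1 + lam_2 <= 0.
A.append([-1.0, 1.0, 0.0]); b.append(0.0)
# 2) Strict preference lam_1 - lam_3 >= 0.1, i.e. -lam_1 + lam_3 <= -0.1.
A.append([-1.0, 0.0, 1.0]); b.append(-0.1)
# 3) Ratio comparison lam_2 >= 2 * lam_3, i.e. -lam_2 + 2*lam_3 <= 0.
A.append([0.0, -1.0, 2.0]); b.append(0.0)
# 4) Interval preference 0.1 <= lam_3 <= 0.3, as two rows.
A.append([0.0, 0.0, 1.0]); b.append(0.3)
A.append([0.0, 0.0, -1.0]); b.append(-0.1)

def satisfies(lam):
    """Check A*lam <= b row by row (with a small tolerance)."""
    return all(sum(a * l for a, l in zip(row, lam)) <= bk + 1e-12
               for row, bk in zip(A, b))

print(satisfies([0.5, 0.35, 0.15]), satisfies([0.1, 0.2, 0.7]))  # True False
```

Preference-difference statements (form 5) are encoded the same way, with four nonzero entries in the row.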

There are a number of studies addressing this problem; Weber (1987) and Park and Kim (1997) provide comprehensive literature reviews. Most of the studies use pairwise dominance of alternatives to obtain the non-dominated alternatives or the most preferred alternative. Pairwise dominance may be checked with the following LP model:

max (min) Σ_{j=1}^{p} λ_j (x_{ij} − x_{kj})
s.t. Aλ ≤ b

where x_{ij} is the outcome of alternative i with respect to criterion j. To check all dominance relations, one has to solve at most n × (n − 1) linear programs, where n is the number of alternatives.

White et al. (1982) proposed a framework for an interactive procedure for such problems. The procedure is:
1) First identify constraints on weights.
2) Establish pairwise dominance relations of the alternatives.
3) Rank the order of alternatives based on pairwise dominance.
4) If a selection can be made without further information, then stop.
5) If not, assess additional information about the weights, and go to step 2.

A weakness of such methods is that when the decision maker cannot provide enough information, the induced
dominance relation is not sufficient to rank all alternatives. We have to determine an optimal alternative or a
(complete) ranking with the help of additional decision rules and concepts (Insua and French 1991, Hazen 1986,
Park and Kim 1997, Weber 1987), and these decision rules are usually complicated and not intuitive.

3.2 Procedures for Random Generation of Weights Subject to Linear Constraints

Employing simulation to solve such problems was introduced by Butler (1998), Section 4, who proposed a procedure for random generation of weights subject to linear constraints. The procedure, in our notation, is:
1) Randomly select a criterion i from the set {1, 2, ..., p} (sampling without replacement).
2) Determine the bounds λ_i^* and λ_{i*} by solving
λ_i^* = max { λ_i | Aλ ≤ b, Σ_{j=1}^{p} λ_j = 1, 0 ≤ λ_j ≤ 1 ∀j },
λ_{i*} = min { λ_i | Aλ ≤ b, Σ_{j=1}^{p} λ_j = 1, 0 ≤ λ_j ≤ 1 ∀j }.
3) Randomly generate a number r from the uniform (λ_{i*}, λ_i^*) distribution.
4) Set λ_i = r.
5) Repeat steps 1 through 4 p − 1 times.
6) The remaining weight is determined so that the sum of all weights is one.
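The steps above can be sketched as follows. As a simplifying assumption we include no extra constraints Aλ ≤ b, so the two LP bounds of step 2 collapse to the closed form lower = 0, upper = 1 − (sum of weights fixed so far); a full implementation would obtain them from an LP solver instead.

```python
import random

def butler_weights(p):
    """Sketch of Butler's sequential procedure under the simplifying
    assumption of no extra constraints, so the step-2 LP bounds are
    lower = 0 and upper = 1 minus the weights already fixed."""
    order = list(range(p))
    random.shuffle(order)              # step 1: criteria without replacement
    lam = [0.0] * p
    remaining = 1.0
    for i in order[:-1]:               # steps 2-5, repeated p - 1 times
        lam[i] = random.uniform(0.0, remaining)   # step 3: r ~ U(lower, upper)
        remaining -= lam[i]
    lam[order[-1]] = remaining         # step 6: last weight closes the sum
    return lam

random.seed(6)
w = butler_weights(4)
print(w, sum(w))
```

Note that in this unconstrained special case the procedure reduces, up to the random order of the criteria, to Procedure 2 of Section 2.2, which already illustrates why step 3 tends to oversample some regions of the weight space.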

The author noted that step 3 may cause congestion at some of the extreme points of the feasible weight space; that is, these regions may be oversampled. However, a priori it is difficult to determine when this will occur. He suggested that one could instead sample λ_i from a triangular distribution with minimum λ_{i*}, maximum λ_i^*, and mode β_i (averaging all corner points of the polytope to get β = (β_1, β_2, ..., β_p)), so that the center of the feasible space is oversampled rather than the extreme points. In the example demonstrating the method, the author sampled the weights from a uniform distribution. This procedure is both ad hoc and not likely to produce the desired results. We introduce several other procedures that can be applied to generate random weights under linear constraints.

Let the additional information from the decision maker regarding the weights be represented by a general constraint set Aλ ≤ b as described above. Together with the natural constraints on the weights, Σ_{i=1}^{p} λ_i = 1, λ_i ≥ 0, i = 1, 2, ..., p, the complete set of constraints defines a convex polytope W of the feasible weight space. We assume the extreme points of W to be X_1, X_2, ..., X_K, where X_i is a real p-vector, i = 1, 2, ..., K. For simple situations, we may use linear programming to find the extreme points of a polytope. For a complicated constraint set, efficient algorithms have been proposed (Avis and Fukuda 1992, Bremner et al. 1998, Chan 1996, Provan 1994, etc.). Seidel (1997) provides an overview of algorithms for extreme point enumeration. Public software implementing these algorithms is also available (Fukuda 2000, http://cgm.cs.mcgill.ca/~avis/C/lrslib/links.html).

By the property of convexity, if λ ∈ W, then λ = Σ_{i=1}^{K} k_i X_i, where Σ_{i=1}^{K} k_i = 1, k_i ≥ 0, i = 1, 2, ..., K, and X_i, i = 1, 2, ..., K, are the extreme points of W. We have the following theorem (Devroye 1986, page 569).

Theorem 2: If k = (k_1, k_2, ..., k_K) is uniformly distributed over the simplex S^K = {(k_1, k_2, ..., k_K) | k_i ≥ 0, Σ_{i=1}^{K} k_i = 1}, then λ = Σ_{i=1}^{K} k_i X_i is uniformly distributed over the polytope W defined by the extreme points X_1, X_2, ..., X_K, provided the extreme points are in general position.

We say the extreme points are in general position if no three points lie on a straight line, no four points lie on a plane, etc. The proof of the theorem is provided in Devroye (1986), page 569.

Based on Theorem 1 and Theorem 2, an intuitive procedure for random generation of weights subject to linear constraints, for the case in which the extreme points are in general position, is as follows:
1) Enumerate the extreme points X_1, X_2, ..., X_K of the polytope defined by Aλ ≤ b, where X_i is a real p-vector, i = 1, 2, ..., K.
2) Let k_1, k_2, ..., k_K be the spacings generated by a uniform sample of size K − 1 on [0, 1], following Procedure 4 in Section 2.4.
3) Generate the weight vector λ = Σ_{i=1}^{K} k_i X_i.
4) Repeat steps 2 and 3 until enough weight samples have been generated.
A similar procedure may be found in G. Fishman (1996), page 233.
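Steps 2 and 3 can be sketched directly. The extreme points below are an assumed example: the feasible weight polytope for p = 3 with the extra constraint λ_1 ≥ 0.5, whose three extreme points are in general position.

```python
import random

def simplex_spacings(k):
    """Step 2: uniform coefficients on the (K-1)-simplex via sorted-uniform
    spacings (Procedure 4 of Section 2.4)."""
    cuts = sorted(random.random() for _ in range(k - 1))
    points = [0.0] + cuts + [1.0]
    return [points[i + 1] - points[i] for i in range(k)]

def polytope_weight(extreme_points):
    """Step 3: a random convex combination of the polytope's extreme points."""
    k = simplex_spacings(len(extreme_points))
    dim = len(extreme_points[0])
    return [sum(k[i] * extreme_points[i][j] for i in range(len(k)))
            for j in range(dim)]

# Assumed example: p = 3 with the extra constraint lam_1 >= 0.5.
X = [(0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (1.0, 0.0, 0.0)]
random.seed(7)
w = polytope_weight(X)
print(w, sum(w))
```

Every generated vector automatically satisfies the constraints, because it is a convex combination of feasible extreme points.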

However, we cannot use this procedure for weight generation if the condition of general position is violated. To show this, consider an example with extreme points (0.1, 0), (0, 0.05), (0.1, 0.1), and (1, 0), i.e., four points in a plane. Following the procedure above, we generated 10 sets of random vectors in the feasible set, each containing 1000 points. Let (x1, x2) denote one generated vector. For each experiment we counted the number of vectors with x1 < 0.1 and the number with x1 > 0.9. The ratio of the feasible area with x1 < 0.1 to that with x1 > 0.9 in the region on the plane is 18:1. If the procedure generated valid random vectors, the points would be uniformly distributed over the entire region, and the ratio of the number of points in the two regions would be approximately 18:1. However, the mean number of points in the area x1 < 0.1 was 98.7, with a standard deviation of 10.18, while the mean number of points in the area x1 > 0.9 was 0.8, with a standard deviation of 1.02. This shows, at a high level of significance, that the generated vectors are not uniformly distributed over the polytope.

Several methods are described in Hormann et al. (2004) to overcome this problem when the extreme points are not in general position. For 2-dimensional polytopes, triangulation is a simple solution. The idea of triangulation is to decompose the given polytope into simplices. The procedure is as follows. We first choose a root vertex and join all remaining vertices to it. Then we sample one of the resulting triangles with probability proportional to its area, and return a point uniformly sampled from that triangle following the intuitive procedure. However, triangulating a convex polytope is not a simple task, and the computing time explodes in higher dimensions. Rubin (1984) gives a survey of this problem and provides a decomposition algorithm.
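For the 2-dimensional case the triangulation approach can be sketched as follows, using the unit square as an assumed toy polygon:

```python
import random

def simplex_spacings(k):
    """Uniform barycentric coordinates via sorted-uniform spacings."""
    cuts = sorted(random.random() for _ in range(k - 1))
    points = [0.0] + cuts + [1.0]
    return [points[i + 1] - points[i] for i in range(k)]

def triangle_area(a, b, c):
    """Area of a triangle from its vertices (shoelace formula)."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def sample_polygon(vertices, n):
    """Fan-triangulate a convex polygon from its first vertex, pick triangles
    with probability proportional to area, and sample uniformly inside each
    chosen triangle via uniform barycentric coordinates."""
    root = vertices[0]
    triangles = [(root, vertices[i], vertices[i + 1])
                 for i in range(1, len(vertices) - 1)]
    areas = [triangle_area(*t) for t in triangles]
    points = []
    for a, b, c in random.choices(triangles, weights=areas, k=n):
        k1, k2, k3 = simplex_spacings(3)
        points.append((k1 * a[0] + k2 * b[0] + k3 * c[0],
                       k1 * a[1] + k2 * b[1] + k3 * c[1]))
    return points

# Assumed toy polygon: the unit square, vertices in order around the boundary.
random.seed(9)
pts = sample_polygon([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)], 4000)
left = sum(1 for x, _ in pts if x < 0.5)
print(left / len(pts))  # near 0.5 when the sampling is uniform
```

Each triangle is a simplex whose vertices are trivially in general position, so the intuitive procedure applies within it without bias.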

Another approach that can be used is the grid method. The idea of the grid method is to decompose the bounding hyper-rectangle of the polytope into a set of hypercubes (rather than rectangles, as stated in Hormann et al. (2004)). There are hypercubes that are totally inside the polytope (Type I) and hypercubes that are only partially inside the polytope (Type II). We then randomly sample the hypercubes in proportion to their hypervolume, and generate points inside the chosen hypercube following the intuitive method. Points generated inside Type I hypercubes may be accepted immediately, while those inside Type II hypercubes are subject to acceptance-rejection sampling: if a point in a Type II hypercube satisfies the constraints, it is accepted; otherwise it is rejected. The efficiency of the algorithm depends on the total hypervolume of the Type II hypercubes; increasing the number of hypercubes shrinks this hypervolume and so increases the efficiency of the procedure.

Leydold and Hormann (1998) develop a sweep-plane algorithm for generating random tuples in simple polytopes. Their method successively sweeps hyperplanes of decreasing dimension through the polytope, terminating each sweep randomly and then repeating the process on a lower-dimensional polytope; the process terminates when a sweep is done in one dimension. The procedure is complicated and is limited to simple polytopes, i.e., polytopes in which each vertex is adjacent to no more than d vertices, where d is the dimension of the space. For further information see Leydold and Hormann (1998).

4. Conclusion

Simulation is an important tool for sensitivity analysis in multiple-criteria decision making. In this paper, we
consider simulating an additive p-criteria model by generating weights that are uniformly distributed over the weight
space. One might think that generating such uniformly-distributed random weights is not difficult; as prior research has shown, however, it is not easy. We reviewed and explored several procedures. The first two (Procedure 1 and Procedure 2) are simple and straightforward, yet they do not generate weights with the desired distribution. We proposed a fool-proof method for generating valid random weights; however, it is not efficient. The fourth procedure is both efficient and valid.

Further, we considered generating random weights subject to additional constraints. In decision analysis, such problems are normally solved with linear programming through pairwise comparisons. We can also solve them through simulation, by determining the robustness of solutions over the feasible weight region. Several procedures reviewed in the paper may be applied for this purpose.

REFERENCES
Antunes, C. H., Climaco, J. N., 1992. Sensitivity Analysis in MCDM Using the Weighting Space, Operations Research Letters 12, 187-196.
Avis, D., Fukuda, K., 1992. A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra, Discrete Comput. Geom. 8, 295-313.
Barron, H., Barrett, B. E., 1996. Decision Quality Using Ranked Attribute Weights, Management Science 42(11), 1515-1523.
Barron, H., Schmidt, C. P., 1988. Sensitivity Analysis of Additive Multiattribute Value Models, Operations Research 6, 122-127.
Belton, V., Pictet, J., 1997. A Framework for Group Decision Using an MCDA Model: Sharing, Aggregating or Comparing Individual Information? Journal of Decision Systems, Special Issue on Group Decision and Negotiation 6, 283-303.
Belton, V., Stewart, T. J., 2002. Multiple Criteria Decision Analysis: An Integrated Approach, Kluwer Academic Publishers, Boston.
Bremner, D., Fukuda, K., Marzetta, A., 1998. Primal-dual methods for vertex and facet enumeration, Discrete Comput. Geom. 20, 333-357.
Butler, J., Jia, J., Dyer, J., 1997. Simulation techniques for the sensitivity analysis of multi-criteria decision models, European Journal of Operational Research 103, 531-546.
Butler, J. C., 1998. Simulation Analysis of Multi-criteria Decision Models. Management Science and Information Systems, The University of Texas at Austin.
Chan, T. M., 1996. Output-sensitive results on convex hulls, extreme points, and related problems, Discrete Comput. Geom. 16, 369-387.
Choo, E. U., Schoner, B., Wedley, W. C., 1999. Interpretation of criteria weights in multicriteria decision making, Computers & Industrial Engineering 37, 527-541.
Devroye, L., 1986. Non-Uniform Random Variate Generation, Springer-Verlag, New York.
Fang, X., Sheng, O. R. L., Gao, W., Iyer, B., 2004. A Data-Mining-Based Prefetching Approach to Caching for Network Storage Systems, INFORMS Journal on Computing, forthcoming.
Fukuda, K., 2000. FAQ in Polyhedral Computation, Swiss Federal Institute of Technology, Lausanne and Zurich, Switzerland, http://www.ifor.math.ethz.ch/~fukuda/fukuda.html.
Hazen, G. B., 1986. Partial Information, Dominance, and Potential Optimality in Multiattribute Utility Theory, Operations Research 34(2), 296-310.
Hormann, W., Leydold, J., Derflinger, G., 2004. Automatic Nonuniform Random Variate Generation, Springer-Verlag, Berlin, Heidelberg.
Insua, D. R., French, S., 1991. A framework for sensitivity analysis in discrete multi-objective decision-making, European Journal of Operational Research 54(2), 176-190.
Jimenez, A., Rios-Insua, S., Mateos, A., 2003. A Decision Support System for Multiattribute Utility Evaluation Based on Imprecise Assignments, Decision Support Systems 36, 65-79.
Keeney, R. L., Raiffa, H., 1976. Decisions with Multiple Objectives: Preferences and Value Tradeoffs, John Wiley & Sons, New York.
Lahdelma, R., Miettinen, K., Salminen, P., in press. Reference point approach for multiple decision makers, European Journal of Operational Research.
Leydold, J., Hormann, W., 1998. A Sweep-Plane Algorithm for Generating Random Tuples in Simple Polytopes, Mathematics of Computation 67(224).
Park, K. S., Kim, S. H., 1997. Tools for interactive multiattribute decision making with incompletely identified information, European Journal of Operational Research 98, 111-123.
Provan, J. S., 1994. Efficient enumeration of the vertices of polyhedra associated with network LP's, Math. Program. 63, 47-64.
Roy, B., Mousseau, V., 1996. A theoretical framework for analysing the notion of relative importance of criteria, Journal of Multi-Criteria Decision Analysis 5, 145-159.
Rubin, P. A., 1984. Generating random points on a polytope, Communications in Statistics - Simulation 13(3), 375-396.
Rubinstein, R. Y., 1982. Generating Random Vectors Uniformly Distributed Inside and on the Surface of Different Regions, European Journal of Operational Research 10, 205-209.
Saaty, T. L., 1980. The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation, McGraw-Hill, New York; London.
Seidel, R., 1997. Convex hull computations. In: J. Goodman and J. O'Rourke (eds.), Handbook of Discrete and Computational Geometry, CRC Press, Boca Raton, ch. 19.
Steuer, R. E., 1986. Multiple Criteria Optimization: Theory, Computation, and Application, John Wiley & Sons, New York.
Triantaphyllou, E., Sanchez, A., 1997. A Sensitivity Analysis Approach for Some Deterministic Multi-Criteria Decision Making Methods, Decision Sciences 28, 151-194.
Wang, J., Zionts, S., in press. The Aspiration Level Interactive Method (AIM) Reconsidered: Robustness of Solutions to Multiple Criteria Problems, European Journal of Operational Research.
Weber, M., 1987. Decision Making with Incomplete Information, European Journal of Operational Research 28, 44-57.
White, C. C., Sage, A. P., Scherer, W. T., 1982. Decision support with partially identified parameters, Large Scale Systems 3, 177-189.
Zionts, S., Wallenius, J., 1976. An Interactive Programming Method for Solving the Multiple Criteria Problem, Management Science 22(6), 652-663.

APPENDIX

The Mapping Procedure

The first constraint set we consider is Σ_{i=1}^{p−1} λ_i ≤ 1. If the generated weights do not satisfy that condition, we consider transforming them to a non-intersecting set of weights. In the paper we discussed transformations for p = 3 and p = 4. How can we find such extreme points to serve as origins for the transformation in higher dimensions?

First note that every extreme point of the unit hypercube is a vector of zeroes and ones.

Definition 1 An Even Extreme Point is an extreme point whose vector has an even number of 1’s.

Definition 2 An Odd Extreme Point is an extreme point whose vector has an odd number of 1’s.

All corners that may be considered as “origins” for transformation are even extreme points.

Theorem 1: There are 2^{p−2} possible origins.

Proof: All origins are even extreme points, for which the number of 1's is 2i, i = 0, 1, ..., ⌊(p−1)/2⌋. The total number of possible even extreme points is Σ_{i=0}^{⌊(p−1)/2⌋} C(p−1, 2i) = 2^{p−2}, including (0, 0, ..., 0). [Q.E.D.]

Definition 3 One extreme point is an adjacent extreme point of another extreme point if and only if they differ in exactly one component.

Theorem 2: All adjacent extreme points of even extreme points are odd extreme points.
Proof: Immediate from the definitions of even extreme points and adjacent extreme points. [Q.E.D.]

Theorem 3: There are p − 1 adjacent extreme points to each extreme point.
Proof: The hypercube lies in p − 1 dimensions, and an adjacent extreme point differs in exactly one of the p − 1 components. [Q.E.D.]

Theorem 4: Consider the even extreme points b^j = (b_1^j, b_2^j, ..., b_{p−1}^j), j = 1, 2, ..., 2^{p−2}, b_i^j ∈ {0, 1}. The simplexes

S_j = { λ ∈ ℜ^p | Σ_{i=1}^{p−1} (b_i^j + (−1)^{b_i^j} λ_i) ≤ 1, λ_p = 1 − Σ_{i=1}^{p−1} (b_i^j + (−1)^{b_i^j} λ_i) }

formed with the different even extreme points as origins are non-intersecting, except possibly at extreme points.

Proof: Consider two even extreme points j and k, and let S_j and S_k denote the simplexes generated with j and k as origins, respectively. Since both j and k are even extreme points, by Theorem 2 they cannot be adjacent. Thus at least two components of the two even extreme points must differ, and in general an even number of components must differ. Because all extreme points adjacent to even extreme points are odd, the only points that simplexes from different origins can have in common are odd extreme points or their convex combinations. Assuming we generate continuously distributed random numbers, the probability of generating a number at the intersection of the simplexes is zero. [Q.E.D.]

Theorem 5: Consider the even extreme points b^j = (b_1^j, b_2^j, ..., b_{p−1}^j), j = 1, 2, ..., 2^{p−2}, b_i^j ∈ {0, 1}. The union of the simplexes

∪_j { λ ∈ ℜ^p | Σ_{i=1}^{p−1} (b_i^j + (−1)^{b_i^j} λ_i) ≤ 1, λ_p = 1 − Σ_{i=1}^{p−1} (b_i^j + (−1)^{b_i^j} λ_i) }

covers all the edges of the hypercube.

Proof: Each edge connects an odd extreme point and an even extreme point. Each simplex is a convex combination of one even extreme point and its adjacent extreme points. From Theorem 2, the points adjacent to an even extreme point are odd extreme points. The union of all simplexes covers all even extreme points, and therefore covers all odd extreme points. Consequently it covers all the edges of the hypercube. [Q.E.D.]

Theorem 6: Suppose a corner is an even extreme point (b_1, b_2, ..., b_{p−1}), b_i ∈ {0, 1}. We generate p − 1 random variables λ_i, i = 1, 2, ..., p − 1, from a uniform distribution on [0, 1]. If Σ_{i=1}^{p−1} (b_i + (−1)^{b_i} λ_i) ≤ 1, then λ'_i = b_i + (−1)^{b_i} λ_i, i = 1, 2, ..., p − 1, and λ'_p = 1 − Σ_{i=1}^{p−1} λ'_i are valid uniformly distributed weights.

Proof: Because Σ_{i=1}^{p−1} (b_i + (−1)^{b_i} λ_i) ≤ 1, by the transformation Σ_{i=1}^{p−1} λ'_i ≤ 1. Also, if (λ_1, ..., λ_{p−1}) is uniformly distributed over the simplex Σ_{i=1}^{p−1} (b_i + (−1)^{b_i} λ_i) ≤ 1, then under the transformation λ'_i = b_i + (−1)^{b_i} λ_i, the vector (λ'_1, ..., λ'_{p−1}) is uniformly distributed over the simplex Σ_{i=1}^{p−1} λ'_i ≤ 1. [Q.E.D.]

Based on Theorem 1, the efficiency of the algorithm is 2^{p−2}/(p−1)!. Table 1 shows the efficiency of the algorithm; it decreases rapidly with p.

p           2        3        4       5       6       7      8      9      10
Efficiency  100.00%  100.00%  66.67%  33.33%  13.33%  4.44%  1.27%  0.32%  0.07%

Table 1. The efficiency of the algorithm

We may generate additional simplexes inside the hypercube. The number of simplexes inside the hypercube is (p−1)! − 2^{p−2}. For example, there are two simplexes inside the hypercube for p = 4, and the number of interior simplexes grows faster than exponentially as p increases. Though we can define linear transformations to utilize these interior simplexes, the procedure may be sufficiently complicated as to make the process unusable. In general p is not very large, so using the procedure even for values of p < 8 might be acceptable. We used a Dell laptop with a 1.7 GHz Pentium 4 CPU to generate 5000 observations for p = 6, 7, 8; the procedure was programmed in MATLAB 6.5. It took 1.27 minutes to generate 5000 sets of multipliers for p = 6, 8.38 minutes for p = 7, and about 57 minutes for p = 8. If time were a concern, a more efficient computer program could be constructed for such generation.
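The mapping procedure can be sketched as follows for p = 4 (a sketch in Python rather than the MATLAB used above; function names and sample size are our own). The observed acceptance rate should match the 66.67% entry of Table 1.

```python
import itertools
import random

def even_extreme_points(p):
    """Corners of the (p-1)-dimensional unit hypercube with an even number of 1's."""
    return [b for b in itertools.product((0, 1), repeat=p - 1)
            if sum(b) % 2 == 0]

def mapped_weights(p, origins):
    """One trial of the mapping procedure: draw a point in the hypercube; if
    it lies in the simplex anchored at some even extreme point b, apply the
    transformation lam'_i = b_i + (-1)**b_i * lam_i; otherwise reject."""
    lam = [random.random() for _ in range(p - 1)]
    for b in origins:
        t = [b_i + (-1) ** b_i * l for b_i, l in zip(b, lam)]
        if sum(t) <= 1.0:
            return t + [1.0 - sum(t)]
    return None   # the point fell in an interior simplex; reject

random.seed(11)
p = 4
origins = even_extreme_points(p)   # (0,0,0), (0,1,1), (1,0,1), (1,1,0)
n = 30000
samples = [mapped_weights(p, origins) for _ in range(n)]
rate = sum(s is not None for s in samples) / n
print(rate)  # near 2/3 = 2**(p-2) / (p-1)! for p = 4
```

By Theorem 4 the anchored simplexes overlap only on a set of measure zero, so taking the first matching origin introduces no bias.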

