
CS305 System Modeling and Simulation
Prof. Dr. Khaled Mahar
[email protected]

Lecture 8
Chapter 9: Input Modeling
Banks, Carson, Nelson & Nicol, Discrete-Event System Simulation
Purpose & Overview
• Input models provide the driving force for a simulation model.
• The quality of the output is no better than the quality of the inputs.
• In this chapter, we discuss the four steps of input-model development:
  1. Collect data from the real system.
  2. Identify a probability distribution to represent the input process.
  3. Choose parameters for the distribution.
  4. Evaluate the chosen distribution and parameters for goodness of fit.
Data Collection
• To illustrate data-collection activities, consider modeling a painting station, where:
  • jobs arrive at random and wait in the buffer until the sprayer is available;
  • having been sprayed, they leave the station;
  • the spray nozzle can get blocked, an event that results in a stoppage during which the nozzle is cleaned or replaced;
  • the measure of interest is the expected job delay in the buffer.
• The data collected in this simple case would consist of:
  1. job inter-arrival times
  2. painting times
  3. times between nozzle blockages
  4. nozzle cleaning/replacement times
Identifying the Distribution
• Histograms
• Selecting families of distributions
• Parameter estimation
• Goodness-of-fit tests
• Fitting a non-stationary process
Histograms [Identifying the Distribution]
• A frequency distribution or histogram is useful in determining the shape of a distribution:
  1. Divide the range of the data into intervals (usually of equal length).
  2. Label the horizontal axis to conform to the intervals selected.
  3. Find the frequency of occurrences within each interval.
  4. Label the vertical axis so that the total occurrences can be plotted for each interval.
  5. Plot the frequencies on the vertical axis.
Histograms (continued)
• The number of class intervals depends on:
  • the number of observations
  • the dispersion of the data
• Suggested: number of intervals = the square root of the sample size.
• For continuous data, the histogram corresponds to the probability density function of a theoretical distribution.
• For discrete data, it corresponds to the probability mass function.
• If few data points are available, combine adjacent cells to eliminate the ragged appearance of the histogram. A minimal sketch of the construction steps appears below.
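The five steps above can be sketched in a few lines of Python. This is an illustrative sketch, not from the textbook; it assumes NumPy and uses the square-root rule for the number of intervals.

```python
import numpy as np

def build_histogram(data):
    """Sketch of the five histogram steps, using the suggested
    square-root-of-sample-size rule for the number of intervals."""
    data = np.asarray(data, dtype=float)
    k = max(1, int(np.sqrt(len(data))))      # number of class intervals
    # Steps 1-2: divide the data range into k equal-length intervals
    edges = np.linspace(data.min(), data.max(), k + 1)
    # Step 3: count the frequency of occurrences within each interval
    counts, _ = np.histogram(data, bins=edges)
    # Steps 4-5: report interval/frequency pairs (plotting left to the reader)
    for lo, hi, c in zip(edges[:-1], edges[1:], counts):
        print(f"[{lo:7.2f}, {hi:7.2f}): {c}")
    return edges, counts
```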
Sample Histograms
Ragged, coarse, and appropriate histograms:
[Figure: (1) Original data with unit-width cells over 0-24 - too ragged]
Sample Histograms (cont.)
[Figure: (2) Combining adjacent cells (intervals 0-7, 8-15, 16-24) - too coarse]
Sample Histograms (cont.)
[Figure: (3) Combining adjacent cells (intervals of width 3: 0-2, 3-5, ..., 21-24) - appropriate]
Discrete Data Example
The number of vehicles arriving at an intersection in a 5-minute period between 7:00 a.m. and 7:05 a.m. was monitored for five workdays over a 20-week period.

  Arrivals/period  Frequency     Arrivals/period  Frequency
  0                12            6                7
  1                10            7                5
  2                19            8                5
  3                17            9                3
  4                10            10               3
  5                8             11               1
Discrete Data Example (cont.)
The first entry in the table indicates that there were 12 five-minute periods during which zero vehicles arrived, 10 periods during which one vehicle arrived, and so on. The resulting histogram is shown on the next slide.
Histogram of Number of Arrivals per Period
[Figure: histogram of the vehicle data; x-axis: number of arrivals per period (0-11), y-axis: frequency (0-20)]

Since the data are discrete and ample data are available, the histogram has a cell for each possible value in the data range.
Continuous Data Example
Life tests were performed on a random sample of 50 electronic chips at 1.5 times the normal voltage, and their lifetime (time to failure) in days was recorded:

79.919 3.081 0.062 1.961 5.845 3.027 6.505 0.021 0.012 0.123
6.769 59.899 1.192 34.760 5.009 18.387 0.141 43.565 24.420 0.433
144.695 2.663 17.967 0.091 9.003 0.941 0.878 3.371 2.157 7.579
0.624 5.380 3.148 7.078 23.960 0.590 1.928 0.300 0.002 0.543
7.004 31.764 1.005 1.147 0.219 3.217 14.382 1.008 2.336 4.562

Continuous Data Example (cont.)
Electronic Chip Data

  Chip Life (Days)   Frequency    Chip Life (Days)    Frequency
  0 ≤ xi < 3         23           30 ≤ xi < 33        1
  3 ≤ xi < 6         10           33 ≤ xi < 36        1
  6 ≤ xi < 9         5            ...                 ...
  9 ≤ xi < 12        1            42 ≤ xi < 45        1
  12 ≤ xi < 15       1            ...                 ...
  15 ≤ xi < 18       2            57 ≤ xi < 60        1
  18 ≤ xi < 21       0            ...                 ...
  21 ≤ xi < 24       1            78 ≤ xi < 81        1
  24 ≤ xi < 27       1            ...                 ...
  27 ≤ xi < 30       0            143 ≤ xi < 146      1
Continuous Data Example (cont.)
[Figure: histogram of chip life (days), using the 3-day cells from the table above; frequencies 23, 10, 2, 1, ...]
Selecting the Family of Distributions [Identifying the Distribution]
• A family of distributions is selected based on:
  • the context of the input variable
  • the physical characteristics of the input process:
    • Is it naturally discrete or continuous valued?
    • Are the observable values inherently bounded, or is there no natural bound?
  • the shape of the histogram
• There is no "true" distribution for any stochastic input process.
• Goal: obtain a good approximation.
Selecting the Family of Distributions [Identifying the Distribution]
• Page 364: Use the physical basis of the distribution as a guide, for example:
  • Binomial: number of successes in n trials
  • Poisson: number of independent events that occur in a fixed amount of time or space
  • Normal: a process that is the sum of a number of component processes (e.g., the time to assemble a product is the sum of the times required for each assembly operation)
  • Exponential: time between independent events, or a process time that is memoryless
  • Weibull: time to failure for components
  • Discrete or continuous uniform: models complete uncertainty
  • Triangular: a process for which only the minimum, most likely, and maximum values are known
Parameter Estimation [Identifying the Distribution]
• This is the next step after selecting a family of distributions.
• If the observations in a sample of size n are X1, X2, ..., Xn (discrete or continuous), the sample mean and sample variance are:

  $$\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}, \qquad S^2 = \frac{\sum_{i=1}^{n} X_i^2 - n\bar{X}^2}{n-1}$$

• If the data are discrete and have been grouped in a frequency distribution with k distinct values:

  $$\bar{X} = \frac{\sum_{j=1}^{k} f_j X_j}{n}, \qquad S^2 = \frac{\sum_{j=1}^{k} f_j X_j^2 - n\bar{X}^2}{n-1}$$

  where f_j is the observed frequency of value X_j.
Parameter Estimation [Identifying the Distribution]
• Vehicle Arrival Example (continued): the table in the Discrete Data Example (Table 9.1 in the book) can be analyzed to obtain:

  $$n = 100,\; f_1 = 12,\; X_1 = 0,\; f_2 = 10,\; X_2 = 1,\ \ldots,$$
  $$\sum_{j=1}^{k} f_j X_j = 364 \quad \text{and} \quad \sum_{j=1}^{k} f_j X_j^2 = 2080$$

• The sample mean and variance are:

  $$\bar{X} = \frac{364}{100} = 3.64, \qquad S^2 = \frac{2080 - 100 \cdot (3.64)^2}{99} = 7.63$$

• The histogram suggests that X has a Poisson distribution.
• However, note that the sample mean is not equal to the sample variance, whereas for a Poisson distribution the mean and variance are equal. A small numeric check of these values appears below.
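As an illustration (not part of the original slides), the grouped-data formulas can be verified in plain Python against the vehicle-arrival numbers above:

```python
# Grouped-data sample mean and variance from (value, frequency) pairs
values = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
freqs  = [12, 10, 19, 17, 10, 8, 7, 5, 5, 3, 3, 1]

n = sum(freqs)                                            # 100 observations
sum_fx  = sum(f * x for f, x in zip(freqs, values))       # 364
sum_fx2 = sum(f * x * x for f, x in zip(freqs, values))   # 2080

xbar = sum_fx / n                                         # 3.64
s2 = (sum_fx2 - n * xbar**2) / (n - 1)                    # ~7.63
print(xbar, s2)
```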
Parameter Estimation [Identifying the Distribution]
• The Maximum-Likelihood Estimation (MLE) method assumes a particular class of distributions (e.g., normal, uniform, exponential), and then estimates its parameters from the sample, such that the resulting parameters give the maximal likelihood (highest probability or density) of obtaining the sample.
• Example: Suppose that a random sample of size n, x1, x2, ..., xn, has been taken, and that the samples are assumed to come from an exponential distribution. Use the MLE method to estimate the distribution parameter.
Maximum Likelihood Method [Identifying the Distribution]
• The density function of the exponential distribution with rate λ is f(x) = λe^{-λx}, for x ≥ 0.
• If x1, x2, ..., xn are i.i.d., each with this exponential distribution, then their joint density is

  $$f(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} \lambda e^{-\lambda x_i} = \lambda^n e^{-\lambda \sum_{i=1}^{n} x_i}$$

• To maximize f(·) with respect to λ, let the likelihood function be L(λ) = λ^n e^{-λ Σ xi}; it is easier to maximize its logarithm:

  $$\ln L(\lambda) = n \ln \lambda - \lambda \sum_{i=1}^{n} x_i$$

• Taking the derivative with respect to λ and equating it to zero gives

  $$\frac{n}{\lambda} - \sum_{i=1}^{n} x_i = 0 \quad \Rightarrow \quad \hat{\lambda} = \frac{n}{\sum_{i=1}^{n} x_i} = \frac{1}{\bar{X}}$$

  A numeric check of this result appears below.
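A small sketch (assumed code, not from the slides) verifying numerically that λ̂ = 1/X̄ maximizes the exponential log-likelihood; the sample values are hypothetical:

```python
import math

def exp_log_likelihood(lam, sample):
    # ln L(lambda) = n * ln(lambda) - lambda * sum(x_i)
    return len(sample) * math.log(lam) - lam * sum(sample)

sample = [0.8, 2.1, 0.3, 1.7, 4.2]            # hypothetical observations
lam_hat = len(sample) / sum(sample)           # MLE: n / sum(x) = 1 / xbar

# The log-likelihood at lam_hat exceeds that at nearby rates
assert exp_log_likelihood(lam_hat, sample) > exp_log_likelihood(0.9 * lam_hat, sample)
assert exp_log_likelihood(lam_hat, sample) > exp_log_likelihood(1.1 * lam_hat, sample)
print(lam_hat)
```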
Goodness-of-Fit Tests [Identifying the Distribution]
• Conduct hypothesis tests on the input-data distribution using:
  • the Kolmogorov-Smirnov test
  • the chi-square test
• No single correct distribution exists in a real application:
  • If very little data are available, it is unlikely that any candidate distribution will be rejected.
  • If a lot of data are available, it is likely that all candidate distributions will be rejected.
Chi-Square Test [Goodness-of-Fit Tests]
• Intuition: compare the histogram of the data to the shape of the candidate density or mass function.
• Valid for large sample sizes, when parameters are estimated by maximum likelihood.
• Arranging the n observations into a set of k class intervals or cells, the test statistic is:

  $$\chi_0^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}$$

  where O_i is the observed frequency in the ith interval and E_i = n·p_i is the expected frequency, with p_i the theoretical probability of the ith interval (suggested minimum E_i: 5).
• The statistic approximately follows the chi-square distribution with k - s - 1 degrees of freedom, where s is the number of parameters of the hypothesized distribution estimated from the sample statistics.
Chi-Square Test [Goodness-of-Fit Tests]
• The hypotheses of a chi-square test are:
  H0: the random variable, X, conforms to the distributional assumption with the parameter(s) given by the estimate(s).
  H1: the random variable X does not conform.
• If the distribution tested is discrete and combining adjacent cells is not required (so that each Ei exceeds the minimum requirement), each value of the random variable should be a class interval, and

  $$p_i = p(x_i) = P(X = x_i)$$
Chi-Square Test [Goodness-of-Fit Tests]
• If the distribution tested is continuous:

  $$p_i = \int_{a_{i-1}}^{a_i} f(x)\,dx = F(a_i) - F(a_{i-1})$$

  where a_{i-1} and a_i are the endpoints of the ith class interval, f(x) is the assumed pdf, and F(x) is the assumed cdf. A sketch of this computation appears after this slide.
• Recommended number of class intervals (k):

  Sample Size, n    Number of Class Intervals, k
  20                Do not use the chi-square test
  50                5 to 10
  100               10 to 20
  > 100             n^{1/2} to n/5

• Caution: a different grouping of the data (i.e., a different k) can affect the outcome of the hypothesis test.
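As a sketch of the continuous case (not from the slides), the following computes p_i = F(a_i) - F(a_{i-1}) for a hypothesized exponential distribution. The rate 1/11.894 comes from the chip-life sample mean (594.67/50 ≈ 11.894 days), while the class edges here are illustrative:

```python
import math

def exp_cdf(x, lam):
    """cdf of the hypothesized exponential: F(x) = 1 - exp(-lam * x)."""
    return 1.0 - math.exp(-lam * x)

def interval_probs(edges, lam):
    """p_i = F(a_i) - F(a_{i-1}) for class intervals [a_{i-1}, a_i)."""
    return [exp_cdf(b, lam) - exp_cdf(a, lam)
            for a, b in zip(edges[:-1], edges[1:])]

lam_hat = 1 / 11.894                  # rate estimated from the chip-life data
edges = [0, 3, 6, 9, 12]              # illustrative fixed-width class edges
probs = interval_probs(edges, lam_hat)
expected = [50 * p for p in probs]    # E_i = n * p_i with n = 50 chips
print(expected)
```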


Chi-Square Test [Goodness-of-Fit Tests]
• Vehicle Arrival Example (continued):
  H0: the random variable is Poisson distributed.
  H1: the random variable is not Poisson distributed.

  $$E_i = n \cdot p(x_i) = n \cdot \frac{e^{-\hat{\lambda}} \hat{\lambda}^{x_i}}{x_i!}, \qquad \hat{\lambda} = \bar{X} = 3.64$$

  xi     Observed Frequency, Oi   Expected Frequency, Ei   (Oi - Ei)^2 / Ei
  0      12                        2.6   combined            7.87
  1      10                        9.6   (Ei < 5)
  2      19                       17.4                       0.15
  3      17                       21.1                       0.80
  4      10                       19.2                       4.41
  5       8                       14.0                       2.57
  6       7                        8.5                       0.26
  7       5                        4.4
  8       5                        2.0   combined
  9       3                        0.8   (Ei < 5)           11.62
  10      3                        0.3
  ≥ 11    1                        0.1
  Total  100                     100.0                      27.68

• χ0² = 27.68 > χ²_{0.05,5} = 11.1.
• The degrees of freedom are k - s - 1 = 7 - 1 - 1 = 5; hence the hypothesis is rejected at the 0.05 level of significance (see the sketch below).
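The whole test can be reproduced in a short script. This is a sketch (SciPy assumed); the statistic comes out near 28 rather than exactly 27.68 because the slide's table works with rounded expected frequencies:

```python
import math
from scipy.stats import chi2

lam_hat = 3.64                        # estimated Poisson mean (sample mean)
n = 100

def poisson_pmf(x, lam):
    return math.exp(-lam) * lam**x / math.factorial(x)

# Cells after combining so that E_i >= 5: {0,1}, 2, 3, 4, 5, 6, {>=7}
observed = [12 + 10, 19, 17, 10, 8, 7, 5 + 5 + 3 + 3 + 1]

p = [poisson_pmf(0, lam_hat) + poisson_pmf(1, lam_hat)]
p += [poisson_pmf(x, lam_hat) for x in range(2, 7)]
p.append(1.0 - sum(p))                # tail probability P(X >= 7)
expected = [n * pi for pi in p]

chi0 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
dof = len(observed) - 1 - 1           # k - s - 1, with s = 1 estimated parameter
critical = chi2.ppf(0.95, dof)        # chi^2_{0.05,5} ~= 11.07
print(chi0, critical, chi0 > critical)   # ~28 > 11.07 -> reject H0
```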
Kolmogorov-Smirnov Test [Goodness-of-Fit Tests]
• Recall from Chapter 7: the test compares the continuous cdf, F(x), of the hypothesized distribution with the empirical cdf, S_N(x), of the N sample observations.
• It is based on the maximum-difference statistic (tabulated in Table A.8):

  D = max |F(x) - S_N(x)|

• It is a more powerful test, particularly useful when:
  • sample sizes are small, and
  • no parameters have been estimated from the data.
  (A SciPy sketch follows.)
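A minimal sketch using SciPy's built-in K-S test; the sample and the hypothesized Exp(rate = 0.5) distribution are hypothetical, and the parameters are fixed in advance rather than estimated from the data:

```python
from scipy.stats import kstest

sample = [0.3, 1.1, 2.4, 0.7, 3.9, 1.6, 0.2, 2.8]
result = kstest(sample, "expon", args=(0, 2.0))   # args = (loc, scale), scale = 1/rate
print(result.statistic, result.pvalue)            # statistic is D = max|F(x) - S_N(x)|
```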
Fitting a Non-stationary Poisson Process
• Fitting an NSPP to arrival data is difficult. Possible approaches:
  • Fit a very flexible model with lots of parameters, or
  • Approximate a constant arrival rate over each basic interval of time, but vary it from time interval to time interval (our focus).
• Suppose we need to model arrivals over a time period [0, T]. This approach is most appropriate when we can:
  • observe the time period repeatedly, and
  • count arrivals.
Fitting a Non-stationary Poisson Process
• The estimated arrival rate during the ith time period is:

  $$\hat{\lambda}(t) = \frac{1}{n \, \Delta t} \sum_{j=1}^{n} C_{ij}$$

  where n = number of observation periods, Δt = time interval length, and C_ij = number of arrivals during the ith time interval on the jth observation period.
• Example: divide a 10-hour business day [8am, 6pm] into k = 20 equal intervals of length Δt = 1/2 hour, and observe the number of arrivals over n = 3 days:

  Time Period    Day 1   Day 2   Day 3   Estimated Arrival Rate (arrivals/hr)
  8:00 - 8:30    12      14      10      24
  8:30 - 9:00    23      26      32      54
  9:00 - 9:30    27      18      32      51.3
  9:30 - 10:00   20      13      12      30

  For instance, (1/(3 × 0.5)) × (23 + 26 + 32) = 54 arrivals/hour (see the sketch below).
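A small sketch (not from the slides) applying the rate formula to the count table above:

```python
# C_ij: arrivals in time interval i on observation day j (from the table)
counts = {
    "8:00 - 8:30": [12, 14, 10],
    "8:30 - 9:00": [23, 26, 32],
    "9:00 - 9:30": [27, 18, 32],
    "9:30 - 10:00": [20, 13, 12],
}
n, dt = 3, 0.5                        # n observed days, half-hour intervals

for period, c in counts.items():
    rate = sum(c) / (n * dt)          # lambda_hat in arrivals per hour
    print(f"{period}: {rate:.1f} arrivals/hr")   # 24.0, 54.0, 51.3, 30.0
```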
Covariance and Correlation [Multivariate/Time Series]
• The correlation between X1 and X2 (a value between -1 and 1) is:

  $$\rho = \text{corr}(X_1, X_2) = \frac{\text{cov}(X_1, X_2)}{\sigma_1 \sigma_2}$$

  where the covariance between X1 and X2 is given by:

  $$\text{cov}(X_1, X_2) = E[(X_1 - \mu_1)(X_2 - \mu_2)] = E(X_1 X_2) - \mu_1 \mu_2$$

• The closer ρ is to -1 or 1, the stronger the linear relationship between X1 and X2. A small numeric illustration follows.
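A hedged NumPy sketch (hypothetical data) of the sample versions of these quantities:

```python
import numpy as np

# Two series built to be positively correlated
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=200)

cov = np.cov(x1, x2)[0, 1]            # sample covariance cov(X1, X2)
rho = np.corrcoef(x1, x2)[0, 1]       # sample correlation = cov / (s1 * s2)
print(cov, rho)                       # rho should be clearly positive
```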
Some Correlation Patterns
[Figure: four scatter plots - r = 0, no correlation; r = .931, strong positive correlation; r = 1, linear relationship; r = -.67, weaker negative correlation]