
Chapter 3: Design and Analysis of Experiments, 8th ed. (Montgomery, 2012)
What If There Are More Than Two Factor Levels?
• The t-test does not directly apply
• There are lots of practical situations where there are either more than two levels of interest, or there are several factors of simultaneous interest
• The analysis of variance (ANOVA) is the appropriate analysis “engine” for these types of experiments
• The ANOVA was developed by Fisher in the early 1920s, and initially applied to agricultural experiments
• Used extensively today for industrial experiments

An Example (See pg. 66)
• An engineer is interested in investigating the relationship between the RF power setting and the etch rate for this tool. The objective of an experiment like this is to model the relationship between etch rate and RF power, and to specify the power setting that will give a desired target etch rate.
• The response variable is etch rate.
• She is interested in a particular gas (C2F6) and gap (0.80 cm), and wants to test four levels of RF power: 160W, 180W, 200W, and 220W.
• She decided to test five wafers at each level of RF power, so the experiment is replicated 5 times, with the runs made in random order (a sketch of generating such a run order follows).
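A minimal sketch, assuming NumPy is available, of how a randomized run order for this 4-level, 5-replicate CRD could be generated; the seed value is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(seed=1)   # arbitrary seed for reproducibility

levels = [160, 180, 200, 220]         # RF power settings (W) from the example
n = 5                                 # replicates (wafers) per power setting

runs = np.repeat(levels, n)           # the 20 planned runs
rng.shuffle(runs)                     # completely randomized run order
print(runs)
```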

• Does changing the power change the mean etch rate?
• Is there an optimum level for power?
• We would like to have an objective way to answer these questions
• The t-test really doesn’t apply here – more than two factor levels

The Analysis of Variance (Sec. 3.2, pg. 68)

• In general, there will be a levels of the factor, or a treatments, and n replicates of the experiment, run in random order…a completely randomized design (CRD)
• N = an total runs
• We consider the fixed effects case…the random effects case will be discussed later
• Objective is to test hypotheses about the equality of the a treatment means

The Analysis of Variance
• The name “analysis of variance” stems from a partitioning of the total variability in the response variable into components that are consistent with a model for the experiment
• The basic single-factor ANOVA model is

$$y_{ij} = \mu + \tau_i + \varepsilon_{ij}, \qquad i = 1, 2, \ldots, a; \quad j = 1, 2, \ldots, n$$

where $\mu$ = an overall mean, $\tau_i$ = the $i$th treatment effect, and $\varepsilon_{ij}$ = experimental error, NID$(0, \sigma^2)$
Models for the Data

There are several ways to write a model for the data:

$$y_{ij} = \mu + \tau_i + \varepsilon_{ij} \quad \text{is called the effects model}$$

Let $\mu_i = \mu + \tau_i$; then

$$y_{ij} = \mu_i + \varepsilon_{ij} \quad \text{is called the means model}$$

Regression models can also be employed
The Analysis of Variance
• Total variability is measured by the total sum of squares:

$$SS_T = \sum_{i=1}^{a}\sum_{j=1}^{n}(y_{ij} - \bar{y}_{..})^2$$

• The basic ANOVA partitioning is:

$$\sum_{i=1}^{a}\sum_{j=1}^{n}(y_{ij} - \bar{y}_{..})^2 = \sum_{i=1}^{a}\sum_{j=1}^{n}\big[(\bar{y}_{i.} - \bar{y}_{..}) + (y_{ij} - \bar{y}_{i.})\big]^2$$
$$= n\sum_{i=1}^{a}(\bar{y}_{i.} - \bar{y}_{..})^2 + \sum_{i=1}^{a}\sum_{j=1}^{n}(y_{ij} - \bar{y}_{i.})^2$$
$$SS_T = SS_{Treatments} + SS_E$$
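To make the partition concrete, here is a small sketch, assuming NumPy and using made-up numbers (not the etch-rate data), that verifies $SS_T = SS_{Treatments} + SS_E$:

```python
import numpy as np

# Made-up response matrix y[i, j]: a = 3 treatments (rows), n = 4 replicates (columns).
y = np.array([[10.1,  9.8, 10.4, 10.0],
              [12.3, 11.9, 12.6, 12.1],
              [ 9.5,  9.9,  9.2,  9.6]])
a, n = y.shape

grand_mean = y.mean()                                # ybar..
trt_means  = y.mean(axis=1)                          # ybar_i.

SS_T   = ((y - grand_mean) ** 2).sum()               # total sum of squares
SS_Trt = n * ((trt_means - grand_mean) ** 2).sum()   # between-treatment SS
SS_E   = ((y - trt_means[:, None]) ** 2).sum()       # within-treatment (error) SS

assert np.isclose(SS_T, SS_Trt + SS_E)               # the basic ANOVA identity
```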
The Analysis of Variance

$$SS_T = SS_{Treatments} + SS_E$$

• A large value of $SS_{Treatments}$ reflects large differences in treatment means
• A small value of $SS_{Treatments}$ likely indicates no differences in treatment means
• Formal statistical hypotheses are:

$$H_0: \mu_1 = \mu_2 = \cdots = \mu_a$$
$$H_1: \text{At least one mean is different}$$

The Analysis of Variance
• While sums of squares cannot be directly compared to test the hypothesis of equal means, mean squares can be compared.
• A mean square is a sum of squares divided by its degrees of freedom:

$$df_{Total} = df_{Treatments} + df_{Error}$$
$$an - 1 = (a - 1) + a(n - 1)$$
$$MS_{Treatments} = \frac{SS_{Treatments}}{a - 1}, \qquad MS_E = \frac{SS_E}{a(n - 1)}$$

• If the treatment means are equal, the treatment and error mean squares will be (theoretically) equal.
• If treatment means differ, the treatment mean square will be larger than the error mean square.
The Analysis of Variance is Summarized in a Table

• Computing…see text, pg. 69
• The reference distribution for $F_0$ is the $F_{a-1,\,a(n-1)}$ distribution
• Reject the null hypothesis (equal treatment means) if $F_0 > F_{\alpha,\,a-1,\,a(n-1)}$
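As a complement to the ANOVA table, here is a minimal sketch of the test itself, assuming SciPy and using hypothetical sums of squares:

```python
from scipy import stats

a, n = 4, 5                      # treatments and replicates (as in the CRD above)
SS_Trt, SS_E = 250.0, 120.0      # hypothetical sums of squares for illustration

MS_Trt = SS_Trt / (a - 1)        # treatment mean square
MS_E   = SS_E / (a * (n - 1))    # error mean square
F0     = MS_Trt / MS_E

p_value = stats.f.sf(F0, a - 1, a * (n - 1))          # P(F > F0) under H0
F_crit  = stats.f.ppf(1 - 0.05, a - 1, a * (n - 1))   # reject H0 if F0 > F_crit
print(F0, p_value, F0 > F_crit)
```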

ANOVA Table: Example 3-1

The Reference Distribution: P-value

A little (very little) humor…

ANOVA calculations are usually done via computer

• Text exhibits sample calculations from three very popular software packages: Design-Expert, JMP and Minitab
• See pages 102-105
• Text discusses some of the summary statistics provided by these packages
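Python is not one of the packages shown in the text, but as a rough equivalent, a single-factor ANOVA can be run with SciPy; a sketch with made-up data:

```python
import numpy as np
from scipy import stats

# Made-up responses, one array per treatment level.
level_1 = np.array([10.1,  9.8, 10.4, 10.0, 10.2])
level_2 = np.array([12.3, 11.9, 12.6, 12.1, 12.4])
level_3 = np.array([ 9.5,  9.9,  9.2,  9.6,  9.4])

F0, p_value = stats.f_oneway(level_1, level_2, level_3)  # single-factor fixed-effects ANOVA
print(F0, p_value)
```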

Model Adequacy Checking in the ANOVA
Text reference, Section 3.4, pg. 80

• Checking assumptions is important
• Normality
• Constant variance
• Independence
• Have we fit the right model?
• Later we will talk about what to do if some of these assumptions are violated
Model Adequacy Checking in the ANOVA
• Examination of residuals (see text, Sec. 3.4, pg. 80; a sketch of computing and plotting them follows this list):

$$e_{ij} = y_{ij} - \hat{y}_{ij} = y_{ij} - \bar{y}_{i.}$$

• Computer software generates the residuals
• Residual plots are very useful
• Normal probability plot of residuals
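A minimal sketch, assuming NumPy, SciPy, and matplotlib are available, of computing the residuals and producing a normal probability plot and a residuals-versus-fitted plot (made-up data):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Made-up response matrix: rows are treatments, columns are replicates.
y = np.array([[10.1,  9.8, 10.4, 10.0],
              [12.3, 11.9, 12.6, 12.1],
              [ 9.5,  9.9,  9.2,  9.6]])

fitted    = np.repeat(y.mean(axis=1)[:, None], y.shape[1], axis=1)  # yhat_ij = ybar_i.
residuals = (y - fitted).ravel()                                    # e_ij = y_ij - ybar_i.

stats.probplot(residuals, dist="norm", plot=plt)        # normal probability plot
plt.figure()
plt.scatter(fitted.ravel(), residuals)                  # residuals vs. fitted values
plt.xlabel("fitted value"); plt.ylabel("residual")
plt.show()
```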

Other Important Residual Plots

Post-ANOVA Comparison of Means
• The analysis of variance tests the hypothesis of equal treatment means
• Assume that residual analysis is satisfactory
• If that hypothesis is rejected, we don’t know which specific means are different
• Determining which specific means differ following an ANOVA is called the multiple comparisons problem
• There are lots of ways to do this…see text, Section 3.5, pg. 89
• We will use pairwise t-tests on means…sometimes called Fisher’s Least Significant Difference (or Fisher’s LSD) Method
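A minimal sketch of Fisher's LSD for equal sample sizes, assuming SciPy is available; the means and error mean square below are hypothetical:

```python
import numpy as np
from itertools import combinations
from scipy import stats

a, n, alpha = 3, 5, 0.05
means = np.array([10.1, 12.3, 9.5])       # hypothetical treatment means
MS_E  = 0.08                              # hypothetical error mean square from the ANOVA

t_crit = stats.t.ppf(1 - alpha / 2, a * (n - 1))   # t on the error degrees of freedom
LSD    = t_crit * np.sqrt(2 * MS_E / n)            # least significant difference (equal n)

for i, j in combinations(range(a), 2):
    diff = abs(means[i] - means[j])
    print(f"treatments {i + 1} vs {j + 1}: |diff| = {diff:.2f}",
          "differ" if diff > LSD else "do not differ")
```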

Design-Expert Output

Graphical Comparison of Means
Text, pg. 91

The Regression Model

Why Does the ANOVA Work?
We are sampling from normal populations, so

$$\frac{SS_{Treatments}}{\sigma^2} \sim \chi^2_{a-1} \ \text{if } H_0 \text{ is true, and} \quad \frac{SS_E}{\sigma^2} \sim \chi^2_{a(n-1)}$$

Cochran's theorem gives the independence of these two chi-square random variables, so

$$F_0 = \frac{SS_{Treatments}/(a-1)}{SS_E/[a(n-1)]} = \frac{\chi^2_{a-1}/(a-1)}{\chi^2_{a(n-1)}/[a(n-1)]} \sim F_{a-1,\,a(n-1)}$$

Finally,

$$E(MS_{Treatments}) = \sigma^2 + \frac{n\sum_{i=1}^{a}\tau_i^2}{a-1} \quad \text{and} \quad E(MS_E) = \sigma^2$$

Therefore an upper-tail F test is appropriate.
Sample Size Determination
Text, Section 3.7, pg. 105
• FAQ in designed experiments
• Answer depends on lots of things, including what type of experiment is being contemplated, how it will be conducted, resources, and desired sensitivity
• Sensitivity refers to the difference in means that the experimenter wishes to detect
• Generally, increasing the number of replicates increases the sensitivity, i.e., makes it easier to detect small differences in means
Sample Size Determination
Fixed Effects Case
• Can choose the sample size to detect a specific difference in means and achieve desired values of type I and type II errors
• Type I error: reject $H_0$ when it is true ($\alpha$)
• Type II error: fail to reject $H_0$ when it is false ($\beta$)
• Power = $1 - \beta$
• Operating characteristic (OC) curves plot $\beta$ against a parameter $\Phi$, where

$$\Phi^2 = \frac{n\sum_{i=1}^{a}\tau_i^2}{a\sigma^2}$$
Sample Size Determination
Fixed Effects Case: Use of OC Curves
• The OC curves for the fixed effects model are in the Appendix, Table V
• A very common way to use these charts is to define a difference in two means D of interest; the minimum value of $\Phi^2$ is then

$$\Phi^2 = \frac{nD^2}{2a\sigma^2}$$

• Typically work in terms of the ratio $D/\sigma$ and try values of n until the desired power is achieved
• Most statistics software packages will perform power and sample size calculations – see page 108 (a sketch using the noncentral F distribution follows this list)
• There are some other methods discussed in the text
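Instead of reading the OC curves, the same power calculation can be done directly with the noncentral F distribution; a sketch assuming SciPy, with hypothetical planning values of a, sigma, D, and alpha:

```python
from scipy import stats

a, sigma, D, alpha = 4, 25.0, 75.0, 0.01     # hypothetical planning values

for n in range(2, 8):                        # try replicate counts until power is adequate
    lam      = n * D**2 / (2 * sigma**2)     # noncentrality parameter, lambda = a * Phi^2
    df1, df2 = a - 1, a * (n - 1)
    F_crit   = stats.f.ppf(1 - alpha, df1, df2)
    power    = stats.ncf.sf(F_crit, df1, df2, lam)   # P(reject H0 | difference D)
    print(n, round(power, 3))
```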

Power and sample size calculations from Minitab (Page 108)

3.8 Other Examples of Single-Factor Experiments

Conclusions?

3.9 The Random Effects Model

• Text reference, page 118
• There are a large number of possible levels for the factor (theoretically an infinite number)
• The experimenter chooses a of these levels at random
• Inference will be to the entire population of levels
Variance components

Covariance structure:

Observations (a = 3 and n = 2):

ANOVA F-test is identical to the fixed-effects case

Estimating the variance components using the ANOVA method:

• The ANOVA variance component estimators are moment estimators (a sketch of the estimators follows this list)
• Normality not required
• They are unbiased estimators
• Finding confidence intervals on the variance components is “clumsy”
• Negative estimates can occur – this is “embarrassing”, as variances are always non-negative
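For the balanced one-way random effects model, the ANOVA (moment) estimators take a standard form; a minimal sketch with hypothetical mean squares:

```python
# ANOVA-method (moment) estimators for the one-way random effects model, equal n.
a, n = 3, 5                           # hypothetical number of levels and replicates
MS_Trt, MS_E = 15.0, 4.0              # hypothetical mean squares from the ANOVA table

sigma2_hat     = MS_E                 # estimate of the error variance, sigma^2
sigma_tau2_hat = (MS_Trt - MS_E) / n  # estimate of sigma_tau^2; can come out negative
print(sigma2_hat, sigma_tau2_hat)
```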

• Confidence interval for the error variance (the standard form is sketched below):
• Confidence interval for the intraclass correlation:
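A sketch of the usual chi-square-based interval for the error variance (the standard form, not taken verbatim from the text):

$$\frac{(N-a)\,MS_E}{\chi^2_{\alpha/2,\;N-a}} \;\le\; \sigma^2 \;\le\; \frac{(N-a)\,MS_E}{\chi^2_{1-\alpha/2,\;N-a}}$$

where $N - a = a(n-1)$ is the error degrees of freedom.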

Maximum Likelihood Estimation of the
Variance Components
• The likelihood function is just the joint pdf of the sample observations, with the observations fixed and the parameters unknown (a sketch of the standard form follows):
• Choose the parameters to maximize the likelihood function
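For the one-way random effects model, the likelihood has the usual multivariate normal form (a sketch of the standard expression, not taken verbatim from the text):

$$L(\mu, \sigma_\tau^2, \sigma^2) = \frac{1}{(2\pi)^{N/2}\,|\Sigma|^{1/2}} \exp\!\left[-\frac{1}{2}\,(\mathbf{y} - \mu\mathbf{1}_N)'\,\Sigma^{-1}\,(\mathbf{y} - \mu\mathbf{1}_N)\right]$$

where $\Sigma$ has diagonal entries $\sigma_\tau^2 + \sigma^2$, entries $\sigma_\tau^2$ for pairs of observations from the same treatment, and zeros elsewhere.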
Residual Maximum Likelihood (REML) is used to estimate variance components
• Point estimates from REML agree with the moment estimates for balanced data
• Confidence intervals on the variance components are also obtained
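A minimal sketch of REML fitting in Python, assuming statsmodels and pandas are available; the data frame, column names, and seed below are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "y":         rng.normal(size=12),             # hypothetical responses
    "treatment": np.repeat(["A", "B", "C"], 4),   # random factor with a = 3 levels
})

# One-way random effects model: a random intercept for each treatment level.
model  = smf.mixedlm("y ~ 1", df, groups=df["treatment"])
result = model.fit(reml=True)

sigma_tau2_hat = float(result.cov_re.iloc[0, 0])   # between-treatment variance component
sigma2_hat     = result.scale                      # residual (error) variance
print(sigma_tau2_hat, sigma2_hat)
```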

