
Factorial Experiment

Basic experimental designs (completely randomized design, randomized block design and Latin square design) are used to compare the effects of a single set of treatments, such as the effect of different feeds on the weight gain of poultry birds or on milk yield. In these experiments only one factor (feed, usually termed the treatment) is involved, and they are classified as single factor experiments. To increase the scope of an experiment, several factors, each at different levels, may be involved with a view to estimating and comparing their effects as well as their interactions. These are called factorial experiments. Thus a factorial experiment is one in which a number of factors at different levels are tested for their effects and interactions. The factorial experiment itself is not an experimental design; any of the basic designs may be used for a factorial experiment.

To illustrate the difference between a single factor experiment and a factorial experiment, consider an experimenter who has two feeds, Traditional (T) and Pellet (P), to be compared in respect of their effects on milk production. For simplicity suppose two levels of T and of P are used; the lower level is coded 0 and the higher level 1. In the single factor approach two separate experiments would be conducted, one to determine the optimum level of T and the other for P. This approach is time consuming and expensive, and it cannot provide any information about the interaction of T and P. On the other hand, the effects of T and P and their interaction can all be estimated and tested in a single experiment conducted with the treatment combinations formed from the different doses of T and P.

Factorial experiments with n factors each at s levels are termed s^n-factorial experiments. Thus a 2-factor experiment with each factor at 2 levels is a 2^2-factorial experiment, a 2-factor experiment with each factor at 3 levels is a 3^2-factorial experiment, and a 3-factor experiment with 2 levels for each factor is a 2^3-factorial experiment. A 2-factor experiment with the first factor at p levels and the second factor at q levels is a p x q factorial experiment. Similarly, a p x q x r factorial experiment involves 3 factors, the first at p levels, the second at q levels and the third at r levels.
Advantages of Factorial Experiments:

(1) Factorial experiments are more efficient than single factor experiments because the main effects as well as the interactions can be estimated from the same experiment.

(2) Since hypotheses regarding the main effects and the interactions can be tested separately in factorial experiments, they are also more logical.

Disadvantage of Factorial Experiments:

The only disadvantage of factorial experiments is that the number of treatment combinations increases rapidly with the number of factors or the number of levels of each factor, and the analysis of variance becomes complicated.

2^2-Factorial Experiment:

The simplest factorial experiment consists of two factors each at two levels and is known as a 2^2-factorial experiment. There are four treatment combinations. Denoting the factors by A and B, the treatment combinations may be written as -
a0b0, a1b0, a0b1, a1b1
These combinations are often written as 1, a, b and ab. The symbol 1 indicates that both factors are at the lower level.

These four treatment combinations can be compared by laying out the experiment in any basic design. In a factorial experiment the main objective is to partition the treatment sum of squares, which carries 3 degrees of freedom, into three orthogonal components, each with 1 degree of freedom and each associated with one of the main effects A and B or their interaction AB.

Main Effects and Interaction:

Consider a 2^2-factorial experiment with factors A and B conducted in r replicates. Suppose [1], [a], [b] and [ab] are the total yields of the r units receiving the treatments 1, a, b and ab respectively, and the corresponding mean values are (1), (a), (b) and (ab).
Table. Treatment Combinations of 2^2-Factorial Experiment.

                     Levels of Factor A
Levels of Factor B      0        1
        0             a0b0     a1b0
        1             a0b1     a1b1

The effect of factor A can be represented by the difference between the mean yields obtained at its two levels. Thus the effect of A at the first level b0 of B
is a1b0 - a0b0 = a - 1

Similarly, the effect of A at the second level b1 of B
is a1b1 - a0b1 = ab - b

These two effects are the simple effects of A at the different levels of B. The average of these simple effects is called the 'main effect' of A and is obtained as -
A = ½{(ab - b) + (a - 1)} = ½{b(a - 1) + (a - 1)} = ½ (a - 1)(b + 1)

Similarly the simple effect of B at the first level a0 of A


is a0b1 - a0b0 = b - 1
The simple effect of B at the second level a1 of A
is a1b1 - a1b0 = ab - a
and the main effect of B is obtained as -
B = ½{(ab - a) + (b - 1)} = ½ (ab - a + b - 1) = ½ (a+1) (b -1)

The interaction of the two factors is the failure of the levels of one factor, say A, to retain the same order and magnitude of performance throughout all levels of the other factor, say B. Thus the interaction AB is obtained as -

AB = ½{(ab - b) - (a - 1)} = ½ (a - 1)(b - 1)
or
AB = ½{(ab - a) - (b - 1)} = ½ (a - 1)(b - 1)

Note that

BA = ½ (b-1) (a-1) = AB.


The mean yield of the four treatment combinations M (General mean) is defined as

M = ¼ {ab + a + b + 1}

= ¼ (a + 1) (b + 1)
The general mean M and the 3 mutually orthogonal contrasts A, B and AB can be summarized in terms of the means of the treatment combinations as follows:

Table. Main Effects and Interaction Expressed in Terms of Individual Treatment Means:
2^2-Factorial.

              Treatment combination
Effect      1      a      b      ab     Divisor
M           +      +      +      +        4
A           -      +      -      +        2
B           -      -      +      +        2
AB          +      -      -      +        2
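The sign table above translates directly into a small computation. A minimal sketch in Python (the function name and the illustrative means are mine, not from the text):

```python
# Main effects and interaction of a 2^2 factorial from the treatment means.
# The signs follow the table above; the divisor for each effect is 2.

def effects_2x2(m1, ma, mb, mab):
    """Return (M, A, B, AB) from the means of treatments 1, a, b, ab."""
    M  = (m1 + ma + mb + mab) / 4          # general mean, divisor 4
    A  = (-m1 + ma - mb + mab) / 2         # main effect of A
    B  = (-m1 - ma + mb + mab) / 2         # main effect of B
    AB = (m1 - ma - mb + mab) / 2          # interaction AB
    return M, A, B, AB

# Illustrative check with hypothetical means 1=10, a=14, b=12, ab=20:
M, A, B, AB = effects_2x2(10, 14, 12, 20)
print(M, A, B, AB)   # 14.0 6.0 4.0 2.0
```

The same numbers follow from the simple-effect definitions, e.g. A = ½{(ab - b) + (a - 1)} = ½{(20 - 12) + (14 - 10)} = 6.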

Model and Analysis of 2^2-Factorial Experiments:

2^2-Factorial Experiment in Completely Randomized Design

Consider two factors A and B each at two levels (0, 1). The linear model for an experiment conducted in a completely randomized design is
y_ijk = µ + α_i + β_j + (αβ)_ij + ε_ijk;  i, j = 0, 1;  k = 1, 2, ..., r.

The only difference from a single factor experiment in a completely randomized design is that in place of the treatments of a single factor experiment we have the main effects of A and B and their interaction (that is, the treatment combinations 1, a, b and ab) in the factorial experiment. The total sum of squares will, therefore, be partitioned as
SS (Total) = SS (A) + SS (B) + SS (AB) + SS (Error).
Here SS (A), SS (B) and SS (AB) together constitute the sum of squares due to treatment combinations.
The analysis of variance of a 2^2-factorial experiment in a completely randomized design with r replicates is given below:

Table. ANOVA for General 2-factor Experiment in CRD

Source of variation   Degrees of Freedom   Sum of Squares   Mean Squares   F-ratio
Treatment                  3                   SST              MST        Ft = MST/MSE
  A                        1                   SSA              MSA        FA = MSA/MSE
  B                        1                   SSB              MSB        FB = MSB/MSE
  AB (Interaction)         1                   SS(AB)           MS(AB)     FAB = MS(AB)/MSE
Error                      4(r-1)              SSE              MSE
Total                      4r-1                SS(Total)

The total sum of squares is calculated in the usual manner, while the sums of squares due to the main effects of A and B and their interaction, i.e. SS(A), SS(B) and SS(AB), and hence the tests of the corresponding null hypotheses, can be obtained by three different methods:

(a) Method I (Yates' Method):


This is an automatic method for computing the factorial effect totals and sums of squares. The steps are as follows:
Step 1. The treatment combinations are written in the systematic (standard) order and the totals of the corresponding treatments over all replicates (blocks in case of randomized block design) are paired.
Step 2. The sums of the pairs of treatment totals above constitute the upper half of the next column. The first member of each pair is subtracted from the second to get the lower half of this column. The values of this column are again paired as in Step 1.
Step 3. The process of Step 2 is applied to get the next column, and is continued until the grand total appears as the first member of a column. The factorial effect totals, usually symbolized by capital letters within square brackets, come out in the standard order. A factorial effect total divided by 2r gives the mean of the corresponding factorial effect, while the square of an effect total divided by 2^2·r gives the sum of squares due to the corresponding effect. The whole process is summarized in the following table:

Table. Yates' Method for a 2^2-Factorial Experiment

Treatment     Total from        Column (3)        Column (4)          Sum of squares
combination   all replicates                                          = (Col.4)^2 / (2^2 r)
(1)           (2)               (3)               (4)
1             [1]               [1]+[a] = u1      u1 + u2 = G.T.
a             [a]               [b]+[ab] = u2     u3 + u4 = [A]       SS(A) = [A]^2 / (2^2 r)
b             [b]               [a]-[1] = u3      u2 - u1 = [B]       SS(B) = [B]^2 / (2^2 r)
ab            [ab]              [ab]-[b] = u4     u4 - u3 = [AB]      SS(AB) = [AB]^2 / (2^2 r)

The error sum of squares is obtained as usual by subtraction, i.e. SS(Error) = SS(Total) - SS(A) - SS(B) - SS(AB). The null hypotheses are tested by examining the significance of the corresponding F-ratios.
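The procedure is mechanical and easy to code for any 2^n factorial; a sketch (function name mine), with treatment totals supplied in standard order and checked against the totals 103, 121, 156, 128 (r = 4) of the worked example later in the text:

```python
# Yates' algorithm for a 2^n factorial: repeated passes of pairwise sums
# followed by pairwise differences yield the grand total and then the
# effect totals [A], [B], [AB], ... in standard order.

def yates(totals, r):
    """totals: treatment totals in standard order (1, a, b, ab, ...);
    r: number of replicates. Returns (final column, sums of squares)."""
    n = len(totals).bit_length() - 1          # number of factors
    col = list(totals)
    for _ in range(n):                        # n passes of sums/differences
        sums  = [col[i] + col[i + 1] for i in range(0, len(col), 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, len(col), 2)]
        col = sums + diffs
    # First entry is the grand total; the rest are effect totals.
    ss = [t * t / (2 ** n * r) for t in col[1:]]
    return col, ss

col, ss = yates([103, 121, 156, 128], r=4)
print(col)   # [508, -10, 60, -46]  -> G.T., [N], [P], [NP]
print(ss)    # [6.25, 225.0, 132.25]
```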

b) Method II (Multiplier Method):

Factorial effect totals and the sums of squares due to the different factors and their interaction can be computed using multipliers as shown below:

Table 3.5: Computation of Main Effects, Interaction and Their Sum of Squares

         Treatment totals and multipliers   Divisor to get   Effect   Sum of Squares
Effect   [1]    [a]    [b]    [ab]          the mean         Total    SS = (Effect Total)^2 / (2^2 r)
A         -      +      -      +            2r               [A]      SS(A) = [A]^2 / (2^2 r)
B         -      -      +      +            2r               [B]      SS(B) = [B]^2 / (2^2 r)
AB        +      -      -      +            2r               [AB]     SS(AB) = [AB]^2 / (2^2 r)

Note: Methods I and II above can be used only if all factors are at two levels.
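The multiplier table can be applied as a sign matrix to the treatment totals; a sketch for the 2^2 case (names mine), checked against the totals [1]=103, [n]=121, [p]=156, [np]=128 (r = 4) of the worked example later in the text:

```python
# Multiplier method for a 2^2 factorial: each effect total is a signed
# sum of the treatment totals [1], [a], [b], [ab]; SS = (effect total)^2 / (2^2 r).

SIGNS = {                        # rows of the multiplier table above
    "A":  (-1, +1, -1, +1),
    "B":  (-1, -1, +1, +1),
    "AB": (+1, -1, -1, +1),
}

def effect_totals(t1, ta, tb, tab, r):
    """Return {effect: (effect total, sum of squares)}."""
    totals = (t1, ta, tb, tab)
    out = {}
    for name, signs in SIGNS.items():
        eff = sum(s * t for s, t in zip(signs, totals))
        out[name] = (eff, eff ** 2 / (4 * r))
    return out

print(effect_totals(103, 121, 156, 128, r=4))
# {'A': (-10, 6.25), 'B': (60, 225.0), 'AB': (-46, 132.25)}
```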
(c) Method III (General Method):
The experimental results are summarized in a 2-way table (A x B table), usually known as the interaction table, as shown below:

Table. Table of Interaction between Factors A and B

               Levels of A
Levels of B     a0        a1        Total
b0            [a0b0]    [a1b0]      [B0]
b1            [a0b1]    [a1b1]      [B1]
Total          [A0]      [A1]       G.T.

where [a_i b_j] is the total over all replicates for the ith level of A and the jth level of B,

[A_i] = Σ_j [a_i b_j]  and  [B_j] = Σ_i [a_i b_j]

G.T. = Σ_i [A_i] = Σ_j [B_j]

The sums of squares due to the main effects of A and B and their interaction AB are computed as -

SS (A) = (1/2r) Σ_i [A_i]^2 - (G.T.)^2 / 4r

SS (B) = (1/2r) Σ_j [B_j]^2 - (G.T.)^2 / 4r

SS (AB) = (1/r) Σ_i Σ_j [a_i b_j]^2 - (G.T.)^2 / 4r - SS (A) - SS (B)

The analysis of variance is summarized in the ANOVA table as usual.
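These formulas operate only on the 2 x 2 interaction table of treatment totals; a sketch (names mine), illustrated with the totals 103, 121, 156, 128 (r = 4) from the example that follows:

```python
# General method for a 2x2 interaction table of treatment totals [a_i b_j],
# each total taken over r replicates.

def ss_general(cells, r):
    """cells[i][j]: total over r replicates at level i of A, level j of B."""
    GT = sum(sum(row) for row in cells)
    CT = GT ** 2 / (4 * r)                          # correction term (G.T.)^2 / 4r
    Ai = [sum(row) for row in cells]                # marginal totals [A_i]
    Bj = [sum(col) for col in zip(*cells)]          # marginal totals [B_j]
    ss_a  = sum(t * t for t in Ai) / (2 * r) - CT
    ss_b  = sum(t * t for t in Bj) / (2 * r) - CT
    ss_ab = sum(t * t for row in cells for t in row) / r - CT - ss_a - ss_b
    return ss_a, ss_b, ss_ab

# Rows = levels of N, columns = levels of P, r = 4:
print(ss_general([[103, 156], [121, 128]], r=4))   # (6.25, 225.0, 132.25)
```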


Example: An experiment was conducted to see the effect of urea (N) and phosphate (P) fertilizers
on the yield of a certain variety of rice; N and P were both at 2 levels (0 and 1). Results of the
experiment in completely randomized design are given below:

Yield in Quintals per Hectare

Levels of        Levels of Nitrogen
Phosphate        n0            n1         Total
p0             24   24       32   28
               25   30       30   31        224
p1             46   36       30   36
               35   39       30   32        284
Total             259           249         508

Correction term C.T. = (508)^2 / 16 = 16129
SS (Total) = 16624 - 16129 = 495

The sums of squares due to N, P and the NP interaction will now be computed by the three different methods mentioned above.
a) Yates' Method:

Treatment     Total of all    Column 3          Column 4              Sum of squares
combination   replicates                        (Effect Totals)       = (Effect Total)^2 / 16
1             103             103+121 = 224     224+284 = 508 = G.T.
n             121             156+128 = 284     18-28 = -10 = [N]     SS(N) = 6.25
p             156             121-103 = 18      284-224 = 60 = [P]    SS(P) = 225.00
np            128             128-156 = -28     -28-18 = -46 = [NP]   SS(NP) = 132.25
              508
b) Multiplier Method:

          Treatment Totals and Multipliers              Effect Totals   Sum of squares
Effects   [1]=103   [n]=121   [p]=156   [np]=128                        = (Effect Total)^2 / 16
N          -1        +1        -1        +1             [N] = -10       SS(N) = 6.25
P          -1        -1        +1        +1             [P] = 60        SS(P) = 225.00
NP         +1        -1        -1        +1             [NP] = -46      SS(NP) = 132.25

c) General Method:
Levels of N
Levels of P n0 n1 Total
p0 103 121 224
p1 156 128 284
Total 259 249 508

SS (N) = (1/8)(259^2 + 249^2) - (508)^2/16 = 6.25

SS (P) = (1/8)(224^2 + 284^2) - (508)^2/16 = 225.0

SS (NP) = (1/4)(103^2 + 121^2 + 156^2 + 128^2) - (508)^2/16 - SS (N) - SS (P) = 132.25

Now SS (Error) = SS (Total) - SS (N) - SS (P) - SS (NP)
               = 495.0 - 6.25 - 225.0 - 132.25 = 131.50

The analysis of variance is summarized in the following Table:


ANOVA
S.V. d.f. S.S. M.S. F
Treatment 3 363.5 121.17 11.06**
N 1 6.25 6.25 0.57
P 1 225.0 225.0 20.53**
NP 1 132.25 132.25 12.07**
Error 12 131.50 10.96
Total 15 495.0
** Significant with p < 0.01
Though the effect of N is found to be insignificant, the effect of P and the interaction NP are both found to have significant effects on the yield of rice.
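The full analysis for this example can be reproduced from the raw yields; a sketch in pure Python (variable names mine; only the F-ratios are computed, not their p-values):

```python
# 2^2 factorial in CRD: ANOVA for the rice example (r = 4 replicates).

yields = {                      # raw yields per treatment combination
    "1":  [24, 25, 24, 30],
    "n":  [32, 30, 28, 31],
    "p":  [46, 35, 36, 39],
    "np": [30, 30, 36, 32],
}
r = 4
obs = [y for ys in yields.values() for y in ys]
ct = sum(obs) ** 2 / len(obs)                         # correction term 16129
ss_total = sum(y * y for y in obs) - ct               # 16624 - 16129
totals = {k: sum(v) for k, v in yields.items()}       # 103, 121, 156, 128
ss_treat = sum(t * t for t in totals.values()) / r - ct
N  = -totals["1"] + totals["n"] - totals["p"] + totals["np"]
P  = -totals["1"] - totals["n"] + totals["p"] + totals["np"]
NP =  totals["1"] - totals["n"] - totals["p"] + totals["np"]
ss_n, ss_p, ss_np = (e * e / (4 * r) for e in (N, P, NP))
ss_error = ss_total - ss_treat
mse = ss_error / (4 * (r - 1))                        # 12 error d.f.
print(ss_total, ss_n, ss_p, ss_np, ss_error)          # 495.0 6.25 225.0 132.25 131.5
print(round(ss_p / mse, 2), round(ss_np / mse, 2))    # 20.53 12.07
```

The printed F-ratios for P and NP agree with the ANOVA table above.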

Note: Since the first two methods (Yates' method and the multiplier method) are applicable only to 2-level factorial experiments, further illustrations will be based on the general method.

2^2-Factorial Experiment in Randomized Block Design:

The linear model for a 2^2-factorial experiment in a randomized block design is
y_ijk = µ + α_i + β_j + (αβ)_ij + ρ_k + ε_ijk
where ρ_k is the effect of the kth block.
The analysis of variance for r blocks is summarized below.

Table: ANOVA for 2^2-Factorial Experiment in RBD:

S.V.               d.f.      S.S.       M.S.       F-ratio
Block              r-1
Treatment          3         SST        MST        Ft
  A                1         SSA        MSA        FA
  B                1         SSB        MSB        FB
  AB Interaction   1         SS(AB)     MS(AB)     FAB
Error              3(r-1)    SSE        MSE
Total              4r-1

Example:
Let the experiment of the previous example be conducted in a randomized block design, with the yields for the 4 blocks as given below:

         Block I       Block II      Block III     Block IV
         1     np      n     p       np    1       p     n
         24    30      28    36      30    25      39    31
         n     p       1     np      p     n       np    1
         32    46      24    36      35    30      32    30
Total      132           124           120           132

G.T. = 508;  Correction Term (C.T.) = (508)^2 / 16 = 16129
SS (Total) = 16624 - 16129 = 495
SS (Block) = (1/4)(132^2 + 124^2 + 120^2 + 132^2) - (508)^2/16 = 27
Now the experimental results are summarized in a 2-way table as below:

N x P (Interaction) Table

              Levels of N
Levels of P     n0      n1      Total
p0             103     121       224
p1             156     128       284
Total          259     249       508

SS (N) = (1/8)(259^2 + 249^2) - (508)^2/16 = 6.25

SS (P) = (1/8)(224^2 + 284^2) - (508)^2/16 = 225.0

SS (NP) = (1/4)(103^2 + 121^2 + 156^2 + 128^2) - (508)^2/16 - SS (N) - SS (P) = 132.25

Now SS (Error) = SS (Total) - SS (Block) - SS (N) - SS (P) - SS (NP)
              = 495.0 - 27.0 - 6.25 - 225.0 - 132.25 = 104.5

ANOVA
S.V.        d.f.    S.S.      M.S.      F-ratio
Block       3       27.0      9.0
Treatment   3       363.5     121.17    10.44**
  N         1       6.25      6.25      0.54
  P         1       225.0     225.0     19.38**
  NP        1       132.25    132.25    11.39**
Error       9       104.5     11.61
Total       15      495.0
** Indicates significance at the 1% level of significance.
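The randomized block version differs from the CRD analysis only in removing the block sum of squares from error; a sketch using the totals of this example (the treatment sums of squares are carried over unchanged):

```python
# 2^2 factorial in RBD: block SS removed from error (rice example, r = 4 blocks).

ct = 508 ** 2 / 16                                      # correction term 16129
ss_total = 16624 - ct                                   # 495.0
block_totals = [132, 124, 120, 132]
ss_block = sum(b * b for b in block_totals) / 4 - ct    # 27.0
ss_n, ss_p, ss_np = 6.25, 225.0, 132.25                 # as computed for the CRD analysis
ss_error = ss_total - ss_block - ss_n - ss_p - ss_np    # 104.5
mse = ss_error / 9                                      # error d.f. = 3(r-1) = 9
print(ss_block, ss_error, round(ss_p / mse, 2))         # 27.0 104.5 19.38
```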

2^3-Factorial Experiment:

The mean yield of the 8 treatment combinations (the general mean M) is defined as:

M = (1/8)(abc + ab + ac + bc + a + b + c + 1) = (1/8)(a + 1)(b + 1)(c + 1)
The general mean M and the 7 mutually orthogonal contrasts A, B, C, AB, AC, BC and ABC can be summarized in terms of the means of the treatment combinations as shown in the following table:

Table: Main Effects and Interactions Expressed in Terms of Individual Treatment Means
2^3-Factorial.

              Treatment Combinations
Effect    1    a    b    ab   c    ac   bc   abc   Divisor
M         +    +    +    +    +    +    +    +       8
A         -    +    -    +    -    +    -    +       4
B         -    -    +    +    -    -    +    +       4
AB        +    -    -    +    +    -    -    +       4
C         -    -    -    -    +    +    +    +       4
AC        +    -    +    -    -    +    -    +       4
BC        +    +    -    -    -    -    +    +       4
ABC       -    +    +    -    +    -    -    +       4

[Note: A plus sign is assigned to a treatment mean whenever the corresponding factor is at its second level, otherwise a minus sign is assigned. For an interaction the sign is obtained by multiplying the signs of the corresponding main effects.]

Model and Analysis of 2^3-Factorial Experiment in Randomized Block Design:

The linear model for an experiment with factors A, B and C, each at 2 levels, conducted in a randomized block design is -

y_ijkl = µ + α_i + β_j + (αβ)_ij + γ_k + (αγ)_ik + (βγ)_jk + (αβγ)_ijk + ρ_l + ε_ijkl

i, j, k = 0, 1 and l = 1, 2, ..., r

The total sum of squares is partitioned as

SS (Total) = SS (A) + SS (B) + SS (AB) + SS (C) + SS (AC) + SS (BC) + SS (ABC) + SS (Block) + SS (Error)

Analysis of variance for r blocks is summarized as below:

Table: ANOVA for 2^3-Factorial Experiment in RBD

S.V.        d.f.      S.S.        M.S.
Block       r-1       SS(Block)
Treatment   7         SST         MST
  A         1         SSA         MSA
  B         1         SSB         MSB
  AB        1         SS(AB)      MS(AB)
  C         1         SSC         MSC
  AC        1         SS(AC)      MS(AC)
  BC        1         SS(BC)      MS(BC)
  ABC       1         SS(ABC)     MS(ABC)
Error       7(r-1)    SSE         MSE
Total       8r-1

Example
An experiment on paddy was conducted with three fertilizers N, P and K each at 2 levels (absence
and presence) in a randomized block design. The results are given below:

Per hectare yields in quintals


Treatment Block Block Block Block
combinations I II III IV Total
I 40 45 50 48 183
n 56 50 58 53 217
p 62 61 60 57 240
np 70 68 65 66 269
k 60 55 53 55 223
nk 70 72 69 73 284
pk 72 81 70 78 301
npk 80 85 84 82 331
Total 510 517 509 512 2048
Correction Term (C.T.) = (2048)^2 / 32 = 131072
Calculation of sums of squares:
SS (Total) = 4452
SS (Block) = (1/8)(510^2 + 517^2 + 509^2 + 512^2) - C.T. = 4.75
SS (Treat.) = (1/4)(183^2 + 217^2 + .......... + 331^2) - C.T. = 4199.5
SS (Error) = SS (Total) - SS (Block) - SS (Treatment)
           = 4452 - 4.75 - 4199.5 = 247.75

Splitting of Treatment Sum of Squares:

(a) Yates' Method:

Treatment     Totals from    Column 1   Column 2   Column 3       S.S. = (Col.3)^2 / 32
combination   all blocks
1             183            400        909        2048 = G.T.
n             217            509        1139       154 = [N]      741.125
p             240            507        63         234 = [P]      1711.125
np            269            632        91         -36 = [NP]     40.50
k             223            34         109        230 = [K]      1653.125
nk            284            29         125        28 = [NK]      24.50
pk            301            61         -5         16 = [PK]      8.00
npk           331            30         -31        -26 = [NPK]    21.125
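The same sum-and-difference passes extend to 2^3 (three passes instead of two); a sketch run on the treatment totals of this example (variable names mine):

```python
# Yates' algorithm on the 2^3 paddy example: totals in standard order
# 1, n, p, np, k, nk, pk, npk; r = 4 blocks; SS = (effect total)^2 / (2^3 r) = (.)^2 / 32.

totals = [183, 217, 240, 269, 223, 284, 301, 331]
col = totals[:]
for _ in range(3):                                   # three factors -> three passes
    sums  = [col[i] + col[i + 1] for i in range(0, 8, 2)]
    diffs = [col[i + 1] - col[i] for i in range(0, 8, 2)]
    col = sums + diffs
names = ["G.T.", "N", "P", "NP", "K", "NK", "PK", "NPK"]
effects = dict(zip(names, col))
ss = {k: v * v / 32 for k, v in effects.items() if k != "G.T."}
print(effects)
# {'G.T.': 2048, 'N': 154, 'P': 234, 'NP': -36, 'K': 230, 'NK': 28, 'PK': 16, 'NPK': -26}
print(ss)
# {'N': 741.125, 'P': 1711.125, 'NP': 40.5, 'K': 1653.125, 'NK': 24.5, 'PK': 8.0, 'NPK': 21.125}
```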
b) Multiplier Method:

          Treatment totals and multipliers                    Effect    SS = (Effect Total)^2 / (2^3 r)
Effect    [1]   [n]   [p]   [np]  [k]   [nk]  [pk]  [npk]     totals
          183   217   240   269   223   284   301   331
N          -     +     -     +     -     +     -     +         154      741.125
P          -     -     +     +     -     -     +     +         234      1711.125
NP         +     -     -     +     +     -     -     +         -36      40.50
K          -     -     -     -     +     +     +     +         230      1653.125
NK         +     -     +     -     -     +     -     +         28       24.50
PK         +     +     -     -     -     -     +     +         16       8.00
NPK        -     +     +     -     +     -     -     +         -26      21.125

c) General Method:

SS (Treat.) = (1/4)(183^2 + 217^2 + 240^2 + 269^2 + 223^2 + 284^2 + 301^2 + 331^2) - (2048)^2/32 = 4199.50

SS (N) = (1/16)(947^2 + 1101^2) - (2048)^2/32 = 741.125

N x P Table

        n0      n1      Total
p0      406     501     907
p1      541     600     1141
Total   947     1101    2048

SS (P) = (1/16)(907^2 + 1141^2) - (2048)^2/32 = 1711.125

SS (NP) = (1/8)(406^2 + 501^2 + 541^2 + 600^2) - (2048)^2/32 - SS(N) - SS(P)
        = 2492.75 - 741.125 - 1711.125 = 40.50

SS (K) = (1/16)(909^2 + 1139^2) - (2048)^2/32 = 1653.125

N x K Table

        n0      n1      Total
k0      423     486     909
k1      524     615     1139
Total   947     1101    2048

SS (NK) = (1/8)(423^2 + 486^2 + 524^2 + 615^2) - (2048)^2/32 - SS(N) - SS(K)
        = 2418.75 - 741.125 - 1653.125 = 24.50

P x K Table

        p0      p1      Total
k0      400     509     909
k1      507     632     1139
Total   907     1141    2048

SS (PK) = (1/8)(400^2 + 509^2 + 507^2 + 632^2) - (2048)^2/32 - SS(P) - SS(K)
        = 3372.25 - 1711.125 - 1653.125 = 8.00

SS (NPK) = SS (Treat.) - SS (N) - SS (P) - SS (NP) - SS (K) - SS (NK) - SS (PK)
         = 4199.5 - 741.125 - 1711.125 - 40.5 - 1653.125 - 24.50 - 8.00 = 21.125

The analysis of variance is summarized in the following table:

ANOVA

Source of variation   Degrees of Freedom   Sum of Squares   Mean Square   F-ratio
Total                 31                   4452.00
Block                 3                    4.75
Treatment             7                    4199.50          599.929       50.85**
  N                   1                    741.125          741.125       62.82**
  P                   1                    1711.125         1711.125      145.04**
  NP                  1                    40.50            40.50         3.43
  K                   1                    1653.125         1653.125      140.12**
  NK                  1                    24.50            24.50         2.08
  PK                  1                    8.00             8.00          0.68
  NPK                 1                    21.125           21.125        1.79
Error                 21                   247.75           11.798
** Significant with p < 0.01
The main effects of all three factors N, P and K are found to be highly significant, but none of the interactions is.
p x q Factorial Experiment
In general, if one factor of a 2-factor experiment has p levels and the other factor has q levels, there are pq treatment combinations and the experiment is known as a p x q factorial experiment.

p x q-Factorial Experiment in Randomized Block Design:

Consider two factors A and B such that A has p levels and B has q levels. The linear model for a p x q factorial experiment in RBD is

y_ijk = µ + α_i + β_j + (αβ)_ij + ρ_k + ε_ijk;  i = 1, 2, ..., p;  j = 1, 2, ..., q;  k = 1, 2, ..., r

where ρ_k (k = 1, 2, ..., r) is the effect of the kth block,
y_ijk is the observation on the ith level of A and the jth level of B in the kth block, and
α_i, β_j, (αβ)_ij and ε_ijk are defined as in the earlier factorial models.

The analysis of a p x q factorial experiment in r blocks is given below:

Table: ANOVA for a p x q Factorial Experiment in RBD


S.V. d.f. S.S. M.S. F-ratio
Total pqr-1
Block r-1
Treatment pq-1
A p-1 SS(A) MS(A) FA = MS(A)/MSE
B q-1 SS(B) MS(B) FB = MS(B)/MSE
AB (p-1)(q-1) SS(AB) MS(AB) FAB = MS(AB)/MSE
Error (pq-1)(r-1) SSE MSE

Now for testing H0: α_1 = α_2 = ... = α_p = 0 the F-statistic is -

FA = MS(A)/MSE with (p-1) and (pq-1)(r-1) d.f.

For testing H0: β_1 = β_2 = ... = β_q = 0 the test statistic is

FB = MS(B)/MSE with (q-1) and (pq-1)(r-1) d.f.

For testing the null hypothesis of no interaction between A and B, the test statistic is

FAB = MS(AB)/MSE with (p-1)(q-1) and (pq-1)(r-1) d.f.

Example: Results of a 2-factor experiment, each factor at three levels, conducted in a randomized block design are given below. The factors are manure (three rates) and variety (three varieties).

Per Plot Yield of Paddy in kg


Ma- Block-I Block-II Block-III Block-IV
nure v1 v2 v3 v1 v2 v3 v1 v2 v3 v1 v2 v3
m1 10 15 11 7 14 14 8 16 12 11 15 13
m2 19 25 30 16 18 22 15 22 24 18 29 26
m3 22 25 22 12 27 34 14 20 26 24 27 28
Block totals   179       164       157       191
G.T. = 691
Calculation of sums of squares:

SS (Total) = 14925 - (691)^2/36 = 14925 - 13263.361 = 1661.639

SS (Block) = (1/9)(179^2 + 164^2 + 157^2 + 191^2) - (691)^2/36
           = 13340.778 - 13263.361 = 77.417
Interaction M x V Table
Levels of M
Levels of V m1 m2 m3 Total
v1 36 68 72 176
v2 60 94 99 253
v3 50 102 110 262
Total 146 264 281 691
SS (M) = (1/12)(146^2 + 264^2 + 281^2) - (691)^2/36
       = 14164.417 - 13263.361 = 901.056

SS (V) = (1/12)(176^2 + 253^2 + 262^2) - (691)^2/36
       = 13635.75 - 13263.361 = 372.389

SS (M x V) = (1/4)(36^2 + 60^2 + ........... + 110^2) - (691)^2/36 - SS (M) - SS (V)
           = 14586.25 - 13263.361 - 901.056 - 372.389 = 49.444

SS (Error) = SS (Total) - SS (Block) - SS (M) - SS (V) - SS (MV)
           = 1661.639 - 77.417 - 901.056 - 372.389 - 49.444 = 261.333

ANOVA
Source of Degrees of Sum of squares Mean squares F-ratio
variation freedom
Block 3 77.417
Manure 2 901.056 450.528 41.375**
Variety 2 372.389 186.195 17.10**
MxV 4 49.444 12.361 1.135
Error 24 261.333 10.889
Total 35 1661.639
** Significant with p < 0.01
Significant difference is observed between the effects of different levels of manure and also
between the effects of different varieties.
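The computation generalizes to any p x q table of treatment totals; a sketch reproducing this example (variable names mine):

```python
# p x q factorial in RBD: sums of squares from the M x V interaction table
# of treatment totals (each cell summed over r = 4 blocks) plus block totals.

cells = [[36, 60, 50],        # rows: levels of manure; columns: varieties
         [68, 94, 102],
         [72, 99, 110]]
blocks = [179, 164, 157, 191]
r, p, q = 4, 3, 3
gt = sum(sum(row) for row in cells)                  # grand total 691
ct = gt ** 2 / (p * q * r)                           # correction term
ss_block = sum(b * b for b in blocks) / (p * q) - ct
ss_m = sum(sum(row) ** 2 for row in cells) / (q * r) - ct
ss_v = sum(sum(col) ** 2 for col in zip(*cells)) / (p * r) - ct
ss_mv = sum(t * t for row in cells for t in row) / r - ct - ss_m - ss_v
print(round(ss_block, 3), round(ss_m, 3), round(ss_v, 3), round(ss_mv, 3))
# 77.417 901.056 372.389 49.444
```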
Experiments with 3-Level Factors (3^n-Series):

The scope of an experiment obviously increases, and it becomes more informative, if the factors are taken at three levels instead of two. The pattern of change of the response with increasing levels of a factor can then be studied in a better way. However, the number of levels cannot be increased too far, as the size of the experiment grows rapidly with them.

In a 3^n-factorial experiment there are n factors, each at three levels. As before, the factors and their interactions are denoted by capital letters and the levels by the digits 0, 1 and 2.

3^2-Factorial Experiment:
The simplest in the 3^n-series is the 3^2-factorial experiment, which involves two factors each at three levels. The treatment combinations for a 3^2-factorial experiment are shown in the figure below:

Levels of Factor B
   2    02   12   22
   1    01   11   21
   0    00   10   20
         0    1    2
        Levels of Factor A
Figure: Treatment combinations in a 3^2-Experiment.
Since there are 3^2 = 9 treatment combinations, there are eight degrees of freedom among them. The main effects A and B have two degrees of freedom each, and the AB interaction has four degrees of freedom. Before computing the analysis of variance for a 3^2-factorial experiment, it is necessary to obtain the replicate, treatment and grand totals and to construct an interaction (A x B) table. The sums of squares due to replicates and the total are computed in the usual manner. The sums of squares due to the main effects of factors A and B and their interaction are computed using the interaction table.
The analysis of variance for r replicates is summarized in the following table.

Table: ANOVA for 3^2-Factorial Experiment

Source of variation   Degrees of freedom   Sum of squares   Mean squares   F-ratio
Replicate             r-1                  SS(Rep.)
Treatment             3^2-1 = 8            SS(Treat.)       MS(Treat.)
  A                   3-1 = 2              SS(A)            MS(A)          FA = MS(A)/MSE
  B                   3-1 = 2              SS(B)            MS(B)          FB = MS(B)/MSE
  AB interaction      4                    SS(AB)           MS(AB)         FAB = MS(AB)/MSE
Error                 (3^2-1)(r-1)         SSE              MSE
Total                 3^2·r - 1

The sums of squares for A, B and AB may also be computed by an alternative method, an extension of Yates' algorithm to the 3^n-series. The sum of squares for any main effect may be partitioned into a linear and a quadratic component, each with a single degree of freedom, using orthogonal contrast constants. Of course, this is only meaningful if the factor is quantitative and the levels are equally spaced. The extension of Yates' method and the partition of the sums of squares due to main effects are not discussed here; interested readers may consult Montgomery (1997), Anderson and McLean (1974) and John (1971).
Split-Plot Design
In 2-factor experiments it is often desirable to get precise information on one factor and on its interaction with a second factor, while forgoing such precision on the second factor. For example, in a variety-cum-fertilizer experiment the researcher may be interested mainly in the effects of the fertilizers. He may use several varieties of the crop with a view to increasing the scope of the experiment, being interested mainly in how the fertilizer effects vary from variety to variety and not in the main effects of the varieties.

In some experiments it may be necessary to use two sizes of units for the two factors, since one of the factors may require larger units for convenient management of the experiment. For example, in an irrigation-cum-varietal experiment, bigger units are necessary for the factor irrigation, while varieties may be conveniently assigned to smaller units.
In the first case, each block is first divided into a number of whole plots according to the levels of the less important factor (variety), and the levels of this factor, known as the whole-plot factor, are randomly assigned to the whole plots. Each whole plot is then split into a number of subplots, or split plots, according to the levels of the factor of interest, known as the split-plot factor, and the levels of the split-plot factor are randomly assigned to the split plots within each whole plot.

The design thus obtained is known as a split-plot design. Many examples of factors that require large experimental units are available in all areas of research. A few are listed below to illustrate the diversity of material for which split-plot designs are appropriate:

(i) In research on milking machines a relatively large amount of milk is required. Methods of
cooling or pasteurizing require smaller amounts of milk and may be utilized as a split plot
treatment.
For certain types of material, such as the above, and for cases in which the experimental procedure is not clearly stated or understood, it is sometimes assumed that the p x q factorial is arranged in a randomized block design when it is actually a split-plot design. Such a mistake leads to an inappropriate analysis of variance and results in incorrect tests of hypotheses. In order to carry out the correct analysis, the conduct of the experiment needs to be clearly understood.
Advantages and Disadvantages of Split-Plot Design:
Advantages:
(i) Bigger experimental units may be utilized to compare subsidiary treatments.

(ii) Increased precision is attained on the split-plot treatment and the interaction of split-plot and
whole plot treatments in comparison with randomized block design of the pq treatments.

(iii) The overall precision of the split-plot design relative to the randomized block design of the
pq treatments may be increased by designing the whole plot treatments in a Latin square
design or in an incomplete Latin square design.

Disadvantages:
(i) The whole plot treatments are measured with less precision than they are in randomized block
design of pq treatments.

(ii) When missing data occur, the increase in the complexity of the analysis is greater for the split-plot design than for the randomized block design.

Linear Model and Statistical Analysis:


Consider a 2-factor experiment with factor A at p levels assigned to the whole plot units and the
factor B at q levels assigned to the split-plot units. The linear model of the experiment in r blocks
is given as
y_ijk = µ + ρ_i + α_j + δ_ij + β_k + (αβ)_jk + ε_ijk
i = 1, 2, ..., r;  j = 1, 2, ..., p;  k = 1, 2, ..., q

where y_ijk is the observation on the jth level of A and the kth level of B in the ith block,
µ is the general mean,
ρ_i is the effect of the ith block,
α_j is the effect of the jth level of A,
β_k is the effect of the kth level of B,
(αβ)_jk is the interaction between the jth level of A and the kth level of B,
δ_ij is the whole-plot random error, NID (0, σ_A^2), and
ε_ijk is the split-plot random error, NID (0, σ_B^2).
Total sum of squares can be partitioned as
SS (Total) = SS (Block) + SS (A) + SS (Error I) + SS (B) + SS (AB) + SS (Error II)

where Error I corresponds to the whole plot error and Error II corresponds to the split-plot error.

The analysis of variance in a split-plot design is summarized in the following table:

Table: Analysis of Variance in Split-Plot Design

Source of    d.f.           Sum of squares                                            Mean square and F-ratio
variation
Block        r-1            (1/pq) Σ_i Y_i..^2 - Y...^2/pqr                           SS(Block)/(r-1)
A            p-1            (1/qr) Σ_j Y_.j.^2 - Y...^2/pqr                           MSA = SS(A)/(p-1);  FA = MSA/MSE I
Error I      (p-1)(r-1)     (1/q) Σ_i Σ_j Y_ij.^2 - Y...^2/pqr - SS(Block) - SS(A)    MSE I = SS(Error I)/[(p-1)(r-1)]
B            q-1            (1/pr) Σ_k Y_..k^2 - Y...^2/pqr                           MSB = SS(B)/(q-1);  FB = MSB/MSE II
A x B        (p-1)(q-1)     (1/r) Σ_j Σ_k Y_.jk^2 - Y...^2/pqr - SS(A) - SS(B)        MSAB = SS(AB)/[(p-1)(q-1)];  FAB = MSAB/MSE II
Error II     p(q-1)(r-1)    By subtraction                                            MSE II = SS(Error II)/[p(q-1)(r-1)]
Total        pqr-1          Σ_i Σ_j Σ_k Y_ijk^2 - Y...^2/pqr

MS (Error I) is used to test the null hypothesis on the effect of different levels of the whole plot
factor A while MS (Error II) is used to test the hypothesis relating to the split-plot factor B and the
interaction AB.
A two-way table for the blocks and the whole-plot factor A is formed to compute the sums of squares for Block, the whole-plot factor A and Error I.
Another 2-way table for the whole-plot factor and the split-plot factor, popularly known as the interaction table, is formed in order to compute the sums of squares for B and the A x B interaction.
Example
An experiment was conducted to compare the effects of 4 cutting schemes on the yield of 3
varieties of alfalfa (adopted from Pearce et al. 1988). The plan (slightly changed) and the yields in
tons per acre are given below:
Block I Block II Block III
v1 v3 v2 v2 v3 v1 v3 v1 v2
t1 t2 t3 t4 t1 t2 t2 t3 t4
2.17 1.52 2.27 1.81 1.95 1.26 1.80 1.67 2.01
t3 t1 t2 t2 t3 t4 t4 t1 t2
2.29 1.75 1.38 1.30 1.61 2.01 1.99 1.62 1.85
t4 t3 t1 t3 t2 t1 t1 t2 t3
2.23 1.55 2.33 1.70 1.47 1.88 2.13 1.22 1.81
t2 t4 t3 t1 t4 t3 t3 t4 t1
1.58 1.56 1.86 2.01 1.72 1.60 1.82 1.82 1.70
22.49 20.32 21.44

Block IV Block V Block VI
v3 v2 v1 v1 v2 v3 v2 v1 v3
t1 t3 t2 t4 t3 t2 t3 t2 t4
1.78 1.54 1.59 1.66 1.67 1.01 0.88 0.94 1.33
t3 t1 t4 t2 t1 t3 t1 t4 t2
1.56 1.78 2.10 1.25 1.42 1.23 1.35 1.10 1.31
t4 t2 t1 t1 t4 t4 t2 t3 t1
1.55 1.09 2.34 1.58 1.31 1.51 1.06 1.12 1.30
t2 t4 t3 t3 t2 t1 t4 t1 t3
1.37 1.40 1.91 1.39 1.13 1.31 1.06 1.66 1.13
20.01 16.47 14.24
Analysis of Variance:
Grand total (GT) = 114.97, Correction term (CT) = (114.97)²/72 = 183.58

SS (Total) = 9.12

The data are arranged in a 2-way (Block x Variety) table as below:

                            Block
Variety     I       II      III     IV      V       VI      Total
1           8.27    6.75    6.33    7.94    5.88    4.82    39.99
2           7.84    6.82    7.37    5.81    5.53    4.35    37.72
3           6.38    6.75    7.74    6.26    5.06    5.07    37.26
Total       22.49   20.32   21.44   20.01   16.47   14.24   114.97
SS (Block) = (1/12)(22.49² + ... + 14.24²) - CT = 4.1498

SS (Variety) = (1/24)(39.99² + 37.72² + 37.26²) - CT = 0.1781

SS (Error I) = (1/4)(8.27² + ... + 5.07²) - CT - SS (Block) - SS (Variety) = 1.3622

Now the data are arranged in an interaction (V x T) table:

                    Treatment
Variety     1       2       3       4       Total
1           11.25   7.84    9.98    10.92   39.99
2           10.59   7.81    9.46    9.86    37.72
3           10.22   8.48    8.90    9.66    37.26
Total       32.06   24.13   28.34   30.44   114.97
SS (Treatment) = (1/18)(32.06² + ... + 30.44²) - CT = 1.9625

SS (V x T) = (1/6)(11.25² + ... + 9.66²) - CT - SS (Variety) - SS (Treatment) = 0.2105

SS (Error II) = SS (Total) - SS (Block) - SS (Variety) - SS (Error I) - SS (Treatment) - SS (V x T)
             = 1.2586
ANOVA
S.V. d.f. S.S. M.S. F
Block 5 4.1499 0.8300
Variety 2 0.1781 0.0890
Error I 10 1.3622 0.1362
Treatment 3 1.9625 0.6542 23.36**
VxT 6 0.2105 0.0351 1.26
Error II 45 1.2586 0.0280
Total 71 9.1218
** Indicates significance with p < 0.01
Standard error of a treatment mean = √(0.0280/18) = 0.039.
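The sums of squares above depend only on the marginal and cell totals, so the ANOVA table can be checked without re-entering all 72 plot yields. The following is a minimal Python sketch of that check (the variable and function names are mine, not from the text); SS (Total) and SS (Error II) still require the raw data.

```python
# Reproduce the split-plot sums of squares from the totals in the example
# (grand total 114.97 over 72 plots).
block_totals = [22.49, 20.32, 21.44, 20.01, 16.47, 14.24]   # blocks I-VI
variety_totals = [39.99, 37.72, 37.26]                      # varieties 1-3
treatment_totals = [32.06, 24.13, 28.34, 30.44]             # cutting schemes t1-t4
block_x_variety = [8.27, 6.75, 6.33, 7.94, 5.88, 4.82,      # Block x Variety cell totals
                   7.84, 6.82, 7.37, 5.81, 5.53, 4.35,
                   6.38, 6.75, 7.74, 6.26, 5.06, 5.07]
variety_x_treatment = [11.25, 7.84, 9.98, 10.92,            # V x T cell totals
                       10.59, 7.81, 9.46, 9.86,
                       10.22, 8.48, 8.90, 9.66]

CT = 114.97 ** 2 / 72                                       # correction term

def ss(totals, plots_per_total):
    """Sum of squares of a set of totals, each based on plots_per_total plots."""
    return sum(t ** 2 for t in totals) / plots_per_total - CT

ss_block = ss(block_totals, 12)
ss_variety = ss(variety_totals, 24)
ss_error1 = ss(block_x_variety, 4) - ss_block - ss_variety
ss_treatment = ss(treatment_totals, 18)
ss_vxt = ss(variety_x_treatment, 6) - ss_variety - ss_treatment

for name, value in [("Block", ss_block), ("Variety", ss_variety),
                    ("Error I", ss_error1), ("Treatment", ss_treatment),
                    ("V x T", ss_vxt)]:
    print(f"SS({name}) = {value:.4f}")
```

The printed values agree with the ANOVA table entries up to rounding.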
Analysis of Covariance
The three basic principles (randomization, replication and local control) form the basis of a valid
experimental design. These principles provide direct error control: the precision of an experiment
is increased by grouping the experimental units into homogeneous blocks. In the analysis of
variance for a single variable, the variation is partitioned into components that depend on the
design of the experiment. In a randomized block experiment we can sort out the variance
components attributable to blocks, treatments and error; in a Latin square experiment we sort
them into rows, columns, treatments and error. The same idea applies to the correlated
variability, or covariation, of two variables; the mechanism for sorting out the covariance
effects is known as analysis of covariance (ANOCOVA).

Analysis of covariance requires measurement of the character of primary interest plus the
measurement of one or more correlated variables known as covariates or concomitant variates. It
also requires that the functional relation of the covariates to the character of primary interest
be known beforehand. Indirect statistical error control may be achieved by using covariates: by
measuring them, the variation associated with each covariate can be removed from the experimental
error. Thus the analysis of covariance is a technique that combines the features of analysis of
variance and linear regression.

Analysis of Covariance in Completely Randomized Design:
The completely randomized design consists of p treatments randomly allotted to N = Σ ri
experimental units, with ri replicates of the ith treatment.

The linear mathematical model for covariance is

yij = µ + τi + β xij + εij ………………………............... (1)
    = µ* + τi + β(xij - x̄..) + εij

where µ* = µ + β x̄..,
i = 1, 2, ….., p and j = 1, 2, ………, ri.

yij, µ, τi and εij have their usual meanings, and β is the common regression coefficient for all
the treatments.
Suppose the experimental results are tabulated as shown below:

                                Treatment
              1           2           ...    i           ...    p
Replication   x     y     x     y            x     y            x     y
1             x11   y11   x21   y21   ...    xi1   yi1   ...    xp1   yp1
2             x12   y12   x22   y22   ...    xi2   yi2   ...    xp2   yp2
.             .     .     .     .            .     .            .     .
j             x1j   y1j   x2j   y2j   ...    xij   yij   ...    xpj   ypj
.             .     .     .     .            .     .            .     .
ri            x1r1  y1r1  x2r2  y2r2  ...    xiri  yiri  ...    xprp  yprp
Treatment
total         X1.   Y1.   X2.   Y2.   ...    Xi.   Yi.   ...    Xp.   Yp.

Analysis of covariance includes the following steps.

Step 1. Computation of correction terms:

Correction term for x:  CTx = (Σ xij)²/N
Correction term for y:  CTy = (Σ yij)²/N
Correction term for xy: CTxy = (Σ xij)(Σ yij)/N

The underlying assumptions of covariance analysis are:

(i) The regression of y on x is linear and independent of treatment, so the treatment effect and
the regression effect are additive.

(ii) The errors are independently and normally distributed with common variance.

(iii) The covariate x is fixed and is measured without error.

If the covariate were ignored, model (1) would reduce to

yij = µ + τi + ηij ………………………............... (2)

Thus if ηij = β xij + εij, model (1) represents a situation in which the residuals ηij are further
decomposed, and the error mean square based on εij will, in general, be smaller than the residual
mean square based on ηij.
Step 2. Computation of sums of squares:

Total S.S. for x:  Sxx = Σ xij² - CTx
Treat. S.S. for x: Txx = Σi (Xi.²/ri) - CTx
Error S.S. for x:  Exx = Sxx - Txx

Total S.S. for y:  Syy = Σ yij² - CTy
Treat. S.S. for y: Tyy = Σi (Yi.²/ri) - CTy
Error S.S. for y:  Eyy = Syy - Tyy

Step 3. Computation of sums of products (S.P.):

Total S.P. (xy):  Sxy = Σ xij yij - CTxy
Treat. S.P. (xy): Txy = Σi (Xi. Yi./ri) - CTxy
Error S.P. (xy):  Exy = Sxy - Txy
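Steps 1-3 can be collected into a single helper. The following Python sketch is illustrative only (the function and variable names are my own, not from the text); it accepts one list of (x, y) pairs per treatment and allows unequal replication ri:

```python
def anocova_sums(treatments):
    """Steps 1-3 of ANOCOVA for a completely randomized design with one covariate.

    treatments: a list with one entry per treatment, each entry a list of
    (x, y) observation pairs (x = covariate, y = response).
    Returns the total, treatment and error sums of squares and products.
    """
    xs = [x for t in treatments for x, _ in t]      # all covariate values
    ys = [y for t in treatments for _, y in t]      # all response values
    N = len(xs)
    # Step 1: correction terms
    ct_x = sum(xs) ** 2 / N
    ct_y = sum(ys) ** 2 / N
    ct_xy = sum(xs) * sum(ys) / N
    # Step 2: total and treatment sums of squares
    sxx = sum(v * v for v in xs) - ct_x
    syy = sum(v * v for v in ys) - ct_y
    txx = sum(sum(x for x, _ in t) ** 2 / len(t) for t in treatments) - ct_x
    tyy = sum(sum(y for _, y in t) ** 2 / len(t) for t in treatments) - ct_y
    # Step 3: sums of products
    sxy = sum(a * b for a, b in zip(xs, ys)) - ct_xy
    txy = sum(sum(x for x, _ in t) * sum(y for _, y in t) / len(t)
              for t in treatments) - ct_xy
    return {"Sxx": sxx, "Txx": txx, "Exx": sxx - txx,
            "Syy": syy, "Tyy": tyy, "Eyy": syy - tyy,
            "Sxy": sxy, "Txy": txy, "Exy": sxy - txy}
```

Applied to the feeding-trial data in the example below, this reproduces Sxx = 18.438, Txy = -0.836, Exy = 5.52 and the other entries.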
The analysis of covariance is summarized in the following table:

Table: ANOCOVA
                                                       Adjustment for regression
S.V.            d.f.    SS(x)   SS(y)   SP(xy)    d.f.     S.S.                       M.S.
Total           N-1     Sxx     Syy     Sxy
Treat.          p-1     Txx     Tyy     Txy
Error           N-p     Exx     Eyy     Exy       N-p-1    Eyy' = Eyy - (Exy)²/Exx    S'e²
Treat. + Error  N-1     Sxx     Syy     Sxy       N-2      Syy' = Syy - (Sxy)²/Sxx
Adjusted
Treatment                                         p-1      Tyy' = Syy' - Eyy'         S't²

For testing H0: τ1 = τ2 = …………. = τp, the test statistic is

F = S't²/S'e² = [Tyy'/(p - 1)] / [Eyy'/(N - p - 1)] with (p - 1) and (N - p - 1) d.f.
Example
A feeding trial experiment was conducted to compare the effects of 5 different feeds on the weight
gain of goats in a farm. The feeds were given to selected goats for 3 months and the gain in
weight was recorded. Tabulated results of the experiment, showing initial weight x (kg) and gain
in weight y (kg), are given below:
Feeds
f1 f2 f3 f4 f5
x y x y x y x y x y
5.5 1.0 7.5 2.5 4.5 1.8 5.5 2.0 6.5 3.5
6.5 2.0 5.0 1.5 7.0 2.0 6.0 2.6 6.0 3.0
4.5 1.0 6.5 1.8 7.2 2.0 7.5 3.0 5.0 2.5
7.5 1.6 6.0 2.0 5.5 1.8 5.0 2.0 5.5 2.0
Total (Xi., Yi.)  24.0   5.6    25.0   7.8    24.2   7.6    24.0   9.6    23.0   11.0
Mean (x̄i., ȳi.)   6.0    1.40   6.25   1.95   6.05   1.90   6.0    2.40   5.75   2.75

X.. = 120.2 and Y.. = 41.6

Analysis of covariance for testing H0: f1 = f2 = ……………. = f5

Step 1. Computation of correction terms:

CTx = (X..)²/N = (120.2)²/20 = 722.402
CTy = (Y..)²/N = (41.6)²/20 = 86.528
CTxy = (X.. × Y..)/N = (120.2 × 41.6)/20 = 250.016
Step 2. Computation of sums of squares:

Total SS for x: Sxx = Σ xij² - CTx = 740.84 - 722.402 = 18.438
Feed SS for x:  Fxx = (1/4) Σ Xi.² - CTx = 722.91 - 722.402 = 0.508
Error SS for x: Exx = Sxx - Fxx = 18.438 - 0.508 = 17.93

Total SS for y: Syy = Σ yij² - CTy = 94.04 - 86.528 = 7.512
Feed SS for y:  Fyy = (1/4) Σ Yi.² - CTy = 90.78 - 86.528 = 4.252
Error SS for y: Eyy = Syy - Fyy = 7.512 - 4.252 = 3.26

Step 3. Computation of sums of products:

Total sum of products: Sxy = Σ xij yij - CTxy = 254.7 - 250.016 = 4.684
Feed sum of products:  Fxy = (1/4) Σ Xi. Yi. - CTxy = 249.18 - 250.016 = -0.836
Error sum of products: Exy = Sxy - Fxy = 4.684 - (-0.836) = 5.52

Adjustment for Regression:

Adjusted total SS for y: Syy' = Syy - (Sxy)²/Sxx = 7.512 - (4.684)²/18.438 = 6.322
Adjusted error SS for y: Eyy' = Eyy - (Exy)²/Exx = 3.26 - (5.52)²/17.93 = 1.561
Adjusted feed SS:        Fyy' = Syy' - Eyy' = 6.322 - 1.561 = 4.761

ANOCOVA
                                                    Adjustment for regression
S.V.           d.f.   SS(x)    SS(y)   SP(xy)   d.f.   S.S.            M.S.
Total          19     18.438   7.512   4.684
Feed           4      0.508    4.252   -0.836
Error          15     17.93    3.26    5.52     14     Eyy' = 1.561    S'e² = 0.112
Feed + Error   19     18.438   7.512   4.684    18     Syy' = 6.322
Adjusted Feed                                   4      Fyy' = 4.761    S't² = 1.19

Test statistic, F = 1.19/0.112 = 10.625** with 4 and 14 d.f.

The null hypothesis is rejected with p < 0.01, indicating that the feed effects differ
significantly.
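For readers who want to verify the example end-to-end, here is a minimal Python sketch (my own, not part of the original text) that recomputes the ANOCOVA from the raw measurements:

```python
# Reproduce the feeding-trial ANOCOVA: x = initial weight, y = gain in weight.
x = [[5.5, 6.5, 4.5, 7.5], [7.5, 5.0, 6.5, 6.0], [4.5, 7.0, 7.2, 5.5],
     [5.5, 6.0, 7.5, 5.0], [6.5, 6.0, 5.0, 5.5]]   # one row per feed
y = [[1.0, 2.0, 1.0, 1.6], [2.5, 1.5, 1.8, 2.0], [1.8, 2.0, 2.0, 1.8],
     [2.0, 2.6, 3.0, 2.0], [3.5, 3.0, 2.5, 2.0]]

p, r = 5, 4                        # feeds, replicates per feed
N = p * r
xf = [v for row in x for v in row]
yf = [v for row in y for v in row]

# Steps 1-3: total, feed and error sums of squares / products
sxx = sum(v * v for v in xf) - sum(xf) ** 2 / N
syy = sum(v * v for v in yf) - sum(yf) ** 2 / N
sxy = sum(a * b for a, b in zip(xf, yf)) - sum(xf) * sum(yf) / N
fxx = sum(sum(row) ** 2 for row in x) / r - sum(xf) ** 2 / N
fyy = sum(sum(row) ** 2 for row in y) / r - sum(yf) ** 2 / N
fxy = sum(sum(a) * sum(b) for a, b in zip(x, y)) / r - sum(xf) * sum(yf) / N
exx, eyy, exy = sxx - fxx, syy - fyy, sxy - fxy

# Adjustment for regression
syy_adj = syy - sxy ** 2 / sxx     # adjusted total SS for y
eyy_adj = eyy - exy ** 2 / exx     # adjusted error SS for y
fyy_adj = syy_adj - eyy_adj        # adjusted feed SS

F = (fyy_adj / (p - 1)) / (eyy_adj / (N - p - 1))
print(round(F, 2))
```

Computed without intermediate rounding this gives F ≈ 10.68; the 10.625 in the table comes from rounding the mean squares to 1.19 and 0.112 first. Either way the conclusion (feed effects differ, p < 0.01) is unchanged.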